Expressive aphasia

Expressive aphasia, also known as Broca's aphasia, is a type of aphasia characterized by partial loss of the ability to produce language (spoken, manual, or written), although comprehension generally remains intact. A person with expressive aphasia will exhibit effortful speech. Speech generally includes important content words but leaves out function words that have more grammatical significance than physical meaning, such as prepositions and articles. This is known as "telegraphic speech". The person's intended message may still be understood, but their sentence will not be grammatically correct. In very severe forms of expressive aphasia, a person may only speak using single word utterances. Typically, comprehension is mildly to moderately impaired in expressive aphasia due to difficulty understanding complex grammar.
It is caused by acquired damage to the anterior regions of the brain, such as Broca's area. It is one subset of a larger family of disorders known collectively as aphasia. Expressive aphasia contrasts with receptive aphasia, in which patients are able to speak in grammatical sentences that lack semantic significance and generally also have trouble with comprehension. Expressive aphasia differs from dysarthria, which is typified by a patient's inability to properly move the muscles of the tongue and mouth to produce speech. Expressive aphasia also differs from apraxia of speech, which is a motor disorder characterized by an inability to create and sequence motor plans for speech.
Broca's (expressive) aphasia is a type of non-fluent aphasia in which an individual's speech is halting and effortful. Misarticulations or distortions of consonants and vowels, termed phonetic dissolution, are common. Individuals with expressive aphasia may only produce single words, or words in groups of two or three. Long pauses between words are common, and multi-syllabic words may be produced one syllable at a time with pauses between each syllable. The prosody of a person with Broca's aphasia is compromised by shortened length of utterances and the presence of self-repairs and disfluencies. Intonation and stress patterns are also deficient.
For example, in the following passage, a patient with Broca's aphasia is trying to explain how he came to the hospital for dental surgery:
Yes... ah... Monday... er... Dad and Peter H... (his own name), and Dad.... er... hospital... and ah... Wednesday... Wednesday, nine o'clock... and oh... Thursday... ten o'clock, ah doctors... two... an' doctors... and er... teeth... yah.
The speech of a person with expressive aphasia contains mostly content words such as nouns, verbs, and some adjectives. However, function words like conjunctions, articles, and prepositions are rarely used except for "and" which is prevalent in the speech of most patients with aphasia. The omission of function words makes the person's speech agrammatic. A communication partner of a person with aphasia may say that the person's speech sounds telegraphic due to poor sentence construction and disjointed words. For example, a person with expressive aphasia might say "Smart... university... smart... good... good..."
Self-monitoring is typically well preserved in patients with Broca's aphasia. They are usually aware of their communication deficits, and are more prone to depression and outbursts from frustration than are patients with other forms of aphasia.
In general, word comprehension is preserved, allowing patients to have functional receptive language skills. Individuals with Broca's aphasia understand most of the everyday conversation around them, but higher-level deficits in receptive language can occur. Because comprehension is substantially impaired for more complex sentences, it is better to use simple language when speaking with an individual with expressive aphasia. This is exemplified by difficulty understanding phrases or sentences with unusual structure. A typical patient with Broca's aphasia will misinterpret "the man is bitten by the dog" by switching the subject and object, as if it read "the dog is bitten by the man."
Typically, people with expressive aphasia can understand speech and read better than they can produce speech and write. The person's writing will resemble their speech and will be effortful, lacking cohesion, and containing mostly content words. Letters will likely be formed clumsily and distorted and some may even be omitted. Although listening and reading are generally intact, subtle deficits in both reading and listening comprehension are almost always present during assessment of aphasia.
Because Broca's area is anterior to the primary motor cortex, which is responsible for movement of the face, hands, and arms, a lesion affecting Broca's area may also result in hemiparesis (weakness of both limbs on the same side of the body) or hemiplegia (paralysis of both limbs on the same side of the body). The brain is wired contralaterally, which means the limbs on the right side of the body are controlled by the left hemisphere and vice versa. Therefore, when Broca's area or surrounding areas in the left hemisphere are damaged, hemiplegia or hemiparesis often occurs on the right side of the body in individuals with Broca's aphasia.
Severity of expressive aphasia varies among patients. Some people may only have mild deficits, and detecting problems with their language may be difficult. In the most extreme cases, patients may be able to produce only a single word. Even in such cases, over-learned and rote-learned speech patterns may be retained; for instance, some patients can count from one to ten but cannot produce the same numbers in novel conversation.
In deaf patients who use manual language (such as American Sign Language), damage to the left hemisphere of the brain leads to disruptions in their signing ability. Paraphasic errors similar to those of spoken language have been observed; whereas in spoken language a phonemic substitution would occur (e.g. "tagle" instead of "table"), in ASL case studies errors in movement, hand position, and morphology have been noted. Agrammatism, or the lack of grammatical morphemes in sentence production, has also been observed in lifelong users of ASL who have left hemisphere damage. The lack of syntactic accuracy shows that the errors in signing are not due to damage to the motor cortex, but rather are a manifestation of the damage to the language-producing area of the brain. Similar symptoms have been seen in a patient with left hemisphere damage whose first language was British Sign Language, further showing that damage to the left hemisphere primarily hinders linguistic ability, not motor ability. In contrast, patients who have damage to non-linguistic areas on the left hemisphere have been shown to be fluent in signing, but are unable to comprehend written language.
In addition to difficulty expressing oneself, individuals with expressive aphasia are also noted to commonly have trouble with comprehension in certain linguistic areas. This agrammatism overlaps with receptive aphasia, but can be seen in patients who have expressive aphasia without being diagnosed as having receptive aphasia. The most well-noted of these are object-relative clauses, object Wh- questions, and topicalized structures (placing the topic at the beginning of the sentence). These three concepts all share phrasal movement, which can cause words to lose their thematic roles when they change order in the sentence. This is often not an issue for people without agrammatic aphasias, but many people with aphasia rely heavily on word order to understand roles that words play within the sentence.
The most common cause of expressive aphasia is stroke. A stroke is caused by hypoperfusion (insufficient blood flow, and thus oxygen delivery) to an area of the brain, which is commonly caused by thrombosis or embolism. Some form of aphasia occurs in 34 to 38% of stroke patients. Expressive aphasia occurs in approximately 12% of new cases of aphasia caused by stroke.
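As a rough illustration, and assuming the approximately 12% share can be applied uniformly across the cited 34 to 38% range, combining these two figures implies that expressive aphasia would affect on the order of 4 to 5% of all stroke patients:

\[
0.34 \times 0.12 \approx 0.041 \qquad\qquad 0.38 \times 0.12 \approx 0.046
\]

That is, roughly 4.1% to 4.6% of people who have a stroke.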
In most cases, expressive aphasia is caused by a stroke in Broca's area or the surrounding vicinity. Broca's area is in the lower part of the premotor cortex in the language dominant hemisphere and is responsible for planning motor speech movements. However, cases of expressive aphasia have been seen in patients with strokes in other areas of the brain. Patients with classic symptoms of expressive aphasia in general have more acute brain lesions, whereas patients with larger, widespread lesions exhibit a variety of symptoms that may be classified as global aphasia or left unclassified.
Expressive aphasia can also be caused by trauma to the brain, tumors, cerebral hemorrhage, and extradural abscess.
Understanding lateralization of brain function is important for understanding which areas of the brain cause expressive aphasia when damaged. In the past, it was believed that the area for language production differs between left- and right-handed individuals. If this were true, damage to the homologous region of Broca's area in the right hemisphere should cause aphasia in a left-handed individual. More recent studies have shown that even left-handed individuals typically have language functions only in the left hemisphere. However, left-handed individuals are more likely than right-handed individuals to have language dominance in the right hemisphere.
Less common causes of expressive aphasia include primary autoimmune phenomena and autoimmune phenomena secondary to cancer (as a paraneoplastic syndrome); these have been listed as the primary hypothesis for several cases of aphasia, especially when presenting with other psychiatric disturbances and focal neurological deficits. Many case reports exist describing paraneoplastic aphasia, and the reports that are specific tend to describe expressive aphasia. Although most cases attempt to exclude micro-metastasis, it is likely that some cases of paraneoplastic aphasia are actually extremely small metastases to the vocal motor regions.
Neurodegenerative disorders may present with aphasia. Alzheimer's disease may present with either fluent aphasia or expressive aphasia. There are case reports of Creutzfeldt-Jakob disease presenting with expressive aphasia.
Expressive aphasia is classified as non-fluent aphasia, as opposed to fluent aphasia. Diagnosis is done on a case-by-case basis, as lesions often affect the surrounding cortex and deficits are highly variable among patients with aphasia.
A physician is typically the first person to recognize aphasia in a patient who is being treated for damage to the brain. Routine processes for determining the presence and location of lesion in the brain include magnetic resonance imaging (MRI) and computed tomography (CT) scans. The physician will complete a brief assessment of the patient's ability to understand and produce language. For further diagnostic testing, the physician will refer the patient to a speech-language pathologist, who will complete a comprehensive evaluation.
In order to diagnose a patient with Broca's aphasia, there are certain commonly used tests and procedures. The Western Aphasia Battery (WAB) classifies individuals based on their scores on the subtests: spontaneous speech, auditory comprehension, repetition, and naming. The Boston Diagnostic Aphasia Examination (BDAE) can inform users what specific type of aphasia they may have, infer the location of the lesion, and assess current language abilities. The Porch Index of Communicative Ability (PICA) can predict potential recovery outcomes of patients with aphasia. Quality of life measurement is also an important assessment tool. Tests such as the Assessment for Living with Aphasia (ALA) and the Satisfaction with Life Scale (SWLS) allow therapists to target skills that are important and meaningful for the individual.
In addition to formal assessments, patient and family interviews are valid and important sources of information. The patient's previous hobbies, interests, personality, and occupation are all factors that will not only impact therapy but may motivate them throughout the recovery process. Patient interviews and observations allow professionals to learn the priorities of the patient and family and determine what the patient hopes to regain in therapy. Observations of the patient may also be beneficial to determine where to begin treatment. The current behaviors and interactions of the patient will provide the therapist with more insight about the client and their individual needs. Other information about the patient can be retrieved from medical records, patient referrals from physicians, and the nursing staff.
In non-speaking patients who use manual languages, diagnosis is often based on interviews from the patient's acquaintances, noting the differences in sign production pre- and post-damage to the brain. Many of these patients will also begin to rely on non-linguistic gestures to communicate, rather than signing since their language production is hindered.
Currently, there is no standard treatment for expressive aphasia. Most aphasia treatment is individualized based on a patient's condition and needs as assessed by a speech-language pathologist. Patients go through a period of spontaneous recovery following brain injury in which they regain a great deal of language function.
In the months following injury or stroke, most patients receive traditional treatment for a few hours per day. Among other exercises, patients practice the repetition of words and phrases. In traditional treatment, patients are also taught mechanisms to compensate for lost language function, such as drawing and using phrases that are easier to pronounce.
Emphasis is placed on establishing a basis for communication with family and caregivers in everyday life. Treatment is individualized based on the patient's own priorities, along with the family's input.
A patient may have the option of individual or group treatment. Although less common, group treatment has been shown to have advantageous outcomes. Some types of group treatments include family counseling, maintenance groups, support groups and treatment groups.
Augmentative and Alternative Communication (AAC) refers to a set of tools and strategies that support or replace verbal communication for individuals with communication disorders, such as Broca's aphasia or other conditions that affect speech and language abilities. AAC is designed to enhance communication and may be used as a temporary or permanent solution, depending on the individual's needs. Here are some key aspects of AAC:
1. Communication Aids:
2. Symbols and Representations:
3. Types of AAC Systems:
4. Vocabulary and Language Systems:
5. Customization and Individualization:
6. Training and Support:
7. Integration with Therapy:
8. Social and Emotional Aspects:
AAC is a dynamic and evolving field, and advancements in technology continue to enhance the range and effectiveness of communication tools available for individuals with speech and language challenges. The selection of AAC strategies depends on factors such as the individual's abilities, preferences, and the specific nature of their communication disorder.
Melodic intonation therapy was inspired by the observation that individuals with non-fluent aphasia sometimes can sing words or phrases that they normally cannot speak. "Melodic Intonation Therapy was begun as an attempt to use the intact melodic/prosodic processing skills of the right hemisphere in those with aphasia to help cue retrieval words and expressive language." It is believed that this is because singing capabilities are stored in the right hemisphere of the brain, which is likely to remain unaffected after a stroke in the left hemisphere. However, recent evidence demonstrates that the capability of individuals with aphasia to sing entire pieces of text may actually result from rhythmic features and the familiarity with the lyrics.
The goal of Melodic Intonation Therapy is to utilize singing to access the language-capable regions in the right hemisphere and use these regions to compensate for lost function in the left hemisphere. The natural musical component of speech is used to engage the patient's ability to produce phrases. A clinical study revealed that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia and apraxia of speech. Moreover, evidence from randomized controlled trials is still needed to confirm that Melodic Intonation Therapy is suitable for improving propositional utterances and speech intelligibility in individuals with (chronic) non-fluent aphasia and apraxia of speech.
Melodic Intonation Therapy appears to work particularly well in patients who have had a unilateral, left hemisphere stroke, show poor articulation, are non-fluent or have severely restricted speech output, have moderately preserved auditory comprehension, and show good motivation. MIT on average lasts for 1.5 hours per day, five days per week. At the lowest level of therapy, simple words and phrases (such as "water" and "I love you") are broken down into a series of high- and low-pitch syllables. As treatment progresses, longer phrases are taught and less support is provided by the therapist. Patients are taught to say phrases using the natural melodic component of speaking, and continuous voicing is emphasized. The patient is also instructed to use the left hand to tap the syllables of the phrase while the phrases are spoken. Tapping is assumed to trigger the rhythmic component of speaking and to engage the right hemisphere.
fMRI studies have shown that Melodic Intonation Therapy (MIT) uses both sides of the brain to recover lost function, as opposed to traditional therapies that utilize only the left hemisphere. In MIT, individuals with small lesions in the left hemisphere seem to recover by activation of the left hemisphere perilesional cortex. Meanwhile, individuals with larger left-hemisphere lesions show recruitment of language-capable regions in the right hemisphere. The interpretation of these results is still a matter of debate. For example, it remains unclear whether changes in neural activity in the right hemisphere result from singing or from the intensive use of common phrases, such as "thank you", "how are you?" or "I am fine." Phrases of this type fall into the category of formulaic language and are known to be supported by neural networks of the intact right hemisphere.
A pilot study reported positive results when comparing the efficacy of a modified form of MIT to no treatment in people with nonfluent aphasia and left-hemisphere damage. A randomized controlled trial was subsequently conducted and reported benefits of utilizing modified MIT treatment early in the recovery phase for people with nonfluent aphasia.
Melodic Intonation Therapy is used by music therapists, board-certified professionals who use music as a therapeutic tool to effect certain non-musical outcomes in their patients. Speech-language pathologists can also use this therapy for individuals who have had a left hemisphere stroke and non-fluent aphasias such as Broca's, or even apraxia of speech.
Constraint-induced aphasia therapy (CIAT) is based on similar principles as constraint-induced movement therapy developed by Dr. Edward Taub at the University of Alabama at Birmingham. Constraint-induced movement therapy is based on the idea that a person with an impairment (physical or communicative) develops a "learned nonuse" by compensating for the lost function with other means, such as a paralyzed individual using an unaffected limb or a patient with aphasia drawing. In constraint-induced movement therapy, the alternative limb is constrained with a glove or sling and the patient is forced to use the affected limb. In constraint-induced aphasia therapy, the interaction is guided by communicative need in a language game context, using picture cards, barriers that make it impossible to see other players' cards, and other materials, so that patients are encouraged ("constrained") to use their remaining verbal abilities to succeed in the communication game.
Two important principles of constraint-induced aphasia therapy are that treatment is very intense, with sessions lasting for up to 6 hours over the course of 10 days, and that language is used in a communication context in which it is closely linked to (nonverbal) actions. These principles are motivated by neuroscience insights about learning at the level of nerve cells (synaptic plasticity) and the coupling between cortical systems for language and action in the human brain. Constraint-induced therapy contrasts sharply with traditional therapy in its strong belief that mechanisms to compensate for lost language function, such as gesturing or writing, should not be used unless absolutely necessary, even in everyday life.
It is believed that CIAT works by the mechanism of increased neuroplasticity. By constraining an individual to use only speech, it is believed that the brain is more likely to reestablish old neural pathways and recruit new neural pathways to compensate for lost function.
The strongest results of CIAT have been seen in patients with chronic aphasia (lasting over 6 months). Studies of CIAT have confirmed that further improvement is possible even after a patient has reached a "plateau" period of recovery. The benefits of CIAT have also been shown to be retained long term. However, improvements only seem to be made while a patient is undergoing intense therapy. Recent work has investigated combining constraint-induced aphasia therapy with drug treatment, which led to an amplification of therapy benefits.
In addition to active speech therapy, pharmaceuticals have also been considered as a useful treatment for expressive aphasia. This area of study is relatively new and much research continues to be conducted.
The following drugs have been suggested for use in treating aphasia, and their efficacy has been studied in controlled studies.
The greatest effect has been shown by piracetam and amphetamine, which may increase cerebral plasticity and result in an increased capability to improve language function. Piracetam appears to be most effective when treatment is begun immediately following stroke; when used in chronic cases, it has been much less effective.
Bromocriptine has been shown by some studies to increase verbal fluency and word retrieval more when combined with therapy than with therapy alone. Furthermore, its use seems to be restricted to non-fluent aphasia.
Donepezil has shown a potential for helping chronic aphasia.
No study has established irrefutable evidence that any drug is an effective treatment for aphasia. Furthermore, no study has shown any drug to be specific for language recovery. Comparisons between the recovery of language function and of other motor functions using any drug have shown that improvement is due to a global increase in the plasticity of neural networks.
In transcranial magnetic stimulation (TMS), magnetic fields are used to create electrical currents in specified cortical regions. The procedure is a painless and noninvasive method of stimulating the cortex. TMS works by suppressing the inhibition process in certain areas of the brain. By suppressing the inhibition of neurons by external factors, the targeted area of the brain may be reactivated and thereby recruited to compensate for lost function. Research has shown that patients receiving regular transcranial magnetic stimulation can demonstrate greater improvement in object naming ability than patients not receiving TMS. Furthermore, research suggests this improvement is sustained after the completion of TMS therapy. However, some patients fail to show any significant improvement from TMS, which indicates the need for further research on this treatment.
Treatment of Underlying Forms (TUF), described as the linguistic approach to the treatment of expressive aphasia, begins by emphasizing and educating patients on the thematic roles of words within sentences. Sentences that are usually problematic are reworded into active-voiced, declarative phrasings of their non-canonical counterparts. The simpler sentence phrasings are then transformed into variations that are more difficult to interpret. For example, many individuals who have expressive aphasia struggle with Wh- sentences. "What" and "who" questions are problematic sentences that this treatment method attempts to improve, and they are also two interrogative particles that are strongly related to each other because they reorder arguments from their declarative counterparts. For instance, therapists have used sentences like "Who is the boy helping?" and "What is the boy fixing?" because both verbs are transitive: they require two arguments in the form of a subject and a direct object, but not necessarily an indirect object. In addition, certain question particles are linked together based on how the reworded sentence is formed. Training "who" sentences increased generalization to non-trained "who" sentences as well as untrained "what" sentences, and vice versa. Likewise, "where" and "when" question types are very closely linked: "what" and "who" questions alter the placement of arguments, while "where" and "when" sentences move adjunct phrases. Training is in the style of: "The man parked the car in the driveway. What did the man park in the driveway?" Sentence training goes on in this manner for more domains, such as clefts and sentence voice.
In terms of results, patients' use of the sentence types targeted in TUF treatment improves, they generalize to sentences of similar category to those used in treatment, and the gains carry over to real-world conversations with others. Generalization across sentence types can be improved when treatment progresses in the order of more complex sentences to more elementary sentences. Treatment has been shown to affect on-line (real-time) processing of trained sentences, and these results can be tracked using fMRI mapping. Training of Wh- sentences has led to improvements in three main areas of discourse for people with aphasia: increased average length of utterances, higher proportions of grammatical sentences, and larger ratios of verbs to nouns produced. Patients also showed improvements in verb argument structure productions and assigned thematic roles to words in utterances with more accuracy. In terms of on-line sentence processing, patients who have undergone this treatment discriminate between anomalous and non-anomalous sentences more accurately than control groups and are closer to levels of normalcy than patients who have not participated in this treatment.
Mechanisms for recovery differ from patient to patient. Some mechanisms for recovery occur spontaneously after damage to the brain, whereas others are caused by the effects of language therapy. fMRI studies have shown that recovery can be partially attributed to the activation of tissue around the damaged area and the recruitment of new neurons in these areas to compensate for the lost function. In very acute lesions, recovery may also result from a return of blood flow and function to damaged tissue around the injured area that has not died. Some researchers have stated that the recruitment and recovery of neurons in the left hemisphere, as opposed to the recruitment of similar neurons in the right hemisphere, is superior for long-term recovery and continued rehabilitation. It is thought that, because the right hemisphere is not intended for full language function, using the right hemisphere as a mechanism of recovery is effectively a "dead-end" and can lead only to partial recovery.
There is evidence to support that, among all types of therapies, one of the most important factors and best predictors for a successful outcome is the intensity of the therapy. By comparing the length and intensity of various methods of therapy, intensity was found to be a better predictor of recovery than the method of therapy used.
In most individuals with expressive aphasia, the majority of recovery is seen within the first year following a stroke or injury. The majority of this improvement is seen in the first four weeks of therapy following a stroke and slows thereafter. However, this timeline will vary depending upon the type of stroke experienced by the patient. Patients who experienced an ischemic stroke may recover in the days and weeks following the stroke, and then experience a plateau and gradual slowing of recovery. In contrast, patients who experienced a hemorrhagic stroke experience a slower recovery in the first 4–8 weeks, followed by a faster recovery which eventually stabilizes.
Numerous factors impact the recovery process and outcomes. Site and extent of lesion greatly impacts recovery. Other factors that may affect prognosis are age, education, gender, and motivation. Occupation, handedness, personality, and emotional state may also be associated with recovery outcomes.
Studies have also found that the prognosis of expressive aphasia correlates strongly with the initial severity of impairment. However, it has been seen that continued recovery is possible years after a stroke with effective treatment. The timing and intensity of treatment are additional factors that impact outcomes. Research suggests that, even in later stages of recovery, intervention is effective at improving function as well as preventing loss of function.
Unlike patients with receptive aphasia, patients with expressive aphasia are aware of their errors in language production. This may further motivate a person with expressive aphasia to progress in treatment, which would affect treatment outcomes. On the other hand, awareness of impairment may lead to higher levels of frustration, depression, anxiety, or social withdrawal, which have been shown to negatively affect a person's chance of recovery.
Expressive aphasia was first identified by the French neurologist Paul Broca. By examining the brains of deceased individuals having acquired expressive aphasia in life, he concluded that language ability is localized in the ventroposterior region of the frontal lobe. One of the most important aspects of Paul Broca's discovery was the observation that the loss of proper speech in expressive aphasia is due to the brain's loss of ability to produce language, as opposed to the mouth's loss of ability to produce words.
The discoveries of Paul Broca were made during the same period as those of the German neurologist Carl Wernicke, who was also studying the brains of individuals with aphasia post-mortem and who identified the region now known as Wernicke's area. The discoveries of both men contributed to the concept of localization, which states that specific brain functions are all localized to specific areas of the brain. While both men made significant contributions to the field of aphasia, it was Carl Wernicke who realized the difference between patients with aphasia who could not produce language and those who could not comprehend language (the essential difference between expressive and receptive aphasia).
"title": "Augmentative and Alternative Communication"
},
{
"paragraph_id": 56,
"text": "Results: Patients' use of sentence types used in the TUF treatment will improve, subjects will generalize sentences of similar category to those used for treatment in TUF, and results are applied to real-world conversations with others. Generalization of sentence types used can be improved when the treatment progresses in the order of more complex sentences to more elementary sentences. Treatment has been shown to affect on-line (real-time) processing of trained sentences and these results can be tracked using fMRI mappings. Training of Wh- sentences has led improvements in three main areas of discourse for aphasics: increased average length of utterances, higher proportions of grammatical sentences, and larger ratios of numbers of verbs to nouns produced. Patients also showed improvements in verb argument structure productions and assigned thematic roles to words in utterances with more accuracy. In terms of on-line sentence processing, patients having undergone this treatment discriminate between anomalous and non-anomalous sentences with more accuracy than control groups and are closer to levels of normalcy than patients not having participated in this treatment.",
"title": "Augmentative and Alternative Communication"
},
{
"paragraph_id": 57,
"text": "Mechanisms for recovery differ from patient to patient. Some mechanisms for recovery occur spontaneously after damage to the brain, whereas others are caused by the effects of language therapy. FMRI studies have shown that recovery can be partially attributed to the activation of tissue around the damaged area and the recruitment of new neurons in these areas to compensate for the lost function. Recovery may also be caused in very acute lesions by a return of blood flow and function to damaged tissue that has not died around an injured area. It has been stated by some researchers that the recruitment and recovery of neurons in the left hemisphere opposed to the recruitment of similar neurons in the right hemisphere is superior for long-term recovery and continued rehabilitation. It is thought that, because the right hemisphere is not intended for full language function, using the right hemisphere as a mechanism of recovery is effectively a \"dead-end\" and can lead only to partial recovery.",
"title": "Augmentative and Alternative Communication"
},
{
"paragraph_id": 58,
"text": "There is evidence to support that, among all types of therapies, one of the most important factors and best predictors for a successful outcome is the intensity of the therapy. By comparing the length and intensity of various methods of therapies, it was proven that intensity is a better predictor of recovery than the method of therapy used.",
"title": "Augmentative and Alternative Communication"
},
{
"paragraph_id": 59,
"text": "In most individuals with expressive aphasia, the majority of recovery is seen within the first year following a stroke or injury. The majority of this improvement is seen in the first four weeks in therapy following a stroke and slows thereafter. However, this timeline will vary depending upon the type of stroke experienced by the patient. Patients who experienced an ischemic stroke may recover in the days and weeks following the stroke, and then experience a plateau and gradual slowing of recovery. On the contrary, patients who experienced a hemorrhagic stroke experience a slower recovery in the first 4–8 weeks, followed by a faster recovery which eventually stabilizes.",
"title": "Prognosis"
},
{
"paragraph_id": 60,
"text": "Numerous factors impact the recovery process and outcomes. Site and extent of lesion greatly impacts recovery. Other factors that may affect prognosis are age, education, gender, and motivation. Occupation, handedness, personality, and emotional state may also be associated with recovery outcomes.",
"title": "Prognosis"
},
{
"paragraph_id": 61,
"text": "Studies have also found that prognosis of expressive aphasia correlates strongly with the initial severity of impairment. However, it has been seen that continued recovery is possible years after a stroke with effective treatment. Timing and intensity of treatment is another factor that impacts outcomes. Research suggests that even in later stages of recovery, intervention is effective at improving function, as well as, preventing loss of function.",
"title": "Prognosis"
},
{
"paragraph_id": 62,
"text": "Unlike receptive aphasia, patients with expressive aphasia are aware of their errors in language production. This may further motivate a person with expressive aphasia to progress in treatment, which would affect treatment outcomes. On the other hand, awareness of impairment may lead to higher levels of frustration, depression, anxiety, or social withdrawal, which have been proven to negatively affect a person's chance of recovery.",
"title": "Prognosis"
},
{
"paragraph_id": 63,
"text": "Expressive aphasia was first identified by the French neurologist Paul Broca. By examining the brains of deceased individuals having acquired expressive aphasia in life, he concluded that language ability is localized in the ventroposterior region of the frontal lobe. One of the most important aspects of Paul Broca's discovery was the observation that the loss of proper speech in expressive aphasia is due to the brain's loss of ability to produce language, as opposed to the mouth's loss of ability to produce words.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "The discoveries of Paul Broca were made during the same period of time as the German Neurologist Carl Wernicke, who was also studying brains of aphasiacs post-mortem and identified the region now known as Wernicke's area. Discoveries of both men contributed to the concept of localization, which states that specific brain functions are all localized to a specific area of the brain. While both men made significant contributions to the field of aphasia, it was Carl Wernicke who realized the difference between patients with aphasia that could not produce language and those that could not comprehend language (the essential difference between expressive and receptive aphasia).",
"title": "History"
}
]
| Expressive aphasia, also known as Broca's aphasia, is a type of aphasia characterized by partial loss of the ability to produce language, although comprehension generally remains intact. A person with expressive aphasia will exhibit effortful speech. Speech generally includes important content words but leaves out function words that have more grammatical significance than physical meaning, such as prepositions and articles. This is known as "telegraphic speech". The person's intended message may still be understood, but their sentence will not be grammatically correct. In very severe forms of expressive aphasia, a person may only speak using single word utterances. Typically, comprehension is mildly to moderately impaired in expressive aphasia due to difficulty understanding complex grammar. It is caused by acquired damage to the anterior regions of the brain, such as Broca's area. It is one subset of a larger family of disorders known collectively as aphasia. Expressive aphasia contrasts with receptive aphasia, in which patients are able to speak in grammatical sentences that lack semantic significance and generally also have trouble with comprehension. Expressive aphasia differs from dysarthria, which is typified by a patient's inability to properly move the muscles of the tongue and mouth to produce speech. Expressive aphasia also differs from apraxia of speech, which is a motor disorder characterized by an inability to create and sequence motor plans for speech. | 2001-10-26T14:40:55Z | 2023-12-14T19:53:31Z | [
"Template:Refbegin",
"Template:Cite book",
"Template:Infobox medical condition",
"Template:Blockquote",
"Template:Further",
"Template:Sfn",
"Template:Cite news",
"Template:Medical resources",
"Template:Brain and brainstem lesion symptoms and signs",
"Template:Short description",
"Template:Cite journal",
"Template:Verify source",
"Template:Cite web",
"Template:Refend",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Expressive_aphasia |
9,843 | Ephesus | Ephesus (/ˈɛfɪsəs/; Greek: Ἔφεσος, translit. Éphesos; Turkish: Efes; may ultimately derive from Hittite: 𒀀𒉺𒊭, romanized: Apaša) was a city in Ancient Greece on the coast of Ionia, 3 kilometres (1.9 mi) southwest of present-day Selçuk in İzmir Province, Turkey. It was built in the 10th century BC on the site of Apasa, the former Arzawan capital, by Attic and Ionian Greek colonists. During the Classical Greek era, it was one of twelve cities that were members of the Ionian League. The city came under the control of the Roman Republic in 129 BC.
The city was famous in its day for the nearby Temple of Artemis (completed around 550 BC), which has been designated one of the Seven Wonders of the Ancient World. Its many monumental buildings included the Library of Celsus and a theatre capable of holding 24,000 spectators.
Ephesus was the recipient city of one of the Pauline epistles; one of the seven churches of Asia addressed in the Book of Revelation; the Gospel of John may have been written there; and it was the site of several 5th-century Christian Councils (see Council of Ephesus). The city was destroyed by the Goths in 263. Although it was afterwards rebuilt, its importance as a commercial centre declined as the harbour was slowly silted up by the Küçükmenderes River. In 614, it was partially destroyed by an earthquake.
Today, the ruins of Ephesus are a favourite international and local tourist attraction, being accessible from Adnan Menderes Airport and from the resort town Kuşadası. In 2015, the ruins were designated a UNESCO World Heritage Site.
Humans had begun inhabiting the area surrounding Ephesus by the Neolithic Age (about 6000 BC), as shown by evidence from excavations at the nearby höyük (artificial mounds known as tells) of Arvalya and Cukurici.
Excavations in recent years have unearthed settlements from the early Bronze Age at Ayasuluk Hill. According to Hittite sources, the capital of the kingdom of Arzawa (another independent state in Western and Southern Anatolia/Asia Minor) was Apasa (or Abasa), and some scholars suggest that this is the same place the Greeks later called Ephesus. In 1954, a burial ground from the Mycenaean era (1500–1400 BC), which contained ceramic pots, was discovered close to the ruins of the basilica of St. John. This was the period of the Mycenaean expansion, when the Ahhiyawa began settling in Asia Minor, a process that continued into the 13th century BC. The names Apasa and Ephesus appear to be cognate, and recently found inscriptions seem to pinpoint the places in the Hittite record.
Ephesus was founded as an Attic-Ionian colony in the 10th century BC on a hill (now known as the Ayasuluk Hill), three kilometers (1.9 miles) from the centre of ancient Ephesus (as attested by excavations at the Seljuk castle during the 1990s). The mythical founder of the city was a prince of Athens named Androklos, who had to leave his country after the death of his father, King Kodros. According to the legend, he founded Ephesus on the place where the oracle of Delphi became reality ("A fish and a boar will show you the way"). Androklos drove away most of the native Carian and Lelegian inhabitants of the city and united his people with the remainder. He was a successful warrior, and as a king he was able to join the twelve cities of Ionia together into the Ionian League. During his reign the city began to prosper. He died in a battle against the Carians when he came to the aid of Priene, another city of the Ionian League. Androklos and his dog are depicted on the Hadrian temple frieze, dating from the 2nd century. Later, Greek historians such as Pausanias, Strabo and Herodotos and the poet Kallinos reassigned the city's mythological foundation to Ephos, queen of the Amazons.
The Greek goddess Artemis and the great Anatolian goddess Kybele were identified together as Artemis of Ephesus. The many-breasted "Lady of Ephesus", identified with Artemis, was venerated in the Temple of Artemis, one of the Seven Wonders of the World and the largest building of the ancient world according to Pausanias (4.31.8). Pausanias mentions that the temple was built by Ephesus, son of the river god Caystrus, before the arrival of the Ionians. Of this structure, scarcely a trace remains.
Ancient sources seem to indicate that an older name of the place was Alope (Ancient Greek: Ἀλόπη, romanized: Alópē).
About 650 BC, Ephesus was attacked by the Cimmerians who razed the city, including the temple of Artemis. After the Cimmerians had been driven away, the city was ruled by a series of tyrants. Following a revolt by the people, Ephesus was ruled by a council. The city prospered again under a new rule, producing a number of important historical figures such as the elegiac poet Callinus and the iambic poet Hipponax, the philosopher Heraclitus, the great painter Parrhasius and later the grammarian Zenodotos and physicians Soranus and Rufus.
About 560 BC, Ephesus was conquered by the Lydians under king Croesus, who, though a harsh ruler, treated the inhabitants with respect and even became the main contributor to the reconstruction of the temple of Artemis. His signature has been found on the base of one of the columns of the temple (now on display in the British Museum). Croesus made the populations of the different settlements around Ephesus regroup (synoikismos) in the vicinity of the Temple of Artemis, enlarging the city.
Later in the same century, the Lydians under Croesus invaded Persia. The Ionians refused a peace offer from Cyrus the Great, siding with the Lydians instead. After the Persians defeated Croesus, the Ionians offered to make peace, but Cyrus insisted that they surrender and become part of the empire. They were defeated by the Persian army commander Harpagos in 547 BC. The Persians then incorporated the Greek cities of Asia Minor into the Achaemenid Empire. Those cities were then ruled by satraps.
Ephesus has intrigued archaeologists because for the Archaic Period there is no definite location for the settlement. There are numerous sites to suggest the movement of a settlement between the Bronze Age and the Roman period, but the silting up of the natural harbours as well as the movement of the Kayster River meant that the location never remained the same.
Ephesus continued to prosper, but when taxes were raised under Cambyses II and Darius, the Ephesians participated in the Ionian Revolt against Persian rule in the Battle of Ephesus (498 BC), an event which instigated the Greco-Persian wars. In 479 BC, the Ionians, together with Athens, were able to oust the Persians from the shores of Asia Minor. In 478 BC, the Ionian cities with Athens entered into the Delian League against the Persians. Ephesus did not contribute ships but gave financial support.
During the Peloponnesian War, Ephesus was first allied to Athens but in a later phase, called the Decelean War, or the Ionian War, sided with Sparta, which also had received the support of the Persians. As a result, rule over the cities of Ionia was ceded again to Persia.
These wars did not greatly affect daily life in Ephesus. The Ephesians were surprisingly modern in their social relations: they allowed strangers to integrate and education was valued. In later times, Pliny the Elder mentioned having seen at Ephesus a representation of the goddess Diana by Timarete, the daughter of a painter.
In 356 BC the temple of Artemis was burnt down, according to legend, by a lunatic called Herostratus. The inhabitants of Ephesus at once set about restoring the temple and even planned a larger and grander one than the original.
When Alexander the Great defeated the Persian forces at the Battle of Granicus in 334 BC, the Greek cities of Asia Minor were liberated. The pro-Persian tyrant Syrpax and his family were stoned to death, and Alexander was greeted warmly when he entered Ephesus in triumph. When Alexander saw that the temple of Artemis was not yet finished, he proposed to finance it and have his name inscribed on the front. But the inhabitants of Ephesus demurred, claiming that it was not fitting for one god to build a temple to another. After Alexander's death in 323 BC, Ephesus in 290 BC came under the rule of one of Alexander's generals, Lysimachus.
As the river Cayster (Grk. name Κάϋστρος) silted up the old harbour, the resulting marshes caused malaria and many deaths among the inhabitants. Lysimachus forced the people to move from the ancient settlement around the temple of Artemis to the present site two kilometres (1.2 miles) away, when as a last resort the king flooded the old city by blocking the sewers. The new settlement was officially called Arsinoea (Ancient Greek: Ἀρσινόεια or Ἀρσινοΐα) or Arsinoe (Ἀρσινόη), after the king's second wife, Arsinoe II of Egypt. After Lysimachus had destroyed the nearby cities of Lebedos and Colophon in 292 BC, he relocated their inhabitants to the new city.
Ephesus revolted after the treacherous death of Agathocles, giving the Hellenistic king of Syria and Mesopotamia Seleucus I Nicator an opportunity for removing and killing Lysimachus, his last rival, at the Battle of Corupedium in 281 BC. After the death of Lysimachus the town again was named Ephesus.
Thus Ephesus became part of the Seleucid Empire. After the murder of king Antiochus II Theos and his Egyptian wife in 246 BC, pharaoh Ptolemy III invaded the Seleucid Empire and the Egyptian fleet swept the coast of Asia Minor. Ephesus was betrayed by its governor Sophron into the hands of the Ptolemies who ruled the city for half a century until 197 BC.
The Seleucid king Antiochus III the Great tried to regain the Greek cities of Asia Minor and recaptured Ephesus in 196 BC but he then came into conflict with Rome. After a series of battles, he was defeated by Scipio Asiaticus at the Battle of Magnesia in 190 BC. As a result of the subsequent Treaty of Apamea, Ephesus came under the rule of Eumenes II, the Attalid king of Pergamon, (ruled 197–159 BC). When his grandson Attalus III died in 133 BC without male children of his own, he left his kingdom to the Roman Republic, on condition that the city of Pergamon be kept free and autonomous.
Ephesus, as part of the kingdom of Pergamon, became a subject of the Roman Republic in 129 BC after the revolt of Eumenes III was suppressed.
The city felt Roman influence at once; taxes rose considerably, and the treasures of the city were systematically plundered. Hence in 88 BC Ephesus welcomed Archelaus, a general of Mithridates, king of Pontus, when he conquered Asia (the Roman name for western Anatolia). From Ephesus, Mithridates ordered every Roman citizen in the province, or any person who spoke with a Latin accent, to be killed, which led to the Asiatic Vespers, the slaughter of 80,000 Roman citizens in Asia. Many had lived in Ephesus, and statues and monuments of Roman citizens in Ephesus were also destroyed. But when the Ephesians saw how badly the people of Chios had been treated by Zenobius, a general of Mithridates, they refused entry to his army. Zenobius was invited into the city to visit Philopoemen, the father of Monime, the favourite wife of Mithridates, and the overseer of Ephesus. As the people expected nothing good of him, they threw him into prison and murdered him. Mithridates took revenge and inflicted terrible punishments. However, the Greek cities were given freedom and several substantial rights. Ephesus became, for a short time, self-governing. When Mithridates was defeated in the First Mithridatic War by the Roman consul Lucius Cornelius Sulla, Ephesus came back under Roman rule in 86 BC. Sulla imposed a huge indemnity, along with five years of back taxes, which left Asian cities heavily in debt for a long time to come.
King Ptolemy XII Auletes of Egypt retired to Ephesus in 57 BCE, passing his time in the sanctuary of the temple of Artemis when the Roman Senate failed to restore him to his throne.
Mark Antony was welcomed by Ephesus for periods when he was proconsul and in 33 BC with Cleopatra when he gathered his fleet of 800 ships before the battle of Actium with Octavius.
When Augustus became emperor in 27 BCE, the most important change he made was to designate Ephesus, instead of Pergamum, as the capital of proconsular Asia (which covered western Asia Minor). Ephesus then entered an era of prosperity, becoming both the seat of the governor and a major centre of commerce. According to Strabo, it was second in importance and size only to Rome.
The city and temple were destroyed by the Goths in 263 CE. This marked the decline of the city's splendour. However emperor Constantine the Great rebuilt much of the city and erected new public baths.
Until recently, the population of Ephesus in Roman times was estimated to number up to 225,000 people by Broughton. More recent scholarship regards these estimates as unrealistic. Such a large estimate would require population densities seen in only a few ancient cities, or extensive settlement outside the city walls. This would have been impossible at Ephesus because of the mountain ranges, coastline and quarries which surrounded the city.
The wall of Lysimachus has been estimated to enclose an area of 415 hectares (1,030 acres). Not all of this area was inhabited, owing to the public buildings and spaces in the city centre and the steep slope of the Bülbül Dağı mountain, which was enclosed by the wall. Ludwig Burchner estimated the area within the walls at 1,000 acres. Jerome Murphy-O'Connor uses an estimate of 345 hectares (835 acres) for the inhabited land, citing Ludwig Burchner. He cites Josiah Russell, who, using 832 acres and Old Jerusalem in 1918 as the yardstick, estimated the population at 51,068, or 148.5 persons per hectare. Using 510 persons per hectare, he arrives at a population between 138,000 and 172,500. J.W. Hanson estimated the inhabited space to be smaller, at 224 hectares (550 acres). He argues that population densities of 150–250 people per hectare are more realistic, which gives a range of 33,600–56,000 inhabitants. Even with these much lower population estimates, Ephesus was one of the largest cities of Roman Asia Minor, ranking it as the largest city after Sardis and Alexandria Troas. Hanson and Ortman (2017) estimate the inhabited area at 263 hectares, and their demographic model yields an estimate of 71,587 inhabitants, with a population density of 276 inhabitants per hectare. By contrast, Rome within the walls encompassed 1,500 hectares, and as over 400 built-up hectares were left outside the Aurelian Wall, whose construction was begun in 274 CE and finished in 279 CE, the total inhabited area plus public spaces came to ca. 1,900 hectares. Imperial Rome had a population estimated to be between 750,000 and one million (Hanson and Ortman's (2017) model yields an estimate of 923,406 inhabitants), which implies a population density of 395 to 526 inhabitants per hectare, including public spaces.
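As a rough arithmetic check of the figures above, the quoted population ranges follow directly from multiplying each inhabited-area estimate by its assumed density. The sketch below (a minimal illustration in Python; it only restates the published estimates cited in the preceding paragraph and introduces no new data) makes that relationship explicit:

```python
# Back-of-the-envelope check of the population estimates quoted above:
# population = inhabited area (hectares) x density (persons per hectare).

def population_range(area_ha, low_density, high_density):
    """Return the low and high population implied by an area and a density range."""
    return area_ha * low_density, area_ha * high_density

# Hanson: 224 ha at 150-250 persons/ha
print(population_range(224, 150, 250))                    # (33600, 56000)

# Hanson and Ortman (2017): 263 ha at 276 persons/ha
print(263 * 276)                                          # 72588, close to the model's 71,587

# Imperial Rome for comparison: ~1,900 ha and 750,000-1,000,000 inhabitants
print(round(750_000 / 1_900), round(1_000_000 / 1_900))   # 395 526 persons/ha
```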
Ephesus remained the most important city of the Byzantine Empire in Asia after Constantinople in the 5th and 6th centuries. Emperor Flavius Arcadius raised the level of the street between the theatre and the harbour. The basilica of St. John was built during the reign of emperor Justinian I in the 6th century.
Excavations in 2022 indicate that large parts of the city were destroyed in 614/615 by a military conflict, most likely during the Sasanian War, which initiated a drastic decline in the city's population and standard of living.
The importance of the city as a commercial centre further declined as the harbour, today 5 kilometres inland, was slowly silted up by the river (today, Küçük Menderes) despite repeated dredging during the city's history. The loss of its harbour caused Ephesus to lose its access to the Aegean Sea, which was important for trade. People started leaving the lowland of the city for the surrounding hills. The ruins of the temples were used as building blocks for new homes. Marble sculptures were ground to powder to make lime for plaster.
Sackings by the Arabs first in the year 654–655 by caliph Muawiyah I, and later in 700 and 716 hastened the decline further.
When the Seljuk Turks conquered Ephesus in 1090, it was a small village. The Byzantines resumed control in 1097 and changed the name of the town to Hagios Theologos. They kept control of the region until 1308. Crusaders passing through were surprised that there was only a small village, called Ayasalouk, where they had expected a bustling city with a large seaport. Even the temple of Artemis was completely forgotten by the local population. The Crusaders of the Second Crusade fought the Seljuks just outside the town in December 1147.
The town surrendered, on 24 October 1304, to Sasa Bey, a Turkish warlord of the Menteşoğulları principality. Nevertheless, contrary to the terms of the surrender the Turks pillaged the church of Saint John and deported most of the local population to Thyrea, Greece when a revolt seemed probable. During these events many of the remaining inhabitants were massacred.
Shortly afterwards, Ephesus was ceded to the Aydinid principality, which stationed a powerful navy in the harbour of Ayasuluğ (the present-day Selçuk, next to Ephesus). Ayasoluk became an important harbour, from which piratical raids on the surrounding Christian regions were organised, both officially by the state and privately.
The town enjoyed another short period of prosperity during the 14th century under these new Seljuk rulers. They added important architectural works such as the İsa Bey Mosque, caravansaries and Turkish bathhouses (hamam).
Ephesians were incorporated as vassals into the Ottoman Empire for the first time in 1390. The Central Asian warlord Tamerlane defeated the Ottomans in Anatolia in 1402, and the Ottoman sultan Bayezid I died in captivity. The region was restored to the Anatolian beyliks. After a period of unrest, the region was again incorporated into the Ottoman Empire in 1425.
Ephesus was completely abandoned by the 15th century. Nearby Ayasuluğ (Ayasoluk being a corrupted form of the original Greek name) was turkified to Selçuk in 1914.
Ephesus was an important centre for Early Christianity from the AD 50s. From AD 52–54, the apostle Paul lived in Ephesus, working with the congregation and apparently organizing missionary activity into the hinterlands. Initially, according to the Acts of the Apostles, Paul attended the Jewish synagogue in Ephesus, but after three months he became frustrated with the stubbornness of some of the Jews, and moved his base to the school of Tyrannus. The Jamieson-Fausset-Brown Bible Commentary reminds readers that the unbelief of "some" (Greek: τινες) implies that "others, probably a large number, believed" and therefore there must have been a community of Jewish Christians in Ephesus. Paul introduced about twelve men to the 'baptism with the Holy Spirit' who had previously only experienced the baptism of John the Baptist. Later a silversmith named Demetrios stirred up a mob against Paul, saying that he was endangering the livelihood of those making silver Artemis shrines. Demetrios in connection with the temple of Artemis mentions some object (perhaps an image or a stone) "fallen from Zeus". Between 53 and 57 AD Paul wrote the letter 1 Corinthians from Ephesus (possibly from the 'Paul tower' near the harbour, where he was imprisoned for a short time). Later, Paul wrote the Epistle to the Ephesians while he was in prison in Rome (around 62 AD).
Roman Asia was associated with John, one of the chief apostles, and the Gospel of John might have been written in Ephesus, c 90–100. Ephesus was one of the seven cities addressed in the Book of Revelation, indicating that the church at Ephesus was strong.
According to Eusebius of Caesarea, Saint Timothy was the first bishop of Ephesus.
Polycrates of Ephesus (Greek: Πολυκράτης) was a bishop at the Church of Ephesus in the 2nd century. He is best known for his letter addressed to the Pope Victor I, Bishop of Rome, defending the Quartodeciman position in the Easter controversy.
In the early 2nd century, the church at Ephesus was still important enough to be addressed by a letter written by Bishop Ignatius of Antioch to the Ephesians which begins with "Ignatius, who is also called Theophorus, to the Church which is at Ephesus, in Asia, deservedly most happy, being blessed in the greatness and fullness of God the Father, and predestinated before the beginning of time, that it should be always for an enduring and unchangeable glory" (Letter to the Ephesians). The church at Ephesus had given their support for Ignatius, who was taken to Rome for execution.
A legend, which was first mentioned by Epiphanius of Salamis in the 4th century, purported that Mary, the mother of Jesus, may have spent the last years of her life in Ephesus. The Ephesians derived the argument from John's presence in the city, and Jesus' instructions to John to take care of his mother, Mary, after his death. Epiphanius, however, was keen to point out that, while the Bible says John was leaving for Asia, it does not say specifically that Mary went with him. He later stated that she was buried in Jerusalem. Since the 19th century, The House of the Virgin Mary, about 7 km (4 mi) from Selçuk, has been considered to have been the last home of Mary, mother of Jesus before her assumption into heaven in the Roman Catholic tradition, based on the visions of Augustinian sister the Blessed Anne Catherine Emmerich (1774–1824). It is a popular place of Catholic pilgrimage which has been visited by three recent popes.
The Church of Mary near the harbour of Ephesus was the setting for the Third Ecumenical Council in 431, which resulted in the condemnation of Nestorius. A Second Council of Ephesus was held in 449, but its controversial acts were never approved by the Catholics. It came to be called the Robber Council of Ephesus or Robber Synod of Latrocinium by its opponents.
Ephesus is believed to be the city of the Seven Sleepers, who were persecuted by the Roman emperor Decius because of their Christianity, and they slept in a cave for three centuries, outlasting their persecution.
They are considered saints by Catholics and Orthodox Christians and whose story is also mentioned in the Qur'an.
Ephesus is one of the largest Roman archaeological sites in the eastern Mediterranean. The visible ruins still give some idea of the city's original splendour, and the names associated with the ruins are evocative of its former life. The theatre dominates the view down Harbour Street, which leads to the silted-up harbour.
The Temple of Artemis, one of the Seven Wonders of the Ancient World, once stood 418' by 239' with over 100 marble pillars each 56' high. The temple earned the city the title "Servant of the Goddess". Pliny tells us that the magnificent structure took 120 years to build but is now represented only by one inconspicuous column, revealed during an archaeological excavation by the British Museum in the 1870s. Some fragments of the frieze (which are insufficient to suggest the form of the original) and other small finds were removed – some to London and some to the İstanbul Archaeology Museums.
The Library of Celsus, the façade of which has been carefully reconstructed from original pieces, was originally built c. 125 in memory of Tiberius Julius Celsus Polemaeanus, an Ancient Greek who served as governor of Roman Asia (105–107) in the Roman Empire. Celsus paid for the construction of the library with his own personal wealth and is buried in a sarcophagus beneath it. The library was mostly built by his son Gaius Julius Aquila and once held nearly 12,000 scrolls. Designed with an exaggerated entrance — so as to enhance its perceived size, speculate many historians — the building faces east so that the reading rooms could make best use of the morning light.
The interior of the library measured roughly 180 square metres (2,000 square feet) and may have contained as many as 12,000 scrolls. By the year 400 CE the library was no longer in use, after having been damaged in 262 CE. The facade was reconstructed between 1970 and 1978 using fragments found on site or copies of fragments that had previously been removed to museums.
At an estimated 25,000 seating capacity, the theatre is believed to be the largest in the ancient world. This open-air theatre was used initially for drama, but during later Roman times gladiatorial combats were also held on its stage; the first archaeological evidence of a gladiator graveyard was found in May 2007.
There were two agoras, one for commercial and one for state business.
Ephesus also had several major bath complexes, built at various times while the city was under Roman rule.
The city had one of the most advanced aqueduct systems in the ancient world, with at least six aqueducts of various sizes supplying different areas of the city. They fed a number of water mills, one of which has been identified as a sawmill for marble.
The Odeon was a small roofed theatre constructed by Publius Vedius Antoninus and his wife around 150 AD. It was a small salon for plays and concerts, seating about 1,500 people. There were 22 stairs in the theatre. The upper part of the theatre was decorated with red granite pillars in the Corinthian style. The entrances were at both sides of the stage and reached by a few steps.
The Temple of Hadrian dates from the 2nd century but underwent repairs in the 4th century and has been reerected from the surviving architectural fragments. The reliefs in the upper sections are casts, the originals now being exhibited in the Ephesus Archaeological Museum. A number of figures are depicted in the reliefs, including the emperor Theodosius I with his wife and eldest son. The temple was depicted on the reverse of the Turkish 20 million lira banknote of 2001–2005 and of the 20 new lira banknote of 2005–2009.
The Temple of the Sebastoi (sometimes called the Temple of Domitian), dedicated to the Flavian dynasty, was one of the largest temples in the city. It was erected on a pseudodipteral plan with 8 × 13 columns. The temple and its statue are some of the few remains connected with Domitian.
The Tomb/Fountain of Pollio was erected by Offilius Proculus in 97 AD in honour of C. Sextilius Pollio, who constructed the Marnas aqueduct. It has a concave façade.
A part of the site, Basilica of St. John, was built in the 6th century, under emperor Justinian I, over the supposed site of the apostle's tomb. It is now surrounded by Selçuk.
The history of archaeological research in Ephesus stretches back to 1863, when British architect John Turtle Wood, sponsored by the British Museum, began to search for the Artemision. In 1869 he discovered the pavement of the temple, but since further expected discoveries were not made the excavations stopped in 1874. In 1895 German archaeologist Otto Benndorf, financed by a 10,000 guilder donation made by Austrian Karl Mautner Ritter von Markhof, resumed excavations. In 1898 Benndorf founded the Austrian Archaeological Institute, which plays a leading role in Ephesus today.
Finds from the site are exhibited notably in the Ephesos Museum in Vienna, the Ephesus Archaeological Museum in Selçuk and in the British Museum.
In October 2016, Turkey halted the work of the archaeologists, which had been ongoing for more than 100 years, due to tensions between Austria and Turkey. In May 2018, Turkey allowed Austrian archaeologists to resume their excavations. | [
{
"paragraph_id": 0,
"text": "Ephesus (/ˈɛfɪsəs/; Greek: Ἔφεσος, translit. Éphesos; Turkish: Efes; may ultimately derive from Hittite: 𒀀𒉺𒊭, romanized: Apaša) was a city in Ancient Greece on the coast of Ionia, 3 kilometres (1.9 mi) southwest of present-day Selçuk in İzmir Province, Turkey. It was built in the 10th century BC on the site of Apasa, the former Arzawan capital, by Attic and Ionian Greek colonists. During the Classical Greek era, it was one of twelve cities that were members of the Ionian League. The city came under the control of the Roman Republic in 129 BC.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The city was famous in its day for the nearby Temple of Artemis (completed around 550 BC), which has been designated one of the Seven Wonders of the Ancient World. Its many monumental buildings included the Library of Celsus and a theatre capable of holding 24,000 spectators.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Ephesus was recipient city of one of the Pauline epistles; one of the seven churches of Asia addressed in the Book of Revelation; the Gospel of John may have been written there; and it was the site of several 5th-century Christian Councils (see Council of Ephesus). The city was destroyed by the Goths in 263. Although it was afterwards rebuilt, its importance as a commercial centre declined as the harbour was slowly silted up by the Küçükmenderes River. In 614, it was partially destroyed by an earthquake.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Today, the ruins of Ephesus are a favourite international and local tourist attraction, being accessible from Adnan Menderes Airport and from the resort town Kuşadası. In 2015, the ruins were designated a UNESCO World Heritage Site.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Humans had begun inhabiting the area surrounding Ephesus by the Neolithic Age (about 6000 BC), as shown by evidence from excavations at the nearby höyük (artificial mounds known as tells) of Arvalya and Cukurici.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Excavations in recent years have unearthed settlements from the early Bronze Age at Ayasuluk Hill. According to Hittite sources, the capital of the kingdom of Arzawa (another independent state in Western and Southern Anatolia/Asia Minor) was Apasa (or Abasa), and some scholars suggest that this is the same place the Greeks later called Ephesus. In 1954, a burial ground from the Mycenaean era (1500–1400 BC), which contained ceramic pots, was discovered close to the ruins of the basilica of St. John. This was the period of the Mycenaean expansion, when the Ahhiyawa began settling in Asia Minor, a process that continued into the 13th century BC. The names Apasa and Ephesus appear to be cognate, and recently found inscriptions seem to pinpoint the places in the Hittite record.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Ephesus was founded as an Attic-Ionian colony in the 10th century BC on a hill (now known as the Ayasuluk Hill), three kilometers (1.9 miles) from the centre of ancient Ephesus (as attested by excavations at the Seljuk castle during the 1990s). The mythical founder of the city was a prince of Athens named Androklos, who had to leave his country after the death of his father, King Kodros. According to the legend, he founded Ephesus on the place where the oracle of Delphi became reality (\"A fish and a boar will show you the way\"). Androklos drove away most of the native Carian and Lelegian inhabitants of the city and united his people with the remainder. He was a successful warrior, and as a king he was able to join the twelve cities of Ionia together into the Ionian League. During his reign the city began to prosper. He died in a battle against the Carians when he came to the aid of Priene, another city of the Ionian League. Androklos and his dog are depicted on the Hadrian temple frieze, dating from the 2nd century. Later, Greek historians such as Pausanias, Strabo and Herodotos and the poet Kallinos reassigned the city's mythological foundation to Ephos, queen of the Amazons.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Greek goddess Artemis and the great Anatolian goddess Kybele were identified together as Artemis of Ephesus. The many-breasted \"Lady of Ephesus\", identified with Artemis, was venerated in the Temple of Artemis, one of the Seven Wonders of the World and the largest building of the ancient world according to Pausanias (4.31.8). Pausanias mentions that the temple was built by Ephesus, son of the river god Caystrus, before the arrival of the Ionians. Of this structure, scarcely a trace remains.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Ancient sources seem to indicate that an older name of the place was Alope (Ancient Greek: Ἀλόπη, romanized: Alópē).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "About 650 BC, Ephesus was attacked by the Cimmerians who razed the city, including the temple of Artemis. After the Cimmerians had been driven away, the city was ruled by a series of tyrants. Following a revolt by the people, Ephesus was ruled by a council. The city prospered again under a new rule, producing a number of important historical figures such as the elegiac poet Callinus and the iambic poet Hipponax, the philosopher Heraclitus, the great painter Parrhasius and later the grammarian Zenodotos and physicians Soranus and Rufus.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "About 560 BC, Ephesus was conquered by the Lydians under king Croesus, who, though a harsh ruler, treated the inhabitants with respect and even became the main contributor to the reconstruction of the temple of Artemis. His signature has been found on the base of one of the columns of the temple (now on display in the British Museum). Croesus made the populations of the different settlements around Ephesus regroup (synoikismos) in the vicinity of the Temple of Artemis, enlarging the city.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Later in the same century, the Lydians under Croesus invaded Persia. The Ionians refused a peace offer from Cyrus the Great, siding with the Lydians instead. After the Persians defeated Croesus, the Ionians offered to make peace, but Cyrus insisted that they surrender and become part of the empire. They were defeated by the Persian army commander Harpagos in 547 BC. The Persians then incorporated the Greek cities of Asia Minor into the Achaemenid Empire. Those cities were then ruled by satraps.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Ephesus has intrigued archaeologists because for the Archaic Period there is no definite location for the settlement. There are numerous sites to suggest the movement of a settlement between the Bronze Age and the Roman period, but the silting up of the natural harbours as well as the movement of the Kayster River meant that the location never remained the same.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Ephesus continued to prosper, but when taxes were raised under Cambyses II and Darius, the Ephesians participated in the Ionian Revolt against Persian rule in the Battle of Ephesus (498 BC), an event which instigated the Greco-Persian wars. In 479 BC, the Ionians, together with Athens, were able to oust the Persians from the shores of Asia Minor. In 478 BC, the Ionian cities with Athens entered into the Delian League against the Persians. Ephesus did not contribute ships but gave financial support.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "During the Peloponnesian War, Ephesus was first allied to Athens but in a later phase, called the Decelean War, or the Ionian War, sided with Sparta, which also had received the support of the Persians. As a result, rule over the cities of Ionia was ceded again to Persia.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "These wars did not greatly affect daily life in Ephesus. The Ephesians were surprisingly modern in their social relations: they allowed strangers to integrate and education was valued. In later times, Pliny the Elder mentioned having seen at Ephesus a representation of the goddess Diana by Timarete, the daughter of a painter.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 356 BC the temple of Artemis was burnt down, according to legend, by a lunatic called Herostratus. The inhabitants of Ephesus at once set about restoring the temple and even planned a larger and grander one than the original.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "When Alexander the Great defeated the Persian forces at the Battle of Granicus in 334 BC, the Greek cities of Asia Minor were liberated. The pro-Persian tyrant Syrpax and his family were stoned to death, and Alexander was greeted warmly when he entered Ephesus in triumph. When Alexander saw that the temple of Artemis was not yet finished, he proposed to finance it and have his name inscribed on the front. But the inhabitants of Ephesus demurred, claiming that it was not fitting for one god to build a temple to another. After Alexander's death in 323 BC, Ephesus in 290 BC came under the rule of one of Alexander's generals, Lysimachus.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "As the river Cayster (Grk. name Κάϋστρος) silted up the old harbour, the resulting marshes caused malaria and many deaths among the inhabitants. Lysimachus forced the people to move from the ancient settlement around the temple of Artemis to the present site two kilometres (1.2 miles) away, when as a last resort the king flooded the old city by blocking the sewers. The new settlement was officially called Arsinoea (Ancient Greek: Ἀρσινόεια or Ἀρσινοΐα) or Arsinoe (Ἀρσινόη), after the king's second wife, Arsinoe II of Egypt. After Lysimachus had destroyed the nearby cities of Lebedos and Colophon in 292 BC, he relocated their inhabitants to the new city.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Ephesus revolted after the treacherous death of Agathocles, giving the Hellenistic king of Syria and Mesopotamia Seleucus I Nicator an opportunity for removing and killing Lysimachus, his last rival, at the Battle of Corupedium in 281 BC. After the death of Lysimachus the town again was named Ephesus.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Thus Ephesus became part of the Seleucid Empire. After the murder of king Antiochus II Theos and his Egyptian wife in 246 BC, pharaoh Ptolemy III invaded the Seleucid Empire and the Egyptian fleet swept the coast of Asia Minor. Ephesus was betrayed by its governor Sophron into the hands of the Ptolemies who ruled the city for half a century until 197 BC.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The Seleucid king Antiochus III the Great tried to regain the Greek cities of Asia Minor and recaptured Ephesus in 196 BC but he then came into conflict with Rome. After a series of battles, he was defeated by Scipio Asiaticus at the Battle of Magnesia in 190 BC. As a result of the subsequent Treaty of Apamea, Ephesus came under the rule of Eumenes II, the Attalid king of Pergamon, (ruled 197–159 BC). When his grandson Attalus III died in 133 BC without male children of his own, he left his kingdom to the Roman Republic, on condition that the city of Pergamon be kept free and autonomous.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Ephesus, as part of the kingdom of Pergamon, became a subject of the Roman Republic in 129 BC after the revolt of Eumenes III was suppressed.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The city felt Roman influence at once; taxes rose considerably, and the treasures of the city were systematically plundered. Hence in 88 BC Ephesus welcomed Archelaus, a general of Mithridates, king of Pontus, when he conquered Asia (the Roman name for western Anatolia). From Ephesus, Mithridates ordered every Roman citizen in the province to be killed which led to the Asiatic Vespers, the slaughter of 80,000 Roman citizens in Asia, or any person who spoke with a Latin accent. Many had lived in Ephesus, and statues and monument of Roman citizens in Ephesus were also destroyed. But when they saw how badly the people of Chios had been treated by Zenobius, a general of Mithridates, they refused entry to his army. Zenobius was invited into the city to visit Philopoemen, the father of Monime, the favourite wife of Mithridates, and the overseer of Ephesus. As the people expected nothing good of him, they threw him into prison and murdered him. Mithridates took revenge and inflicted terrible punishments. However, the Greek cities were given freedom and several substantial rights. Ephesus became, for a short time, self-governing. When Mithridates was defeated in the First Mithridatic War by the Roman consul Lucius Cornelius Sulla, Ephesus came back under Roman rule in 86 BC. Sulla imposed a huge indemnity, along with five years of back taxes, which left Asian cities heavily in debt for a long time to come.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "King Ptolemy XII Auletes of Egypt retired to Ephesus in 57 BCE, passing his time in the sanctuary of the temple of Artemis when the Roman Senate failed to restore him to his throne.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Mark Antony was welcomed by Ephesus for periods when he was proconsul and in 33 BC with Cleopatra when he gathered his fleet of 800 ships before the battle of Actium with Octavius.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "When Augustus became emperor in 27 BCE, the most important change was when he made Ephesus the capital of proconsular Asia (which covered western Asia Minor) instead of Pergamum. Ephesus then entered an era of prosperity, becoming both the seat of the governor and a major centre of commerce. According to Strabo, it was second in importance and size only to Rome.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The city and temple were destroyed by the Goths in 263 CE. This marked the decline of the city's splendour. However emperor Constantine the Great rebuilt much of the city and erected new public baths.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Until recently, the population of Ephesus in Roman times was estimated to number up to 225,000 people by Broughton. More recent scholarship regards these estimates as unrealistic. Such a large estimate would require population densities seen in only a few ancient cities, or extensive settlement outside the city walls. This would have been impossible at Ephesus because of the mountain ranges, coastline and quarries which surrounded the city.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The wall of Lysimachus has been estimated to enclose an area of 415 hectares (1,030 acres). Not all of this area was inhabited due to public buildings and spaces in the city center and the steep slope of the Bülbül Dağı mountain, which was enclosed by the wall. Ludwig Burchner estimated this area with the walls at 1000 acres. Jerome Murphy-O'Connor uses an estimate of 345 hectares for the inhabited land or 835 acres (Murphey cites Ludwig Burchner). He cites Josiah Russell using 832 acres and Old Jerusalem in 1918 as the yardstick estimated the population at 51,068 at 148.5 persons per hectare. Using 510 persons per hectare, he arrives at a population between 138,000 and 172,500 . J.W. Hanson estimated the inhabited space to be smaller, at 224 hectares (550 acres). He argues that population densities of 150~250 people per hectare are more realistic, which gives a range of 33,600–56,000 inhabitants. Even with these much lower population estimates, Ephesus was one of the largest cities of Roman Asia Minor, ranking it as the largest city after Sardis and Alexandria Troas. Hanson and Ortman (2017) estimate an inhabited area to be 263 hectares and their demographic model yields an estimate of 71,587 inhabitants, with a population density of 276 inhabitants per hectare. By contrast, Rome within the walls encompassed 1,500 hectares and as over 400 built-up hectares were left outside the Aurelian Wall, whose construction was begun in 274 CE and finished in 279 CE, the total inhabited area plus public spaces inside the walls consisted of ca. 1,900 hectares. Imperial Rome had a population estimated to be between 750,000 and one million (Hanson and Ortman's (2017) model yields an estimate of 923,406 inhabitants), which imply in a population density of 395 to 526 inhabitants per hectare, including public spaces.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Ephesus remained the most important city of the Byzantine Empire in Asia after Constantinople in the 5th and 6th centuries. Emperor Flavius Arcadius raised the level of the street between the theatre and the harbour. The basilica of St. John was built during the reign of emperor Justinian I in the 6th century.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Excavations in 2022 indicate that large parts of the city were destroyed in 614/615 by a military conflict, most likely during the Sasanian War, which initiated a drastic decline in the city's population and standard of living.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The importance of the city as a commercial centre further declined as the harbour, today 5 kilometres inland, was slowly silted up by the river (today, Küçük Menderes) despite repeated dredging during the city's history. The loss of its harbour caused Ephesus to lose its access to the Aegean Sea, which was important for trade. People started leaving the lowland of the city for the surrounding hills. The ruins of the temples were used as building blocks for new homes. Marble sculptures were ground to powder to make lime for plaster.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Sackings by the Arabs first in the year 654–655 by caliph Muawiyah I, and later in 700 and 716 hastened the decline further.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "When the Seljuk Turks conquered Ephesus in 1090, it was a small village. The Byzantines resumed control in 1097 and changed the name of the town to Hagios Theologos. They kept control of the region until 1308. Crusaders passing through were surprised that there was only a small village, called Ayasalouk, where they had expected a bustling city with a large seaport. Even the temple of Artemis was completely forgotten by the local population. The Crusaders of the Second Crusade fought the Seljuks just outside the town in December 1147.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The town surrendered, on 24 October 1304, to Sasa Bey, a Turkish warlord of the Menteşoğulları principality. Nevertheless, contrary to the terms of the surrender the Turks pillaged the church of Saint John and deported most of the local population to Thyrea, Greece when a revolt seemed probable. During these events many of the remaining inhabitants were massacred.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Shortly afterwards, Ephesus was ceded to the Aydinid principality that stationed a powerful navy in the harbour of Ayasuluğ (the present-day Selçuk, next to Ephesus). Ayasoluk became an important harbour, from which piratical raids to the surrounding Christian regions were organised, both official by the state and private.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The town knew again a short period of prosperity during the 14th century under these new Seljuk rulers. They added important architectural works such as the İsa Bey Mosque, caravansaries and Turkish bathhouses (hamam).",
"title": "History"
},
{
"paragraph_id": 38,
"text": "Ephesians were incorporated as vassals into the Ottoman Empire for the first time in 1390. The Central Asian warlord Tamerlane defeated the Ottomans in Anatolia in 1402, and the Ottoman sultan Bayezid I died in captivity. The region was restored to the Anatolian beyliks. After a period of unrest, the region was again incorporated into the Ottoman Empire in 1425.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Ephesus was completely abandoned by the 15th century. Nearby Ayasuluğ (Ayasoluk being a corrupted form of the original Greek name) was turkified to Selçuk in 1914.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Ephesus was an important centre for Early Christianity from the AD 50s. From AD 52–54, the apostle Paul lived in Ephesus, working with the congregation and apparently organizing missionary activity into the hinterlands. Initially, according to the Acts of the Apostles, Paul attended the Jewish synagogue in Ephesus, but after three months he became frustrated with the stubbornness of some of the Jews, and moved his base to the school of Tyrannus. The Jamieson-Fausset-Brown Bible Commentary reminds readers that the unbelief of \"some\" (Greek: τινες) implies that \"others, probably a large number, believed\" and therefore there must have been a community of Jewish Christians in Ephesus. Paul introduced about twelve men to the 'baptism with the Holy Spirit' who had previously only experienced the baptism of John the Baptist. Later a silversmith named Demetrios stirred up a mob against Paul, saying that he was endangering the livelihood of those making silver Artemis shrines. Demetrios in connection with the temple of Artemis mentions some object (perhaps an image or a stone) \"fallen from Zeus\". Between 53 and 57 AD Paul wrote the letter 1 Corinthians from Ephesus (possibly from the 'Paul tower' near the harbour, where he was imprisoned for a short time). Later, Paul wrote the Epistle to the Ephesians while he was in prison in Rome (around 62 AD).",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 41,
"text": "Roman Asia was associated with John, one of the chief apostles, and the Gospel of John might have been written in Ephesus, c 90–100. Ephesus was one of the seven cities addressed in the Book of Revelation, indicating that the church at Ephesus was strong.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 42,
"text": "According to Eusebius of Caesarea, Saint Timothy was the first bishop of Ephesus.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 43,
"text": "Polycrates of Ephesus (Greek: Πολυκράτης) was a bishop at the Church of Ephesus in the 2nd century. He is best known for his letter addressed to the Pope Victor I, Bishop of Rome, defending the Quartodeciman position in the Easter controversy.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 44,
"text": "In the early 2nd century, the church at Ephesus was still important enough to be addressed by a letter written by Bishop Ignatius of Antioch to the Ephesians which begins with \"Ignatius, who is also called Theophorus, to the Church which is at Ephesus, in Asia, deservedly most happy, being blessed in the greatness and fullness of God the Father, and predestinated before the beginning of time, that it should be always for an enduring and unchangeable glory\" (Letter to the Ephesians). The church at Ephesus had given their support for Ignatius, who was taken to Rome for execution.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 45,
"text": "A legend, which was first mentioned by Epiphanius of Salamis in the 4th century, purported that Mary, the mother of Jesus, may have spent the last years of her life in Ephesus. The Ephesians derived the argument from John's presence in the city, and Jesus' instructions to John to take care of his mother, Mary, after his death. Epiphanius, however, was keen to point out that, while the Bible says John was leaving for Asia, it does not say specifically that Mary went with him. He later stated that she was buried in Jerusalem. Since the 19th century, The House of the Virgin Mary, about 7 km (4 mi) from Selçuk, has been considered to have been the last home of Mary, mother of Jesus before her assumption into heaven in the Roman Catholic tradition, based on the visions of Augustinian sister the Blessed Anne Catherine Emmerich (1774–1824). It is a popular place of Catholic pilgrimage which has been visited by three recent popes.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 46,
"text": "The Church of Mary near the harbour of Ephesus was the setting for the Third Ecumenical Council in 431, which resulted in the condemnation of Nestorius. A Second Council of Ephesus was held in 449, but its controversial acts were never approved by the Catholics. It came to be called the Robber Council of Ephesus or Robber Synod of Latrocinium by its opponents.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 47,
"text": "Ephesus is believed to be the city of the Seven Sleepers, who were persecuted by the Roman emperor Decius because of their Christianity, and they slept in a cave for three centuries, outlasting their persecution.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 48,
"text": "They are considered saints by Catholics and Orthodox Christians and whose story is also mentioned in the Qur'an.",
"title": "Ephesus and Christianity"
},
{
"paragraph_id": 49,
"text": "Ephesus is one of the largest Roman archaeological sites in the eastern Mediterranean. The visible ruins still give some idea of the city's original splendour, and the names associated with the ruins are evocative of its former life. The theatre dominates the view down Harbour Street, which leads to the silted-up harbour.",
"title": "Main sites"
},
{
"paragraph_id": 50,
"text": "The Temple of Artemis, one of the Seven Wonders of the Ancient World, once stood 418' by 239' with over 100 marble pillars each 56' high. The temple earned the city the title \"Servant of the Goddess\". Pliny tells us that the magnificent structure took 120 years to build but is now represented only by one inconspicuous column, revealed during an archaeological excavation by the British Museum in the 1870s. Some fragments of the frieze (which are insufficient to suggest the form of the original) and other small finds were removed – some to London and some to the İstanbul Archaeology Museums.",
"title": "Main sites"
},
{
"paragraph_id": 51,
"text": "The Library of Celsus, the façade of which has been carefully reconstructed from original pieces, was originally built c. 125 in memory of Tiberius Julius Celsus Polemaeanus, an Ancient Greek who served as governor of Roman Asia (105–107) in the Roman Empire. Celsus paid for the construction of the library with his own personal wealth and is buried in a sarcophagus beneath it. The library was mostly built by his son Gaius Julius Aquila and once held nearly 12,000 scrolls. Designed with an exaggerated entrance — so as to enhance its perceived size, speculate many historians — the building faces east so that the reading rooms could make best use of the morning light.",
"title": "Main sites"
},
{
"paragraph_id": 52,
"text": "The interior of the library measured roughly 180 square metres (2,000 square feet) and may have contained as many as 12,000 scrolls. By the year 400 C.E. the library was no longer in use after being damaged in 262 C.E. The facade was reconstructed during 1970 to 1978 using fragments found on site or copies of fragments that were previously removed to museums.",
"title": "Main sites"
},
{
"paragraph_id": 53,
"text": "At an estimated 25,000 seating capacity, the theatre is believed to be the largest in the ancient world. This open-air theatre was used initially for drama, but during later Roman times gladiatorial combats were also held on its stage; the first archaeological evidence of a gladiator graveyard was found in May 2007.",
"title": "Main sites"
},
{
"paragraph_id": 54,
"text": "There were two agoras, one for commercial and one for state business.",
"title": "Main sites"
},
{
"paragraph_id": 55,
"text": "Ephesus also had several major bath complexes, built at various times while the city was under Roman rule.",
"title": "Main sites"
},
{
"paragraph_id": 56,
"text": "The city had one of the most advanced aqueduct systems in the ancient world, with at least six aqueducts of various sizes supplying different areas of the city. They fed a number of water mills, one of which has been identified as a sawmill for marble.",
"title": "Main sites"
},
{
"paragraph_id": 57,
"text": "The Odeon was a small roofed theatre constructed by Publius Vedius Antoninus and his wife around 150 AD. It was a small salon for plays and concerts, seating about 1,500 people. There were 22 stairs in the theatre. The upper part of the theatre was decorated with red granite pillars in the Corinthian style. The entrances were at both sides of the stage and reached by a few steps.",
"title": "Main sites"
},
{
"paragraph_id": 58,
"text": "The Temple of Hadrian dates from the 2nd century but underwent repairs in the 4th century and has been reerected from the surviving architectural fragments. The reliefs in the upper sections are casts, the originals now being exhibited in the Ephesus Archaeological Museum. A number of figures are depicted in the reliefs, including the emperor Theodosius I with his wife and eldest son. The temple was depicted on the reverse of the Turkish 20 million lira banknote of 2001–2005 and of the 20 new lira banknote of 2005–2009.",
"title": "Main sites"
},
{
"paragraph_id": 59,
"text": "The Temple of the Sebastoi (sometimes called the Temple of Domitian), dedicated to the Flavian dynasty, was one of the largest temples in the city. It was erected on a pseudodipteral plan with 8 × 13 columns. The temple and its statue are some of the few remains connected with Domitian.",
"title": "Main sites"
},
{
"paragraph_id": 60,
"text": "The Tomb/Fountain of Pollio was erected in 97 AD in honour of C. Sextilius Pollio, who constructed the Marnas aqueduct, by Offilius Proculus. It has a concave façade.",
"title": "Main sites"
},
{
"paragraph_id": 61,
"text": "A part of the site, Basilica of St. John, was built in the 6th century, under emperor Justinian I, over the supposed site of the apostle's tomb. It is now surrounded by Selçuk.",
"title": "Main sites"
},
{
"paragraph_id": 62,
"text": "The history of archaeological research in Ephesus stretches back to 1863, when British architect John Turtle Wood, sponsored by the British Museum, began to search for the Artemision. In 1869 he discovered the pavement of the temple, but since further expected discoveries were not made the excavations stopped in 1874. In 1895 German archaeologist Otto Benndorf, financed by a 10,000 guilder donation made by Austrian Karl Mautner Ritter von Markhof, resumed excavations. In 1898 Benndorf founded the Austrian Archaeological Institute, which plays a leading role in Ephesus today.",
"title": "Archaeology"
},
{
"paragraph_id": 63,
"text": "Finds from the site are exhibited notably in the Ephesos Museum in Vienna, the Ephesus Archaeological Museum in Selçuk and in the British Museum.",
"title": "Archaeology"
},
{
"paragraph_id": 64,
"text": "In October 2016, Turkey halted the works of the archeologists, which had been ongoing for more than 100 years, due to tensions between Austria and Turkey. In May 2018, Turkey allowed Austrian archeologists to resume their excavations.",
"title": "Archaeology"
}
]
| Ephesus was a city in Ancient Greece on the coast of Ionia, 3 kilometres (1.9 mi) southwest of present-day Selçuk in İzmir Province, Turkey. It was built in the 10th century BC on the site of Apasa, the former Arzawan capital, by Attic and Ionian Greek colonists. During the Classical Greek era, it was one of twelve cities that were members of the Ionian League. The city came under the control of the Roman Republic in 129 BC. The city was famous in its day for the nearby Temple of Artemis, which has been designated one of the Seven Wonders of the Ancient World. Its many monumental buildings included the Library of Celsus and a theatre capable of holding 24,000 spectators. Ephesus was recipient city of one of the Pauline epistles; one of the seven churches of Asia addressed in the Book of Revelation; the Gospel of John may have been written there; and it was the site of several 5th-century Christian Councils. The city was destroyed by the Goths in 263. Although it was afterwards rebuilt, its importance as a commercial centre declined as the harbour was slowly silted up by the Küçükmenderes River. In 614, it was partially destroyed by an earthquake. Today, the ruins of Ephesus are a favourite international and local tourist attraction, being accessible from Adnan Menderes Airport and from the resort town Kuşadası. In 2015, the ruins were designated a UNESCO World Heritage Site. | 2001-09-29T16:53:29Z | 2023-12-16T15:37:49Z | [
"Template:Lang-tr",
"Template:Convert",
"Template:Multiple image",
"Template:Main",
"Template:Bibleref",
"Template:AmCyc Poster",
"Template:Pp-move-indef",
"Template:Lang-grc",
"Template:Cite Pauly",
"Template:Cite DARE",
"Template:Citation",
"Template:ISBN",
"Template:About",
"Template:Lang-gr",
"Template:Cite web",
"Template:Cite journal",
"Template:Webarchive",
"Template:Authority control",
"Template:Short description",
"Template:Lang-grc-gre",
"Template:Lang-hit",
"Template:See also",
"Template:Bibleref2",
"Template:Commons",
"Template:IPAc-en",
"Template:Ionian League",
"Template:Journeys of Paul of Tarsus",
"Template:Former settlements in Turkey",
"Template:Dead link",
"Template:Seven churches of Asia",
"Template:World Heritage Sites in Turkey",
"Template:Ancient Greece topics",
"Template:Portal",
"Template:Reflist",
"Template:History of Anatolia",
"Template:Redirect",
"Template:Lang-el",
"Template:Circa",
"Template:Cite book",
"Template:Cite Barrington",
"Template:Cite news",
"Template:Library resources box",
"Template:Infobox ancient site"
]
| https://en.wikipedia.org/wiki/Ephesus |
9,845 | JavaScript | JavaScript (/ˈdʒɑːvəskrɪpt/), often abbreviated as JS, is a programming language and core technology of the World Wide Web, alongside HTML and CSS. As of 2023, 98.7% of websites use JavaScript on the client side for webpage behavior, often incorporating third-party libraries. All major web browsers have a dedicated JavaScript engine to execute the code on users' devices.
JavaScript is a high-level, often just-in-time compiled language that conforms to the ECMAScript standard. It has dynamic typing, prototype-based object-orientation, and first-class functions. It is multi-paradigm, supporting event-driven, functional, and imperative programming styles. It has application programming interfaces (APIs) for working with text, dates, regular expressions, standard data structures, and the Document Object Model (DOM).
The ECMAScript standard does not include any input/output (I/O), such as networking, storage, or graphics facilities. In practice, the web browser or other runtime system provides JavaScript APIs for I/O.
JavaScript engines were originally used only in web browsers, but are now core components of some servers and a variety of applications. The most popular runtime system for this usage is Node.js.
Although Java and JavaScript are similar in name, syntax, and respective standard libraries, the two languages are distinct and differ greatly in design.
The first popular web browser with a graphical user interface, Mosaic, was released in 1993. Accessible to non-technical people, it played a prominent role in the rapid growth of the nascent World Wide Web. The lead developers of Mosaic then founded the Netscape corporation, which released a more polished browser, Netscape Navigator, in 1994. Navigator quickly became the most-used browser.
During these formative years of the Web, web pages could only be static, lacking the capability for dynamic behavior after the page was loaded in the browser. There was a desire in the flourishing web development scene to remove this limitation, so in 1995, Netscape decided to add a scripting language to Navigator. They pursued two routes to achieve this: collaborating with Sun Microsystems to embed the Java programming language, while also hiring Brendan Eich to embed the Scheme language.
Netscape management soon decided that the best option was for Eich to devise a new language, with syntax similar to Java and less like Scheme or other extant scripting languages. Although the new language and its interpreter implementation were called LiveScript when first shipped as part of a Navigator beta in September 1995, the name was changed to JavaScript for the official release in December.
The choice of the JavaScript name has caused confusion, implying that it is directly related to Java. At the time, the dot-com boom had begun and Java was a popular new language, so Eich considered the JavaScript name a marketing ploy by Netscape.
Microsoft debuted Internet Explorer in 1995, leading to a browser war with Netscape. On the JavaScript front, Microsoft reverse-engineered the Navigator interpreter to create its own, called JScript.
JScript was first released in 1996, alongside initial support for CSS and extensions to HTML. Each of these implementations was noticeably different from their counterparts in Navigator. These differences made it difficult for developers to make their websites work well in both browsers, leading to widespread use of "best viewed in Netscape" and "best viewed in Internet Explorer" logos for several years.
In November 1996, Netscape submitted JavaScript to Ecma International, as the starting point for a standard specification that all browser vendors could conform to. This led to the official release of the first ECMAScript language specification in June 1997.
The standards process continued for a few years, with the release of ECMAScript 2 in June 1998 and ECMAScript 3 in December 1999. Work on ECMAScript 4 began in 2000.
Meanwhile, Microsoft gained an increasingly dominant position in the browser market. By the early 2000s, Internet Explorer's market share reached 95%. This meant that JScript became the de facto standard for client-side scripting on the Web.
Microsoft initially participated in the standards process and implemented some proposals in its JScript language, but eventually it stopped collaborating on Ecma work. Thus ECMAScript 4 was mothballed.
During the period of Internet Explorer dominance in the early 2000s, client-side scripting was stagnant. This started to change in 2004, when the successor of Netscape, Mozilla, released the Firefox browser. Firefox was well received by many, taking significant market share from Internet Explorer.
In 2005, Mozilla joined ECMA International, and work started on the ECMAScript for XML (E4X) standard. This led to Mozilla working jointly with Macromedia (later acquired by Adobe Systems), who were implementing E4X in their ActionScript 3 language, which was based on an ECMAScript 4 draft. The goal became standardizing ActionScript 3 as the new ECMAScript 4. To this end, Adobe Systems released the Tamarin implementation as an open source project. However, Tamarin and ActionScript 3 were too different from established client-side scripting, and without cooperation from Microsoft, ECMAScript 4 never reached fruition.
Meanwhile, very important developments were occurring in open-source communities not affiliated with ECMA work. In 2005, Jesse James Garrett released a white paper in which he coined the term Ajax and described a set of technologies, of which JavaScript was the backbone, to create web applications where data can be loaded in the background, avoiding the need for full page reloads. This sparked a renaissance period of JavaScript, spearheaded by open-source libraries and the communities that formed around them. Many new libraries were created, including jQuery, Prototype, Dojo Toolkit, and MooTools.
Google debuted its Chrome browser in 2008, with the V8 JavaScript engine that was faster than its competition. The key innovation was just-in-time compilation (JIT), so other browser vendors needed to overhaul their engines for JIT.
In July 2008, these disparate parties came together for a conference in Oslo. This led to the eventual agreement in early 2009 to combine all relevant work and drive the language forward. The result was the ECMAScript 5 standard, released in December 2009.
Ambitious work on the language continued for several years, culminating in an extensive collection of additions and refinements being formalized with the publication of ECMAScript 6 in 2015.
The creation of Node.js in 2009 by Ryan Dahl sparked a significant increase in the usage of JavaScript outside of web browsers. Node combines the V8 engine, an event loop, and I/O APIs, thereby providing a stand-alone JavaScript runtime system. As of 2018, Node had been used by millions of developers, and npm had the most modules of any package manager in the world.
The ECMAScript draft specification is currently maintained openly on GitHub, and editions are produced via regular annual snapshots. Potential revisions to the language are vetted through a comprehensive proposal process. Now, instead of edition numbers, developers check the status of upcoming features individually.
The current JavaScript ecosystem has many libraries and frameworks, established programming practices, and substantial usage of JavaScript outside of web browsers. Plus, with the rise of single-page applications and other JavaScript-heavy websites, several transpilers have been created to aid the development process.
"JavaScript" is a trademark of Oracle Corporation in the United States. The trademark was originally issued to Sun Microsystems on 6 May 1997, and was transferred to Oracle when they acquired Sun in 2009.
JavaScript is the dominant client-side scripting language of the Web, with 98% of all websites (mid–2022) using it for this purpose. Scripts are embedded in or included from HTML documents and interact with the DOM. All major web browsers have a built-in JavaScript engine that executes the code on the user's device.
Over 80% of websites use a third-party JavaScript library or web framework for their client-side scripting.
jQuery is by far the most popular client-side library, used by over 75% of websites.
React (also known as React.js or ReactJS) is a free and open-source front-end JavaScript library for building user interfaces based on components. It is maintained by Meta (formerly Facebook) and a community of individual developers and companies.
In contrast, the term "Vanilla JS" has been coined for websites not using any libraries or frameworks, instead relying entirely on standard JavaScript functionality.
The use of JavaScript has expanded beyond its web browser roots. JavaScript engines are now embedded in a variety of other software systems, both for server-side website deployments and non-browser applications.
Initial attempts at promoting server-side JavaScript usage were Netscape Enterprise Server and Microsoft's Internet Information Services, but they were small niches. Server-side usage eventually started to grow in the late 2000s, with the creation of Node.js and other approaches.
Electron, Cordova, React Native, and other application frameworks have been used to create many applications with behavior implemented in JavaScript. Other non-browser applications include Adobe Acrobat support for scripting PDF documents and GNOME Shell extensions written in JavaScript.
JavaScript has recently begun to appear in some embedded systems, usually by leveraging Node.js.
A JavaScript engine is a software component that executes JavaScript code. The first JavaScript engines were mere interpreters, but all relevant modern engines use just-in-time compilation for improved performance.
JavaScript engines are typically developed by web browser vendors, and every major browser has one. In a browser, the JavaScript engine runs in concert with the rendering engine via the Document Object Model.
The use of JavaScript engines is not limited to browsers. For example, the V8 engine is a core component of the Node.js and Deno runtime systems.
JavaScript typically relies on a run-time environment (e.g., a web browser) to provide objects and methods by which scripts can interact with the environment (e.g., a web page DOM). These environments are single-threaded. JavaScript also relies on the run-time environment to provide the ability to include/import scripts (e.g., HTML <script> elements). This is not a language feature per se, but it is common in most JavaScript implementations. JavaScript processes messages from a queue one at a time. JavaScript calls a function associated with each new message, creating a call stack frame with the function's arguments and local variables. The call stack shrinks and grows based on the function's needs. When the call stack is empty upon function completion, JavaScript proceeds to the next message in the queue. This is called the event loop, described as "run to completion" because each message is fully processed before the next message is considered. However, the language's concurrency model describes the event loop as non-blocking: program input/output is performed using events and callback functions. This means, for instance, that JavaScript can process a mouse click while waiting for a database query to return information.
The following features are common to all conforming ECMAScript implementations unless explicitly specified otherwise.
JavaScript supports much of the structured programming syntax from C (e.g., if statements, while loops, switch statements, do while loops, etc.). One partial exception is scoping: originally JavaScript only had function scoping with var; block scoping was added in ECMAScript 2015 with the keywords let and const. Like C, JavaScript makes a distinction between expressions and statements. One syntactic difference from C is automatic semicolon insertion, which allows semicolons (which terminate statements) to be omitted.
JavaScript is weakly typed, which means certain types are implicitly cast depending on the operation used.
Values are cast to strings like the following:
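// A few representative conversions (an illustrative sketch; the exact rules are defined by the ECMAScript specification):
String("abc");       // "abc"   (strings are left as-is)
String(42);          // "42"    (numbers become their string representation)
String([1, 2, 3]);   // "1,2,3" (array elements are cast to strings and joined by commas)
String({});          // "[object Object]"
String(null);        // "null"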
Values are cast to numbers by casting to strings and then casting the strings to numbers. These processes can be modified by defining toString and valueOf functions on the prototype for string and number casting respectively.
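A minimal sketch of such a modification (the Box constructor is purely illustrative):

function Box(n) { this.n = n; }
Box.prototype.valueOf = function () { return this.n; };                 // consulted when casting to a number
Box.prototype.toString = function () { return "Box(" + this.n + ")"; }; // consulted when casting to a string

var b = new Box(7);
console.log(b * 2);     // 14, via valueOf
console.log(String(b)); // "Box(7)", via toString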
JavaScript has received criticism for the way it implements these conversions as the complexity of the rules can be mistaken for inconsistency. For example, when adding a number to a string, the number will be cast to a string before performing concatenation, but when subtracting a number from a string, the string is cast to a number before performing subtraction.
Often also mentioned is {} + [] resulting in 0 (number). This is misleading: the {} is interpreted as an empty code block instead of an empty object, and the empty array is cast to a number by the remaining unary + operator. If you wrap the expression in parentheses ({} + []) the curly brackets are interpreted as an empty object and the result of the expression is "[object Object]" as expected.
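A short, runnable illustration of the asymmetry and the {} + [] case described above:

console.log("37" + 7);   // "377": the number is cast to a string and concatenated
console.log("37" - 7);   // 30: the string is cast to a number and subtracted

// As a bare statement, {} + [] parses as an empty block followed by +[] (the number 0);
// wrapped in parentheses it becomes object-plus-array concatenation instead.
console.log(({} + []));  // "[object Object]"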
JavaScript is dynamically typed like most other scripting languages. A type is associated with a value rather than an expression. For example, a variable initially bound to a number may be reassigned to a string. JavaScript supports various ways to test the type of objects, including duck typing.
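For example:

let value = 42;
console.log(typeof value);            // "number"
value = "forty-two";                  // the same variable may now hold a string
console.log(typeof value);            // "string"
console.log(Array.isArray([1, 2]));   // true, one of several run-time type tests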
JavaScript includes an eval function that can execute statements provided as strings at run-time.
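For example (eval is generally discouraged outside of quick experiments):

const result = eval("2 + 2 * 10"); // the string is parsed and executed at run-time
console.log(result);               // 22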
Prototypal inheritance in JavaScript is described by Douglas Crockford as:
You make prototype objects, and then ... make new instances. Objects are mutable in JavaScript, so we can augment the new instances, giving them new fields and methods. These can then act as prototypes for even newer objects. We don't need classes to make lots of similar objects... Objects inherit from objects. What could be more object oriented than that?
In JavaScript, an object is an associative array, augmented with a prototype (see below); each key provides the name for an object property, and there are two syntactical ways to specify such a name: dot notation (obj.x = 10) and bracket notation (obj['x'] = 10). A property may be added, rebound, or deleted at run-time. Most properties of an object (and any property that belongs to an object's prototype inheritance chain) can be enumerated using a for...in loop.
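A brief sketch of both notations, deletion, and enumeration:

const obj = {};
obj.x = 10;      // dot notation
obj["y"] = 20;   // bracket notation
delete obj.x;    // properties can be removed at run-time

for (const key in obj) {
  console.log(key, obj[key]); // enumerates enumerable properties, including inherited ones
}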
JavaScript uses prototypes where many other object-oriented languages use classes for inheritance. It is possible to simulate many class-based features with prototypes in JavaScript.
Functions double as object constructors, along with their typical role. Prefixing a function call with new will create an instance of a prototype, inheriting properties and methods from the constructor (including properties from the Object prototype). ECMAScript 5 offers the Object.create method, allowing explicit creation of an instance without automatically inheriting from the Object prototype (older environments can assign the prototype to null). The constructor's prototype property determines the object used for the new object's internal prototype. New methods can be added by modifying the prototype of the function used as a constructor. JavaScript's built-in constructors, such as Array or Object, also have prototypes that can be modified. While it is possible to modify the Object prototype, it is generally considered bad practice because most objects in JavaScript will inherit methods and properties from the Object prototype, and they may not expect the prototype to be modified.
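A minimal sketch of constructors and prototypes (the Animal name is illustrative only):

function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + " makes a sound"; };

const rex = new Animal("Rex");
console.log(rex.speak());                 // "Rex makes a sound" (method found on the prototype)

const bare = Object.create(null);         // an object with no prototype at all
console.log(Object.getPrototypeOf(bare)); // null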
Unlike in many object-oriented languages, in JavaScript there is no distinction between a function definition and a method definition. Rather, the distinction occurs during function calling. When a function is called as a method of an object, the function's local this keyword is bound to that object for that invocation.
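For example:

function whoAmI() { return this.name; }

const alice = { name: "Alice", identify: whoAmI };
console.log(alice.identify());             // "Alice": this is bound to alice for this call
console.log(whoAmI.call({ name: "Bob" })); // "Bob": this can also be set explicitly with call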
JavaScript functions are first-class; a function is considered to be an object. As such, a function may have properties and methods, such as .call() and .bind().
A nested function is a function defined within another function. It is created each time the outer function is invoked.
In addition, each nested function forms a lexical closure: the lexical scope of the outer function (including any constant, local variable, or argument value) becomes part of the internal state of each inner function object, even after execution of the outer function concludes.
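A small sketch of a closure formed by a nested function:

function makeCounter() {
  let count = 0;              // local to makeCounter
  return function () {        // the nested function closes over count
    count += 1;
    return count;
  };
}

const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2  (count persists after makeCounter has returned)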
JavaScript also supports anonymous functions.
JavaScript supports implicit and explicit delegation.
JavaScript natively supports various function-based implementations of Role patterns like Traits and Mixins. Such a function defines additional behavior by at least one method bound to the this keyword within its function body. A Role then has to be delegated explicitly via call or apply to objects that need to feature additional behavior that is not shared via the prototype chain.
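A minimal sketch of such a function-based role, applied with call (the Serializable name is illustrative):

function Serializable() {
  this.serialize = function () { return JSON.stringify(this); };
}

const point = { x: 1, y: 2 };
Serializable.call(point);        // explicit delegation: the role augments point
console.log(point.serialize());  // '{"x":1,"y":2}' (function-valued properties are skipped by JSON.stringify)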
Whereas explicit function-based delegation does cover composition in JavaScript, implicit delegation already happens every time the prototype chain is walked in order to, e.g., find a method that might be related to but is not directly owned by an object. Once the method is found it gets called within this object's context. Thus inheritance in JavaScript is covered by a delegation automatism that is bound to the prototype property of constructor functions.
JavaScript is a zero-index language.
An indefinite number of parameters can be passed to a function. The function can access them through formal parameters and also through the local arguments object. Variadic functions can also be created by using the bind method.
Like in many scripting languages, arrays and objects (associative arrays in other languages) can each be created with a succinct shortcut syntax. In fact, these literals form the basis of the JSON data format.
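For example:

const position = { x: 1, y: 2 };   // object literal
const primes = [2, 3, 5, 7];       // array literal
console.log(JSON.stringify({ position, primes }));
// {"position":{"x":1,"y":2},"primes":[2,3,5,7]}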
In a manner similar to Perl, JavaScript also supports regular expressions, which provide a concise and powerful syntax for text manipulation that is more sophisticated than the built-in string functions.
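A brief example:

const text = "The years 1995 and 2015";
console.log(text.match(/\d{4}/g));   // ["1995", "2015"], using a regular expression literal
console.log(/^The/.test(text));      // true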
JavaScript supports promises and Async/await for handling asynchronous operations.
A built-in Promise object provides functionality for handling promises and associating handlers with an asynchronous action's eventual result. Recently, the JavaScript specification introduced combinator methods, which allow developers to combine multiple JavaScript promises and do operations based on different scenarios. The methods introduced are: Promise.race, Promise.all, Promise.allSettled and Promise.any.
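A small sketch using two of these combinators:

const fast = new Promise(resolve => setTimeout(() => resolve("fast"), 10));
const slow = new Promise(resolve => setTimeout(() => resolve("slow"), 100));

Promise.race([fast, slow]).then(winner => console.log(winner)); // "fast"
Promise.all([fast, slow]).then(values => console.log(values));  // ["fast", "slow"]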
Async/await allows an asynchronous, non-blocking function to be structured in a way similar to an ordinary synchronous function: such code can be written with minimal overhead while remaining non-blocking.
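A minimal sketch, assuming a runtime that provides fetch (such as a browser or recent Node.js); the URL is a placeholder:

async function getSnippet(url) {
  const response = await fetch(url);  // suspends this function without blocking the event loop
  const body = await response.text();
  return body.slice(0, 80);
}

getSnippet("https://example.com")
  .then(snippet => console.log(snippet))
  .catch(err => console.error(err));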
Historically, some JavaScript engines supported these non-standard features:
Variables in JavaScript can be defined using the var, let or const keywords. Variables assigned without any declaring keyword are created in the global scope (in non-strict mode; in strict mode such assignments throw an error).
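For example:

var a = 1;    // function-scoped
let b = 2;    // block-scoped, reassignable
const c = 3;  // block-scoped, not reassignable
d = 4;        // no keyword: becomes a global property (an error in strict mode)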
Note the comments in the examples above, all of which were preceded with two forward slashes.
There is no built-in input/output functionality in JavaScript; instead, it is provided by the run-time environment. The ECMAScript specification in edition 5.1 mentions that "there are no provisions in this specification for input of external data or output of computed results". However, most runtime environments have a console object that can be used to print output. Here is a minimalist Hello World program in JavaScript in a runtime environment with a console object:
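// A typical minimal version:
console.log("Hello, World!");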
In HTML documents, a program like this is required for an output:
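<!-- One common form (a sketch; placing the script at the end of the body ensures document.body exists): -->
<script>
  document.body.appendChild(document.createTextNode("Hello, World!"));
</script>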
A simple recursive function to calculate the factorial of a natural number:
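// An illustrative version:
function factorial(n) {
  if (n === 0) return 1;       // base case: 0! = 1
  return n * factorial(n - 1); // recursive step
}

console.log(factorial(5)); // 120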
An anonymous function (or lambda):
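// An illustrative sketch: an anonymous function expression assigned to a variable
var v = 1;
var getValue = function () { return v; };

v = 2;
console.log(getValue()); // 2, not 1: the closure sees the current value of v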
This example shows that, in JavaScript, function closures capture their non-local variables by reference.
Arrow functions were first introduced in the 6th Edition (ECMAScript 2015). They shorten the syntax for writing functions in JavaScript. Arrow functions are anonymous, so a variable is needed to refer to them in order to invoke them after their creation, unless they are surrounded by parentheses and executed immediately.
Example of arrow function:
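// An illustrative sketch:
const add = (a, b) => a + b;     // arrow function assigned to a variable
console.log(add(2, 3));          // 5

console.log(((x) => x * x)(4));  // 16: wrapped in parentheses and executed immediately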
In JavaScript, objects can be created as instances of a class.
Object class example:
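// A minimal illustrative class (the Person name is chosen for the example):
class Person {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return "Hello, I am " + this.name;
  }
}

const ada = new Person("Ada");
console.log(ada.greet()); // "Hello, I am Ada"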
In JavaScript, objects can be instantiated directly from a function.
Object functional example:
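// A minimal illustrative sketch: a factory function that builds and returns an object
function createPerson(name) {
  return {
    name: name,
    greet: function () { return "Hello, I am " + name; }
  };
}

const grace = createPerson("Grace");
console.log(grace.greet()); // "Hello, I am Grace"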
Variadic function demonstration (arguments is a special variable):
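// An illustrative sketch:
function sum() {
  let total = 0;
  for (let i = 0; i < arguments.length; i++) { // arguments holds every value passed to the call
    total += arguments[i];
  }
  return total;
}

console.log(sum(1, 2, 3));       // 6
console.log(sum(1, 2, 3, 4, 5)); // 15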
Immediately-invoked function expressions are often used to create closures. Closures allow gathering properties and methods in a namespace and making some of them private:
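// A minimal illustrative module built from an immediately-invoked function expression:
const counterModule = (function () {
  let count = 0; // private: visible only inside the closure
  return {
    increment: function () { count += 1; return count; },
    reset: function () { count = 0; }
  };
})();

console.log(counterModule.increment()); // 1
console.log(counterModule.increment()); // 2
console.log(counterModule.count);       // undefined (the private variable is not exposed)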
Generator objects (in the form of generator functions) provide a function which can be called, exited, and re-entered while maintaining internal context (statefulness).
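A small sketch of a generator function:

function* fibonacci() {
  let [a, b] = [0, 1];
  while (true) {
    yield a;              // execution pauses here and resumes on the next call to next()
    [a, b] = [b, a + b];
  }
}

const fib = fibonacci();
console.log(fib.next().value); // 0
console.log(fib.next().value); // 1
console.log(fib.next().value); // 1
console.log(fib.next().value); // 2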
JavaScript can export and import from modules:
Export example:
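// An illustrative module file (the name mymodule.js is assumed for the example):
export function cube(x) {
  return x * x * x;
}
export const magicNumber = 42;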
Import example:
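// An illustrative importing file, assuming the module above is saved as mymodule.js:
import { cube, magicNumber } from "./mymodule.js";

console.log(cube(3));      // 27
console.log(magicNumber);  // 42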
This sample code displays various JavaScript features.
The following output should be displayed in the browser window.
JavaScript and the DOM provide the potential for malicious authors to deliver scripts to run on a client computer via the Web. Browser authors minimize this risk using two restrictions. First, scripts run in a sandbox in which they can only perform Web-related actions, not general-purpose programming tasks like creating files. Second, scripts are constrained by the same-origin policy: scripts from one Website do not have access to information such as usernames, passwords, or cookies sent to another site. Most JavaScript-related security bugs are breaches of either the same origin policy or the sandbox.
There are subsets of general JavaScript—ADsafe, Secure ECMAScript (SES)—that provide greater levels of security, especially on code created by third parties (such as advertisements). Closure Toolkit is another project for safe embedding and isolation of third-party JavaScript and HTML.
Content Security Policy is the main intended method of ensuring that only trusted code is executed on a Web page.
A common JavaScript-related security problem is cross-site scripting (XSS), a violation of the same-origin policy. XSS vulnerabilities occur when an attacker can cause a target Website, such as an online banking website, to include a malicious script in the webpage presented to a victim. The script in this example can then access the banking application with the privileges of the victim, potentially disclosing secret information or transferring money without the victim's authorization. A solution to XSS vulnerabilities is to use HTML escaping whenever displaying untrusted data.
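A minimal escaping sketch (illustrative only; real applications usually rely on a framework's or templating library's built-in escaping):

function escapeHTML(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHTML('<img src=x onerror="alert(1)">'));
// &lt;img src=x onerror=&quot;alert(1)&quot;&gt;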
Some browsers include partial protection against reflected XSS attacks, in which the attacker provides a URL including malicious script. However, even users of those browsers are vulnerable to other XSS attacks, such as those where the malicious code is stored in a database. Only correct design of Web applications on the server-side can fully prevent XSS.
XSS vulnerabilities can also occur because of implementation mistakes by browser authors.
Another cross-site vulnerability is cross-site request forgery (CSRF). In CSRF, code on an attacker's site tricks the victim's browser into taking actions the user did not intend at a target site (like transferring money at a bank). When target sites rely solely on cookies for request authentication, requests originating from code on the attacker's site can carry the same valid login credentials of the initiating user. In general, the solution to CSRF is to require an authentication value in a hidden form field, and not only in the cookies, to authenticate any request that might have lasting effects. Checking the HTTP Referer header can also help.
"JavaScript hijacking" is a type of CSRF attack in which a <script> tag on an attacker's site exploits a page on the victim's site that returns private information such as JSON or JavaScript. Possible solutions include:
Developers of client-server applications must recognize that untrusted clients may be under the control of attackers. The application author cannot assume that their JavaScript code will run as intended (or at all) because any secret embedded in the code could be extracted by a determined adversary. Some implications are:
Package management systems such as npm and Bower are popular with JavaScript developers. Such systems allow a developer to easily manage their program's dependencies upon other developers' program libraries. Developers trust that the maintainers of the libraries will keep them secure and up to date, but that is not always the case. A vulnerability has emerged because of this blind trust. Relied-upon libraries can have new releases that cause bugs or vulnerabilities to appear in all programs that rely upon the libraries. Conversely, a library can go unpatched with known vulnerabilities out in the wild. In a study of a sample of 133,000 websites, researchers found 37% of the websites included a library with at least one known vulnerability. "The median lag between the oldest library version used on each website and the newest available version of that library is 1,177 days in ALEXA, and development of some libraries still in active use ceased years ago." Another possibility is that the maintainer of a library may remove the library entirely. This occurred in March 2016 when Azer Koçulu removed his repository from npm. This caused tens of thousands of programs and websites depending upon his libraries to break.
JavaScript provides an interface to a wide range of browser capabilities, some of which may have flaws such as buffer overflows. These flaws can allow attackers to write scripts that would run any code they wish on the user's system. This code is not by any means limited to another JavaScript application. For example, a buffer overrun exploit can allow an attacker to gain access to the operating system's API with superuser privileges.
These flaws have affected major browsers including Firefox, Internet Explorer, and Safari.
Plugins, such as video players, Adobe Flash, and the wide range of ActiveX controls enabled by default in Microsoft Internet Explorer, may also have flaws exploitable via JavaScript (such flaws have been exploited in the past).
In Windows Vista, Microsoft has attempted to contain the risks of bugs such as buffer overflows by running the Internet Explorer process with limited privileges. Google Chrome similarly confines its page renderers to their own "sandbox".
Web browsers are capable of running JavaScript outside the sandbox, with the privileges necessary to, for example, create or delete files. Such privileges are not intended to be granted to code from the Web.
Incorrectly granting privileges to JavaScript from the Web has played a role in vulnerabilities in both Internet Explorer and Firefox. In Windows XP Service Pack 2, Microsoft demoted JScript's privileges in Internet Explorer.
Microsoft Windows allows JavaScript source files on a computer's hard drive to be launched as general-purpose, non-sandboxed programs (see: Windows Script Host). This makes JavaScript (like VBScript) a theoretically viable vector for a Trojan horse, although JavaScript Trojan horses are uncommon in practice.
In 2015, a JavaScript-based proof-of-concept implementation of a rowhammer attack was described in a paper by security researchers.
In 2017, a JavaScript-based attack via browser was demonstrated that could bypass ASLR. It is called "ASLR⊕Cache" or AnC.
In 2018, the paper that announced the Spectre attacks against Speculative Execution in Intel and other processors included a JavaScript implementation.
Important tools have evolved with the language.
A common misconception is that JavaScript is the same as Java. Both indeed have a C-like syntax (the C language being their most immediate common ancestor language). They are also typically sandboxed (when used inside a browser), and JavaScript was designed with Java's syntax and standard library in mind. In particular, all Java keywords were reserved in original JavaScript, JavaScript's standard library follows Java's naming conventions, and JavaScript's Math and Date objects are based on classes from Java 1.0.
Java and JavaScript both first appeared in 1995, but Java was developed by James Gosling of Sun Microsystems and JavaScript by Brendan Eich of Netscape Communications.
The differences between the two languages are more prominent than their similarities. Java has static typing, while JavaScript's typing is dynamic. Java is loaded from compiled bytecode, while JavaScript is loaded as human-readable source code. Java's objects are class-based, while JavaScript's are prototype-based. Finally, Java did not support functional programming until Java 8, while JavaScript has done so from the beginning, being influenced by Scheme.
JSON (JavaScript Object Notation, pronounced /ˈdʒeɪsən/; also /ˈdʒeɪˌsɒn/) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). It is a common data format with diverse uses in electronic data interchange, including that of web applications with servers.
JSON is a language-independent data format. It was derived from JavaScript, but many modern programming languages include code to generate and parse JSON-format data. JSON filenames use the extension .json.
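A brief example of generating and parsing JSON from JavaScript:

const record = { name: "Ada", languages: ["JS", "TS"] };

const text = JSON.stringify(record); // '{"name":"Ada","languages":["JS","TS"]}'
const copy = JSON.parse(text);       // back to an ordinary object
console.log(copy.languages[0]);      // "JS"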
TypeScript (TS) is a statically typed superset of JavaScript. TS differs by introducing type annotations to variables and functions, and by introducing a type language to describe the types within JS. Otherwise TS shares much the same feature set as JS, allowing it to be easily transpiled to JS for running client-side and to interoperate with other JS code.
Since 2017, web browsers have supported WebAssembly, a binary format that enables a JavaScript engine to execute performance-critical portions of web page scripts close to native speed. WebAssembly code runs in the same sandbox as regular JavaScript code.
asm.js is a subset of JavaScript that served as the forerunner of WebAssembly.
JavaScript is the dominant client-side language of the Web, and many websites are script-heavy. Thus transpilers have been created to convert code written in other languages, which can aid the development process.
Ajax (also AJAX /ˈeɪdʒæks/; short for "Asynchronous JavaScript and XML" or "Asynchronous JavaScript transfer (x-fer)") is a set of web development techniques that uses various web technologies on the client-side to create asynchronous web applications. With Ajax, web applications can send and retrieve data from a server asynchronously (in the background) without interfering with the display and behaviour of the existing page. By decoupling the data interchange layer from the presentation layer, Ajax allows web pages and, by extension, web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly utilize JSON instead of XML. | [
{
"paragraph_id": 0,
"text": "JavaScript (/ˈdʒɑːvəskrɪpt/), often abbreviated as JS, is a programming language and core technology of the World Wide Web, alongside HTML and CSS. As of 2023, 98.7% of websites use JavaScript on the client side for webpage behavior, often incorporating third-party libraries. All major web browsers have a dedicated JavaScript engine to execute the code on users' devices.",
"title": ""
},
{
"paragraph_id": 1,
"text": "JavaScript is a high-level, often just-in-time compiled language that conforms to the ECMAScript standard. It has dynamic typing, prototype-based object-orientation, and first-class functions. It is multi-paradigm, supporting event-driven, functional, and imperative programming styles. It has application programming interfaces (APIs) for working with text, dates, regular expressions, standard data structures, and the Document Object Model (DOM).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The ECMAScript standard does not include any input/output (I/O), such as networking, storage, or graphics facilities. In practice, the web browser or other runtime system provides JavaScript APIs for I/O.",
"title": ""
},
{
"paragraph_id": 3,
"text": "JavaScript engines were originally used only in web browsers, but are now core components of some servers and a variety of applications. The most popular runtime system for this usage is Node.js.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Although Java and JavaScript are similar in name, syntax, and respective standard libraries, the two languages are distinct and differ greatly in design.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The first popular web browser with a graphical user interface, Mosaic, was released in 1993. Accessible to non-technical people, it played a prominent role in the rapid growth of the nascent World Wide Web. The lead developers of Mosaic then founded the Netscape corporation, which released a more polished browser, Netscape Navigator, in 1994. This quickly became the most-used.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "During these formative years of the Web, web pages could only be static, lacking the capability for dynamic behavior after the page was loaded in the browser. There was a desire in the flourishing web development scene to remove this limitation, so in 1995, Netscape decided to add a scripting language to Navigator. They pursued two routes to achieve this: collaborating with Sun Microsystems to embed the Java programming language, while also hiring Brendan Eich to embed the Scheme language.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Netscape management soon decided that the best option was for Eich to devise a new language, with syntax similar to Java and less like Scheme or other extant scripting languages. Although the new language and its interpreter implementation were called LiveScript when first shipped as part of a Navigator beta in September 1995, the name was changed to JavaScript for the official release in December.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The choice of the JavaScript name has caused confusion, implying that it is directly related to Java. At the time, the dot-com boom had begun and Java was a popular new language, so Eich considered the JavaScript name a marketing ploy by Netscape.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Microsoft debuted Internet Explorer in 1995, leading to a browser war with Netscape. On the JavaScript front, Microsoft reverse-engineered the Navigator interpreter to create its own, called JScript.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "JavaScript was first released in 1996, alongside initial support for CSS and extensions to HTML. Each of these implementations was noticeably different from their counterparts in Navigator. These differences made it difficult for developers to make their websites work well in both browsers, leading to widespread use of \"best viewed in Netscape\" and \"best viewed in Internet Explorer\" logos for several years.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In November 1996, Netscape submitted JavaScript to Ecma International, as the starting point for a standard specification that all browser vendors could conform to. This led to the official release of the first ECMAScript language specification in June 1997.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The standards process continued for a few years, with the release of ECMAScript 2 in June 1998 and ECMAScript 3 in December 1999. Work on ECMAScript 4 began in 2000.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Meanwhile, Microsoft gained an increasingly dominant position in the browser market. By the early 2000s, Internet Explorer's market share reached 95%. This meant that JScript became the de facto standard for client-side scripting on the Web.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Microsoft initially participated in the standards process and implemented some proposals in its JScript language, but eventually it stopped collaborating on Ecma work. Thus ECMAScript 4 was mothballed.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "During the period of Internet Explorer dominance in the early 2000s, client-side scripting was stagnant. This started to change in 2004, when the successor of Netscape, Mozilla, released the Firefox browser. Firefox was well received by many, taking significant market share from Internet Explorer.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 2005, Mozilla joined ECMA International, and work started on the ECMAScript for XML (E4X) standard. This led to Mozilla working jointly with Macromedia (later acquired by Adobe Systems), who were implementing E4X in their ActionScript 3 language, which was based on an ECMAScript 4 draft. The goal became standardizing ActionScript 3 as the new ECMAScript 4. To this end, Adobe Systems released the Tamarin implementation as an open source project. However, Tamarin and ActionScript 3 were too different from established client-side scripting, and without cooperation from Microsoft, ECMAScript 4 never reached fruition.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Meanwhile, very important developments were occurring in open-source communities not affiliated with ECMA work. In 2005, Jesse James Garrett released a white paper in which he coined the term Ajax and described a set of technologies, of which JavaScript was the backbone, to create web applications where data can be loaded in the background, avoiding the need for full page reloads. This sparked a renaissance period of JavaScript, spearheaded by open-source libraries and the communities that formed around them. Many new libraries were created, including jQuery, Prototype, Dojo Toolkit, and MooTools.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Google debuted its Chrome browser in 2008, with the V8 JavaScript engine that was faster than its competition. The key innovation was just-in-time compilation (JIT), so other browser vendors needed to overhaul their engines for JIT.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In July 2008, these disparate parties came together for a conference in Oslo. This led to the eventual agreement in early 2009 to combine all relevant work and drive the language forward. The result was the ECMAScript 5 standard, released in December 2009.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Ambitious work on the language continued for several years, culminating in an extensive collection of additions and refinements being formalized with the publication of ECMAScript 6 in 2015.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The creation of Node.js in 2009 by Ryan Dahl sparked a significant increase in the usage of JavaScript outside of web browsers. Node combines the V8 engine, an event loop, and I/O APIs, thereby providing a stand-alone JavaScript runtime system. As of 2018, Node had been used by millions of developers, and npm had the most modules of any package manager in the world.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The ECMAScript draft specification is currently maintained openly on GitHub, and editions are produced via regular annual snapshots. Potential revisions to the language are vetted through a comprehensive proposal process. Now, instead of edition numbers, developers check the status of upcoming features individually.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The current JavaScript ecosystem has many libraries and frameworks, established programming practices, and substantial usage of JavaScript outside of web browsers. Plus, with the rise of single-page applications and other JavaScript-heavy websites, several transpilers have been created to aid the development process.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "\"JavaScript\" is a trademark of Oracle Corporation in the United States. The trademark was originally issued to Sun Microsystems on 6 May 1997, and was transferred to Oracle when they acquired Sun in 2009.",
"title": "Trademark"
},
{
"paragraph_id": 25,
"text": "JavaScript is the dominant client-side scripting language of the Web, with 98% of all websites (mid–2022) using it for this purpose. Scripts are embedded in or included from HTML documents and interact with the DOM. All major web browsers have a built-in JavaScript engine that executes the code on the user's device.",
"title": "Website client-side usage"
},
{
"paragraph_id": 26,
"text": "Over 80% of websites use a third-party JavaScript library or web framework for their client-side scripting.",
"title": "Website client-side usage"
},
{
"paragraph_id": 27,
"text": "jQuery is by far the most popular client-side library, used by over 75% of websites.",
"title": "Website client-side usage"
},
{
"paragraph_id": 28,
"text": "React (also known as React.js or ReactJS) is a free and open-source front-end JavaScript library for building user interfaces based on components. It is maintained by Meta (formerly Facebook) and a community of individual developers and companies.",
"title": "Website client-side usage"
},
{
"paragraph_id": 29,
"text": "In contrast, the term \"Vanilla JS\" has been coined for websites not using any libraries or frameworks, instead relying entirely on standard JavaScript functionality.",
"title": "Website client-side usage"
},
{
"paragraph_id": 30,
"text": "The use of JavaScript has expanded beyond its web browser roots. JavaScript engines are now embedded in a variety of other software systems, both for server-side website deployments and non-browser applications.",
"title": "Other usage"
},
{
"paragraph_id": 31,
"text": "Initial attempts at promoting server-side JavaScript usage were Netscape Enterprise Server and Microsoft's Internet Information Services, but they were small niches. Server-side usage eventually started to grow in the late 2000s, with the creation of Node.js and other approaches.",
"title": "Other usage"
},
{
"paragraph_id": 32,
"text": "Electron, Cordova, React Native, and other application frameworks have been used to create many applications with behavior implemented in JavaScript. Other non-browser applications include Adobe Acrobat support for scripting PDF documents and GNOME Shell extensions written in JavaScript.",
"title": "Other usage"
},
{
"paragraph_id": 33,
"text": "JavaScript has recently begun to appear in some embedded systems, usually by leveraging Node.js.",
"title": "Other usage"
},
{
"paragraph_id": 34,
"text": "A JavaScript engine is a software component that executes JavaScript code. The first JavaScript engines were mere interpreters, but all relevant modern engines use just-in-time compilation for improved performance.",
"title": "Execution system"
},
{
"paragraph_id": 35,
"text": "JavaScript engines are typically developed by web browser vendors, and every major browser has one. In a browser, the JavaScript engine runs in concert with the rendering engine via the Document Object Model.",
"title": "Execution system"
},
{
"paragraph_id": 36,
"text": "The use of JavaScript engines is not limited to browsers. For example, the V8 engine is a core component of the Node.js and Deno runtime systems.",
"title": "Execution system"
},
{
"paragraph_id": 37,
"text": "JavaScript typically relies on a run-time environment (e.g., a web browser) to provide objects and methods by which scripts can interact with the environment (e.g., a web page DOM). These environments are single-threaded. JavaScript also relies on the run-time environment to provide the ability to include/import scripts (e.g., HTML <script> elements). This is not a language feature per se, but it is common in most JavaScript implementations. JavaScript processes messages from a queue one at a time. JavaScript calls a function associated with each new message, creating a call stack frame with the function's arguments and local variables. The call stack shrinks and grows based on the function's needs. When the call stack is empty upon function completion, JavaScript proceeds to the next message in the queue. This is called the event loop, described as \"run to completion\" because each message is fully processed before the next message is considered. However, the language's concurrency model describes the event loop as non-blocking: program input/output is performed using events and callback functions. This means, for instance, that JavaScript can process a mouse click while waiting for a database query to return information.",
"title": "Execution system"
},
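As an editorial illustration of the run-to-completion event loop described above (not part of the original article; a minimal sketch assuming a runtime with console and setTimeout):

    console.log("first");
    setTimeout(() => console.log("third"), 0); // the callback is queued as a new message
    console.log("second");                     // the current message runs to completion first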
{
"paragraph_id": 38,
"text": "The following features are common to all conforming ECMAScript implementations unless explicitly specified otherwise.",
"title": "Features"
},
{
"paragraph_id": 39,
"text": "JavaScript supports much of the structured programming syntax from C (e.g., if statements, while loops, switch statements, do while loops, etc.). One partial exception is scoping: originally JavaScript only had function scoping with var; block scoping was added in ECMAScript 2015 with the keywords let and const. Like C, JavaScript makes a distinction between expressions and statements. One syntactic difference from C is automatic semicolon insertion, which allow semicolons (which terminate statements) to be omitted.",
"title": "Features"
},
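A minimal sketch of the scoping and semicolon-insertion points above (illustrative only; identifiers are arbitrary):

    function scopes() {
      if (true) {
        var f = 1;   // var is function-scoped
        let b = 2;   // let (ES2015) is block-scoped
      }
      console.log(f);    // 1
      // console.log(b); // ReferenceError: b is not defined
    }
    scopes();
    const x = 1          // automatic semicolon insertion: the terminator may be omitted
    const y = 2;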
{
"paragraph_id": 40,
"text": "JavaScript is weakly typed, which means certain types are implicitly cast depending on the operation used.",
"title": "Features"
},
{
"paragraph_id": 41,
"text": "Values are cast to strings like the following:",
"title": "Features"
},
{
"paragraph_id": 42,
"text": "Values are cast to numbers by casting to strings and then casting the strings to numbers. These processes can be modified by defining toString and valueOf functions on the prototype for string and number casting respectively.",
"title": "Features"
},
{
"paragraph_id": 43,
"text": "JavaScript has received criticism for the way it implements these conversions as the complexity of the rules can be mistaken for inconsistency. For example, when adding a number to a string, the number will be cast to a string before performing concatenation, but when subtracting a number from a string, the string is cast to a number before performing subtraction.",
"title": "Features"
},
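To make the coercion rules above concrete, a small sketch (illustrative only; the object and values are arbitrary):

    console.log("2" + 3);  // "23": the number is cast to a string, then concatenated
    console.log("5" - 2);  // 3:    the string is cast to a number, then subtracted
    const obj = {
      toString() { return "custom"; },  // used when casting to a string
      valueOf()  { return 42; }         // used when casting to a number
    };
    console.log(String(obj)); // "custom"
    console.log(obj + 1);     // 43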
{
"paragraph_id": 44,
"text": "Often also mentioned is {} + [] resulting in 0 (number). This is misleading: the {} is interpreted as an empty code block instead of an empty object, and the empty array is cast to a number by the remaining unary + operator. If you wrap the expression in parentheses ({} + []) the curly brackets are interpreted as an empty object and the result of the expression is \"[object Object]\" as expected.",
"title": "Features"
},
{
"paragraph_id": 45,
"text": "JavaScript is dynamically typed like most other scripting languages. A type is associated with a value rather than an expression. For example, a variable initially bound to a number may be reassigned to a string. JavaScript supports various ways to test the type of objects, including duck typing.",
"title": "Features"
},
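A brief sketch of the dynamic typing and duck typing described above (names are arbitrary):

    let value = 42;
    console.log(typeof value); // "number"
    value = "forty-two";       // the same variable may later hold a string
    console.log(typeof value); // "string"
    // Duck typing: check for the capability rather than the declared type
    console.log(typeof value.slice === "function"); // true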
{
"paragraph_id": 46,
"text": "JavaScript includes an eval function that can execute statements provided as strings at run-time.",
"title": "Features"
},
{
"paragraph_id": 47,
"text": "Prototypal inheritance in JavaScript is described by Douglas Crockford as:",
"title": "Features"
},
{
"paragraph_id": 48,
"text": "You make prototype objects, and then ... make new instances. Objects are mutable in JavaScript, so we can augment the new instances, giving them new fields and methods. These can then act as prototypes for even newer objects. We don't need classes to make lots of similar objects... Objects inherit from objects. What could be more object oriented than that?",
"title": "Features"
},
{
"paragraph_id": 49,
"text": "In JavaScript, an object is an associative array, augmented with a prototype (see below); each key provides the name for an object property, and there are two syntactical ways to specify such a name: dot notation (obj.x = 10) and bracket notation (obj['x'] = 10). A property may be added, rebound, or deleted at run-time. Most properties of an object (and any property that belongs to an object's prototype inheritance chain) can be enumerated using a for...in loop.",
"title": "Features"
},
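A minimal sketch of the property notations and for...in enumeration mentioned above (identifiers are arbitrary):

    const obj = {};
    obj.x = 10;       // dot notation
    obj["y z"] = 20;  // bracket notation accepts any string key
    delete obj.x;     // properties can be removed at run-time
    for (const key in obj) {
      console.log(key, obj[key]); // "y z" 20
    }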
{
"paragraph_id": 50,
"text": "JavaScript uses prototypes where many other object-oriented languages use classes for inheritance. It is possible to simulate many class-based features with prototypes in JavaScript.",
"title": "Features"
},
{
"paragraph_id": 51,
"text": "Functions double as object constructors, along with their typical role. Prefixing a function call with new will create an instance of a prototype, inheriting properties and methods from the constructor (including properties from the Object prototype). ECMAScript 5 offers the Object.create method, allowing explicit creation of an instance without automatically inheriting from the Object prototype (older environments can assign the prototype to null). The constructor's prototype property determines the object used for the new object's internal prototype. New methods can be added by modifying the prototype of the function used as a constructor. JavaScript's built-in constructors, such as Array or Object, also have prototypes that can be modified. While it is possible to modify the Object prototype, it is generally considered bad practice because most objects in JavaScript will inherit methods and properties from the Object prototype, and they may not expect the prototype to be modified.",
"title": "Features"
},
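A small sketch of constructor functions, prototype methods, and Object.create as outlined above (illustrative only; Point is an arbitrary example):

    function Point(x, y) {       // an ordinary function used as a constructor
      this.x = x;
      this.y = y;
    }
    Point.prototype.norm = function () {   // shared via the prototype
      return Math.hypot(this.x, this.y);
    };
    const p = new Point(3, 4);
    console.log(p.norm());                     // 5
    const bare = Object.create(null);          // an instance with no prototype at all
    console.log(Object.getPrototypeOf(bare));  // null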
{
"paragraph_id": 52,
"text": "Unlike in many object-oriented languages, in JavaScript there is no distinction between a function definition and a method definition. Rather, the distinction occurs during function calling. When a function is called as a method of an object, the function's local this keyword is bound to that object for that invocation.",
"title": "Features"
},
{
"paragraph_id": 53,
"text": "JavaScript functions are first-class; a function is considered to be an object. As such, a function may have properties and methods, such as .call() and .bind().",
"title": "Features"
},
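To illustrate how this is bound at call time and how call and bind work (a sketch; names are arbitrary):

    function describe() {
      return "I am " + this.name;
    }
    const cat = { name: "Misha", describe };        // the same function attached as a method
    console.log(cat.describe());                    // "I am Misha" (this === cat)
    console.log(describe.call({ name: "Rex" }));    // "I am Rex": explicit this via call
    const bound = describe.bind({ name: "Luna" });  // functions are objects with methods
    console.log(bound());                           // "I am Luna"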
{
"paragraph_id": 54,
"text": "A nested function is a function defined within another function. It is created each time the outer function is invoked.",
"title": "Features"
},
{
"paragraph_id": 55,
"text": "In addition, each nested function forms a lexical closure: the lexical scope of the outer function (including any constant, local variable, or argument value) becomes part of the internal state of each inner function object, even after execution of the outer function concludes.",
"title": "Features"
},
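A minimal closure sketch matching the two paragraphs above (illustrative only):

    function makeCounter() {
      let count = 0;           // captured by the nested function below
      return function () {     // a new inner function on each call to makeCounter
        count += 1;
        return count;
      };
    }
    const next = makeCounter();
    console.log(next(), next(), next()); // 1 2 3 (count outlives makeCounter)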
{
"paragraph_id": 56,
"text": "JavaScript also supports anonymous functions.",
"title": "Features"
},
{
"paragraph_id": 57,
"text": "JavaScript supports implicit and explicit delegation.",
"title": "Features"
},
{
"paragraph_id": 58,
"text": "JavaScript natively supports various function-based implementations of Role patterns like Traits and Mixins. Such a function defines additional behavior by at least one method bound to the this keyword within its function body. A Role then has to be delegated explicitly via call or apply to objects that need to feature additional behavior that is not shared via the prototype chain.",
"title": "Features"
},
{
"paragraph_id": 59,
"text": "Whereas explicit function-based delegation does cover composition in JavaScript, implicit delegation already happens every time the prototype chain is walked in order to, e.g., find a method that might be related to but is not directly owned by an object. Once the method is found it gets called within this object's context. Thus inheritance in JavaScript is covered by a delegation automatism that is bound to the prototype property of constructor functions.",
"title": "Features"
},
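A small sketch of a function-based Role (mixin) applied explicitly via call, as described above (the Serializable name is an arbitrary example):

    function Serializable() {            // a Role: behaviour bound to `this`
      this.serialize = function () {
        return JSON.stringify(this);
      };
    }
    const point = { x: 1, y: 2 };
    Serializable.call(point);            // explicit delegation augments the object
    console.log(point.serialize());      // '{"x":1,"y":2}' (functions are skipped by JSON.stringify)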
{
"paragraph_id": 60,
"text": "JavaScript is a zero-index language.",
"title": "Features"
},
{
"paragraph_id": 61,
"text": "An indefinite number of parameters can be passed to a function. The function can access them through formal parameters and also through the local arguments object. Variadic functions can also be created by using the bind method.",
"title": "Features"
},
{
"paragraph_id": 62,
"text": "Like in many scripting languages, arrays and objects (associative arrays in other languages) can each be created with a succinct shortcut syntax. In fact, these literals form the basis of the JSON data format.",
"title": "Features"
},
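For illustration, the literal shortcuts mentioned above (values are arbitrary):

    const numbers = [1, 2, 3];                          // array literal
    const person = { name: "Ada", languages: ["JS"] };  // object literal
    console.log(JSON.stringify(person));                // '{"name":"Ada","languages":["JS"]}'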
{
"paragraph_id": 63,
"text": "In a manner similar to Perl, JavaScript also supports regular expressions, which provide a concise and powerful syntax for text manipulation that is more sophisticated than the built-in string functions.",
"title": "Features"
},
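A brief regular-expression sketch (illustrative; the pattern and strings are arbitrary):

    const re = /(\d{4})-(\d{2})-(\d{2})/;        // capture year, month, day
    const [, year, month, day] = "2015-06-17".match(re);
    console.log(year, month, day);               // "2015" "06" "17"
    console.log("a b   c".replace(/\s+/g, "-")); // "a-b-c"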
{
"paragraph_id": 64,
"text": "JavaScript supports promises and Async/await for handling asynchronous operations.",
"title": "Features"
},
{
"paragraph_id": 65,
"text": "A built-in Promise object provides functionality for handling promises and associating handlers with an asynchronous action's eventual result. Recently, the JavaScript specification introduced combinator methods, which allow developers to combine multiple JavaScript promises and do operations based on different scenarios. The methods introduced are: Promise.race, Promise.all, Promise.allSettled and Promise.any.",
"title": "Features"
},
{
"paragraph_id": 66,
"text": "Async/await allows an asynchronous, non-blocking function to be structured in a way similar to an ordinary synchronous function. Asynchronous, non-blocking code can be written, with minimal overhead, structured similarly to traditional synchronous, blocking code.",
"title": "Features"
},
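A minimal sketch of promises, a combinator method, and async/await as described above (delay is a hypothetical helper, not from the original article):

    const delay = (ms, value) =>
      new Promise(resolve => setTimeout(() => resolve(value), ms));
    async function main() {
      const one = await delay(10, 1);                              // reads like synchronous code
      const rest = await Promise.all([delay(5, 2), delay(5, 3)]);  // combinator method
      console.log(one, rest);                                      // 1 [ 2, 3 ]
    }
    main();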
{
"paragraph_id": 67,
"text": "Historically, some JavaScript engines supported these non-standard features:",
"title": "Features"
},
{
"paragraph_id": 68,
"text": "Variables in JavaScript can be defined using either the var, let or const keywords. Variables defined without keywords will be defined at the global scope.",
"title": "Syntax"
},
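The original article's variable examples are not included in this extract; a minimal equivalent sketch, run as a classic (non-strict) script (identifiers are arbitrary):

    var legacy = 1;    // function-scoped, hoisted
    let mutable = 2;   // block-scoped, reassignable
    const fixed = 3;   // block-scoped, cannot be reassigned
    implicit = 4;      // no keyword: becomes a global variable (an error in strict mode)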
{
"paragraph_id": 69,
"text": "Note the comments in the examples above, all of which were preceded with two forward slashes.",
"title": "Syntax"
},
{
"paragraph_id": 70,
"text": "There is no built-in Input/output functionality in JavaScript, instead it is provided by the run-time environment. The ECMAScript specification in edition 5.1 mentions that \"there are no provisions in this specification for input of external data or output of computed results\". However, most runtime environments have a console object that can be used to print output. Here is a minimalist Hello World program in JavaScript in a runtime environment with a console object:",
"title": "Syntax"
},
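A minimal Hello World in a runtime that provides a console object, as the paragraph above assumes:

    console.log("Hello, World!");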
{
"paragraph_id": 71,
"text": "In HTML documents, a program like this is required for an output:",
"title": "Syntax"
},
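The original HTML example is not reproduced in this extract; a minimal sketch of script output into a page, assuming the code runs inside an HTML document:

    // Placed in a <script> element, this writes text into the page body:
    document.body.textContent = "Hello, World!";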
{
"paragraph_id": 72,
"text": "A simple recursive function to calculate the factorial of a natural number:",
"title": "Syntax"
},
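A sketch of the recursive factorial described above:

    function factorial(n) {
      if (n === 0) return 1;         // base case: 0! = 1
      return n * factorial(n - 1);   // recursive step
    }
    console.log(factorial(5)); // 120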
{
"paragraph_id": 73,
"text": "An anonymous function (or lambda):",
"title": "Syntax"
},
{
"paragraph_id": 74,
"text": "This example shows that, in JavaScript, function closures capture their non-local variables by reference.",
"title": "Syntax"
},
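An illustrative anonymous function, also showing that the closure captures the outer variable by reference (names are arbitrary):

    let counter = 0;
    const increment = function () {  // anonymous function assigned to a variable
      counter += 1;                  // the outer variable itself is updated
    };
    increment();
    increment();
    console.log(counter); // 2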
{
"paragraph_id": 75,
"text": "Arrow functions were first introduced in 6th Edition - ECMAScript 2015. They shorten the syntax for writing functions in JavaScript. Arrow functions are anonymous, so a variable is needed to refer to them in order to invoke them after their creation, unless surrounded by parenthesis and executed immediately.",
"title": "Syntax"
},
{
"paragraph_id": 76,
"text": "Example of arrow function:",
"title": "Syntax"
},
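A minimal arrow-function sketch (the example in the original article may differ):

    const square = x => x * x;        // referenced through a variable
    console.log(square(6));           // 36
    console.log((() => "now")());     // parenthesized and invoked immediately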
{
"paragraph_id": 77,
"text": "In JavaScript, objects can be created as instances of a class.",
"title": "Syntax"
},
{
"paragraph_id": 78,
"text": "Object class example:",
"title": "Syntax"
},
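A small class sketch standing in for the article's example (the Rectangle name is arbitrary):

    class Rectangle {
      constructor(width, height) {
        this.width = width;
        this.height = height;
      }
      area() {
        return this.width * this.height;
      }
    }
    console.log(new Rectangle(3, 4).area()); // 12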
{
"paragraph_id": 79,
"text": "In JavaScript, objects can be instantiated directly from a function.",
"title": "Syntax"
},
{
"paragraph_id": 80,
"text": "Object functional example:",
"title": "Syntax"
},
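A sketch of instantiating an object directly from a function, standing in for the article's example (names are arbitrary):

    function Book(title) {            // called with `new`, the function builds the object
      this.title = title;
      this.describe = function () {
        return "A book called " + this.title;
      };
    }
    console.log(new Book("Ulysses").describe()); // "A book called Ulysses"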
{
"paragraph_id": 81,
"text": "Variadic function demonstration (arguments is a special variable):",
"title": "Syntax"
},
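A minimal variadic sketch using the arguments object, in place of the article's demonstration:

    function sum() {
      let total = 0;
      for (let i = 0; i < arguments.length; i++) {  // arguments holds every value passed
        total += arguments[i];
      }
      return total;
    }
    console.log(sum(1, 2, 3));       // 6
    console.log(sum(1, 2, 3, 4, 5)); // 15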
{
"paragraph_id": 82,
"text": "Immediately-invoked function expressions are often used to create closures. Closures allow gathering properties and methods in a namespace and making some of them private:",
"title": "Syntax"
},
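An illustrative immediately-invoked function expression creating a namespace with a private member (names are arbitrary):

    const counterModule = (function () {
      let count = 0;                       // private: reachable only through the closure
      return {
        increment() { count += 1; return count; },
        get() { return count; }
      };
    })();
    counterModule.increment();
    console.log(counterModule.get());   // 1
    console.log(counterModule.count);   // undefined: the variable itself is hidden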
{
"paragraph_id": 83,
"text": "Generator objects (in the form of generator functions) provide a function which can be called, exited, and re-entered while maintaining internal context (statefulness).",
"title": "Syntax"
},
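A brief generator sketch (illustrative only):

    function* countdown(n) {        // function* declares a generator
      while (n > 0) {
        yield n;                    // execution pauses here and resumes on the next call
        n -= 1;
      }
    }
    const it = countdown(3);
    console.log(it.next().value);   // 3
    console.log(it.next().value);   // 2
    console.log([...countdown(2)]); // [ 2, 1 ]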
{
"paragraph_id": 84,
"text": "JavaScript can export and import from modules:",
"title": "Syntax"
},
{
"paragraph_id": 85,
"text": "Export example:",
"title": "Syntax"
},
{
"paragraph_id": 86,
"text": "Import example:",
"title": "Syntax"
},
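Because this extract omits the article's code, here is a minimal module sketch shown as two hypothetical files (mathUtils.mjs and main.mjs are invented names):

    // mathUtils.mjs (exporting)
    export const TAU = 2 * Math.PI;
    export function circleArea(r) {
      return (TAU / 2) * r * r;
    }

    // main.mjs (importing)
    import { TAU, circleArea } from "./mathUtils.mjs";
    console.log(TAU, circleArea(1)); // 6.283185307179586 3.141592653589793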
{
"paragraph_id": 87,
"text": "This sample code displays various JavaScript features.",
"title": "Syntax"
},
{
"paragraph_id": 88,
"text": "The following output should be displayed in the browser window.",
"title": "Syntax"
},
{
"paragraph_id": 89,
"text": "JavaScript and the DOM provide the potential for malicious authors to deliver scripts to run on a client computer via the Web. Browser authors minimize this risk using two restrictions. First, scripts run in a sandbox in which they can only perform Web-related actions, not general-purpose programming tasks like creating files. Second, scripts are constrained by the same-origin policy: scripts from one Website do not have access to information such as usernames, passwords, or cookies sent to another site. Most JavaScript-related security bugs are breaches of either the same origin policy or the sandbox.",
"title": "Security"
},
{
"paragraph_id": 90,
"text": "There are subsets of general JavaScript—ADsafe, Secure ECMAScript (SES)—that provide greater levels of security, especially on code created by third parties (such as advertisements). Closure Toolkit is another project for safe embedding and isolation of third-party JavaScript and HTML.",
"title": "Security"
},
{
"paragraph_id": 91,
"text": "Content Security Policy is the main intended method of ensuring that only trusted code is executed on a Web page.",
"title": "Security"
},
{
"paragraph_id": 92,
"text": "A common JavaScript-related security problem is cross-site scripting (XSS), a violation of the same-origin policy. XSS vulnerabilities occur when an attacker can cause a target Website, such as an online banking website, to include a malicious script in the webpage presented to a victim. The script in this example can then access the banking application with the privileges of the victim, potentially disclosing secret information or transferring money without the victim's authorization. A solution to XSS vulnerabilities is to use HTML escaping whenever displaying untrusted data.",
"title": "Security"
},
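To illustrate the HTML-escaping mitigation mentioned above, a small helper sketch (illustrative only; real applications usually rely on their framework's or template engine's escaping):

    function escapeHtml(text) {
      return text
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }
    console.log(escapeHtml('<img src=x onerror="attack()">'));
    // &lt;img src=x onerror=&quot;attack()&quot;&gt;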
{
"paragraph_id": 93,
"text": "Some browsers include partial protection against reflected XSS attacks, in which the attacker provides a URL including malicious script. However, even users of those browsers are vulnerable to other XSS attacks, such as those where the malicious code is stored in a database. Only correct design of Web applications on the server-side can fully prevent XSS.",
"title": "Security"
},
{
"paragraph_id": 94,
"text": "XSS vulnerabilities can also occur because of implementation mistakes by browser authors.",
"title": "Security"
},
{
"paragraph_id": 95,
"text": "Another cross-site vulnerability is cross-site request forgery (CSRF). In CSRF, code on an attacker's site tricks the victim's browser into taking actions the user did not intend at a target site (like transferring money at a bank). When target sites rely solely on cookies for request authentication, requests originating from code on the attacker's site can carry the same valid login credentials of the initiating user. In general, the solution to CSRF is to require an authentication value in a hidden form field, and not only in the cookies, to authenticate any request that might have lasting effects. Checking the HTTP Referrer header can also help.",
"title": "Security"
},
{
"paragraph_id": 96,
"text": "\"JavaScript hijacking\" is a type of CSRF attack in which a <script> tag on an attacker's site exploits a page on the victim's site that returns private information such as JSON or JavaScript. Possible solutions include:",
"title": "Security"
},
{
"paragraph_id": 97,
"text": "Developers of client-server applications must recognize that untrusted clients may be under the control of attackers. The application author cannot assume that their JavaScript code will run as intended (or at all) because any secret embedded in the code could be extracted by a determined adversary. Some implications are:",
"title": "Security"
},
{
"paragraph_id": 98,
"text": "Package management systems such as npm and Bower are popular with JavaScript developers. Such systems allow a developer to easily manage their program's dependencies upon other developers' program libraries. Developers trust that the maintainers of the libraries will keep them secure and up to date, but that is not always the case. A vulnerability has emerged because of this blind trust. Relied-upon libraries can have new releases that cause bugs or vulnerabilities to appear in all programs that rely upon the libraries. Inversely, a library can go unpatched with known vulnerabilities out in the wild. In a study done looking over a sample of 133,000 websites, researchers found 37% of the websites included a library with at least one known vulnerability. \"The median lag between the oldest library version used on each website and the newest available version of that library is 1,177 days in ALEXA, and development of some libraries still in active use ceased years ago.\" Another possibility is that the maintainer of a library may remove the library entirely. This occurred in March 2016 when Azer Koçulu removed his repository from npm. This caused tens of thousands of programs and websites depending upon his libraries to break.",
"title": "Security"
},
{
"paragraph_id": 99,
"text": "JavaScript provides an interface to a wide range of browser capabilities, some of which may have flaws such as buffer overflows. These flaws can allow attackers to write scripts that would run any code they wish on the user's system. This code is not by any means limited to another JavaScript application. For example, a buffer overrun exploit can allow an attacker to gain access to the operating system's API with superuser privileges.",
"title": "Security"
},
{
"paragraph_id": 100,
"text": "These flaws have affected major browsers including Firefox, Internet Explorer, and Safari.",
"title": "Security"
},
{
"paragraph_id": 101,
"text": "Plugins, such as video players, Adobe Flash, and the wide range of ActiveX controls enabled by default in Microsoft Internet Explorer, may also have flaws exploitable via JavaScript (such flaws have been exploited in the past).",
"title": "Security"
},
{
"paragraph_id": 102,
"text": "In Windows Vista, Microsoft has attempted to contain the risks of bugs such as buffer overflows by running the Internet Explorer process with limited privileges. Google Chrome similarly confines its page renderers to their own \"sandbox\".",
"title": "Security"
},
{
"paragraph_id": 103,
"text": "Web browsers are capable of running JavaScript outside the sandbox, with the privileges necessary to, for example, create or delete files. Such privileges are not intended to be granted to code from the Web.",
"title": "Security"
},
{
"paragraph_id": 104,
"text": "Incorrectly granting privileges to JavaScript from the Web has played a role in vulnerabilities in both Internet Explorer and Firefox. In Windows XP Service Pack 2, Microsoft demoted JScript's privileges in Internet Explorer.",
"title": "Security"
},
{
"paragraph_id": 105,
"text": "Microsoft Windows allows JavaScript source files on a computer's hard drive to be launched as general-purpose, non-sandboxed programs (see: Windows Script Host). This makes JavaScript (like VBScript) a theoretically viable vector for a Trojan horse, although JavaScript Trojan horses are uncommon in practice.",
"title": "Security"
},
{
"paragraph_id": 106,
"text": "In 2015, a JavaScript-based proof-of-concept implementation of a rowhammer attack was described in a paper by security researchers.",
"title": "Security"
},
{
"paragraph_id": 107,
"text": "In 2017, a JavaScript-based attack via browser was demonstrated that could bypass ASLR. It is called \"ASLR⊕Cache\" or AnC.",
"title": "Security"
},
{
"paragraph_id": 108,
"text": "In 2018, the paper that announced the Spectre attacks against Speculative Execution in Intel and other processors included a JavaScript implementation.",
"title": "Security"
},
{
"paragraph_id": 109,
"text": "Important tools have evolved with the language.",
"title": "Development tools"
},
{
"paragraph_id": 110,
"text": "A common misconception is that JavaScript is the same as Java. Both indeed have a C-like syntax (the C language being their most immediate common ancestor language). They are also typically sandboxed (when used inside a browser), and JavaScript was designed with Java's syntax and standard library in mind. In particular, all Java keywords were reserved in original JavaScript, JavaScript's standard library follows Java's naming conventions, and JavaScript's Math and Date objects are based on classes from Java 1.0.",
"title": "Related technologies"
},
{
"paragraph_id": 111,
"text": "Java and JavaScript both first appeared in 1995, but Java was developed by James Gosling of Sun Microsystems and JavaScript by Brendan Eich of Netscape Communications.",
"title": "Related technologies"
},
{
"paragraph_id": 112,
"text": "The differences between the two languages are more prominent than their similarities. Java has static typing, while JavaScript's typing is dynamic. Java is loaded from compiled bytecode, while JavaScript is loaded as human-readable source code. Java's objects are class-based, while JavaScript's are prototype-based. Finally, Java did not support functional programming until Java 8, while JavaScript has done so from the beginning, being influenced by Scheme.",
"title": "Related technologies"
},
{
"paragraph_id": 113,
"text": "JSON (JavaScript Object Notation, pronounced /ˈdʒeɪsən/; also /ˈdʒeɪˌsɒn/) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). It is a common data format with diverse uses in electronic data interchange, including that of web applications with servers.",
"title": "Related technologies"
},
{
"paragraph_id": 114,
"text": "JSON is a language-independent data format. It was derived from JavaScript, but many modern programming languages include code to generate and parse JSON-format data. JSON filenames use the extension .json.",
"title": "Related technologies"
},
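For illustration, parsing and generating JSON from JavaScript (values are arbitrary):

    const text = '{"name":"Ada","languages":["JS"]}';
    const data = JSON.parse(text);       // JSON text -> JavaScript object
    console.log(data.languages[0]);      // "JS"
    console.log(JSON.stringify(data));   // object -> JSON text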
{
"paragraph_id": 115,
"text": "TypeScript (TS) is a strictly-typed variant of JavaScript. TS differs by introducing type annotations to variables and functions, and introducing a type language to describe the types within JS. Otherwise TS shares much the same featureset as JS, to allow it to be easily transpiled to JS for running client-side, and to interoperate with other JS code.",
"title": "Related technologies"
},
{
"paragraph_id": 116,
"text": "Since 2017, web browsers have supported WebAssembly, a binary format that enables a JavaScript engine to execute performance-critical portions of web page scripts close to native speed. WebAssembly code runs in the same sandbox as regular JavaScript code.",
"title": "Related technologies"
},
{
"paragraph_id": 117,
"text": "asm.js is a subset of JavaScript that served as the forerunner of WebAssembly.",
"title": "Related technologies"
},
{
"paragraph_id": 118,
"text": "",
"title": "Related technologies"
},
{
"paragraph_id": 119,
"text": "JavaScript is the dominant client-side language of the Web, and many websites are script-heavy. Thus transpilers have been created to convert code written in other languages, which can aid the development process.",
"title": "Related technologies"
},
{
"paragraph_id": 120,
"text": "Ajax (also AJAX /ˈeɪdʒæks/; short for \"Asynchronous JavaScript and XML\" or \"Asynchronous JavaScript transfer (x-fer) is a set of web development techniques that uses various web technologies on the client-side to create asynchronous web applications. With Ajax, web applications can send and retrieve data from a server asynchronously (in the background) without interfering with the display and behaviour of the existing page. By decoupling the data interchange layer from the presentation layer, Ajax allows web pages and, by extension, web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly utilize JSON instead of XML.",
"title": "Related technologies"
}
]
| JavaScript, often abbreviated as JS, is a programming language and core technology of the World Wide Web, alongside HTML and CSS. As of 2023, 98.7% of websites use JavaScript on the client side for webpage behavior, often incorporating third-party libraries. All major web browsers have a dedicated JavaScript engine to execute the code on users' devices. JavaScript is a high-level, often just-in-time compiled language that conforms to the ECMAScript standard. It has dynamic typing, prototype-based object-orientation, and first-class functions. It is multi-paradigm, supporting event-driven, functional, and imperative programming styles. It has application programming interfaces (APIs) for working with text, dates, regular expressions, standard data structures, and the Document Object Model (DOM). The ECMAScript standard does not include any input/output (I/O), such as networking, storage, or graphics facilities. In practice, the web browser or other runtime system provides JavaScript APIs for I/O. JavaScript engines were originally used only in web browsers, but are now core components of some servers and a variety of applications. The most popular runtime system for this usage is Node.js. Although Java and JavaScript are similar in name, syntax, and respective standard libraries, the two languages are distinct and differ greatly in design. | 2001-11-19T00:23:45Z | 2023-12-23T22:21:30Z | [
"Template:As of?",
"Template:Blockquote",
"Template:Citation",
"Template:Portal bar",
"Template:Anchor",
"Template:Further",
"Template:Webarchive",
"Template:Cite book",
"Template:Cite arXiv",
"Template:Cite web",
"Template:Web browsers",
"Template:Nowrap",
"Template:Main",
"Template:Excerpt",
"Template:Reflist",
"Template:Distinguish",
"Template:Sfn",
"Template:Sister project links",
"Template:Curlie",
"Template:Programming languages",
"Template:Authority control",
"Template:NodeJs",
"Template:Infobox programming language",
"Template:See also",
"Template:Failed verification",
"Template:Cite news",
"Template:Spoken Wikipedia",
"Template:JavaScript",
"Template:Short description",
"Template:For",
"Template:IPAc-en",
"Template:Code",
"Template:Cite magazine",
"Template:ECMAScript",
"Template:Redirect",
"Template:Pp-pc",
"Template:As of",
"Template:Cn",
"Template:Isbn",
"Template:ISBN"
]
| https://en.wikipedia.org/wiki/JavaScript |
9,846 | Elbing (disambiguation) | Elbing is the German name of Elbląg, a city in northern Poland which until 1945 was a German city in the province of East Prussia.
Elbing may also refer to: | [
{
"paragraph_id": 0,
"text": "Elbing is the German name of Elbląg, a city in northern Poland which until 1945 was a German city in the province of East Prussia.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Elbing may also refer to:",
"title": ""
}
]
| Elbing is the German name of Elbląg, a city in northern Poland which until 1945 was a German city in the province of East Prussia. Elbing may also refer to: | 2020-08-11T23:22:45Z | [
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Elbing_(disambiguation) |
|
9,855 | Exile | Exile, or banishment, is primarily penal expulsion from one's native country, and secondarily expatriation or prolonged absence from one's homeland under either the compulsion of circumstance or the rigors of some high purpose. Usually persons and peoples suffer exile, but sometimes social entities like institutions (e.g. the papacy or a government) are forced from their homeland.
In Roman law, exsilium denoted both voluntary exile and banishment as a capital punishment alternative to death. Deportation was forced exile, and entailed the lifelong loss of citizenship and property. Relegation was a milder form of deportation, which preserved the subject's citizenship and property.
The term diaspora describes group exile, both voluntary and forced. "Government in exile" describes a government of a country that has relocated and argues its legitimacy from outside that country. Voluntary exile is often depicted as a form of protest by the person who claims it, to avoid persecution and prosecution (such as tax or criminal allegations), an act of shame or repentance, or isolating oneself to be able to devote time to a particular pursuit.
Article 9 of the Universal Declaration of Human Rights states that "No one shall be subjected to arbitrary arrest, detention or exile."
In some cases the deposed head of state is allowed to go into exile following a coup or other change of government, allowing a more peaceful transition to take place or to escape justice.
A wealthy citizen who moves to a jurisdiction with lower taxes is termed a tax exile. Creative people such as authors and musicians who achieve sudden wealth sometimes choose this. Examples include the British-Canadian writer Arthur Hailey, who moved to the Bahamas to avoid taxes following the runaway success of his novels Hotel and Airport, and the English rock band the Rolling Stones who, in the spring of 1971, owed more in taxes than they could pay and left Britain before the government could seize their assets. Members of the band all moved to France for a period of time where they recorded music for the album that came to be called Exile on Main Street, the Main Street of the title referring to the French Riviera. In 2012, Eduardo Saverin, one of the founders of Facebook, made headlines by renouncing his U.S. citizenship before his company's IPO. The dual Brazilian/U.S. citizen's decision to move to Singapore and renounce his citizenship spurred a bill in the U.S. Senate, the Ex-PATRIOT Act, which would have forced such wealthy tax exiles to pay a special tax in order to re-enter the United States.
In some cases a person voluntarily lives in exile to avoid legal issues, such as litigation or criminal prosecution. An example of this is Asil Nadir, who fled to the Turkish Republic of Northern Cyprus for 17 years rather than face prosecution in connection with the failed £1.7 bn company Polly Peck in the United Kingdom.
Examples include:
Exile, government man and assigned servant were all euphemisms used in the 19th century for convicts under sentence who had been transported from Britain to Australia.
Comfortable exile is an alternative theory recently developed by anthropologist Binesh Balan in 2018. According to him, comfortable exile is a "social exile of people who have been excluded from the mainstream society. Such people are considered 'aliens' or internal 'others' on the grounds of their religious, racial, ethnic, linguistic or caste-based identity and therefore they migrate to a comfortable space elsewhere after having risked their lives to restore representation, identity and civil rights in their own country and often capture a comfortable identity to being part of a dominant religion, society or culture."
When a large group, or occasionally a whole people or nation, is exiled, it can be said that this nation is in exile, or "diaspora". Nations that have been in exile for substantial periods include the Israelites, deported by the Assyrian king Sargon II in 720 BCE; the Judeans, deported by the Babylonian king Nebuchadnezzar II in 586 BCE; and the Jews following the destruction of the Second Temple in Jerusalem in AD 70. Many Jewish prayers include a yearning to return to Jerusalem and Judea.
After the Partitions of Poland in the late 18th century, and following the uprisings (like Kościuszko Uprising, November Uprising and January Uprising) against the partitioning powers (Russia, Prussia and Austria), many Poles have chosen – or been forced – to go into exile, forming large diasporas (known as Polonia), especially in France and the United States. The entire population of Crimean Tatars (numbering 200,000 in all) that remained in their homeland of Crimea was exiled on 18 May 1944 to Central Asia as a form of ethnic cleansing and collective punishment on false accusations.
Since the Cuban Revolution, over a million Cubans have left Cuba. Most of these self-identified as exiles as their motivation for leaving the island is political in nature. At the time of the Cuban Revolution, Cuba only had a population of 6.5 million, and was not a country that had a history of significant emigration, it being the sixth largest recipient of immigrants in the world as of 1958. Most of the exiles' children also consider themselves to be Cuban exiles. Under Cuban law, children of Cubans born abroad are considered Cuban citizens. An extension of colonial practices, Latin America saw widespread exile, of a political variety, during the 19th and 20th century.
During a foreign occupation or after a coup d'état, a government in exile of such an afflicted country may be established abroad. One of the most well-known instances of this is the Polish government-in-exile, a government in exile that commanded Polish armed forces operating outside Poland after the German occupation during World War II. Other examples include the Free French Forces government of Charles de Gaulle of the same time, and the Central Tibetan Administration, commonly known as the Tibetan government-in-exile, headed by the 14th Dalai Lama.
Ivan the Terrible once exiled to Siberia an inanimate object: a bell. "When the inhabitants of the town of Uglich rang their bell to rally a demonstration against Ivan the Terrible, the cruel Czar executed two hundred (nobles), and exiled the Uglich bell to Siberia, where it remained for two hundred years."
Exile is an early motif in ancient Greek tragedy. In the ancient Greek world, this was seen as a fate worse than death. The motif reaches its peak on the play Medea, written by Euripides in the fifth century BC, and rooted in the very old oral traditions of Greek mythology. Euripides' Medea has remained the most frequently performed Greek tragedy through the 20th century.
After Medea was abandoned by Jason and had become a murderess out of revenge, she fled to Athens and married king Aigeus there, and became the stepmother of the hero Theseus. Due to a conflict with him, she must leave the Polis and go away into exile. John William Waterhouse (1849–1917), the English Pre-Raphaelite painter's famous picture Jason and Medea shows a key moment before, when Medea tries to poison Theseus.
In ancient Rome, the Roman Senate had the power to impose exile on individuals, families or even entire regions. One of the Roman victims was the poet Ovid, who lived during the reign of Augustus. He was forced to leave Rome and move to the city of Tomis on the Black Sea, now Constanța. There he wrote his famous work Tristia (Sorrows) about his bitter feelings in exile. Another, at least temporarily in exile, was Dante.
The German-language writer Franz Kafka described the exile of Karl Rossmann in the posthumously published novel Amerika.
During the period of National Socialism in the first few years after 1933, many Jews, as well as a significant number of German artists and intellectuals fled into exile; for instance, the authors Klaus Mann and Anna Seghers. So Germany's own exile literature emerged and received worldwide credit. Klaus Mann finished his novel Der Vulkan (The Volcano. A Novel Among Emigrants) in 1939 describing the German exile scene, "to bring the rich, scattered and murky experience of exile into epic form", as he wrote in his literary balance sheet. At the same place and in the same year, Anna Seghers published her famous novel Das siebte Kreuz (The Seventh Cross, published in the United States in 1942).
Important exile literature in recent years include that of the Caribbean, many of whose artists emigrated to Europe or the United States for political or economic reasons. These writers include Nobel Prize winners V. S. Naipaul and Derek Walcott as well as the novelists Edwidge Danticat and Sam Selvon.
Media related to Exile at Wikimedia Commons | [
{
"paragraph_id": 0,
"text": "Exile or banishment, is primarily penal expulsion from one's native country, and secondarily expatriation or prolonged absence from one's homeland under either the compulsion of circumstance or the rigors of some high purpose. Usually persons and peoples suffer exile, but sometimes social entities like institutions (e.g. the papacy or a government) are forced from their homeland.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In Roman law, exsilium denoted both voluntary exile and banishment as a capital punishment alternative to death. Deportation was forced exile, and entailed the lifelong loss of citizenship and property. Relegation was a milder form of deportation, which preserved the subject's citizenship and property.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term diaspora describes group exile, both voluntary and forced. \"Government in exile\" describes a government of a country that has relocated and argues its legitimacy from outside that country. Voluntary exile is often depicted as a form of protest by the person who claims it, to avoid persecution and prosecution (such as tax or criminal allegations), an act of shame or repentance, or isolating oneself to be able to devote time to a particular pursuit.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Article 9 of the Universal Declaration of Human Rights states that \"No one shall be subjected to arbitrary arrest, detention or exile.\"",
"title": ""
},
{
"paragraph_id": 4,
"text": "In some cases the deposed head of state is allowed to go into exile following a coup or other change of government, allowing a more peaceful transition to take place or to escape justice.",
"title": "For individuals"
},
{
"paragraph_id": 5,
"text": "A wealthy citizen who moves to a jurisdiction with lower taxes is termed a tax exile. Creative people such as authors and musicians who achieve sudden wealth sometimes choose this. Examples include the British-Canadian writer Arthur Hailey, who moved to the Bahamas to avoid taxes following the runaway success of his novels Hotel and Airport, and the English rock band the Rolling Stones who, in the spring of 1971, owed more in taxes than they could pay and left Britain before the government could seize their assets. Members of the band all moved to France for a period of time where they recorded music for the album that came to be called Exile on Main Street, the Main Street of the title referring to the French Riviera. In 2012, Eduardo Saverin, one of the founders of Facebook, made headlines by renouncing his U.S. citizenship before his company's IPO. The dual Brazilian/U.S. citizen's decision to move to Singapore and renounce his citizenship spurred a bill in the U.S. Senate, the Ex-PATRIOT Act, which would have forced such wealthy tax exiles to pay a special tax in order to re-enter the United States.",
"title": "For individuals"
},
{
"paragraph_id": 6,
"text": "In some cases a person voluntarily lives in exile to avoid legal issues, such as litigation or criminal prosecution. An example of this is Asil Nadir, who fled to the Turkish Republic of Northern Cyprus for 17 years rather than face prosecution in connection with the failed £1.7 bn company Polly Peck in the United Kingdom.",
"title": "For individuals"
},
{
"paragraph_id": 7,
"text": "Examples include:",
"title": "For individuals"
},
{
"paragraph_id": 8,
"text": "Exile, government man and assigned servant were all euphemisms used in the 19th century for convicts under sentence who had been transported from Britain to Australia.",
"title": "For individuals"
},
{
"paragraph_id": 9,
"text": "Comfortable exile is an alternative theory recently developed by anthropologist Binesh Balan in 2018. According to him, comfortable exile is a \"social exile of people who have been excluded from the mainstream society. Such people are considered 'aliens' or internal 'others' on the grounds of their religious, racial, ethnic, linguistic or caste-based identity and therefore they migrate to a comfortable space elsewhere after having risked their lives to restore representation, identity and civil rights in their own country and often capture a comfortable identity to being part of a dominant religion, society or culture.\"",
"title": "For groups, nations, and governments"
},
{
"paragraph_id": 10,
"text": "When a large group, or occasionally a whole people or nation is exiled, it can be said that this nation is in exile, or \"diaspora\". Nations that have been in exile for substantial periods include the Israelites by the Assyrian king Sargon II in 720 BCE ,the Judeans who were deported by Babylonian king Nebuchadnezzar II in 586 BC , and the Jews following the destruction of the second Temple in Jerusalem in AD 70. Many Jewish prayers include a yearning to return to Jerusalem and Judea.",
"title": "For groups, nations, and governments"
},
{
"paragraph_id": 11,
"text": "After the Partitions of Poland in the late 18th century, and following the uprisings (like Kościuszko Uprising, November Uprising and January Uprising) against the partitioning powers (Russia, Prussia and Austria), many Poles have chosen – or been forced – to go into exile, forming large diasporas (known as Polonia), especially in France and the United States. The entire population of Crimean Tatars (numbering 200,000 in all) that remained in their homeland of Crimea was exiled on 18 May 1944 to Central Asia as a form of ethnic cleansing and collective punishment on false accusations.",
"title": "For groups, nations, and governments"
},
{
"paragraph_id": 12,
"text": "Since the Cuban Revolution, over a million Cubans have left Cuba. Most of these self-identified as exiles as their motivation for leaving the island is political in nature. At the time of the Cuban Revolution, Cuba only had a population of 6.5 million, and was not a country that had a history of significant emigration, it being the sixth largest recipient of immigrants in the world as of 1958. Most of the exiles' children also consider themselves to be Cuban exiles. Under Cuban law, children of Cubans born abroad are considered Cuban citizens. An extension of colonial practices, Latin America saw widespread exile, of a political variety, during the 19th and 20th century.",
"title": "For groups, nations, and governments"
},
{
"paragraph_id": 13,
"text": "During a foreign occupation or after a coup d'état, a government in exile of a such afflicted country may be established abroad. One of the most well-known instances of this is the Polish government-in-exile, a government in exile that commanded Polish armed forces operating outside Poland after German occupation during World War II. Other examples include the Free French Forces government of Charles de Gaulle of the same time, and the Central Tibetan Administration, commonly known as the Tibetan government-in-exile, and headed by the 14th Dalai Lama.",
"title": "For groups, nations, and governments"
},
{
"paragraph_id": 14,
"text": "Ivan the Terrible once exiled to Siberia an inanimate object: a bell. \"When the inhabitants of the town of Uglich rang their bell to rally a demonstration against Ivan the Terrible, the cruel Czar executed two hundred (nobles), and exiled the Uglich bell to Siberia, where it remained for two hundred years.\"",
"title": "For groups, nations, and governments"
},
{
"paragraph_id": 15,
"text": "Exile is an early motif in ancient Greek tragedy. In the ancient Greek world, this was seen as a fate worse than death. The motif reaches its peak on the play Medea, written by Euripides in the fifth century BC, and rooted in the very old oral traditions of Greek mythology. Euripides' Medea has remained the most frequently performed Greek tragedy through the 20th century.",
"title": "In popular culture"
},
{
"paragraph_id": 16,
"text": "After Medea was abandoned by Jason and had become a murderess out of revenge, she fled to Athens and married king Aigeus there, and became the stepmother of the hero Theseus. Due to a conflict with him, she must leave the Polis and go away into exile. John William Waterhouse (1849–1917), the English Pre-Raphaelite painter's famous picture Jason and Medea shows a key moment before, when Medea tries to poison Theseus.",
"title": "In popular culture"
},
{
"paragraph_id": 17,
"text": "In ancient Rome, the Roman Senate had the power to declare the exile to individuals, families or even entire regions. One of the Roman victims was the poet Ovid, who lived during the reign of Augustus. He was forced to leave Rome and move away to the city of Tomis on the Black Sea, now Constanța. There he wrote his famous work Tristia (Sorrows) about his bitter feelings in exile. Another, at least in a temporary exile, was Dante.",
"title": "In popular culture"
},
{
"paragraph_id": 18,
"text": "The German-language writer Franz Kafka described the exile of Karl Rossmann in the posthumously published novel Amerika.",
"title": "In popular culture"
},
{
"paragraph_id": 19,
"text": "During the period of National Socialism in the first few years after 1933, many Jews, as well as a significant number of German artists and intellectuals fled into exile; for instance, the authors Klaus Mann and Anna Seghers. So Germany's own exile literature emerged and received worldwide credit. Klaus Mann finished his novel Der Vulkan (The Volcano. A Novel Among Emigrants) in 1939 describing the German exile scene, \"to bring the rich, scattered and murky experience of exile into epic form\", as he wrote in his literary balance sheet. At the same place and in the same year, Anna Seghers published her famous novel Das siebte Kreuz (The Seventh Cross, published in the United States in 1942).",
"title": "In popular culture"
},
{
"paragraph_id": 20,
"text": "Important exile literature in recent years include that of the Caribbean, many of whose artists emigrated to Europe or the United States for political or economic reasons. These writers include Nobel Prize winners V. S. Naipaul and Derek Walcott as well as the novelists Edwidge Danticat and Sam Selvon.",
"title": "In popular culture"
},
{
"paragraph_id": 21,
"text": "Media related to Exile at Wikimedia Commons",
"title": "External links"
}
]
| Exile or banishment, is primarily penal expulsion from one's native country, and secondarily expatriation or prolonged absence from one's homeland under either the compulsion of circumstance or the rigors of some high purpose. Usually persons and peoples suffer exile, but sometimes social entities like institutions are forced from their homeland. In Roman law, exsilium denoted both voluntary exile and banishment as a capital punishment alternative to death. Deportation was forced exile, and entailed the lifelong loss of citizenship and property. Relegation was a milder form of deportation, which preserved the subject's citizenship and property. The term diaspora describes group exile, both voluntary and forced. "Government in exile" describes a government of a country that has relocated and argues its legitimacy from outside that country. Voluntary exile is often depicted as a form of protest by the person who claims it, to avoid persecution and prosecution, an act of shame or repentance, or isolating oneself to be able to devote time to a particular pursuit. Article 9 of the Universal Declaration of Human Rights states that "No one shall be subjected to arbitrary arrest, detention or exile." | 2001-09-30T07:28:44Z | 2023-12-06T17:07:07Z | [
"Template:Reflist",
"Template:Citation",
"Template:Cite news",
"Template:Cite journal",
"Template:Cite book",
"Template:Short description",
"Template:Cite web",
"Template:ISBN",
"Template:Tcmdb title",
"Template:IMDb title",
"Template:Navboxes",
"Template:Main",
"Template:Cite encyclopedia",
"Template:Wiktionary",
"Template:Commonscatinline",
"Template:Wikiquote",
"Template:Authority control",
"Template:Hatgrp"
]
| https://en.wikipedia.org/wiki/Exile |
9,857 | Elbląg | Elbląg (Polish: [ˈɛlblɔŋk]; German: Elbing; Prussian: Elbings) is a city in the Warmian-Masurian Voivodeship, Poland, located on the eastern edge of the Żuławy region, with 117,390 inhabitants as of December 2021. It is the capital of Elbląg County.
Elbląg is one of the oldest cities in the province. Its history dates back to 1237, when the Teutonic Order constructed their fortified stronghold on the banks of a nearby river. The castle subsequently served as the official seat of the Teutonic Order Masters.
Elbląg became part of the Hanseatic League, which contributed much to the city's wealth. Through the Hanseatic League, the city was linked to other major ports like Gdańsk, Lübeck and Amsterdam. Elbląg joined Poland in 1454 and after the defeat of the Teutonic Knights in the Thirteen Years’ War was recognized as part of Poland in 1466. It then flourished and turned into a significant trading point, but its growth was eventually hindered by the Second Northern War and the Swedish Deluge.
The city was transferred to Prussia after the first partition of Poland in 1772. Its trading role greatly weakened, until the era of industrialization, which occurred in the 19th century. It was then that the famous Elbląg Canal was commissioned. A tourist site and important engineering monument, it has been named one of the Seven Wonders of Poland and a Historic Monument of Poland.
After World War II the city again became part of Poland. The war casualties were catastrophic – especially the severe destruction of the Old Town district, one of the grandest in Prussia.
Today, Elbląg has over 120,000 inhabitants and is a "vibrant city with an attractive tourist base". It serves as an academic and financial center, and among its numerous historic monuments are the Market Gate from 1309 and St. Nicholas Cathedral. Elbląg is also known for its archaeological sites, museums and the largest brewery in the country.
Elbląg derives from the earlier German-language Elbing, which is the name by which the Teutonic Knights knew both the river here and the citadel they established on its banks in 1237. The purpose of the citadel was to prevent the Old Prussian settlement of Truso from being reoccupied, the German crusaders being at war with the pagan Prussians. The citadel was named after the river, itself of uncertain etymology. One traditional etymology connects it to the name of the Helveconae, a Germanic tribe mentioned in Ancient Greek and Latin sources, but the etymology or language of the tribal name remains unknown. The oldest known mention of the river or town Elbląg is in the form Ylfing in the report of a sailor Wulfstan from the end of the 9th century, in The Voyages of Ohthere and Wulfstan which was written in Anglo-Saxon in King Alfred's reign.
The city was almost completely destroyed at the end of World War II. Parts of the inner city were gradually rebuilt, and around 2000 rebuilding was begun in a style emulating the previous architecture, in many cases over the same foundations and utilizing old bricks and portions of the same walls. The western suburbs of the old city have not been reconstructed.
The modern city adjoins about half the length of the river between Lake Drużno and Elbląg Bay (Zatoka Elbląska, an arm of the Vistula Lagoon), and spreads out on both banks, though mainly on the eastern side. To the east is the Elbląg Upland (Wysoczyzna Elbląska), a dome pushed up by glacial compression, about 390 km² in area and 200 m (656.17 ft) high at its greatest elevation.
Views to the west show flat fields extending to the horizon; this part of the Vistula Delta (Żuławy Wiślane) is used mainly for agricultural purposes. To the south are the marshes and swamps of Drużno. The Elbląg River has been left in a more natural state through the city, but elsewhere it is a controlled channel with branches. One of them, the Jagiellonski Channel (Kanał Jagielloński), leads to the Nogat River, along which navigation to Gdańsk is common. The Elbląg Canal (Kanał Elbląski) connecting Lake Drużno with Drwęca River and Lake Jeziorak is a tourist site.
Elbląg is not a deep-water port. By law, the draft of vessels using its waterways must be no greater than 1.5 m (4 ft 11 in). The turning area at Elbląg is 120 m (394 ft) in diameter, and a pilot is required for large vessels. Deep-water vessels cannot manoeuvre there; in that sense, Elbląg has become a subsidiary port of Gdańsk. Traffic of smaller vessels at Elbląg stays within the river and is very marginal, while larger vessels cannot reach the open Baltic Sea through the strait, which has belonged to Russia since 1945. As of September 17, 2022, the construction of the Vistula Spit canal on Polish territory has been completed. The city features three quay complexes, movable cranes, and railways.
Elbląg is located about 55 kilometres (34 miles) south-east of Gdańsk and 90 km (56 mi) south-west of Kaliningrad, Russia. The city is a port on the river Elbląg, which flows into the Vistula Lagoon about 10 km (6 mi) to the north, thus giving the city access to the Baltic Sea via the Russian-controlled Strait of Baltiysk. The Old Town (Polish: Stare Miasto) is located on the river Elbląg connecting Lake Drużno to the Vistula Lagoon, about 10 km (6 mi) from the lagoon and 60 km (37 mi) from Gdańsk.
The climate of Elbląg is an oceanic climate (Köppen Cfb) closely bordering on a humid continental climate (Köppen Dfb), owing to its position near the Baltic Sea, which moderates temperatures compared to the interior of Poland. The climate is cool and precipitation is fairly uniform throughout the year. As is typical of Northern Europe, there is little sunshine during the year.
Historical affiliations: Teutonic Order 1246–1454; Kingdom of Poland 1454–1569; Polish–Lithuanian Commonwealth 1569–1772; Kingdom of Prussia 1772–1871; German Empire 1871–1918; Weimar Germany 1918–1933; Nazi Germany 1933–1945; People's Republic of Poland 1945–1989; Republic of Poland 1989–present.
The settlement was first mentioned as "Ilfing" in The Voyages of Ohthere and Wulfstan, an Anglo-Saxon chronicle written in King Alfred's reign using information from a Viking who had visited the area.
During the Middle Ages, the Viking settlement of Truso was located on Lake Drużno, near the current site of Elbląg in historical Pogesania; the settlement burned down in the 10th century. Early in the 13th century the Teutonic Knights conquered the region, built a castle, and founded Elbing on the lake, with a population mostly from Lübeck (today the lake, now much smaller, no longer reaches the city). After the uprising against the Teutonic Knights and the destruction of the castle by the inhabitants, the city successively came under the sovereignty of the Kingdom of Poland (1454), the Kingdom of Prussia (1772), and Germany (1871). Elbing was heavily damaged in World War II, and its remaining German citizens were expelled upon the war's end in accordance with the Potsdam Agreement. The city became again part of Poland in 1945 and was repopulated with Polish citizens.
The seaport of Truso was first mentioned ca. 890 by Wulfstan of Hedeby, an Anglo-Saxon sailor travelling along the south coast of the Baltic Sea at the behest of King Alfred the Great of England. The exact location of Truso was long unknown, as the seashore has changed significantly, but most historians place the settlement in or near modern Elbląg on Lake Drużno. Truso lay in territory already known to the Roman Empire and earlier.
It was an important seaport serving the Vistula River bay on the early medieval Baltic Sea trade routes which led from Birka in the north to the island of Gotland and to Visby in the Baltic Sea. From there, traders continued further south to Carnuntum along the Amber Road. The ancient Amber Road led further southwest and southeast to the Black Sea and eventually to Asia. The east–west trade route went from Truso, along the Baltic Sea to Jutland, and from there inland by river to Hedeby, a large trading center in Jutland. The main goods of Truso were amber, furs, and slaves.
Archaeological finds in 1897 and diggings in the 1920s placed Truso at Gut Hansdorf. A large burial field was also found at Elbląg. Recent Polish diggings have found burned beams and ashes and thousand-year-old artifacts in an area of about 20 hectares. Many of these artifacts are now displayed at the Muzeum w Elblągu.
Attempts to conquer Prussian land began in 997, when Bolesław I the Brave, at the urging of the Pope, sent a contingent of soldiers and a missionary (Adalbert of Prague) to the pagan Prussians, a non-Slavic people, on a crusade of conquest and conversion. The crusade encompassed much of the Baltic Sea coast east of the Polish city of Gdańsk, up to Sambia. Starting in 1209 additional crusades were called for by Konrad of Masovia, who mainly sought to conquer Prussian territory, rather than actually convert the indigenous Prussians. Despite heroic efforts, Old Prussian sovereignty would eventually collapse after a succession of wars instigated by Pope Honorius III and his frequent calls for crusade.
Before the Prussians were finally brought to heel, the Polish rulers and the Duchy of Masovia, by then Christianised, were continually frustrated in their attempts at northern expansion. Aside from minor border raids, major campaigns against the Prussians were launched in 1219, 1220, and 1222. After a particularly heavy defeat at the hands of Prussian forces in 1223, Polish forces in Chełmno, the seat of Christian of Oliva and the Duchy of Masovia, were forced onto the defensive.
In 1226 Duke Konrad I of Masovia summoned the Teutonic Knights for assistance; by 1230 they had secured Chełmno (Culm) and begun claiming conquered territories for themselves under the authority of the Holy Roman Empire, although these claims were rejected by the Poles, whose ambition had been to conquer Prussia all along. The Teutonic Order's strategy was to move down the Vistula and secure the delta, establishing a barrier between the Prussians and Gdańsk. The victorious Teutonic Knights built a castle at Elbing.
The Chronicon terrae Prussiae describes the conflict in the vicinity of Lake Drużno shortly before the founding of Elbing.
Truso did not disappear suddenly to be replaced with the citadel and town of Elbing during the Prussian Crusade. It had already burned down in the tenth century, with the population dispersed in the area.
The Chronicon terrae Prussiae describes the founding of Elbing under the leadership of Hermann Balk. After building two ships, the Pilgerim (Pilgrim) and the Vridelant (Friedland), with the assistance of Margrave Henry III of Meissen, the Teutonic Knights used them to clear the Vistula Lagoon (Frisches Haff) and the Vistula Spit of Prussians.
Apparently the river was in Pomesania, which the knights had just finished clearing, but the bay was in Pogesania. The first Elbing was placed in Pogesania.
Both landings were amphibious operations conducted from the ships. The Chronicon relates that they were in use for many years and then were sunk in Lake Drużno. In 1238 the Dominican Order was invited to build a monastery on a grant of land. Pomesania was not secured, however, and from 1240 to 1242 the order began building a brick castle on the south side of the settlement. It may be significant that Elbing's first industry was the same as Truso's had been: manufacture of amber and bone artifacts for export. In 1243 William of Modena created the Diocese of Pomesania and three others. They were at first only ideological constructs, but the tides of time turned them into reality in that same century.
The foundation of Elbing was perhaps not the end of the Old Prussian story in the region. In 1825 a manuscript listing a vocabulary of the Baltic Old Prussian language, commonly known in English as Elbing Vocabulary, was found among some manuscripts from a merchant's house. It contained 802 words in a dialect now termed Pomesanian with their equivalents in an early form of High German.
The origin of the vocabulary remains unknown. Its format is like that of modern travel dictionaries; that is, it may have been used by German speakers to communicate with Old Prussians, but the specific circumstances are only speculative. The manuscript became known as the Codex Neumannianus. It disappeared after a British bombing raid destroyed the library at Elbing, but facsimiles had been made before then. The manuscript was dated to ca. 1400, but it was itself a copy. There is no evidence concerning the provenance of the original, except that it must have been written in Pomesanian.
In 1246 the town was granted a constitution under Lübeck law, used in maritime towns, instead of the Magdeburg rights common in other cities of Central Europe. This decision of the Order was in keeping with its general strategy of espousing the trade association that in 1358 would become the Hanseatic League. The Order seized on this association early and used it to establish bases throughout the Baltic. The Order's involvement in the League was somewhat contradictory: in the cities it founded, ultimate authority lay with the commander of the town, who kept office in the citadel, which was typically also used as a prison, whereas Lübeck law provided for self-government of the town.
Membership in the Hanseatic League meant having important trading contacts with England, Flanders, France, and the Netherlands. The city received numerous merchant privileges from the rulers of England, Poland, Pomerania, and the Teutonic Order. For instance, the privilege of the Old Town was upgraded in 1343, while in 1393 it was granted an emporium privilege for grains, metals, and forest products.
Except for the citadel and churches, Elbing at the time was more of a small village by modern standards. Its area was about 300 m × 500 m (980 ft × 1,640 ft). It featured a wharf, a marketplace and five streets, as well as a number of churches. The castle was completed in 1251. In 1288 fire destroyed the entire settlement except for the churches, which were of brick. A new circuit wall was started immediately. From 1315 to 1340 Elbląg was rebuilt. A separate settlement called the New Town was founded ca. 1337 and received Lübeck rights in 1347. In 1349 the Black Death struck the town, toward the end of the European plague. After the population recovered, building continued, and in 1364 a crane was built for the port.
The German-language Elbinger Rechtsbuch, compiled in Elbing in the second half of the 13th century, documented Polish common law for the first time, among other laws. The German-language record of Polish law is based on the Sachsenspiegel and was written down to aid judges. It is thus the oldest source documenting Polish common law and is referred to in Polish as the Księga Elbląska (Book of Elbląg).
In 1410, during the Polish–Lithuanian–Teutonic War, the inhabitants of the city rebelled against the Teutonic Knights and expelled them, while welcoming Polish troops and paying homage to Polish King Władysław II Jagiełło, who afterwards vested Elbląg with new privileges. As the castle was only lightly defended by a Polish garrison, the Teutonic Knights managed to retake it, promising the Polish defenders that they would be given free passage back to Poland. After the castle was taken, the Knights broke their promise, murdering a number of the captured defenders and imprisoning the rest.
In February 1440, the city hosted a convention at which delegates from various cities (including Elbing itself) and nobility from the region decided to establish the anti-Teutonic Prussian Confederation. In April and May 1440, further meetings were held in Elbing, at which more towns and noblemen joined the organisation. In 1454, the organisation led the revolt against the rule of the Teutonic Knights, and then its delegation submitted a petition to King Casimir IV of Poland asking him to include the region within the Kingdom of Poland. The King agreed and signed the act of incorporation of the region (including Elbing) into the Kingdom of Poland in March 1454 in Kraków, which sparked the Thirteen Years' War, the longest of all Polish–Teutonic wars. The local mayor pledged allegiance to the Polish King during the incorporation in March 1454, and the burghers of Elbląg recognized Casimir IV as their rightful ruler. After paying homage to the King, the city was granted great privileges, similar to those of Toruń and Gdańsk. From 1454 the city was authorized by King Casimir IV to mint Polish coins. The war ended in a Polish victory in 1466 with the Second Peace of Thorn, in which the Teutonic Order renounced any claims to the city and recognised it as part of Poland.
Within the Kingdom of Poland, the city was administratively part of the Malbork Voivodeship in the newly established autonomous province of Royal Prussia, later also within the larger Greater Poland Province. The city was known to the Polish crown by its Polish name Elbląg. With the creation of the Polish–Lithuanian Commonwealth in 1569, the city was brought under direct control of the Polish crown. As one of the largest and most influential cities of Poland, it enjoyed voting rights during the royal election period in Poland.
Elbląg was often visited by Nicolaus Copernicus between 1504 and 1530.
With the 16th-century Protestant Reformation the burghers became Lutherans, and the first Lutheran Gymnasium was established in Elbląg in 1535.
From 1579 Elbląg had close trade relations with England, to which the city accorded free trade. English, Scottish, and Irish merchants settled in the city. They formed the Scottish Reformed Church of Elbląg and became Elbląg citizens, aiding Lutheran Sweden in the Thirty Years' War. The rivalry of nearby Gdańsk interrupted trading links several times. By 1618 Elbląg had left the Hanseatic League owing to its close business dealings with England.
Famous inhabitants of the city at that time included native sons Hans von Bodeck and Samuel Hartlib. During the Thirty Years' War, Swedish Chancellor Axel Oxenstierna brought the Moravian Brethren refugee John Amos Comenius to Elbląg for six years (1642–1648). In 1642 Johann Stobäus, who composed with Johann Eccard, published the Preussische Fest-Lieder, a collection of evangelical Prussian songs. In 1646 the city recorder Daniel Barholz noted that the city council employed Bernsteindreher, or Paternostermacher, licensed amber craftsmen organized in guilds, who worked on prayer beads, rosaries, and many other items made of amber. Members of the Barholz family became mayors and councillors.
During the Thirty Years' War, the Vistula Lagoon was the main southern Baltic base of King Gustavus Adolphus of Sweden, who was hailed as the protector of the Protestants. By 1660 the Vistula Lagoon had gone to Elector Frederick William of Brandenburg-Prussia, but was returned in 1700.
The poet Christian Wernicke was born in Elbląg in 1661, while Gottfried Achenwall became famous for his teachings in natural law and human rights law. From 1700 to 1710 the city was occupied by Swedish troops. Besieged in 1709, it was taken by storm on February 2, 1710, by Russian troops with the support of Prussian artillery. The city was handed over to Polish King Augustus II in 1712.
The Royal-Polish mathematician and cartographer Johann Friedrich Endersch completed a map of Warmia in 1755 and also made a copper etching of the galley named "The City of Elbing".
During the War of the Polish Succession in 1734, Elbląg was placed under military occupation by Russia and Saxony. The town came again under occupation by Russia from 1758 to 1762 during the Seven Years' War.
During the First Partition of Poland in 1772 Elbląg was annexed by King Frederick the Great of the Kingdom of Prussia. Elbing became part of the newly established province of West Prussia in 1773. In the 1815 provincial reorganization following the Napoleonic Wars, Elbing and its hinterland were included within Regierungsbezirk Danzig in West Prussia.
In October and November 1831, various Polish infantry, cavalry and artillery units, engineer corps and sappers of the November Uprising stopped in the city and its environs on the way to their internment locations, whereas the general staff with Commander-in-Chief General Maciej Rybiński and generals Józef Bem, Marcin Klemensowski, Kazimierz Małachowski, Ludwik Michał Pac and Antoni Wroniecki was interned in the city. On December 22, 1831, the Prussian army attempted to pacify the Polish insurgents and launched a charge on the disarmed Poles, who resisted relocation, fearing deportation to the Russian Partition of Poland. Some insurgents eventually left partitioned Poland for the Great Emigration, including Józef Bem, who was expelled by the Prussians in December 1831, and Maciej Rybiński, who left the city in February 1832.
Elbing industrialized in the 19th century. In 1828 the first steamship was built there by Ignatz Grunau. In 1837 Ferdinand Schichau founded the Schichau-Werke company in Elbing and later opened another shipyard in Danzig (Gdańsk). Schichau constructed the Borussia, the first screw-propelled vessel built in Germany. Schichau-Werke built hydraulic machinery, ships, steam engines, and torpedoes. After the inauguration of the railway to Königsberg in 1853, Elbing's industry began to grow. Schichau worked together with his son-in-law Carl H. Zise, who continued the industrial complex after Schichau's death. Schichau erected large complexes for his many thousands of workers.
Georg Steenke, an engineer from Königsberg, connected Elbing near the Baltic Sea with the southern part of Prussia by building the Oberländischer Kanal (Elbląg Canal).
Elbing became part of the Prussian-led German Empire in 1871 during the unification of Germany. As Elbing became an industrial city, the Social Democratic Party of Germany (SPD) frequently received the majority of votes; in the 1912 Reichstag elections the SPD received 51% of the vote. After World War I, as most of the province of West Prussia was reintegrated with the reborn Polish Republic, Elbing was joined to the German province of East Prussia, and was separated from Weimar Germany by the so-called Polish Corridor.
During World War II, under Nazi Germany, a Nazi prison, a forced labour subcamp of the Stalag I-A POW camp, a forced labour subcamp of the Stalag XX-B POW camp, and three subcamps of the Stutthof concentration camp were operated in the city. The Germans also enslaved Poles as forced labour in the city. The Polish resistance was active and infiltrated the German arms industry. Dozens of Polish resistance members were held in the local prison, and at least 15 were sentenced to death in the city in 1942.
The prison and forced labour camps were closed, and many of the German inhabitants were forced to flee as the Soviet Red Army approached the city toward the end of the war. The city was under siege from January 23, 1945, and about 65% of its infrastructure was destroyed, including most of the historical city center. The town was captured by the Red Army during the night of February 9/10, 1945. During the first days of the siege most of the population of approximately 100,000 fled. After the end of the war, in spring 1945, the region together with the city became part of Poland again as a result of the Potsdam Conference, although under a Soviet-installed communist regime, which stayed in power until the fall of communism in the 1980s. The area was settled by Poles after the remaining Germans either were transferred or fled to Germany. As of 1 November 1945, 16,838 Germans remained in the town.
Elbląg was part of the so-called Recovered Territories, and 98% of its new inhabitants were Poles expelled from the former eastern Polish areas annexed by the Soviet Union. Parts of the damaged historical city center were completely demolished, with the bricks used to rebuild Warsaw and Gdańsk. The Communist authorities had originally planned that the Old Town, utterly destroyed during the fighting that began on January 23, 1945, would be built over with blocks of flats; however, economic difficulties thwarted this effort. Two churches were reconstructed, and the remaining ruins of the old town were torn down in the 1960s.
Along with Tricity and Szczecin, Elbląg was the scene of the Polish 1970 protests. Since 1990 the German minority population has had a modest resurgence, with the Elbinger Deutsche Minderheit Organization counting around 450 members in 2000.
Restoration of the Old Town began after 1989. Since the beginning of the restoration, an extensive archaeological programme has been carried out. Most of the city's heritage was destroyed during the construction of basements in the 19th century or during World War II, but the backyards and latrines of the houses remained largely unchanged, and have provided information on the city's history. In some instances, private investors have incorporated parts of preserved stonework into new architecture. By 2006, approximately 75% of the Old Town had been reconstructed.
Elbląg is also home to the Elbrewery, Poland's largest brewery, which belongs to the Żywiec Group (Heineken). The Elbląg brewing tradition dates back to 1309, when the Teutonic Grand Master Siegfried von Feuchtwangen granted brewing privileges to the city. The present brewery was founded in 1872 as the Elbinger Aktien-Brauerei. In the early 1900s, the brewery was the exclusive supplier of Pilsner beer to the court of German Emperor Wilhelm II.
Until World War II there were many Gothic, Renaissance and Baroque houses in Elbląg's Old Town; some of them have been reconstructed, and a number of other historic buildings are preserved.
The Elbląg Canal, built in 1825–44, is a tourist site of Elbląg. The canal is believed to be one of the most important monuments related to the history of engineering, and has been named one of the Seven Wonders of Poland. The canal was also named one of Poland's official national Historic Monuments (Pomnik historii) in 2011. Its listing is maintained by the National Heritage Board of Poland.
The primary cultural institutions in Elbląg are the Archaeological and Historical Museum, the Cyprian Norwid Elbląg Library, the EL Gallery Art Center and the Aleksander Sewruk Theater. The museum presents many pieces of art and items of everyday use, including the only 15th century binoculars preserved in Europe.
Members of the Polish Parliament (Sejm) are elected from the Elbląg constituency.
Elbląg maintains town-twinning partnerships with a number of cities.
On 28 February 2022, Elbląg ended its partnership with the Russian cities of Kaliningrad and Baltiysk and the Belarusian city of Novogrudok as a response to the 2022 Russian invasion of Ukraine and its active support by the Republic of Belarus. | [
ESR may refer to:
Europe of Democracies and Diversities (EDD) was a Eurosceptic political group with seats in the European Parliament between 1999 and 2004. Following the 2004 European elections, the group reformed as Independence/Democracy (IND/DEM).
The European Free Alliance (EFA) is a European political party that consists of various regionalist, separatist and ethnic minority political parties in Europe. Member parties advocate either full political independence and sovereignty or some form of devolution or self-governance for their country or region. The alliance has generally limited its membership to centre-left and left-wing parties; therefore, only a fraction of European regionalist parties are members of the EFA.
Since 1999, the EFA and the European Green Party (EGP) have joined forces within the Greens–European Free Alliance (Greens/EFA) group in the European Parliament, although some EFA members have joined other groups from time to time.
The EFA's youth wing is the European Free Alliance Youth (EFAY), founded in 2000.
As of 2023, four European regions are led by EFA politicians: Scotland with Humza Yousaf of the Scottish National Party, Flanders with Jan Jambon of the New Flemish Alliance, Corsica with Gilles Simeoni of For Corsica, and Catalonia with Pere Aragonès of the Republican Left of Catalonia.
Regionalists have long been represented in the European Parliament. In the 1979 election, four regionalist parties obtained seats: the Scottish National Party (SNP), the Flemish People's Union (VU), the Brussels-based Democratic Front of Francophones (FDF) and the South Tyrolean People's Party (SVP). The SNP, although predominantly social-democratic, joined the European Progressive Democrats, a conservative group led by the French Rally for the Republic. The VU and the FDF joined the heterogeneous Technical Group of Independents, while the SVP joined the European People's Party group.
In 1981 six parties (VU, the Frisian National Party, Independent Fianna Fáil, the Party of German-speaking Belgians, the Party for the Organization of a Free Brittany and the Alsace-Lorraine National Association), plus three observers (the Union of the Corsican People, UPC, the Occitan Party and the Democratic Convergence of Catalonia, CDC), joined forces to form the European Free Alliance. Regionalist MEPs continued, however, to sit in different groups also after the 1984 election: the SNP in the Gaullist-dominated European Democratic Alliance; the VU, the Sardinian Action Party (PSd'Az) and Basque Solidarity (EA) in the Rainbow Group, together with Green parties; the SVP in the European People's Party group; the CDC with the Liberal Democrats; and Herri Batasuna among Non-Inscrits.
Only after the 1989 European Parliament election did EFA members form a united group, called Rainbow like its green predecessor. It consisted of three Italian MEPs (two for Lega Lombarda and one for the PSd'Az), two Spanish MEPs (one each for the PNV and the Andalusian Party, PA), one Belgian MEP (for VU), one French MEP (UPC), one British MEP (SNP) and one independent MEP from Ireland. They were joined by 4 MEPs from the Danish left-wing Eurosceptic People's Movement against the EU, while the other regionalist parties, including the SVP, Batasuna and the Convergence and Union of Catalonia (CiU) declined to join.
In the 1994 European Parliament election the regionalists lost many seats. Moreover, the EFA had suspended its major affiliate, Lega Nord, for having joined forces in government with the post-fascist National Alliance. Also, the PNV chose to switch to the European People's Party (EPP). The three remaining EFA MEPs (representing the SNP, the VU and the Canarian Coalition) formed a group with the French Énergie Radicale list and the Italian Pannella List: the European Radical Alliance.
Following the 1999 European Parliament election, in which EFA parties did quite well, EFA elected MEPs formed a joint group with the European Green Party, under the name Greens–European Free Alliance (Greens/EFA). In the event the EFA supplied ten members: two each from the Scottish SNP, the Welsh Plaid Cymru, and the Flemish VU, and one each from the Basque PNV and EA, the Andalusian PA and the Galician Nationalist Bloc (BNG).
In the 2004 European Parliament election, the EFA, which had formally become a European political party, was reduced to four MEPs: two from the SNP (Ian Hudghton and Alyn Smith), one from Plaid Cymru (Jill Evans) and one from the Republican Left of Catalonia (ERC; Bernat Joan i Marí, replaced at the mid-term by MEP Mikel Irujo of the Basque EA). They were joined by two associate members: Tatjana Ždanoka of For Human Rights in United Latvia (PCTVL) and László Tőkés, an independent MEP and former member of the Democratic Alliance of Hungarians in Romania (UMDR). Co-operation between the EFA and the Greens continued.
Following the 2008 revision of the EU Regulation that governs European political parties, which allowed the creation of European foundations affiliated to European political parties, the EFA established its official foundation/think tank, the Coppieters Foundation (CF), in September 2007.
In the 2009 European Parliament election, six MEPs were returned for the EFA: two from the SNP (Ian Hudghton and Alyn Smith), one from Plaid Cymru (Jill Evans), one from the Party of the Corsican Nation (PNC; François Alfonsi), one from the ERC (Oriol Junqueras), and Tatjana Ždanoka, an individual member of the EFA from Latvia. After the election, the New Flemish Alliance (N-VA) also joined the EFA. The EFA subgroup thus counted seven MEPs.
In the 2014 European Parliament election, EFA-affiliated parties returned twelve seats to the Parliament: four for the N-VA, two for the SNP, two for "The Left for the Right to Decide" (an electoral list primarily composed of the ERC), one for "The Peoples Decide" (an electoral list mainly comprising EH Bildu, a Basque coalition including EA), one for "European Spring" (an electoral list comprising the Valencian Nationalist Bloc, BNV, and the Aragonese Union, ChA), one from Plaid Cymru, and one from the Latvian Russian Union (LKS). Due to ideological divergences with the Flemish Greens, the N-VA defected to the European Conservatives and Reformists (ECR) group and the EH Bildu MEP joined the European United Left–Nordic Green Left (GUE/NGL) group. Thus, EFA had seven members in the Greens/EFA group and four within ECR.
In the 2019 European Parliament election the EFA gained a fourth seat in the United Kingdom, due to the SNP gaining a third seat to add to Plaid's one. However, the EFA suffered the loss of these seats in January 2020 due to Brexit, which meant SNP and PC MEPs had to leave.
In the Brussels declaration of 2000 the EFA codified its political principles. The EFA stands for "a Europe of Free Peoples based on the principle of subsidiarity, which believe in solidarity with each other and the peoples of the world." The EFA sees itself as an alliance of stateless peoples striving towards recognition, autonomy, independence or a proper voice in Europe. It supports European integration on the basis of the subsidiarity principle. It also believes that Europe should move away from further centralisation and work towards the formation of a "Europe of regions". It believes that regions should have more power in Europe, for instance by participating in the Council of the European Union when matters within their competence are discussed. It also wants to protect the linguistic and cultural diversity within the EU.
The EFA broadly stands on the left wing of the political spectrum. EFA members are generally progressive, although there are some notable exceptions, such as the conservative New Flemish Alliance, Bavaria Party, Democratic Party of Artsakh, Schleswig Party and Future of Åland, the Christian-democratic Slovene Union, the centre-right Liga Veneta Repubblica and the far-right South Tyrolean Freedom.
The main organs of the EFA organisation are the General Assembly, the Bureau and the Secretariat.
In the General Assembly, the supreme council of the EFA, every member party has one vote.
The Bureau takes care of daily affairs. It is chaired by Lorena Lopez de Lacalle (Basque Solidarity), president of the EFA, while Jordi Solé (Republican Left of Catalonia) is secretary-general and Anke Spoorendonk (South Schleswig Voters' Association) treasurer.
The Bureau is completed by ten vice-presidents: Peggy Eriksson (Future of Åland), Jill Evans (Plaid Cymru), Fernando Fuente Cortina (More—Commitment), David Grosclaude (Occitan Party), Wouter Patho (New Flemish Alliance), Frank de Boer (Frisian National Party), Patrik Peroša (The Olive Tree – Slovene Istria Party) and Livia Ceccaldi-Volpei (Femu à Corsica).
Before becoming a member party, an organisation needs to have been an observer of the EFA for at least one year. Only one member party per region is allowed. If a second party from a region wants to join the EFA, the first party needs to agree, at which point these two parties will then form a common delegation with one vote. The EFA also recognises friends of the EFA, a special status for regionalist parties outside of the European Union.
The following is the list of EFA members and former members.
https://en.wikipedia.org/wiki/European_Free_Alliance
The Alliance of Liberals and Democrats for Europe Party (ALDE Party) is a European political party composed of 60 national-level parties from across Europe, mainly active in the European Union. The ALDE Party is affiliated with Liberal International and is a recognised European political party, incorporated as a non-profit association under Belgian law.
It was founded on 26 March 1976 in Stuttgart as a confederation of national political parties under the name "Federation of Liberal and Democrat Parties in Europe" and renamed "European Liberals and Democrats" (ELD) in 1977 and "European Liberal Democrats and Reformists" (ELDR) in 1986. On 30 April 2004, the ELDR was reformed as an official European party, the "European Liberal Democrat and Reform Party" (ELDR Party).
On 10 November 2012, the party chose its current name ALDE Party, taken from its then-European Parliament group, the Alliance of Liberals and Democrats for Europe (ALDE), which had been formed on 20 July 2004 in conjunction with the European Democratic Party (EDP). Prior to the 2004 European election, the European party had been represented through its own group, the European Liberal Democrat and Reform Party Group (ELDR) Group. In June 2019, the ALDE group was succeeded by Renew Europe.
As of 2020, ALDE is represented in European Union institutions, with 70 MEPs and five members of the European Commission. Of the 27 EU member states, there are four with ALDE-affiliated Prime Ministers: Mark Rutte (VVD) in the Netherlands, Xavier Bettel (DP) in Luxembourg, Kaja Kallas (Estonian Reform Party) in Estonia and Alexander De Croo (Open VLD) in Belgium. ALDE member parties are also in governments in seven other EU member states: Croatia, Finland, Ireland, Latvia, Slovenia, Lithuania and Germany. Some other ALDE member parties offer parliamentary support to governments in Croatia, Denmark, Italy, Romania and Sweden. Charles Michel, former Belgian Prime Minister, is the current President of the European Council.
ALDE's think tank is the European Liberal Forum, which is led by Hilde Vautmans MEP and gathers 46 member organisations. The youth wing of ALDE is the European Liberal Youth (LYMEC), which is predominantly based upon youth and student liberal organisations but also contains a small number of individual members. LYMEC is led by Dan-Aria Sucuri.
In 2011, the ALDE Party became the first pan-European party to create the status of individual membership. Since then, between 1,000 and close to 3,000 members (the numbers fluctuate annually) have maintained direct membership in the ALDE Party from several EU countries. Over 40 coordinators mobilise liberal ideas, initiatives and expertise across the continent under the leadership of the Steering Committee, which was first chaired by Julie Cantalou. The ALDE Party took a further step towards becoming a truly pan-European party when it granted voting rights to individual members' delegates at the Party Congress.
The day-to-day management of the ALDE Party is handled by the Bureau, the members of which are:
Pan-European liberalism has a long history dating back to the foundation of Liberal International in April 1947. On 26 March 1976, the Federation of Liberal and Democrat Parties in Europe was established in Stuttgart. The founding parties of the federation were the Free Democratic Party of Germany, Radical Party of France, Venstre of Denmark, Italian Liberal Party, Dutch People's Party for Freedom and Democracy and Democratic Party of Luxembourg. Observer members joining later in 1976 were the Danish Social Liberal Party, French Radical Party of the Left and Independent Republicans, British Liberal Party, and Italian Republican Party. In 1977, the federation was renamed European Liberals and Democrats, in 1986, European Liberal Democrats and Reformists.
It evolved into the European Liberal Democrat and Reform Party (ELDR Party) in 2004, when it was founded as an official European party under that name and incorporated under Belgian law at an extraordinary Congress in Brussels, held on 30 April 2004, the day before the enlargement of the European Union. At the same time the matching group in the European Parliament, the European Liberal Democrats and Reformists Group, allied with the members of the newly elected European Democratic Party, forming the Alliance of Liberals and Democrats for Europe (ALDE) with a matching ALDE Group in the European Parliament.
On 10 November 2012, the ELDR Party adopted the name of the alliance between the two parties, to match the parliamentary group and the alliance.
On 12 June 2019, the ALDE group was succeeded by a new enlarged group, Renew Europe, which primarily consists of ALDE and EDP member parties and France's La République En Marche! (LREM).
ALDE Member Parties contribute five out of the 27 members of the European Commission:
https://en.wikipedia.org/wiki/Alliance_of_Liberals_and_Democrats_for_Europe_Party
The European People's Party Group (EPP Group) is a centre-right political group of the European Parliament consisting of deputies (MEPs) from the member parties of the European People's Party (EPP). Sometimes it also includes independent MEPs and/or deputies from unaffiliated national parties. The EPP Group comprises politicians of Christian-democratic, conservative and liberal-conservative orientation.
The European People's Party was officially founded as a European political party in 1976. However, the European People's Party Group in the European Parliament has existed in one form or another since June 1953, from the Common Assembly of the European Coal and Steel Community, making it one of the oldest European-level political groups. It has been the largest political group in the European Parliament since 1999.
The Common Assembly of the European Coal and Steel Community (the predecessor of the present day European Parliament) first met on 10 September 1952 and the first Christian Democratic Group was unofficially formed the next day, with Maan Sassen as president. The group held 38 of the 78 seats, two short of an absolute majority. On 16 June 1953, the Common Assembly passed a resolution enabling the official formation of political groups; further, on 23 June 1953 the constituent declaration of the group was published and the group was officially formed.
The Christian Democrat group was the biggest group at formation, but as time wore on, it lost support and was the second-biggest group by the time of the 1979 elections. As the European Community expanded into the European Union, the dominant centre-right parties in the new member states were not necessarily Christian democratic, and the EPP (European People's Party, the pan-continental political party founded in 1976, to which all group members are now affiliated) feared being sidelined. To counter this, the EPP expanded its remit to cover the centre-right regardless of tradition and pursued a policy of integrating liberal-conservative parties.
This policy led to Greek New Democracy and Spanish People's Party MEPs joining the EPP Group. The British Conservative Party and Danish Conservative People's Party tried to maintain a group of their own, named the European Democrats (ED), but lack of support and the problems inherent in maintaining a small group forced ED's collapse in the 1990s, and its members crossed the floor to join the EPP Group. The parties of these MEPs also became full members of the EPP (with the exception of the British Conservative Party, which did not join) and this consolidation process of the European centre-right continued during the 1990s with the acquisition of members from the Italian party Forza Italia. However, the consolidation was not unalloyed and a split emerged with the Eurosceptic MEPs who congregated in a subgroup within the Group, also called the European Democrats (ED).
Nevertheless, the consolidation held through the 1990s, assisted by the group being renamed the European People's Party – European Democrats (EPP-ED) Group, and after the 1999 European elections the EPP-ED reclaimed its position as the largest group in the Parliament from the Party of European Socialists (PES) Group.
Size was not enough, however: the group did not have a majority. It continued therefore to engage in the Grand Coalition (a coalition with the PES Group, or occasionally the Liberals) to generate the majorities required by the cooperation procedure under the Single European Act.
Meanwhile, the parties in the European Democrats subgroup were growing restless, with the establishment in July 2006 of the Movement for European Reform, and finally left following the 2009 elections, when the Czech Civic Democratic Party and British Conservative Party formed their own right-wing European Conservatives and Reformists (ECR) group on 22 June 2009, abolishing the European Democrats subgroup from that date. The EPP-ED Group reverted to its original name – the EPP Group – immediately.
In the 7th European Parliament the EPP Group remained the largest parliamentary group with 275 MEPs. It is currently the only political group in the European parliament to fully represent its corresponding European political party, i.e. the European People's Party. The United Kingdom was the only member state to not be represented in the group; this state of affairs ceased temporarily on 28 February 2018, when two MEPs suspended from the British Conservative Party left the ECR group and joined the EPP. The two MEPs later joined a breakaway political party in the UK, The Independent Group.
After twelve EPP member parties called for the expulsion or suspension of Hungary's Fidesz, Fidesz's membership was suspended by common agreement on 20 March 2019. The suspension applied only to the EPP, not to its group in the Parliament. On 3 March 2021, Fidesz decided to leave the EPP Group following the adoption of the group's new rules, while initially keeping its membership in the party. On 18 March 2021, Fidesz decided to leave the European People's Party as well.
In the 9th European Parliament the EPP won 182 seats out of a total of 751. The group formed a coalition with the Progressive Alliance of Socialists and Democrats and Renew Europe to elect Ursula von der Leyen as President of the European Commission.
The 38 members in the group on 11 September 1952 were as follows:
The EPP Group is governed by a collective (referred to as the Presidency) that allocates tasks. The Presidency consists of the Group Chair and a maximum of ten Vice-Chairs, including the Treasurer. The day-to-day running of the EPP Group is performed by its secretariat in the European Parliament, led by its Secretary-General. The Group runs its own think-tank, the European Ideas Network, which brings together opinion-formers from across Europe to discuss issues facing the European Union from a centre-right perspective.
The EPP Group Presidency includes:
The chairs of the group and its predecessors from 1952 to 2020 are as follows:
Activities performed by the group in the period between June 2004 and June 2008 include monitoring elections in Palestine and Ukraine; encouraging transeuropean rail travel, telecoms deregulation, energy security, a common energy policy, the accession of Bulgaria and Romania to the Union, partial reform of the CAP and attempts to tackle illegal immigration; denouncing Russian involvement in South Ossetia; supporting the Constitution Treaty and the Lisbon Treaty; debating globalisation, relations with China, and Taiwan; backing plans to outlaw Holocaust denial; nominating Anna Politkovskaya for the 2007 Sakharov Prize; expelling Daniel Hannan from the Group; the discussion about whether ED MEPs should remain within EPP-ED or form a group of their own; criticisms of the group's approach to tackling low turnout for the 2009 elections; the group's use of the two-President arrangement; and the group's proposal to ban the Islamic Burka dress across the EU.
The debates and votes in the European Parliament are tracked by its website and categorised by the groups that participate in them and the rule of procedure that they fall into. The results give a profile for each group by category and the total indicates the group's level of participation in Parliamentary debates. The activity profile for each group for the period 1 August 2004 to 1 August 2008 in the Sixth Parliament is given on the diagram on the right. The group is denoted in blue.
The website shows the group as participating in 659 motions, making it the third most active group during the period.
The group produces many publications, which can be found on its website. Documents produced in 2008 cover subjects such as dialogue with the Orthodox Church, study days, its strategy for 2008–09, Euro-Mediterranean relations, and the Lisbon Treaty. It also publishes a yearbook and irregularly publishes a presentation, a two-page summary of the group.
The group has been characterised as a three-quarters-male group that, prior to ED's departure, was only 80% cohesive and split between centre-right Europhiles (the larger EPP subgroup) and right-wing Eurosceptics (the smaller ED subgroup). The group as a whole is described as ambiguous on hypothetical EU taxes, against taxation, Green issues, social liberal issues (LGBT rights, abortion, euthanasia) and full Turkish accession to the European Union, and for a deeper Federal Europe, deregulation, the Common Foreign and Security Policy and controlling migration into the EU.
https://en.wikipedia.org/wiki/European_People%27s_Party_Group
The Left in the European Parliament – GUE/NGL is a left-wing political group of the European Parliament established in 1995. Before January 2021, it was named the European United Left/Nordic Green Left (French: Gauche unitaire européenne/Gauche verte nordique, GUE/NGL).
The group comprises political parties with democratic socialist, communist, and eurosceptic orientation.
In 1995, the enlargement of the European Union led to the creation of the Nordic Green Left group of parties. The Nordic Green Left (NGL) merged with the Confederal Group of the European United Left (GUE) on 6 January 1995, forming the Confederal Group of the European United Left/Nordic Green Left. The NGL suffix was added to the name of the expanded group on insistence of Swedish and Finnish MEPs. The group initially consisted of MEPs from the Finnish Left Alliance, the Swedish Left Party, the Danish Socialist People's Party, the United Left of Spain (including the Spanish Communist Party), the Synaspismos of Greece, the French Communist Party, the Portuguese Communist Party, the Communist Party of Greece, and the Communist Refoundation Party of Italy.
In 1998, Ken Coates, an expelled MEP from the British Labour Party who co-founded the Independent Labour Network, joined the group.
In 1999, the German Party of Democratic Socialism (PDS) and the Greek Democratic Social Movement (DIKKI) joined as full members, while the five MEPs elected from the list of the French Trotskyist alliance LO–LCR and the one MEP for the Dutch Socialist Party joined as associate members.
In 2002, four MEPs from the French Citizen and Republican Movement and one from the Danish People's Movement against the EU also joined the group.
In 2004, no MEPs were elected from LO–LCR, while DIKKI — which was undergoing a dispute with its leader over the party constitution — and the French Citizen and Republican Movement did not put forward candidates. MEPs from the Portuguese Left Block, the Irish Sinn Féin, the Progressive Party of Working People of Cyprus, and the Communist Party of Bohemia and Moravia joined the group. The Danish Socialist People's Party, a member of the Nordic Green Left, left the group to instead sit in the Greens–European Free Alliance group.
In 2009, no MEPs were elected from the Italian Communist Refoundation Party and the Finnish Left Alliance. MEPs from the Irish Socialist Party, the Socialist Party of Latvia, and the French Left Party joined the group.
In 2013, one MEP from the Croatian Labourists – Labour Party also joined the group.
In 2014, no MEPs were elected from the Irish Socialist Party, the Socialist Party of Latvia, and the Croatian Labourists – Labour Party. MEPs from the Spanish Podemos as well as EH Bildu and the Dutch Party for the Animals joined the group, while MEPs from the Italian Communist Refoundation Party and the Finnish Left Alliance re-entered parliament and rejoined. The Communist Party of Greece, a founding member of the group, decided to leave and instead sit as Non-Inscrits.
In 2019, no MEPs were elected from the French Communist Party, the Danish People's Movement against the EU, the Dutch Socialist Party, and from the Italian parties The Left and the Communist Refoundation Party. MEPs from the French La France insoumise, the Belgian Workers' Party of Belgium, the German Human Environment Animal Protection, the Irish Independents 4 Change, and the Danish Red-Green Alliance joined the group.
According to its 1994 constituent declaration, the group is opposed to the present European Union political structure, but it is committed to integration. That declaration sets out three aims for the construction of another European Union: the total change of institutions to make them fully democratic, a break with neoliberal monetarist policies, and a policy of co-development and equitable cooperation. The group wants to disband the North Atlantic Treaty Organization (NATO), and strengthen the Organization for Security and Co-operation in Europe (OSCE).
The group is ambiguous between reformism and revolution, leaving it up to each party to decide on the manner they deem best suited to achieve these aims. As such, it has simultaneously positioned itself as insiders within the European institutions, enabling it to influence the decisions made by co-decision; and as outsiders by its willingness to seek another Europe, which would abolish the Maastricht Treaty.
The GUE/NGL is a confederal group that is composed of MEPs from national parties. Those national parties must share common political objectives with the group, as specified in the group's constituent declaration. Nevertheless, those national parties, and not the group, retain control of their MEPs; therefore, the group may be divided on certain issues.
Members of the group meet regularly to prepare for meetings, debate on policies, and vote on resolutions. The group also publishes reports on various topics.
MEPs may be full or associate members.
National parties may be full or associate members.
The initial member parties for the 9th European Parliament were determined at the first meeting on 29 May 2019. | [
{
"paragraph_id": 0,
"text": "The Left in the European Parliament – GUE/NGL is a left-wing political group of the European Parliament established in 1995. Before January 2021, it was named the European United Left/Nordic Green Left (French: Gauche unitaire européenne/Gauche verte nordique, GUE/NGL).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The group comprises political parties with democratic socialist, communist, and eurosceptic orientation.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 1995, the enlargement of the European Union led to the creation of the Nordic Green Left group of parties. The Nordic Green Left (NGL) merged with the Confederal Group of the European United Left (GUE) on 6 January 1995, forming the Confederal Group of the European United Left/Nordic Green Left. The NGL suffix was added to the name of the expanded group at the insistence of Swedish and Finnish MEPs. The group initially consisted of MEPs from the Finnish Left Alliance, the Swedish Left Party, the Danish Socialist People's Party, the United Left of Spain (including the Spanish Communist Party), the Synaspismos of Greece, the French Communist Party, the Portuguese Communist Party, the Communist Party of Greece, and the Communist Refoundation Party of Italy.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In 1998, Ken Coates, an expelled MEP from the British Labour Party who co-founded the Independent Labour Network, joined the group.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 1999, the German Party of Democratic Socialism (PDS) and the Greek Democratic Social Movement (DIKKI) joined as full members, while the five MEPs elected from the list of the French Trotskyist alliance LO–LCR and the one MEP for the Dutch Socialist Party joined as associate members.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 2002, four MEPs from the French Citizen and Republican Movement and one from the Danish People's Movement against the EU also joined the group.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 2004, no MEPs were elected from LO–LCR, while DIKKI — which was undergoing a dispute with its leader over the party constitution — and the French Citizen and Republican Movement did not put forward candidates. MEPs from the Portuguese Left Block, the Irish Sinn Féin, the Progressive Party of Working People of Cyprus, and the Communist Party of Bohemia and Moravia joined the group. The Danish Socialist People's Party, a member of the Nordic Green Left, left the group to instead sit in the Greens–European Free Alliance group.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 2009, no MEPs were elected from the Italian Communist Refoundation Party and the Finnish Left Alliance. MEPs from the Irish Socialist Party, the Socialist Party of Latvia, and the French Left Party joined the group.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 2013, one MEP from the Croatian Labourists – Labour Party also joined the group.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 2014, no MEPs were elected from the Irish Socialist Party, the Socialist Party of Latvia, and the Croatian Labourists – Labour Party. MEPs from the Spanish Podemos as well as EH Bildu and the Dutch Party for the Animals joined the group, while MEPs from the Italian Communist Refoundation Party and the Finnish Left Alliance re-entered parliament and rejoined. The Communist Party of Greece, a founding member of the group, decided to leave and instead sit as Non-Inscrits.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 2019, no MEPs were elected from the French Communist Party, the Danish People's Movement against the EU, the Dutch Socialist Party, and from the Italian parties The Left and the Communist Refoundation Party. MEPs from the French La France insoumise, the Belgian Workers' Party of Belgium, the German Human Environment Animal Protection, the Irish Independents 4 Change, and the Danish Red-Green Alliance joined the group.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "According to its 1994 constituent declaration, the group is opposed to the present European Union political structure, but it is committed to integration. That declaration sets out three aims for the construction of another European Union: the total change of institutions to make them fully democratic, a break with neoliberal monetarist policies, and a policy of co-development and equitable cooperation. The group wants to disband the North Atlantic Treaty Organization (NATO), and strengthen the Organization for Security and Co-operation in Europe (OSCE).",
"title": "Position"
},
{
"paragraph_id": 12,
"text": "The group is ambiguous between reformism and revolution, leaving it up to each party to decide on the manner they deem best suited to achieve these aims. As such, it has simultaneously positioned itself as insiders within the European institutions, enabling it to influence the decisions made by co-decision; and as outsiders by its willingness to seek another Europe, which would abolish the Maastricht Treaty.",
"title": "Position"
},
{
"paragraph_id": 13,
"text": "The GUE/NGL is a confederal group that is composed of MEPs from national parties. Those national parties must share common political objectives with the group, as specified in the group's constituent declaration. Nevertheless, those national parties, and not the group, retain control of their MEPs; therefore, the group may be divided on certain issues.",
"title": "Organisation"
},
{
"paragraph_id": 14,
"text": "Members of the group meet regularly to prepare for meetings, debate on policies, and vote on resolutions. The group also publishes reports on various topics.",
"title": "Organisation"
},
{
"paragraph_id": 15,
"text": "MEPs may be full or associate members.",
"title": "Organisation"
},
{
"paragraph_id": 16,
"text": "National parties may be full or associate members.",
"title": "Organisation"
},
{
"paragraph_id": 17,
"text": "The initial member parties for the 9th European Parliament were determined at the first meeting on 29 May 2019.",
"title": "Membership"
}
]
| The Left in the European Parliament – GUE/NGL is a left-wing political group of the European Parliament established in 1995. Before January 2021, it was named the European United Left/Nordic Green Left. The group comprises political parties with democratic socialist, communist, and eurosceptic orientation. | 2002-02-25T15:51:15Z | 2023-12-27T06:19:53Z | [
"Template:Composition bar",
"Template:NLD",
"Template:Cite web",
"Template:Use British English",
"Template:Lang-fr",
"Template:UK",
"Template:Party of the European Left",
"Template:Political organisations at European Union level",
"Template:GRC",
"Template:PRT",
"Template:IRL",
"Template:ITA",
"Template:Decrease",
"Template:ESP",
"Template:Main",
"Template:Citation needed",
"Template:Unreferenced section",
"Template:Short description",
"Template:Infobox European Parliament group",
"Template:DEU",
"Template:Use dmy dates",
"Template:Flag",
"Template:FIN",
"Template:LAT",
"Template:Increase",
"Template:CZE",
"Template:DEN",
"Template:SWE",
"Template:Cite book",
"Template:European Parliament groups",
"Template:GER",
"Template:Nowrap",
"Template:CYP",
"Template:Reflist",
"Template:Authority control",
"Template:FRA",
"Template:Update inline"
]
| https://en.wikipedia.org/wiki/The_Left_in_the_European_Parliament_%E2%80%93_GUE/NGL |
9,868 | European Democrats | The European Democrats were a loose association of conservative political parties in Europe. It was a political group in the European Parliament from 1979 until 1992, when it became a subgroup of the European People's Party–European Democrats (EPP-ED) group. The European Democrats continued to exist as a political group in the Parliamentary Assembly of the Council of Europe (PACE) until 2014, when it became the European Conservatives Group.
The European Democratic Group (ED) was formed on 17 July 1979 by British Conservative Party, Danish Conservative People's Party and other MEPs after their success in the 1979 elections. It supplanted the earlier European Conservative Group.
In the late seventies and early eighties, the ED was the third-largest political group in the European Parliament.
However, the group saw its membership fall sharply in the late 1980s, as many centre-right members moved to the rival European People's Party (EPP), dominated by the Christian Democratic Union of Germany (CDU), Italian Christian Democrats and the ideology of Christian democracy in general. The ED had been somewhat further from the political centre and less pro-European than the EPP. Largely isolated, even hardline eurosceptics like Margaret Thatcher conceded that the British Conservatives could not be effectively heard from such a peripheral group.
On 1 May 1992, the ED (now largely composed of UK Conservative Party members) dissolved, and its remaining members were accorded "associated party" status in the European People's Party Group; that is, being part of the parliamentary group without retaining actual membership in the EPP Europarty organisation. This was considered essential for the Conservatives, as the EPP was generally seen as quite favourable to European integration, a stance at odds with their core ideology. The Conservatives' relationship to the EPP would become a sore point in the following years, particularly for the eurosceptic general membership in Britain. Then-leader of the British Conservative Party William Hague hoped to put the issue to rest by negotiating a new arrangement in 1999 by which the EPP's parliamentary group would rebrand itself as the European People's Party–European Democrats (EPP-ED), with the "European Democrat" nomenclature returning after a seven-year hiatus. This was intended to nominally underscore the Conservatives' status apart from the rest of EPP, and it was hoped that with the coming enlargement of the European Union numerous newly involved right-wing parties, averse to the EPP proper for its perceived European federalism, would be willing to instead enter the ED subgroup, growing the overall alignment.
The arrangement proved to do little to appease opposition. Hague's successor, Iain Duncan Smith, made a concerted drive at one point to resurrect the European Democratic Group, but backed off when it became clear that Conservative MEPs would not move voluntarily. The hope that multiple Central and Eastern European parties would join ED also proved to be dubious, as only the Czech Civic Democratic Party took up the offer, with the remainder joining EPP proper or other groups such as Union for Europe of the Nations (UEN) or Independence and Democracy (IND/DEM). Meanwhile, the ED remained a more eurosceptic subgroup within the broader EPP-ED bloc that contributed slightly more than 10% of its total MEPs. It resisted the trend of incorporating as a European political party.
During the 2005 Conservative leadership contest, eventual winner David Cameron pledged to withdraw the Conservatives from the EPP-ED group, while opponent David Davis argued in a letter to the editor of The Daily Telegraph that the current ED arrangement allowed the Conservatives to maintain suitable distance from EPP while still having influence in the largest parliamentary grouping. Conservative/EPP-ED MEP Martin Callanan responded in that paper the following day:
SIR - David Davis (Letter, November 10) is sadly misinformed about our Conservative MEPs' relationship with the European People's Party (EPP) in the European Parliament. He claims that "Conservatives are members of the European Democrat group, which forms an alliance with the EPP". In reality, though, the ED does not exist. It has no staff or money and is, in effect, a discussion group within the EPP. […] Far from being a symbolic step, as Mr Davis suggests, leaving the EPP is the one hard, bankable commitment to have come out of this leadership campaign.
The Czech Civic Democratic Party (ODS), the Law and Justice (PiS) of Poland and the Rally For France party were among the first to discuss forming a breakaway group under the Movement for European Reform. Sir Reg Empey, Leader of the Ulster Unionist Party (UUP), has committed his party thereunto. Its position would be that the European Union should exist, but as a looser supranational organisation than at present, making the group less eurosceptic than the UEN and IND/DEM groups. Some members from the above parties founded a new organization, the Alliance for an Open Europe, in the midst of this debate, with broadly similar objectives.
Cameron initially intended to form the new group in 2006, though this aspiration had to be cancelled due to their main prospective partners, the ODS and PiS, being unable or unwilling to break away from their then-groupings; the new grouping was put on hiatus until the 2009 European elections. By then, new factors—including the collapse of the UEN group—made conditions for forming a new political grouping much more favourable. On 22 June 2009, the founder members of the European Conservatives and Reformists (ECR) group, all signatories of the Prague Declaration announced that they were to leave the EPP-ED, and in virtue of that fact, the European Democrats movement. This announcement ended the 30-year existence of the European Democrats in the European Parliament.
The following political parties were associated with the European Democrats at some point:
The European Democrat Group in the Parliamentary Assembly of the Council of Europe was founded as the Group of Independent Representatives in 1970 by British and Scandinavian members of PACE, having about 35–40 members from the UK, Ireland, Norway, Denmark, Turkey, Sweden and Switzerland. It adopted the European Democrats Group name in September 1980, later becoming the European Conservatives Group in 2014. | [
{
"paragraph_id": 0,
"text": "The European Democrats were a loose association of conservative political parties in Europe. It was a political group in the European Parliament from 1979 until 1992, when it became a subgroup of the European People's Party–European Democrats (EPP-ED) group. The European Democrats continued to exist as a political group in the Parliamentary Assembly of the Council of Europe (PACE) until 2014, when it became the European Conservatives Group.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The European Democratic Group (ED) was formed on 17 July 1979 by British Conservative Party, Danish Conservative People's Party and other MEPs after their success in the 1979 elections. It supplanted the earlier European Conservative Group.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 2,
"text": "In the late seventies and early eighties, the ED was the third-largest political group in the European Parliament.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 3,
"text": "However, the group saw its membership fall sharply in the late 1980s, as many centre-right members moved to the rival European People's Party (EPP), dominated by the Christian Democratic Union of Germany (CDU), Italian Christian Democrats and the ideology of Christian democracy in general. The ED had been somewhat further from the political centre and less pro-European than the EPP. Largely isolated, even hardline eurosceptics like Margaret Thatcher conceded that the British Conservatives could not be effectively heard from such a peripheral group.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 4,
"text": "On 1 May 1992, the ED (now largely composed of UK Conservative Party members) dissolved, and its remaining members were accorded \"associated party\" status in the European People's Party Group; that is, being part of the parliamentary group without retaining actual membership in the EPP Europarty organisation. This was considered essential for the Conservatives, as the EPP was generally seen as quite favourable to European integration, a stance at odds with their core ideology. The Conservatives' relationship to the EPP would become a sore point in the following years, particularly for the eurosceptic general membership in Britain. Then-leader of the British Conservative Party William Hague hoped to put the issue to rest by negotiating a new arrangement in 1999 by which the EPP's parliamentary group would rebrand itself as the European People's Party–European Democrats (EPP-ED), with the \"European Democrat\" nomenclature returning after a seven-year hiatus. This was intended to nominally underscore the Conservatives' status apart from the rest of EPP, and it was hoped that with the coming enlargement of the European Union numerous newly involved right-wing parties, averse to the EPP proper for its perceived European federalism, would be willing to instead enter the ED subgroup, growing the overall alignment.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 5,
"text": "The arrangement proved to do little to appease opposition. Hague's successor, Iain Duncan Smith, made a concerted drive at one point to resurrect the European Democratic Group, but backed off when it became clear that Conservative MEPs would not move voluntarily. The hope that multiple Central and Eastern European parties would join ED also proved to be dubious, as only the Czech Civic Democratic Party took up the offer, with the remainder joining EPP proper or other groups such as Union for Europe of the Nations (UEN) or Independence and Democracy (IND/DEM). Meanwhile, the ED remained a more eurosceptic subgroup within the broader EPP-ED bloc that contributed slightly more than 10% of its total MEPs. It resisted the trend of incorporating as a European political party.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 6,
"text": "During the 2005 Conservative leadership contest, eventual winner David Cameron pledged to withdraw the Conservatives from the EPP-ED group, while opponent David Davis argued in a letter to the editor of The Daily Telegraph that the current ED arrangement allowed the Conservatives to maintain suitable distance from EPP while still having influence in the largest parliamentary grouping. Conservative/EPP-ED MEP Martin Callanan responded in that paper the following day:",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 7,
"text": "SIR - David Davis (Letter, November 10) is sadly misinformed about our Conservative MEPs' relationship with the European People's Party (EPP) in the European Parliament. He claims that \"Conservatives are members of the European Democrat group, which forms an alliance with the EPP\". In reality, though, the ED does not exist. It has no staff or money and is, in effect, a discussion group within the EPP. […] Far from being a symbolic step, as Mr Davis suggests, leaving the EPP is the one hard, bankable commitment to have come out of this leadership campaign.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 8,
"text": "The Czech Civic Democratic Party (ODS), the Law and Justice (PiS) of Poland and the Rally For France party were among the first to discuss forming a breakaway group under the Movement for European Reform. Sir Reg Empey, Leader of the Ulster Unionist Party (UUP), has committed his party thereunto. Its position would be that the European Union should exist, but as a looser supranational organisation than at present, making the group less eurosceptic than the UEN and IND/DEM groups. Some members from the above parties founded a new organization, the Alliance for an Open Europe, in the midst of this debate, with broadly similar objectives.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 9,
"text": "Cameron initially intended to form the new group in 2006, though this aspiration had to be cancelled due to their main prospective partners, the ODS and PiS, being unable or unwilling to break away from their then-groupings; the new grouping was put on hiatus until the 2009 European elections. By then, new factors—including the collapse of the UEN group—made conditions for forming a new political grouping much more favourable. On 22 June 2009, the founder members of the European Conservatives and Reformists (ECR) group, all signatories of the Prague Declaration announced that they were to leave the EPP-ED, and in virtue of that fact, the European Democrats movement. This announcement ended the 30-year existence of the European Democrats in the European Parliament.",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 10,
"text": "The following political parties were associated with the European Democrats at some point:",
"title": "European Democrats in the European Parliament"
},
{
"paragraph_id": 11,
"text": "The European Democrat Group in the Parliamentary Assembly of the Council of Europe was founded as the Group of Independent Representatives in 1970 by British and Scandinavian members of PACE, having about 35–40 members from the UK, Ireland, Norway, Denmark, Turkey, Sweden and Switzerland. It adopted the European Democrats Group name in September 1980, later becoming the European Conservatives Group in 2014.",
"title": "European Democrats in PACE (Parliamentary Assembly of the Council of Europe)"
}
]
| The European Democrats were a loose association of conservative political parties in Europe. It was a political group in the European Parliament from 1979 until 1992, when it became a subgroup of the European People's Party–European Democrats (EPP-ED) group. The European Democrats continued to exist as a political group in the Parliamentary Assembly of the Council of Europe (PACE) until 2014, when it became the European Conservatives Group. | 2002-02-25T15:51:15Z | 2023-10-30T01:02:57Z | [
"Template:Blockquote",
"Template:Flag",
"Template:Webarchive",
"Template:Portal",
"Template:European Parliament groups",
"Template:Short description",
"Template:Other uses",
"Template:Cite web",
"Template:Dead link",
"Template:Conservatism footer",
"Template:Infobox European Parliament group",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/European_Democrats |
9,869 | Epistle to the Ephesians | The Epistle to the Ephesians is the tenth book of the New Testament. According to its text, the letter was written by Paul the Apostle, an attribution that Christians traditionally accepted. However, starting in 1792, some scholars have claimed the letter is actually Deutero-Pauline, meaning that it is pseudepigrapha written in Paul's name by a later author strongly influenced by Paul's thought. According to one scholarly source, the letter was probably written "by a loyal disciple to sum up Paul's teaching and to apply it to a new situation fifteen to twenty-five years after the Apostle's death".
According to New Testament scholar Daniel Wallace, the theme may be stated pragmatically as "Christians, get along with each other! Maintain the unity practically which Christ has effected positionally by his death."
Another major theme in Ephesians is the keeping of Christ's body (that is, the Church) pure and holy.
Therefore be imitators of God, as beloved children. And walk in love, as Christ loved us and gave himself up for us, a fragrant offering and sacrifice to God.
In the second part of the letter, Ephesians 4:17–6:20, the author gives practical advice in how to live a holy, pure, and Christ-inspired lifestyle.
According to tradition, the Apostle Paul wrote the letter while he was in prison in Rome (around AD 62). This would be about the same time as the Epistle to the Colossians (which in many points it resembles) and the Epistle to Philemon. However, many critical scholars have questioned the authorship of the letter and suggest that it may have been written between AD 80 and 100.
The first verse in the letter identifies Paul as its author. While early lists of New Testament books, including the Muratorian fragment and possibly Marcion's canon (if it is to be equated with the Epistle to the Laodiceans), attribute the letter to Paul, more recently there have been challenges to Pauline authorship on the basis of the letter's characteristically non-Pauline syntax, terminology, and eschatology.
Biblical scholar Harold Hoehner, surveying 279 commentaries written between 1519 and 2001, found that 54% favored Pauline authorship, 39% concluded against Pauline authorship and 7% remained uncertain. Norman Perrin and Dennis C. Duling found that of six authoritative scholarly references, "four of the six decide for pseudonymity, and the other two (Peake's Commentary on the Bible and the Jerome Biblical Commentary) recognize the difficulties in maintaining Pauline authorship. Indeed, the difficulties are insurmountable." Bible scholar Raymond E. Brown asserts that about 80% of critical scholarship judges that Paul did not write Ephesians.
There are four main theories in biblical scholarship that address the question of Pauline authorship.
While most English translations indicate that the letter was addressed to "the saints who are in Ephesus" (1:1), the words "in Ephesus" do not appear in the best and earliest manuscripts of the letter, leading most textual critics, like Bart Ehrman, to regard the words as an interpolation. This lack of any internal references to Ephesus in the early manuscripts may have led Marcion, a second-century heresiarch who created the first New Testament canon, to believe that the letter was actually addressed to the church at Laodicea. For details see Epistle to the Laodiceans.
Furthermore, if Paul is regarded as the author, the impersonal character of the letter, which lacks personal greetings or any indication that the author has personal knowledge of his recipients, is incongruous with the account in Acts of Paul staying more than two years in Ephesus. For these reasons, most regard Ephesians to be a circular letter intended for many churches. The Jerusalem Bible notes that some critics think the words "who are" would have been followed by a blank to be filled in with the name of "whichever church was being sent the letter".
If Paul was the author of the letter, then it was probably written from Rome during Paul's first imprisonment, and probably soon after his arrival there in the year 62, four years after he had parted with the Ephesian elders at Miletus. However, scholars who dispute Paul's authorship date the letter to between 70 and 80 AD. In the latter case, the possible location of the authorship could have been within the church of Ephesus itself. Ignatius of Antioch seemed to be very well versed in the epistle to the Ephesians, and mirrors many of his own thoughts in his own epistle to the Ephesians.
Ephesians contains:
Paul's first and hurried visit for the space of three months to Ephesus is recorded in Acts 18:19–21. The work he began on this occasion was carried forward by Apollos and Aquila and Priscilla. On his second visit early in the following year, he remained at Ephesus "three years", for he found it was the key to the western provinces of Asia Minor. Here "a great door and effectual" was opened to him, and the church was established and strengthened by his diligent labours there. From Ephesus the gospel spread abroad "almost throughout all Asia." The word "mightily grew and prevailed" despite all the opposition and persecution he encountered.
On his last journey to Jerusalem, the apostle landed at Miletus and, summoning together the elders of the church from Ephesus, delivered to them a farewell charge, expecting to see them no more.
The following parallels between this epistle and the Milesian charge may be traced:
The purpose of the epistle, and to whom it was written, are matters of much speculation. It was regarded by C.H. Dodd as the "crown of Paulinism." In general, it is born out of its particular socio-historical context and the situational context of both the author and the audience. Originating in the circumstance of a multicultural church (primarily Jewish and Hellenistic), the author addressed issues appropriate to the diverse religious and cultural backgrounds present in the community.
The author exhorts the church repeatedly to embrace a specific view of salvation, which he then explicates.
Frank Charles Thompson argues that the main theme of Ephesians is in response to the newly converted Jews who often separated themselves from their Gentile brethren. The unity of the church, especially between Jew and Gentile believers, is the keynote of the book.
Ephesians is notable for its domestic code treatment in Ephesians 5:22–6:9, covering husband-wife, parent-child, and master-slave relationships. In Ephesians 5:22, wives are urged to submit to their husbands, and husbands to love their wives "as Christ loved the Church." Christian Egalitarian theologians, such as Katharine Bushnell and Jessie Penn-Lewis, interpret these commands in the context of the preceding verse, for all Christians to "submit to one another." Thus, it is two-way, mutual submission of both husbands to wives and wives to husbands. But according to Peter O'Brien, Professor Emeritus at Moore Theological College, this would be the only instance of this meaning of submission in the whole New Testament, indeed in any extant comparable Greek texts; by O'Brien's account, the word simply does not connote mutuality. Dallas Theological Seminary professor Daniel Wallace understands it to be an extension of Ephesians 5:15-21 on being filled by the Holy Spirit.
In the period leading up to the American Civil War (1861–65), Ephesians 6:5 on master-slave relationships was one of the Bible verses used by Confederate slaveholders in support of a slaveholding position. | [
{
"paragraph_id": 0,
"text": "The Epistle to the Ephesians is the tenth book of the New Testament. According to its text, the letter was written by Paul the Apostle, an attribution that Christians traditionally accepted. However, starting in 1792, some scholars have claimed the letter is actually Deutero-Pauline, meaning that it is pseudepigrapha written in Paul's name by a later author strongly influenced by Paul's thought. According to one scholarly source, the letter was probably written \"by a loyal disciple to sum up Paul's teaching and to apply it to a new situation fifteen to twenty-five years after the Apostle's death\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "According to New Testament scholar Daniel Wallace, the theme may be stated pragmatically as \"Christians, get along with each other! Maintain the unity practically which Christ has effected positionally by his death.\"",
"title": "Themes"
},
{
"paragraph_id": 2,
"text": "Another major theme in Ephesians is the keeping of Christ's body (that is, the Church) pure and holy.",
"title": "Themes"
},
{
"paragraph_id": 3,
"text": "Therefore be imitators of God, as beloved children. And walk in love, as Christ loved us and gave himself up for us, a fragrant offering and sacrifice to God.",
"title": "Themes"
},
{
"paragraph_id": 4,
"text": "In the second part of the letter, Ephesians 4:17–6:20, the author gives practical advice in how to live a holy, pure, and Christ-inspired lifestyle.",
"title": "Themes"
},
{
"paragraph_id": 5,
"text": "According to tradition, the Apostle Paul wrote the letter while he was in prison in Rome (around AD 62). This would be about the same time as the Epistle to the Colossians (which in many points it resembles) and the Epistle to Philemon. However, many critical scholars have questioned the authorship of the letter and suggest that it may have been written between AD 80 and 100.",
"title": "Composition"
},
{
"paragraph_id": 6,
"text": "The first verse in the letter identifies Paul as its author. While early lists of New Testament books, including the Muratorian fragment and possibly Marcion's canon (if it is to be equated with the Epistle to the Laodiceans), attribute the letter to Paul, more recently there have been challenges to Pauline authorship on the basis of the letter's characteristically non-Pauline syntax, terminology, and eschatology.",
"title": "Composition"
},
{
"paragraph_id": 7,
"text": "Biblical scholar Harold Hoehner, surveying 279 commentaries written between 1519 and 2001, found that 54% favored Pauline authorship, 39% concluded against Pauline authorship and 7% remained uncertain. Norman Perrin and Dennis C. Duling found that of six authoritative scholarly references, \"four of the six decide for pseudonymity, and the other two (Peake's Commentary on the Bible and the Jerome Biblical Commentary) recognize the difficulties in maintaining Pauline authorship. Indeed, the difficulties are insurmountable.\" Bible scholar Raymond E. Brown asserts that about 80% of critical scholarship judges that Paul did not write Ephesians.",
"title": "Composition"
},
{
"paragraph_id": 8,
"text": "There are four main theories in biblical scholarship that address the question of Pauline authorship.",
"title": "Composition"
},
{
"paragraph_id": 9,
"text": "While most English translations indicate that the letter was addressed to \"the saints who are in Ephesus\" (1:1), the words \"in Ephesus\" do not appear in the best and earliest manuscripts of the letter, leading most textual critics, like Bart Ehrman, to regard the words as an interpolation. This lack of any internal references to Ephesus in the early manuscripts may have led Marcion, a second-century heresiarch who created the first New Testament canon, to believe that the letter was actually addressed to the church at Laodicea. For details see Epistle to the Laodiceans.",
"title": "Composition"
},
{
"paragraph_id": 10,
"text": "Furthermore, if Paul is regarded as the author, the impersonal character of the letter, which lacks personal greetings or any indication that the author has personal knowledge of his recipients, is incongruous with the account in Acts of Paul staying more than two years in Ephesus. For these reasons, most regard Ephesians to be a circular letter intended for many churches. The Jerusalem Bible notes that some critics think the words \"who are\" would have been followed by a blank to be filled in with the name of \"whichever church was being sent the letter\".",
"title": "Composition"
},
{
"paragraph_id": 11,
"text": "If Paul was the author of the letter, then it was probably written from Rome during Paul's first imprisonment, and probably soon after his arrival there in the year 62, four years after he had parted with the Ephesian elders at Miletus. However, scholars who dispute Paul's authorship date the letter to between 70 and 80 AD. In the latter case, the possible location of the authorship could have been within the church of Ephesus itself. Ignatius of Antioch seemed to be very well versed in the epistle to the Ephesians, and mirrors many of his own thoughts in his own epistle to the Ephesians.",
"title": "Composition"
},
{
"paragraph_id": 12,
"text": "Ephesians contains:",
"title": "Outline"
},
{
"paragraph_id": 13,
"text": "Paul's first and hurried visit for the space of three months to Ephesus is recorded in Acts 18:19–21. The work he began on this occasion was carried forward by Apollos and Aquila and Priscilla. On his second visit early in the following year, he remained at Ephesus \"three years\", for he found it was the key to the western provinces of Asia Minor. Here \"a great door and effectual\" was opened to him, and the church was established and strengthened by his diligent labours there. From Ephesus the gospel spread abroad \"almost throughout all Asia.\" The word \"mightily grew and prevailed\" despite all the opposition and persecution he encountered.",
"title": "Founding of the church at Ephesus"
},
{
"paragraph_id": 14,
"text": "On his last journey to Jerusalem, the apostle landed at Miletus and, summoning together the elders of the church from Ephesus, delivered to them a farewell charge, expecting to see them no more.",
"title": "Founding of the church at Ephesus"
},
{
"paragraph_id": 15,
"text": "The following parallels between this epistle and the Milesian charge may be traced:",
"title": "Founding of the church at Ephesus"
},
{
"paragraph_id": 16,
"text": "The purpose of the epistle, and to whom it was written, are matters of much speculation. It was regarded by C.H. Dodd as the \"crown of Paulinism.\" In general, it is born out of its particular socio-historical context and the situational context of both the author and the audience. Originating in the circumstance of a multicultural church (primarily Jewish and Hellenistic), the author addressed issues appropriate to the diverse religious and cultural backgrounds present in the community.",
"title": "Purpose"
},
{
"paragraph_id": 17,
"text": "The author exhorts the church repeatedly to embrace a specific view of salvation, which he then explicates.",
"title": "Purpose"
},
{
"paragraph_id": 18,
"text": "Frank Charles Thompson argues that the main theme of Ephesians is in response to the newly converted Jews who often separated themselves from their Gentile brethren. The unity of the church, especially between Jew and Gentile believers, is the keynote of the book.",
"title": "Purpose"
},
{
"paragraph_id": 19,
"text": "Ephesians is notable for its domestic code treatment in Ephesians 5:22–6:9, covering husband-wife, parent-child, and master-slave relationships. In Ephesians 5:22, wives are urged to submit to their husbands, and husbands to love their wives \"as Christ loved the Church.\" Christian Egalitarian theologians, such as Katharine Bushnell and Jessie Penn-Lewis, interpret these commands in the context of the preceding verse, for all Christians to \"submit to one another.\" Thus, it is two-way, mutual submission of both husbands to wives and wives to husbands. But according to Peter O'Brien, Professor Emeritus at Moore Theological College, this would be the only instance of this meaning of submission in the whole New Testament, indeed in any extant comparable Greek texts; by O'Brien's account, the word simply does not connote mutuality. Dallas Theological Seminary professor Daniel Wallace understands it to be an extension of Ephesians 5:15-21 on being filled by the Holy Spirit.",
"title": "Interpretations"
},
{
"paragraph_id": 20,
"text": "In the period leading up to the American Civil War (1861–65), Ephesians 6:5 on master-slave relationships was one of the Bible verses used by Confederate slaveholders in support of a slaveholding position.",
"title": "Interpretations"
}
]
| The Epistle to the Ephesians is the tenth book of the New Testament. According to its text, the letter was written by Paul the Apostle, an attribution that Christians traditionally accepted. However, starting in 1792, some scholars have claimed the letter is actually Deutero-Pauline, meaning that it is pseudepigrapha written in Paul's name by a later author strongly influenced by Paul's thought. According to one scholarly source, the letter was probably written "by a loyal disciple to sum up Paul's teaching and to apply it to a new situation fifteen to twenty-five years after the Apostle's death". | 2001-10-01T02:54:30Z | 2023-09-17T19:50:30Z | [
"Template:Wikisource",
"Template:Citation",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Efn",
"Template:Books of the New Testament",
"Template:Main",
"Template:Rp",
"Template:Bibleref2-nb",
"Template:S-end",
"Template:Authority control",
"Template:Short description",
"Template:See also",
"Template:Citation needed",
"Template:Bibleref2",
"Template:Epistle to the Ephesians",
"Template:Books of the Bible",
"Template:Paul",
"Template:Bibleverse",
"Template:Bibleverse-nb",
"Template:Librivox book",
"Template:S-start",
"Template:S-hou",
"Template:S-ttl",
"Template:Use dmy dates",
"Template:Cite journal",
"Template:Notelist",
"Template:EBD",
"Template:Wikiquote",
"Template:S-bef",
"Template:S-aft",
"Template:Anchor",
"Template:ISBN",
"Template:Cite AmCyc",
"Template:Blockquote"
]
| https://en.wikipedia.org/wiki/Epistle_to_the_Ephesians |
9,872 | Electric bus (disambiguation) | Electric bus is a bus powered by electric energy. "Electric bus" can also refer to: | [
{
"paragraph_id": 0,
"text": "Electric bus is a bus powered by electric energy. \"Electric bus\" can also refer to:",
"title": ""
}
]
| Electric bus is a bus powered by electric energy. "Electric bus" can also refer to: Bus (computing), used for connecting components of a computer or communication between computers
Busbars, thick conductors used in electrical substations
In power engineering, a "bus" is any graph node of the single-line diagram at which voltage, current, power flow, or other quantities are to be evaluated. This may correspond to the physical busbars in a substation.
A ground bus or earth bus is a conductor used as a zero voltage reference in a system, often connected to ground or earth.
In professional audio, bus refers to a place in the audio signal chain where one can hear a mix of different audio signals—usually at the output of a mixing console. | 2022-06-06T04:27:17Z | [
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Electric_bus_(disambiguation) |
|
9,875 | Exploit (computer security) | An exploit (from the English verb to exploit, meaning "to use something to one’s own advantage") is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug or vulnerability to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized). Such behavior frequently includes things like gaining control of a computer system, allowing privilege escalation, or a denial-of-service (DoS or related DDoS) attack. In lay terms, an exploit is akin to a 'hack'.
There are several methods of classifying exploits. The most common is by how the exploit communicates to the vulnerable software.
A remote exploit works over a network and exploits the security vulnerability without any prior access to the vulnerable system.
A local exploit requires prior access to the vulnerable system and usually increases the privileges of the person running the exploit past those granted by the system administrator. Exploits against client applications also exist, usually consisting of modified servers that send an exploit if accessed with a client application. A common form of exploits against client applications are browser exploits.
Exploits against client applications may also require some interaction with the user and thus may be used in combination with the social engineering method. Another classification is by the action against the vulnerable system; unauthorized data access, arbitrary code execution, and denial of service are examples.
Many exploits are designed to provide superuser-level access to a computer system. However, it is also possible to use several exploits, first to gain low-level access, then to escalate privileges repeatedly until one reaches the highest administrative level (often called "root"). In this case the attacker is chaining several exploits together to perform one attack, this is known as an exploit chain.
After an exploit is made known to the authors of the affected software, the vulnerability is often fixed through a patch and the exploit becomes unusable. That is the reason why some black hat hackers as well as military or intelligence agencies' hackers do not publish their exploits but keep them private.
Exploits unknown to everyone except the people that found and developed them are referred to as zero day or “0day” exploits.
Exploits are used by hackers to bypass security controls and manipulate system vulnerabilities. Researchers have estimated that this costs the global economy over $450 billion every year. In response, organizations are using cyber threat intelligence to protect their vulnerabilities.
Exploitations are commonly categorized and named by the type of vulnerability they exploit (see vulnerabilities for a list), whether they are local/remote and the result of running the exploit (e.g. EoP, DoS, spoofing). One scheme that offers zero day exploits is exploit as a service.
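As an illustration only, the classification axes described above (vulnerability type, local/remote access vector, and resulting impact) can be sketched as a small record type for cataloguing vulnerability reports defensively. This is a hypothetical structure written for this article; the class, field, and value names below are invented for the example and do not correspond to any specific standard, database, or library.

from dataclasses import dataclass
from enum import Enum

class AccessVector(Enum):
    LOCAL = "local"    # requires prior access to the vulnerable system
    REMOTE = "remote"  # works over a network without prior access

class Impact(Enum):
    PRIVILEGE_ESCALATION = "EoP"
    DENIAL_OF_SERVICE = "DoS"
    SPOOFING = "spoofing"
    CODE_EXECUTION = "arbitrary code execution"
    DATA_DISCLOSURE = "unauthorized data access"

@dataclass
class ExploitRecord:
    """One catalogued exploit, classified along the axes named in the text."""
    vulnerability_type: str        # e.g. "buffer overflow"; free text in this sketch
    access_vector: AccessVector
    impact: Impact
    patched: bool = False          # set to True once the vendor ships a fix

# Example entry: a remote denial-of-service issue that has since been patched.
example = ExploitRecord("memory exhaustion", AccessVector.REMOTE, Impact.DENIAL_OF_SERVICE, patched=True)

Used this way, the structure only catalogues metadata about a flaw; it contains no exploit logic itself.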
A zero-click attack is an exploit that requires no user interaction to operate – that is to say, no key-presses or mouse clicks. FORCEDENTRY, discovered in 2021, is an example of a zero-click attack.
These exploits are commonly the most sought after exploits (specifically on the underground exploit market) because the target typically has no way of knowing they have been compromised at the time of exploitation.
In 2022, NSO Group was reportedly selling zero-click exploits to governments for breaking into individuals' phones.
Pivoting is a method used by hackers and penetration testers to expand the attack surface of a target organization. A compromised system can be used to attack other systems on the same network that are not directly reachable from the Internet due to restrictions such as a firewall. There tend to be more machines reachable from inside a network than Internet-facing hosts. For example, if an attacker compromises a web server on a corporate network, the attacker can then use the compromised web server to attack any reachable system on the network. These types of attacks are often called multi-layered attacks. Pivoting is also known as island hopping.
Pivoting can further be distinguished into proxy pivoting and VPN pivoting:
Typically, the proxy or VPN applications enabling pivoting are executed on the target computer as the payload of an exploit.
Pivoting is usually done by infiltrating a part of a network infrastructure (as an example, a vulnerable printer or thermostat) and using a scanner to find other devices connected to attack them. By attacking a vulnerable piece of networking, an attacker could infect most or all of a network and gain complete control. | [
{
"paragraph_id": 0,
"text": "An exploit (from the English verb to exploit, meaning \"to use something to one’s own advantage\") is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug or vulnerability to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized). Such behavior frequently includes things like gaining control of a computer system, allowing privilege escalation, or a denial-of-service (DoS or related DDoS) attack. In lay terms, an exploit is akin to a 'hack'.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are several methods of classifying exploits. The most common is by how the exploit communicates to the vulnerable software.",
"title": "Classification"
},
{
"paragraph_id": 2,
"text": "A remote exploit works over a network and exploits the security vulnerability without any prior access to the vulnerable system.",
"title": "Classification"
},
{
"paragraph_id": 3,
"text": "A local exploit requires prior access to the vulnerable system and usually increases the privileges of the person running the exploit past those granted by the system administrator. Exploits against client applications also exist, usually consisting of modified servers that send an exploit if accessed with a client application. A common form of exploits against client applications are browser exploits.",
"title": "Classification"
},
{
"paragraph_id": 4,
"text": "Exploits against client applications may also require some interaction with the user and thus may be used in combination with the social engineering method. Another classification is by the action against the vulnerable system; unauthorized data access, arbitrary code execution, and denial of service are examples.",
"title": "Classification"
},
{
"paragraph_id": 5,
"text": "Many exploits are designed to provide superuser-level access to a computer system. However, it is also possible to use several exploits, first to gain low-level access, then to escalate privileges repeatedly until one reaches the highest administrative level (often called \"root\"). In this case the attacker is chaining several exploits together to perform one attack, this is known as an exploit chain.",
"title": "Classification"
},
{
"paragraph_id": 6,
"text": "After an exploit is made known to the authors of the affected software, the vulnerability is often fixed through a patch and the exploit becomes unusable. That is the reason why some black hat hackers as well as military or intelligence agencies' hackers do not publish their exploits but keep them private.",
"title": "Classification"
},
{
"paragraph_id": 7,
"text": "Exploits unknown to everyone except the people that found and developed them are referred to as zero day or “0day” exploits.",
"title": "Classification"
},
{
"paragraph_id": 8,
"text": "Exploits are used by hackers to bypass security controls and manipulate system vulnerabilities. Researchers have estimated that this costs the global economy over $450 billion every year. In response, organizations are using cyber threat intelligence to protect their vulnerabilities.",
"title": "Classification"
},
{
"paragraph_id": 9,
"text": "Exploitations are commonly categorized and named by the type of vulnerability they exploit (see vulnerabilities for a list), whether they are local/remote and the result of running the exploit (e.g. EoP, DoS, spoofing). One scheme that offers zero day exploits is exploit as a service.",
"title": "Classification"
},
{
"paragraph_id": 10,
"text": "A zero-click attack is an exploit that requires no user interaction to operate – that is to say, no key-presses or mouse clicks. FORCEDENTRY, discovered in 2021, is an example of a zero-click attack.",
"title": "Classification"
},
{
"paragraph_id": 11,
"text": "These exploits are commonly the most sought after exploits (specifically on the underground exploit market) because the target typically has no way of knowing they have been compromised at the time of exploitation.",
"title": "Classification"
},
{
"paragraph_id": 12,
"text": "In 2022, NSO Group was reportedly selling zero-click exploits to governments for breaking into individuals' phones.",
"title": "Classification"
},
{
"paragraph_id": 13,
"text": "Pivoting is a method used by hackers and penetration testers to expand the attack surface of a target organization. A compromised system can be used to attack other systems on the same network that are not directly reachable from the Internet due to restrictions such as a firewall. There tend to be more machines reachable from inside a network than Internet-facing hosts. For example, if an attacker compromises a web server on a corporate network, the attacker can then use the compromised web server to attack any reachable system on the network. These types of attacks are often called multi-layered attacks. Pivoting is also known as island hopping.",
"title": "Classification"
},
{
"paragraph_id": 14,
"text": "Pivoting can further be distinguished into proxy pivoting and VPN pivoting:",
"title": "Classification"
},
{
"paragraph_id": 15,
"text": "Typically, the proxy or VPN applications enabling pivoting are executed on the target computer as the payload of an exploit.",
"title": "Classification"
},
{
"paragraph_id": 16,
"text": "Pivoting is usually done by infiltrating a part of a network infrastructure (as an example, a vulnerable printer or thermostat) and using a scanner to find other devices connected to attack them. By attacking a vulnerable piece of networking, an attacker could infect most or all of a network and gain complete control.",
"title": "Classification"
}
]
| An exploit is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug or vulnerability to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic. Such behavior frequently includes things like gaining control of a computer system, allowing privilege escalation, or a denial-of-service attack. In lay terms, some exploit is akin to a 'hack'. | 2002-02-25T15:43:11Z | 2023-12-18T17:06:15Z | [
"Template:Cite magazine",
"Template:Cite news",
"Template:Commonscat-inline",
"Template:Authority control",
"Template:Reflist",
"Template:Clarify",
"Template:Cite web",
"Template:Cite journal",
"Template:Information security",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Exploit_(computer_security) |
9,877 | Erg | The erg is a unit of energy equal to 10⁻⁷ joules (100 nJ). It originated in the Centimetre–gram–second system of units (CGS). It has the symbol erg. The erg is not an SI unit. Its name is derived from ergon (ἔργον), a Greek word meaning 'work' or 'task'.
An erg is the amount of work done by a force of one dyne exerted for a distance of one centimetre. In the CGS base units, it is equal to one gram centimetre-squared per second-squared (g⋅cm²/s²). It is thus equal to 10⁻⁷ joules or 100 nanojoules (nJ) in SI units.
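As a worked check of the conversion, using only the CGS and SI definitions already stated in this paragraph (added for clarity, not from an additional source):

$$1\,\mathrm{erg} = 1\,\mathrm{dyn}\cdot\mathrm{cm} = 1\,\mathrm{g}\cdot\mathrm{cm}^{2}\,\mathrm{s}^{-2} = (10^{-3}\,\mathrm{kg})\,(10^{-2}\,\mathrm{m})^{2}\,\mathrm{s}^{-2} = 10^{-7}\,\mathrm{J} = 100\,\mathrm{nJ}$$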
In 1864, Rudolf Clausius proposed the Greek word ἐργον (ergon) for the unit of energy, work and heat. In 1873, a committee of the British Association for the Advancement of Science, including British physicists James Clerk Maxwell and William Thomson recommended the general adoption of the centimetre, the gramme, and the second as fundamental units (C.G.S. System of Units). To distinguish derived units, they recommended using the prefix "C.G.S. unit of ..." and requested that the word erg or ergon be strictly limited to refer to the C.G.S. unit of energy.
In 1922, chemist William Draper Harkins proposed the name micri-erg as a convenient unit to measure the surface energy of molecules in surface chemistry. It would equate to 10−14 erg, the equivalent of 10−21 joule.
The erg is not a part of the International System of Units (SI), which has been recommended since 1 January 1978 when the European Economic Community ratified a directive of 1971 that implemented SI as agreed by the General Conference of Weights and Measures. It is the unit of energy in Gaussian units, which are widely used in astrophysics, applications involving microscopic problems and relativistic electrodynamics, and sometimes in mechanics. | [
{
"paragraph_id": 0,
"text": "The erg is a unit of energy equal to 10 joules (100 nJ). It originated in the Centimetre–gram–second system of units (CGS). It has the symbol erg. The erg is not an SI unit. Its name is derived from ergon (ἔργον), a Greek word meaning 'work' or 'task'.",
"title": ""
},
{
"paragraph_id": 1,
"text": "An erg is the amount of work done by a force of one dyne exerted for a distance of one centimetre. In the CGS base units, it is equal to one gram centimetre-squared per second-squared (g⋅cm/s). It is thus equal to 10 joules or 100 nanojoules (nJ) in SI units.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 1864, Rudolf Clausius proposed the Greek word ἐργον (ergon) for the unit of energy, work and heat. In 1873, a committee of the British Association for the Advancement of Science, including British physicists James Clerk Maxwell and William Thomson recommended the general adoption of the centimetre, the gramme, and the second as fundamental units (C.G.S. System of Units). To distinguish derived units, they recommended using the prefix \"C.G.S. unit of ...\" and requested that the word erg or ergon be strictly limited to refer to the C.G.S. unit of energy.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In 1922, chemist William Draper Harkins proposed the name micri-erg as a convenient unit to measure the surface energy of molecules in surface chemistry. It would equate to 10 erg, the equivalent to 10 joule.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The erg is not a part of the International System of Units (SI), which have been recommended since 1 January 1978 when the European Economic Community ratified a directive of 1971 that implemented SI as agreed by the General Conference of Weights and Measures. It the unit of energy in Gaussian units, which are widely used in astrophysics, applications involving microscopic problems and relativistic electrodynamics, and sometimes in mechanics.",
"title": "History"
}
]
| The erg is a unit of energy equal to 10−7 joules (100 nJ). It originated in the Centimetre–gram–second system of units (CGS). It has the symbol erg. The erg is not an SI unit. Its name is derived from ergon (ἔργον), a Greek word meaning 'work' or 'task'. An erg is the amount of work done by a force of one dyne exerted for a distance of one centimetre. In the CGS base units, it is equal to one gram centimetre-squared per second-squared (g⋅cm2/s2). It is thus equal to 10−7 joules or 100 nanojoules (nJ) in SI units. 1 erg = 10−7 J = 100 nJ
1 erg = 10−10 sn⋅m = 100 psn⋅m = 100 picosthène-metres
1 erg = 624.15 GeV = 6.2415×1011 eV
1 erg = 1 dyn⋅cm = 1 g⋅cm2/s2
1 erg = 2.77778×10−11 W⋅h | 2001-10-01T18:38:40Z | 2023-11-28T21:37:51Z | [
"Template:Nbsp",
"Template:Better source needed",
"Template:Cite web",
"Template:CGS units",
"Template:Short description",
"Template:Hatgrp",
"Template:Transl",
"Template:Cn",
"Template:Cite journal",
"Template:Infobox unit",
"Template:Val",
"Template:Cvt",
"Template:Cite book",
"Template:Cite conference",
"Template:Use British English Oxford spelling",
"Template:Lang"
]
| https://en.wikipedia.org/wiki/Erg |
9,878 | Everway | Everway is a fantasy role-playing game first published by Wizards of the Coast under their Alter Ego brand in 1995. Its lead designer was Jonathan Tweet. Marketed as a "Visionary Roleplaying Game", it has often been characterized as an innovative piece with a limited commercial success. Wizards later abandoned the line, and Rubicon Games purchased it, and published several supplements. The line was sold again to Gaslight Press in February 2001. The line is currently with The Everway Company, which has released a Silver Anniversary Edition.
The game has a fantasy setting of the multiverse type, with many different worlds, some of which differed from generic fantasy. It appears to have been heavily influenced by divinatory tarot, the four classical elements of ancient Greece, and mythologies from around the world.
Everway was first with implementing, in a commercial game, several new concepts including much more picture-based/visual source material and character creation than usual. Like other works by Jonathan Tweet, the rules are very simple and flexible. It is also one of a few diceless role-playing games. The Fortune Deck works as a randomizer and inspirational tool, and the results obtained by it are highly subjective. In order to clarify their use, Tweet coined some new vocabulary to describe and formalize methods of gamemaster adjudication; these terms have been adopted by the wider tabletop RPG community. Tweet's adjudication terms are: Karma (making a decision based on character abilities, tactics, and the internal logic of a fictional situation), Drama (making a decision based on what moves the story along), and Fortune (letting a randomizer — drawing a card in Everway, but could also refer to rolling dice in other games — determine the outcome).
Everway was a boxed set designed by Jonathan Tweet, Jenny Scott, Aron Anderson, Scott Hungerford, Kathy Ice, Bob Kruger, and John Tynes, with illustrations by Doug Alexander, Rick Berry, Daniel Gelon, Janine Johnston, Hannibal King, Scott Kirschner, Ed Lee, John Matson, Martin McKenna, Ian Miller, Jeff Miracola, Roger Raupp, Andrew Robinson, Christopher Rush, and Amy Weber, and cover art by Susan Harris
The components included:
The official setting for Everway revolves around heroes with the power of "spherewalking," traveling between worlds called "spheres." Spheres typically consist of many "realms." The city of Everway is located in a realm called Roundwander, in the sphere called Fourcorner. Roundwander is the only realm in Fourcorner that is described. There is some detail on the sphere's main city, Everway, which contains a stone pyramid, a set of family-oriented guilds, and various exotic events related to the city's position as an inter-dimensional trading center. Several dozen other spheres are described as one-sentence blurbs, a few as page-long summaries, and one in detail as the setting for a sample adventure, "Journey to Stonekeep." The theme is strongly fantasy-oriented as opposed to science fictional, with advanced technology explicitly forbidden in the character creation rules. The authors gave significant thought to anthropology by describing how the people of various spheres live, including many similarities across cultures. Some of these common features are entirely realistic (language, art), and others plainly related to the game's fantasy elements (magic, knowledge of the Fortune Deck). Nearly all spheres are inhabited by humans, with mostly realistic physics.
Character design is abstract and simple by most role-playing games' standards. Each character begins with twenty points to divide between four Element scores roughly equivalent to statistics for Strength (Fire), Perception (Water), Intelligence (Air) and Endurance (Earth). Scores range from 1 (pathetic) to 3 (average) to 10 (godlike), so a generic hero would have scores of 5. Each Element also has a specialty for which a character can get a 1-point bonus; e.g., a 5-Air hero with an Air specialty of "Writing" could write as though their Air score were 6. As a general rule a statistic of N is twice as capable as a level of N-1, where this makes sense. (A 5-Fire, 5-Earth hero can typically defeat two 4-Fire, 5-Earth enemies, or handily defeat a 3-Fire, 5-Earth character in foot race, but cannot necessarily run twice as fast even though speed is governed by Fire.)
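To make the point-allocation rule above concrete, here is a small illustrative sketch (hypothetical helper code, not an official implementation of the game; the names and checks simply mirror the description in the text):

```python
# Hypothetical sketch of Everway's starting Element allocation as described above:
# 20 points divided among Fire, Water, Air and Earth, each score from 1 to 10.
POINT_BUDGET = 20
ELEMENTS = ("Fire", "Water", "Air", "Earth")

def is_legal_spread(scores: dict) -> bool:
    """Return True if the four Element scores form a legal starting character."""
    if set(scores) != set(ELEMENTS):
        return False
    if any(not 1 <= value <= 10 for value in scores.values()):
        return False
    return sum(scores.values()) == POINT_BUDGET

print(is_legal_spread({"Fire": 5, "Water": 5, "Air": 5, "Earth": 5}))  # True: the generic hero
print(is_legal_spread({"Fire": 9, "Water": 9, "Air": 2, "Earth": 2}))  # False: 22 points spent
```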
Each character also has Powers representing unusual abilities. These cost from 0 to 3 or more points depending on whether they should be considered Frequent, Major (or even "Twice Major", for especially powerful abilities that significantly affect gameplay) and/or Versatile. For instance, a "Cat Familiar," a slightly intelligent cat, is arguably worth 2 points for being Frequent (usually around and often useful) and Versatile (able to scout, carry messages, and fight). A "Winning Smile" that makes the hero likable is worth 0 points because of its trivial effect, while a "Charming Song" that inspires one emotion when played might be useful enough to count as Frequent (1 point). There is no strict rule for deciding what a Power is worth. Each hero can have one 0-point Power for free; additional Powers that would otherwise cost 0 points instead cost 1.
Magic is also abstract. A hero wanting access to magic, as opposed to a few specific Powers, must design their own magic system. This is done by choosing an Element for its basis, which affects its theme; e.g., Air is associated with speech and intellect and would be suitable for a system of spoken spells gained through study. The new Magic statistic has a 1–10 rating and point cost, and can be no higher than the Element on which it is based. The game's rules suggest listing examples of what the magic system can do at each power level, working these out with the GM. It is suggested that most characters do not need magic and that it is not suitable for new players.
Finally, each hero has personality traits based on the game's Fortune and Vision cards. Players are to choose one or more Vision cards and base a backstory on them, and to have three Fortune cards representing a Virtue, Fault, and Fate (a challenge they will face). These three cards can change to represent new phases in the hero's life. There is a list of suggested Motives for why the hero is adventuring, such as "Adversity" or "Wanderlust", but this feature has no gameplay effect.
Equipment such as weaponry is handled completely abstractly, with no specific rules for item cost, carrying capacity, or combat statistics. However, a particularly powerful piece of equipment—for example, a cloak that renders its wearer invisible for a brief period—may be treated as a Power that the hero must spend their initial element points on.
To decide what happens, the GM considers the rules of Karma (characters' abilities, tactics, logic), Drama (the needs of the plot), and Fortune, the result of a card drawn from the Fortune Deck. Many of these cards are based on the "Major Arcana" of tarot divination, such as "The Fool" and "Death", but the deck includes original cards such as "Drowning in Armor" and "Law." As with the Tarot deck there is symbolic art and each card has two complementary meanings when upright or reversed (while face up). The meanings are printed on the cards (e.g., "Protective Measures Turn Dangerous" vs. "True Prudence" for "Drowning in Armor") and explained more fully in the game's books. The rules are flexible about how often the GM should consult the Fortune Deck, whether the cards should be shown to players, and how much influence the draw should have—it is entirely acceptable for the GM to never use the deck at all, if she so desires. Though cards sometimes have obvious interpretations for the context in which they are drawn, the rules explain that sometimes they are best read simply as "a positive (or negative) result."
Although the Fortune Deck resembles (and can be used as) a fortune-telling device, Everway treats the Deck only as a storytelling device and an element of the fictional setting. It does not in any way endorse "real" fortune-telling or other supernatural concepts.
In the December 1995 edition of Dragon (Issue 224), Rick Swan was surprised by Wizards of the Coast's choice of the very different Everway to enter the role-playing game market: "Everway is so far out of the mainstream, it’s barely recognizable as an RPG. For starters, it has no dice. It has no tables or charts. A deck of cards directs the flow of the game. Monster bashing, treasure hunting, dungeon crawling—bye-bye; Everway is pure narrative." Swan liked the "first class" production values of the components, but found the maps "lifeless". Swan was a big fan of the diceless system, saying, "It makes for a brisk game, and Everway, to its credit, plays at blinding speed." But Swan was concerned by how the game placed an unreasonable onus on the improvisational skills of both the gamemaster and the players. He concluded by giving the game an average rating of 4 out of 6.
{
"paragraph_id": 0,
"text": "Everway is a fantasy role-playing game first published by Wizards of the Coast under their Alter Ego brand in 1995. Its lead designer was Jonathan Tweet. Marketed as a \"Visionary Roleplaying Game\", it has often been characterized as an innovative piece with a limited commercial success. Wizards later abandoned the line, and Rubicon Games purchased it, and published several supplements. The line was sold again to Gaslight Press in February 2001. The line is currently with The Everway Company, which has released a Silver Anniversary Edition.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The game has a fantasy setting of the multiverse type, with many different worlds, some of which differed from generic fantasy. It appears to have been heavily influenced by divinatory tarot, the four classical elements of ancient Greece, and mythologies from around the world.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Everway was first with implementing, in a commercial game, several new concepts including much more picture-based/visual source material and character creation than usual. Like other works by Jonathan Tweet, the rules are very simple and flexible. It is also one of a few diceless role-playing games. The Fortune Deck works as a randomizer and inspirational tool, and the results obtained by it are highly subjective. In order to clarify their use, Tweet coined some new vocabulary to describe and formalize methods of gamemaster adjudication; these terms have been adopted by the wider tabletop RPG community. Tweet's adjudication terms are: Karma (making a decision based on character abilities, tactics, and the internal logic of a fictional situation), Drama (making a decision based on what moves the story along), and Fortune (letting a randomizer — drawing a card in Everway, but could also refer to rolling dice in other games — determine the outcome).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Everway was a boxed set designed by Jonathan Tweet, Jenny Scott, Aron Anderson, Scott Hungerford, Kathy Ice, Bob Kruger, and John Tynes, with illustrations by Doug Alexander, Rick Berry, Daniel Gelon, Janine Johnston, Hannibal King, Scott Kirschner, Ed Lee, John Matson, Martin McKenna, Ian Miller, Jeff Miracola, Roger Raupp, Andrew Robinson, Christopher Rush, and Amy Weber, and cover art by Susan Harris",
"title": "Description"
},
{
"paragraph_id": 4,
"text": "The components included:",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "The official setting for Everway revolves around heroes with the power of \"spherewalking,\" traveling between worlds called \"spheres.\" Spheres typically consist of many \"realms.\" The city of Everway is located in a realm called Roundwander, in the sphere called Fourcorner. Roundwander is the only realm in Fourcorner that is described. There is some detail on the sphere's main city, Everway, which contains a stone pyramid, a set of family-oriented guilds, and various exotic events related to the city's position as an inter-dimensional trading center. Several dozen other spheres are described as one-sentence blurbs, a few as page-long summaries, and one in detail as the setting for a sample adventure, \"Journey to Stonekeep.\" The theme is strongly fantasy-oriented as opposed to science fictional, with advanced technology explicitly forbidden in the character creation rules. The authors gave significant thought to anthropology by describing how the people of various spheres live, including many similarities across cultures. Some of these common features are entirely realistic (language, art), and others plainly related to the game's fantasy elements (magic, knowledge of the Fortune Deck). Nearly all spheres are inhabited by humans, with mostly realistic physics.",
"title": "Setting"
},
{
"paragraph_id": 6,
"text": "Character design is abstract and simple by most role-playing games' standards. Each character begins with twenty points to divide between four Element scores roughly equivalent to statistics for Strength (Fire), Perception (Water), Intelligence (Air) and Endurance (Earth). Scores range from 1 (pathetic) to 3 (average) to 10 (godlike), so a generic hero would have scores of 5. Each Element also has a specialty for which a character can get a 1-point bonus; e.g., a 5-Air hero with an Air specialty of \"Writing\" could write as though their Air score were 6. As a general rule a statistic of N is twice as capable as a level of N-1, where this makes sense. (A 5-Fire, 5-Earth hero can typically defeat two 4-Fire, 5-Earth enemies, or handily defeat a 3-Fire, 5-Earth character in foot race, but cannot necessarily run twice as fast even though speed is governed by Fire.)",
"title": "Character creation"
},
{
"paragraph_id": 7,
"text": "Each character also has Powers representing unusual abilities. These cost from 0 to 3 or more points depending on whether they should be considered Frequent, Major (or even \"Twice Major\", for especially powerful abilities that significantly affect gameplay) and/or Versatile. For instance, a \"Cat Familiar,\" a slightly intelligent cat, is arguably worth 2 points for being Frequent (usually around and often useful) and Versatile (able to scout, carry messages, and fight). A \"Winning Smile\" that makes the hero likable is worth 0 points because of its trivial effect, while a \"Charming Song\" that inspires one emotion when played might be useful enough to count as Frequent (1 point). There is no strict rule for deciding what a Power is worth. Each hero can have one 0-point Power for free; additional Powers that would otherwise cost 0 points instead cost 1.",
"title": "Character creation"
},
{
"paragraph_id": 8,
"text": "Magic is also abstract. A hero wanting access to magic, as opposed to a few specific Powers, must design their own magic system. This is done by choosing an Element for its basis, which affects its theme; e.g., Air is associated with speech and intellect and would be suitable for a system of spoken spells gained through study. The new Magic statistic has a 1–10 rating and point cost, and can be no higher than the Element on what it is based. The game's rules suggest listing examples of what the magic system can do at each power level, working these out with the GM. It is suggested that most characters do not need magic and that it is not suitable for new players.",
"title": "Character creation"
},
{
"paragraph_id": 9,
"text": "Finally, each hero has personality traits based on the game's Fortune and Vision cards. Players are to choose one or more Vision cards and base a backstory on them, and to have three Fortune cards representing a Virtue, Fault, and Fate (a challenge they will face). These three cards can change to represent new phases in the hero's life. There is a list of suggested Motives for why the hero is adventuring, such as \"Adversity\" or \"Wanderlust\", but this feature has no gameplay effect.",
"title": "Character creation"
},
{
"paragraph_id": 10,
"text": "Equipment such as weaponry is handled completely abstractly, with no specific rules for item cost, carrying capacity, or combat statistics. However, a particularly powerful piece of equipment—for example, a cloak that renders its wearer invisible for a brief period—may be treated as a Power that the hero must spend their initial element points on.",
"title": "Character creation"
},
{
"paragraph_id": 11,
"text": "To decide what happens, the GM considers the rules of Karma (characters' abilities, tactics, logic), Drama (the needs of the plot), and Fortune, the result of a card drawn from the Fortune Deck. Many of these cards are based on the \"Major Arcana\" of tarot divination, such as \"The Fool\" and \"Death\", but the deck includes original cards such as \"Drowning in Armor\" and \"Law.\" As with the Tarot deck there is symbolic art and each card has two complementary meanings when upright or reversed (while face up). The meanings are printed on the cards (e.g., \"Protective Measures Turn Dangerous\" vs. \"True Prudence\" for \"Drowning in Armor\") and explained more fully in the game's books. The rules are flexible about how often the GM should consult the Fortune Deck, whether the cards should be shown to players, and how much influence the draw should have—it is entirely acceptable for the GM to never use the deck at all, if she so desires. Though cards sometimes have obvious interpretations for the context in which they are drawn, the rules explain that sometimes they are best read simply as \"a positive (or negative) result.\"",
"title": "The Fortune Deck"
},
{
"paragraph_id": 12,
"text": "Although the Fortune Deck resembles (and can be used as) a fortune-telling device, Everway treats the Deck only as a storytelling device and an element of the fictional setting. It does not in any way endorse \"real\" fortune-telling or other supernatural concepts.",
"title": "The Fortune Deck"
},
{
"paragraph_id": 13,
"text": "In the December 1995 edition of Dragon (Issue 224), Rick Swan was surprised by Wizards of the Coast's choice of the very different Everway to enter the role-playing game market: \"Everway is so far out of the mainstream, it’s barely recognizable as an RPG. For starters, it has no dice. It has no tables or charts. A deck of cards directs the flow of the game. Monster bashing, treasure hunting, dungeon crawling—bye-bye; Everway is pure narrative.\" Swan liked the \"first class\" production values of the components, but found the maps \"lifeless\". Swan was a big fan of the diceless system, saying, \"It makes for a brisk game, and Everway, to its credit, plays at blinding speed.\" But Swan was concerned by the how the game placed an unreasonable onus on the improvisational skills of both the gamemaster and the players. He concluded by giving the game an average rating of 4 out of 6.",
"title": "Reception"
}
]
| Everway is a fantasy role-playing game first published by Wizards of the Coast under their Alter Ego brand in 1995. Its lead designer was Jonathan Tweet. Marketed as a "Visionary Roleplaying Game", it has often been characterized as an innovative piece with a limited commercial success. Wizards later abandoned the line, and Rubicon Games purchased it, and published several supplements. The line was sold again to Gaslight Press in February 2001. The line is currently with The Everway Company, which has released a Silver Anniversary Edition. The game has a fantasy setting of the multiverse type, with many different worlds, some of which differed from generic fantasy. It appears to have been heavily influenced by divinatory tarot, the four classical elements of ancient Greece, and mythologies from around the world. Everway was first with implementing, in a commercial game, several new concepts including much more picture-based/visual source material and character creation than usual. Like other works by Jonathan Tweet, the rules are very simple and flexible. It is also one of a few diceless role-playing games. The Fortune Deck works as a randomizer and inspirational tool, and the results obtained by it are highly subjective. In order to clarify their use, Tweet coined some new vocabulary to describe and formalize methods of gamemaster adjudication; these terms have been adopted by the wider tabletop RPG community. Tweet's adjudication terms are: Karma, Drama, and Fortune. | 2023-06-27T07:23:36Z | [
"Template:Short description",
"Template:Refimprove",
"Template:Italic title",
"Template:Infobox RPG",
"Template:Rp",
"Template:Cite magazine",
"Template:Cite book",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/Everway |
|
9,883 | Eurocard (printed circuit board) | Eurocard is an IEEE standard format for printed circuit board (PCB) cards that can be plugged together into a standard chassis which, in turn, can be mounted in a 19-inch rack. The chassis consists of a series of slotted card guides on the top and bottom, into which the cards are slid so they stand on end, like books on a shelf. At the spine of each card is one or more connectors which plug into mating connectors on a backplane that closes the rear of the chassis.
As the cards are assumed to be installed in a vertical orientation, the usual meanings of height and width are transposed: A card might be 233.35 mm "high", but only 20 mm "wide". Height is measured in rack units, "U", with 1 U being 1.75 in (44.45 mm). This dimension refers to the subrack in which the card is to be mounted, rather than the card itself.
A single card is 100 mm high. Taller cards add increments of 133.35 mm, so that a double-height card is 233.35 mm high and a triple-height card is 366.7 mm high.
Enclosure heights are multiples of 3U, with the cards always 33.35 mm (1.313 in) shorter than the enclosure. Two common heights are 3U (a 100 mm card in a 5.25 in (133.35 mm) subrack) and 6U (a 233.35 mm card in a 10.5 in (266.70 mm) high subrack). As two 3U cards are shorter than a 6U card (by 33.35 mm), it is possible to install two 3U cards in one slot of a 6U subrack, with a mid-height structure for proper support.
Card widths are specified in horizontal pitch units "HP", with 1 HP being 0.20 in (5.08 mm).
Card depths start at 100 mm (3.937 in) and increase in 60 mm (2.362 in) increments. The most common today is 160 mm (6.299 in), but standard hardware is available for depths of 100 mm (3.937 in), 160 mm (6.299 in), 220 mm (8.661 in), 280 mm (11.024 in), 340 mm (13.386 in), and 400 mm (15.748 in).
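The height and width conventions above reduce to simple arithmetic, illustrated by the sketch below (added for clarity; the helper names are not drawn from any standard):

```python
# Illustrative sketch of Eurocard size arithmetic: 1 U = 44.45 mm of subrack height,
# a card is 33.35 mm shorter than its subrack, and 1 HP = 5.08 mm of width.
U_MM = 44.45
CARD_SHORTFALL_MM = 33.35
HP_MM = 5.08

def card_height_mm(rack_units: int) -> float:
    """Nominal PCB height for a card housed in a subrack of the given height in U."""
    return round(rack_units * U_MM - CARD_SHORTFALL_MM, 2)

def card_width_mm(hp: int) -> float:
    """Front-panel width for a module occupying the given number of HP units."""
    return round(hp * HP_MM, 2)

print(card_height_mm(3))  # 100.0  -> the classic 3U Eurocard
print(card_height_mm(6))  # 233.35 -> a 6U card
print(card_width_mm(4))   # 20.32  -> a 4 HP wide module
```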
The Eurocard mechanical architecture was defined originally under IEC-60297-3. Today, the most widely recognized standards for this mechanical structure are IEEE 1101.1, IEEE 1101.10 (also known commonly as "dot ten") and IEEE 1101.11. IEEE 1101.10 covers the additional mechanical and electromagnetic interference features required for VITA 1.1-1997(R2002), which is the VME64 Extensions standard, as well as PICMG 2.0 (R3.0), which is the CompactPCI specification.
The IEEE 1101.11 standard covers rear plug-in units that are also called rear transition modules or RTMs.
The Eurocard is a mechanical system and does not define the specific connector to be used or the signals that are assigned to connector contacts.
The connector systems that are commonly used with Eurocard architectures include the original DIN 41612 connector that is also standardized as IEC 60603.2. This is the connector that is used for the VMEbus standard, which was IEEE 1014. The connector known as the 5-row DIN, which is used for the VME64 Extensions standard, is IEC 61076-4-113. The VME64 Extension architecture is defined by VITA 1.1-1997 (R2002).
Another popular computer architecture that utilizes the 6U-160 Eurocard is CompactPCI and CompactPCI Express. These are defined by PICMG 2.0R3 and PICMG Exp0 R1 respectively. Other computer architectures that utilize the Eurocard system are VME eXtensions for Instrumentation (VXI), PCI eXtensions for Instrumentation (PXI), and PXI Express.
A computer architecture that used the 6U-220 Eurocard format was Multibus-II, which was IEEE 1296.
Because the Eurocard system provided for so many modular card sizes and because connector manufacturers have continued to create new connectors that are compatible with this system, it is a popular mechanical standard that is also used for innumerable "one-off" applications.
Conduction-cooled Eurocards are used in military and aerospace applications. They are defined by the IEEE 1101.2-1992 (2001) standard.
The Eurocard standard is also the basis of the "Eurorack" format for modular electronic music synthesizers, popularized by Doepfer and other manufacturers. | [
{
"paragraph_id": 0,
"text": "Eurocard is an IEEE standard format for printed circuit board (PCB) cards that can be plugged together into a standard chassis which, in turn, can be mounted in a 19-inch rack. The chassis consists of a series of slotted card guides on the top and bottom, into which the cards are slid so they stand on end, like books on a shelf. At the spine of each card is one or more connectors which plug into mating connectors on a backplane that closes the rear of the chassis.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As the cards are assumed to be installed in a vertical orientation, the usual meanings of height and width are transposed: A card might be 233.35 mm \"high\", but only 20 mm \"wide\". Height is measured in rack units, \"U\", with 1 U being 1.75 in (44.45 mm). This dimension refers to the subrack in which the card is to be mounted, rather than the card itself.",
"title": "Dimensions"
},
{
"paragraph_id": 2,
"text": "A single card is 100 mm high. Taller cards add a 133.35 mm, so that a double height card is 233.35 mm high and a triple 366.7 mm high.",
"title": "Dimensions"
},
{
"paragraph_id": 3,
"text": "Enclosure heights are multiples of 3U, with the cards always 33.35 mm (1.313 in) shorter than the enclosure. Two common heights are 3U (a 100 mm card in a 5.25 in (133.35 mm) subrack) and 6U (a 233.35 mm card in a 10.5 in (266.70 mm) high subrack). As two 3U cards are shorter than a 6U card (by 33.35 mm), it is possible to install two 3U cards in one slot of a 6U subrack, with a mid-height structure for proper support.",
"title": "Dimensions"
},
{
"paragraph_id": 4,
"text": "Card widths are specified in horizontal pitch units \"HP\", with 1 HP being 0.20 in (5.08 mm).",
"title": "Dimensions"
},
{
"paragraph_id": 5,
"text": "Card depths start at 100 mm (3.937 in) and increase in 60 mm (2.362 in) increments. The most common today is 160 mm (6.299 in), but standard hardware is available for depths of 100 mm (3.937 in), 160 mm (6.299 in), 220 mm (8.661 in), 280 mm (11.024 in), 340 mm (13.386 in), and 400 mm (15.748 in).",
"title": "Dimensions"
},
{
"paragraph_id": 6,
"text": "The Eurocard mechanical architecture was defined originally under IEC-60297-3. Today, the most widely recognized standards for this mechanical structure are IEEE 1101.1, IEEE 1101.10 (also known commonly as \"dot ten\") and IEEE 1101.11. IEEE 1101.10 covers the additional mechanical and electromagnetic interference features required for VITA 1.1-1997(R2002), which is the VME64 Extensions standard, as well as PICMG 2.0 (R3.0), which is the CompactPCI specification.",
"title": "Standards and architecture"
},
{
"paragraph_id": 7,
"text": "The IEEE 1101.11 standard covers rear plug-in units that are also called rear transition modules or RTMs.",
"title": "Standards and architecture"
},
{
"paragraph_id": 8,
"text": "The Eurocard is a mechanical system and does not define the specific connector to be used or the signals that are assigned to connector contacts.",
"title": "Standards and architecture"
},
{
"paragraph_id": 9,
"text": "The connector systems that are commonly used with Eurocard architectures include the original DIN 41612 connector that is also standardized as IEC 60603.2. This is the connector that is used for the VMEbus standard, which was IEEE 1014. The connector known as the 5-row DIN, which is used for the VME64 Extensions standard is IEC 61076-4-113. The VME64 Extension architecture defined by VITA 1.1-1997 (R2002).",
"title": "Standards and architecture"
},
{
"paragraph_id": 10,
"text": "Another popular computer architecture that utilizes the 6U-160 Eurocard is CompactPCI and CompactPCI Express. These are defined by PICMG 2.0R3 and PICMG Exp0 R1 respectively. Other computer architectures that utilize the Eurocard system are VME eXtensions for Instrumentation (VXI), PCI eXtensions for Instrumentation (PXI), and PXI Express.",
"title": "Standards and architecture"
},
{
"paragraph_id": 11,
"text": "A computer architecture that used the 6U-220 Eurocard format was Multibus-II, which was IEEE 1296.",
"title": "Standards and architecture"
},
{
"paragraph_id": 12,
"text": "Because the Eurocard system provided for so many modular card sizes and because connector manufacturers have continued to create new connectors that are compatible with this system, it is a popular mechanical standard that is also used for innumerable \"one-off\" applications.",
"title": "Standards and architecture"
},
{
"paragraph_id": 13,
"text": "Conduction-cooled Eurocards are used in military and aerospace applications. They are defined by the IEEE 1101.2-1992 (2001) standard.",
"title": "Standards and architecture"
},
{
"paragraph_id": 14,
"text": "The Eurocard standard is also the basis of the \"Eurorack\" format for modular electronic music synthesizers, popularized by Doepfer and other manufacturers.",
"title": "Standards and architecture"
}
]
| Eurocard is an IEEE standard format for printed circuit board (PCB) cards that can be plugged together into a standard chassis which, in turn, can be mounted in a 19-inch rack. The chassis consists of a series of slotted card guides on the top and bottom, into which the cards are slid so they stand on end, like books on a shelf. At the spine of each card is one or more connectors which plug into mating connectors on a backplane that closes the rear of the chassis. | 2023-05-21T01:11:39Z | [
"Template:Distinguish",
"Template:Short description",
"Template:Unreferenced",
"Template:Convert"
]
| https://en.wikipedia.org/wiki/Eurocard_(printed_circuit_board) |
|
9,890 | Electron counting | In chemistry, electron counting is a formalism for assigning a number of valence electrons to individual atoms in a molecule. It is used for classifying compounds and for explaining or predicting their electronic structure and bonding. Many rules in chemistry rely on electron-counting:
Atoms are called "electron-deficient" when they have too few electrons as compared to their respective rules, or "hypervalent" when they have too many electrons. Since these compounds tend to be more reactive than compounds that obey their rule, electron counting is an important tool for identifying the reactivity of molecules. While the counting formalism considers each atom separately, these individual atoms (with their hypothetical assigned charge) do not generally exist as free species.
Two methods of electron counting are "neutral counting" and "ionic counting". Both approaches give the same result (and can therefore be used to verify one's calculation).
It is important, though, to be aware that most chemical species exist between the purely covalent and ionic extremes.
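As a worked illustration of the two bookkeeping schemes (added here for clarity, not part of the source text): for ferrocene, Fe(C5H5)2, neutral counting gives Fe (group 8, 8 electrons) plus two neutral cyclopentadienyl radicals (5 electrons each) = 18, while ionic counting gives Fe2+ (d6, 6 electrons) plus two cyclopentadienyl anions (6 electrons each) = 18.

```python
# Worked example (illustrative only): electron counting for ferrocene, Fe(C5H5)2,
# with the neutral (covalent) and ionic formalisms; both arrive at 18 electrons.

def neutral_count(metal_valence_electrons: int, ligand_donations: list) -> int:
    """Neutral counting: the metal contributes its group number of valence electrons."""
    return metal_valence_electrons + sum(ligand_donations)

def ionic_count(metal_d_electrons: int, ligand_donations: list) -> int:
    """Ionic counting: the metal contributes the d-electron count of its assigned oxidation state."""
    return metal_d_electrons + sum(ligand_donations)

# Neutral: Fe is in group 8, each C5H5 radical donates 5 electrons.
print(neutral_count(8, [5, 5]))  # 18
# Ionic: Fe2+ is d6, each C5H5- anion donates 6 electrons.
print(ionic_count(6, [6, 6]))    # 18
```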
The numbers of electrons "donated" by some ligands depends on the geometry of the metal-ligand ensemble. An example of this complication is the M–NO entity. When this grouping is linear, the NO ligand is considered to be a three-electron ligand. When the M–NO subunit is strongly bent at N, the NO is treated as a pseudohalide and is thus a one electron (in the neutral counting approach). The situation is not very different from the η versus the η allyl. Another unusual ligand from the electron counting perspective is sulfur dioxide. | [
{
"paragraph_id": 0,
"text": "In chemistry, electron counting is a formalism for assigning a number of valence electrons to individual atoms in a molecule. It is used for classifying compounds and for explaining or predicting their electronic structure and bonding. Many rules in chemistry rely on electron-counting:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Atoms are called \"electron-deficient\" when they have too few electrons as compared to their respective rules, or \"hypervalent\" when they have too many electrons. Since these compounds tend to be more reactive than compounds that obey their rule, electron counting is an important tool for identifying the reactivity of molecules. While the counting formalism considers each atom separately, these individual atoms (with their hypothetical assigned charge) do not generally exist as free species.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Two methods of electron counting are \"neutral counting\" and \"ionic counting\". Both approaches give the same result (and can therefore be used to verify one's calculation).",
"title": "Counting rules"
},
{
"paragraph_id": 3,
"text": "It is important, though, to be aware that most chemical species exist between the purely covalent and ionic extremes.",
"title": "Counting rules"
},
{
"paragraph_id": 4,
"text": "The numbers of electrons \"donated\" by some ligands depends on the geometry of the metal-ligand ensemble. An example of this complication is the M–NO entity. When this grouping is linear, the NO ligand is considered to be a three-electron ligand. When the M–NO subunit is strongly bent at N, the NO is treated as a pseudohalide and is thus a one electron (in the neutral counting approach). The situation is not very different from the η versus the η allyl. Another unusual ligand from the electron counting perspective is sulfur dioxide.",
"title": "Electrons donated by common fragments"
},
{
"paragraph_id": 5,
"text": "",
"title": "Examples"
}
]
| In chemistry, electron counting is a formalism for assigning a number of valence electrons to individual atoms in a molecule. It is used for classifying compounds and for explaining or predicting their electronic structure and bonding. Many rules in chemistry rely on electron-counting: Octet rule is used with Lewis structures for main group elements, especially the lighter ones such as carbon, nitrogen, and oxygen,
18-electron rule in inorganic chemistry and organometallic chemistry of transition metals,
Hückel's rule for the π-electrons of aromatic compounds,
Polyhedral skeletal electron pair theory for polyhedral cluster compounds, including transition metals and main group elements and mixtures thereof, such as boranes. Atoms are called "electron-deficient" when they have too few electrons as compared to their respective rules, or "hypervalent" when they have too many electrons. Since these compounds tend to be more reactive than compounds that obey their rule, electron counting is an important tool for identifying the reactivity of molecules. While the counting formalism considers each atom separately, these individual atoms do not generally exist as free species. | 2022-12-31T05:59:15Z | [
"Template:Short description",
"Template:Citation needed",
"Template:Chem2",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite web",
"Template:Electron configuration navbox",
"Template:Chemical bonds"
]
| https://en.wikipedia.org/wiki/Electron_counting |
|
9,891 | Entropy | Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication.
Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation.
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, and the macroscopically observable behavior, in the form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI).
In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German), of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868.
Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.
In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy (Entropie) after the Greek word for 'transformation'. He gave "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation').
In more detail, Clausius explained his choice of "entropy" as a name as follows:
I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing".
Any method involving the notion of entropy, the very existence of which depends on the second law of thermodynamics, will doubtless seem to many far-fetched, and may repel beginners as obscure and difficult of comprehension.
Willard Gibbs, Graphical Methods in the Thermodynamics of Fluids
The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system – modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.
Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium; these are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has not only a particular volume but also a specific entropy. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.
Total entropy may be conserved during a reversible process. The entropy change dS of the system (not including the surroundings) is well-defined as heat δQrev transferred to the system divided by the system temperature T, that is, dS = δQrev/T. A reversible process is a quasistatic one that deviates only infinitesimally from thermodynamic equilibrium and avoids friction or other dissipation. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible, total entropy increases, and the potential for maximum work to be done in the process is also lost. For example, in the Carnot cycle, while the heat flow from the hot reservoir to the cold reservoir represents an increase in entropy in the cold reservoir, the work output, if reversibly and perfectly stored in some energy storage mechanism, represents a decrease in entropy that could be used to operate the heat engine in reverse and return to the previous state; thus the total entropy change may still be zero at all times if the entire process is reversible. An irreversible process increases the total entropy of the system and surroundings.
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, which is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle, heat QH is absorbed isothermally at temperature TH from a 'hot' reservoir (in the isothermal expansion stage) and given up isothermally as heat QC to a 'cold' reservoir at TC (in the isothermal compression stage). According to Carnot's principle or theorem, work from a heat engine with two thermal reservoirs can be produced only when there is a temperature difference between these reservoirs, and for reversible engines, which are the most efficient of all heat engines for a given thermal reservoir pair and are all equally efficient, the work is a function of the reservoir temperatures and the heat absorbed by the engine QH (heat engine work output = heat engine efficiency × heat to the engine, where the efficiency is a function of the reservoir temperatures for reversible heat engines). Carnot did not distinguish between QH and QC, since he was using the incorrect hypothesis that caloric theory was valid, and hence heat was conserved (the incorrect assumption that QH and QC were equal in magnitude) when, in fact, the magnitude of QH is greater than the magnitude of QC. Through the efforts of Clausius and Kelvin, it is now known that the work done by a reversible heat engine is the product of the Carnot efficiency (it is the efficiency of all reversible heat engines with the same thermal reservoir pairs according to Carnot's theorem) and the heat absorbed from the hot reservoir:
Here W is work done by the Carnot heat engine, QH is heat to the engine from the hot reservoir, and −(TC/TH)QH is heat to the cold reservoir from the engine. To derive the Carnot efficiency, which is 1 − TC/TH (a number less than one), Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale. It is also known that the net work W produced by the system in one cycle is the net heat absorbed, which is the sum (or difference of the magnitudes) of the heat QH > 0 absorbed from the hot reservoir and the waste heat QC < 0 given off to the cold reservoir:
Since the latter is valid over the entire cycle, this gave Clausius the hint that at each stage of the cycle, work and heat would not be equal, but rather their difference would be the change of a state function that would vanish upon completion of the cycle. The state function was called the internal energy, which is central to the first law of thermodynamics.
Now equating (1) and (2) gives, for the engine per Carnot cycle,
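The displayed equations numbered (1), (2), and (3) in this passage appear to have been lost in extraction; from the surrounding description they can presumably be reconstructed as follows (a reconstruction, not a quotation of the original):

```latex
% Reconstructed display equations (1)-(3), inferred from the surrounding text.
\begin{align*}
  W &= \left(1 - \frac{T_{\mathrm{C}}}{T_{\mathrm{H}}}\right) Q_{\mathrm{H}} \tag{1} \\
  W &= Q_{\mathrm{H}} + Q_{\mathrm{C}} \tag{2} \\
  \frac{Q_{\mathrm{H}}}{T_{\mathrm{H}}} + \frac{Q_{\mathrm{C}}}{T_{\mathrm{C}}} &= 0 \tag{3}
\end{align*}
```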
This implies that there is a function of state whose change is Q/T and this state function is conserved over a complete Carnot cycle, like other state functions such as the internal energy. Clausius called this state function entropy. One can see that entropy was discovered through mathematics rather than through laboratory experimental results. It is a mathematical construct and has no easy physical analogy. This makes the concept somewhat obscure or abstract, akin to how the concept of energy arose. This equation shows an entropy change per Carnot cycle is zero. In fact, an entropy change in both thermal reservoirs per Carnot cycle is also zero since that change is simply expressed by reverting the sign of each term in the equation (3) according to the fact that, for example, for heat transfer from the hot reservoir to the engine, the engine receives the heat while the hot reservoir loses the same amount of the heat;
where we denote an entropy change for a thermal reservoir by ΔSr,i = - Qi/Ti, for i as either H (Hot reservoir) or C (Cold reservoir), by considering the above-mentioned sign convention of heat for the engine.
Clausius then asked what would happen if less work is produced by the system than that predicted by Carnot's principle for the same thermal reservoir pair and the same heat transfer from the hot reservoir to the engine QH. In this case, the right-hand side of the equation (1) would be the upper bound of the work output by the system, and the equation would now be converted into an inequality
When the equation (2) is used to express the work as a net or total heat exchanged in a cycle, we get
or
by considering the sign convention of heat where QH > 0 is heat that is from the hot reservoir and is absorbed by the engine and QC < 0 is the waste heat given off to the cold reservoir from the engine. So, more heat is given up to the cold reservoir than in the Carnot cycle. The above inequality QH + QC < (1 − TC/TH) QH can be written as
If we, again, denote an entropy change for a thermal reservoir by ΔSr,i = - Qi/Ti, for i as either H (Hot reservoir) or C (Cold reservoir), by considering the above-mentioned sign convention of heat for the engine, then
or
telling that the magnitude of the entropy earned by the cold reservoir is greater than the entropy lost by the hot reservoir. The net entropy change in the engine per its thermodynamic cycle is zero, so the net entropy change in the engine and both the thermal reservoirs per cycle increases if work produced by the engine is less than the work achieved by a Carnot engine in the equation (1).
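Likewise, the displayed relations expected after "we get", "or", and "then" in the preceding paragraphs seem to have been dropped during extraction; based on the text, the chain of inequalities for the less-efficient engine is presumably:

```latex
% Reconstructed inequalities for an engine doing less work than the Carnot bound,
% inferred from the surrounding text (with Delta S_{r,i} = -Q_i / T_i).
\begin{align*}
  W &< \left(1 - \frac{T_{\mathrm{C}}}{T_{\mathrm{H}}}\right) Q_{\mathrm{H}} \\
  Q_{\mathrm{H}} + Q_{\mathrm{C}} &< \left(1 - \frac{T_{\mathrm{C}}}{T_{\mathrm{H}}}\right) Q_{\mathrm{H}} \\
  \frac{Q_{\mathrm{H}}}{T_{\mathrm{H}}} + \frac{Q_{\mathrm{C}}}{T_{\mathrm{C}}} &< 0 \\
  \Delta S_{\mathrm{r,H}} + \Delta S_{\mathrm{r,C}} &> 0
\end{align*}
```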
The Carnot cycle and Carnot efficiency as shown in the equation (1) are useful because they define the upper bound of the possible work output and the efficiency of any classical thermodynamic heat engine. Other cycles, such as the Otto cycle, Diesel cycle and Brayton cycle, can be analyzed from the standpoint of the Carnot cycle. Any machine or cyclic process that converts heat to work and is claimed to produce an efficiency greater than the Carnot efficiency is not viable because it violates the second law of thermodynamics.
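A small numerical sketch of the Carnot bound and the reservoir entropy bookkeeping described above (an added illustration with arbitrary reservoir temperatures and heat input, not figures from the source):

```python
# Illustrative numbers only: Carnot efficiency as the upper bound on work, and the
# corresponding reservoir entropy changes for the ideal (reversible) engine.
T_hot, T_cold = 500.0, 300.0     # reservoir temperatures in kelvin
Q_hot = 1000.0                   # heat absorbed by the engine from the hot reservoir, in joules

eta_carnot = 1.0 - T_cold / T_hot       # maximum possible efficiency: 0.4
W_max = eta_carnot * Q_hot              # best possible work output: 400 J
Q_cold = -(Q_hot - W_max)               # heat to the engine from the cold reservoir: -600 J

# Reservoir entropy changes, using dS_r = -Q/T with Q counted as heat *to the engine*:
dS_hot_reservoir = -Q_hot / T_hot       # -2.0 J/K
dS_cold_reservoir = -Q_cold / T_cold    # +2.0 J/K
print(eta_carnot, W_max, dS_hot_reservoir + dS_cold_reservoir)  # 0.4 400.0 0.0
```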
For very small numbers of particles in the system, statistical thermodynamics must be used. The efficiency of devices such as photovoltaic cells requires an analysis from the standpoint of quantum mechanics.
The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.
While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, the entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and a closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed also in open systems, irreversible thermodynamic processes may occur.
According to the Clausius equality, for a reversible cyclic process: $\oint \frac{\delta Q_{\text{rev}}}{T} = 0$. This means the line integral $\int_{L} \frac{\delta Q_{\text{rev}}}{T}$ is path-independent.
So we can define a state function $S$, called entropy, which satisfies $dS = \frac{\delta Q_{\text{rev}}}{T}$.
To find the entropy difference between any two states of a system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from, and the entropy change of, the surroundings is different.
We can only obtain the change of entropy by integrating the above formula. To obtain the absolute value of the entropy, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals.
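As a minimal numerical illustration of these points (the gas, volumes, and free-expansion scenario are assumptions for the example, not taken from the source), the entropy change of an ideal gas between two states can be evaluated along a reversible isothermal path and compared with an irreversible free expansion between the same two states:

# Sketch (illustrative numbers): the system entropy change for an ideal gas
# between the same two states is path-independent, but the surroundings'
# entropy change is not.
import math

n, R, T = 1.0, 8.314, 300.0          # moles, gas constant J/(mol K), temperature K
V1, V2 = 1.0, 2.0                     # initial and final volumes (arbitrary units)

dS_system = n * R * math.log(V2 / V1)      # same for any path between the two states

# Reversible isothermal expansion: heat Q_rev = T * dS_system enters from the
# surroundings, whose entropy falls by exactly the same amount.
dS_surr_reversible = -dS_system            # total change: 0

# Irreversible free expansion into vacuum: no heat or work is exchanged,
# so the surroundings are unchanged.
dS_surr_irreversible = 0.0                 # total change: dS_system > 0

print(dS_system, dS_system + dS_surr_reversible, dS_system + dS_surr_irreversible)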
From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up energy ΔE, and its entropy falls by ΔS, a quantity at least TR ΔS of that energy must be given up to the system's surroundings as heat (TR is the temperature of the system's external surroundings). Otherwise the process cannot go forward. In classical thermodynamics, the entropy of a system is defined only if it is in physical thermodynamic equilibrium. (But chemical equilibrium is not required: the entropy of a mixture of two moles of hydrogen and one mole of oxygen at 1 bar pressure and 298 K is well-defined.)
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and velocity of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.
The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K⁻¹) in the International System of Units (or kg⋅m²⋅s⁻²⋅K⁻¹ in terms of base units). The entropy of a substance is usually given as an intensive property – either entropy per unit mass (SI unit: J⋅K⁻¹⋅kg⁻¹) or entropy per unit amount of substance (SI unit: J⋅K⁻¹⋅mol⁻¹).
Specifically, entropy is a logarithmic measure of the number of system states with significant probability of being occupied:
( p i {\displaystyle p_{i}} is the probability that the system is in i {\displaystyle i} th state, usually given by the Boltzmann distribution; if states are defined in a continuous manner, the summation is replaced by an integral over all possible states) or, equivalently, the expected value of the logarithm of the probability that a microstate is occupied
where kB is the Boltzmann constant, equal to 1.380649×10⁻²³ J/K. The summation is over all the possible microstates of the system, and pi is the probability that the system is in the i-th microstate. This definition assumes that the basis set of states has been picked so that there is no information on their relative phases. In a different basis set, the more general expression is
where ρ ^ {\displaystyle {\widehat {\rho }}} is the density matrix, Tr {\displaystyle \operatorname {Tr} } is trace, and ln {\displaystyle \ln } is the matrix logarithm. This density matrix formulation is not needed in cases of thermal equilibrium so long as the basis states are chosen to be energy eigenstates. For most practical purposes, this can be taken as the fundamental definition of entropy since all other formulas for S can be mathematically derived from it, but not vice versa.
In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, among system microstates of the same energy (degenerate microstates) each microstate is assumed to be populated with equal probability; this assumption is usually justified for an isolated system in equilibrium. Then for an isolated system pi = 1/Ω, where Ω is the number of microstates whose energy equals the system's energy, and the previous equation reduces to
In thermodynamics, such a system is one in which the volume, number of molecules, and internal energy are fixed (the microcanonical ensemble).
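A short sketch, with an illustrative number of microstates (chosen only for the example), showing that the Gibbs form S = −kB Σ pi ln pi reduces to S = kB ln Ω when all Ω microstates are equally probable:

# Sketch: the Gibbs entropy S = -k_B * sum(p_i * ln p_i) reduces to
# S = k_B * ln(Omega) when all Omega microstates have equal probability.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K

def gibbs_entropy(probabilities):
    return -k_B * sum(p * math.log(p) for p in probabilities if p > 0)

Omega = 10**6
uniform = [1.0 / Omega] * Omega
print(gibbs_entropy(uniform))          # equals k_B * ln(Omega)
print(k_B * math.log(Omega))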
For a given thermodynamic system, the excess entropy is defined as the entropy minus that of an ideal gas at the same density and temperature, a quantity that is always negative because an ideal gas is maximally disordered. This concept plays an important role in liquid-state theory. For instance, Rosenfeld's excess-entropy scaling principle states that reduced transport coefficients throughout the two-dimensional phase diagram are functions uniquely determined by the excess entropy.
The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.
The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications: if two observers use different sets of macroscopic variables, they see different entropies. For example, if observer A uses the variables U, V and W, and observer B uses U, V, W, X, then, by changing X, observer B can cause an effect that looks like a violation of the second law of thermodynamics to observer A. In other words: the set of macroscopic variables one chooses must include everything that may change in the experiment, otherwise one might see decreasing entropy.
Entropy can be defined for any Markov process with reversible dynamics and the detailed balance property.
In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state.
In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state.
As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.
However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.
Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.
A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
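For the ideal case just described, the entropy of mixing can be written as ΔSmix = −nR Σ xi ln xi over the mole fractions xi. The following sketch evaluates it for an assumed equimolar two-component mixture (the amounts are illustrative, not from the source):

# Sketch: ideal entropy of mixing for two gases at the same temperature and
# pressure, Delta S_mix = -n * R * sum(x_i * ln x_i), with illustrative amounts.
import math

R = 8.314                     # J/(mol K)
n_A, n_B = 1.0, 1.0           # moles of each substance
n = n_A + n_B
x = [n_A / n, n_B / n]        # mole fractions

dS_mix = -n * R * sum(xi * math.log(xi) for xi in x)
print(dS_mix)                 # about 11.5 J/K for an equimolar mixture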
Proofs of equivalence between the definition of entropy in statistical mechanics (the Gibbs entropy formula S = − k B ∑ i p i log p i {\textstyle S=-k_{\mathrm {B} }\sum _{i}p_{i}\log p_{i}} ) and in classical thermodynamics ( d S = δ Q rev T {\textstyle dS={\frac {\delta Q_{\text{rev}}}{T}}} together with the fundamental thermodynamic relation) are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalized Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average U = ⟨ E i ⟩ {\displaystyle U=\left\langle E_{i}\right\rangle } . Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamic entropy under the following postulates:
The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.
It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.
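A rough numerical sketch of this bookkeeping, with invented temperatures and heat quantities chosen only for illustration:

# Sketch (illustrative numbers): an air conditioner lowers the entropy of the
# room air, but the heat it discharges outdoors raises the environment's
# entropy by more, so the total entropy still increases.
T_room, T_outside = 293.0, 308.0      # kelvin
Q_removed = 1.0e6                      # heat extracted from the room, J
W_input = 2.0e5                        # electrical work driving the cycle, J
Q_rejected = Q_removed + W_input       # heat discharged to the outside air

dS_room = -Q_removed / T_room
dS_outside = Q_rejected / T_outside
print(dS_room + dS_outside)            # positive, consistent with the second law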
In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T {\textstyle T} absorbing an infinitesimal amount of heat δ q {\textstyle \delta q} in a reversible way, is given by δ q / T {\textstyle \delta q/T} . More explicitly, an energy T R S {\textstyle T_{R}S} is not available to do useful work, where T R {\textstyle T_{R}} is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.
Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.
The applicability of the second law of thermodynamics is limited to systems in or sufficiently near an equilibrium state, so that they have a defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that the entropy density is locally defined as an intensive quantity. For such systems, a principle of maximum time rate of entropy production may apply. It states that such a system may evolve to a steady state that maximizes its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.
The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy U {\displaystyle U} to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure p {\displaystyle p} bears on the volume V {\displaystyle V} as the only external parameter, this relation is:
Since both internal energy and entropy are monotonic functions of temperature T {\displaystyle T} , implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist).
The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.
Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system – the combination of a subsystem under study and its surroundings – increases during all spontaneous chemical and physical processes. The Clausius equation of δ q rev / T = Δ S {\displaystyle \delta q_{\text{rev}}/T=\Delta S} introduces the measurement of entropy change, Δ S {\displaystyle \Delta S} . Entropy change describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems – always from hotter to cooler spontaneously.
The thermodynamic entropy therefore has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).
Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg⁻¹⋅K⁻¹). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol⁻¹⋅K⁻¹.
Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q rev / T {\textstyle q_{\text{rev}}/T} constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.
Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, Δ S {\displaystyle \Delta S} must be incorporated in an expression that includes both the system and its surroundings, Δ S universe = Δ S surroundings + Δ S system {\displaystyle \Delta S_{\text{universe}}=\Delta S_{\text{surroundings}}+\Delta S_{\text{system}}} . This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: Δ G {\displaystyle \Delta G} [the Gibbs free energy change of the system] = Δ H {\displaystyle =\Delta H} [the enthalpy change] − T Δ S {\displaystyle -T\,\Delta S} [the entropy change].
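A minimal sketch of this criterion, using rounded literature-style values for the melting of ice (the numbers are illustrative, not taken from the source):

# Sketch: using Delta G = Delta H - T * Delta S to judge spontaneity,
# with rounded values roughly corresponding to ice melting.
dH = 6010.0        # enthalpy change, J/mol
dS = 22.0          # entropy change, J/(mol K)

for T in (263.15, 273.15, 283.15):          # -10 C, 0 C, +10 C
    dG = dH - T * dS
    print(T, dG, "spontaneous" if dG < 0 else "non-spontaneous")

The sign of ΔG flips near the melting point: below 273.15 K the process is non-spontaneous, above it the entropy term dominates and melting proceeds.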
A 2011 study in the journal Science estimated the world's technological capacity to store and communicate optimally compressed information, normalized on the most effective compression algorithms available in 2007, thereby estimating the entropy of the technologically available sources. The author estimates that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007.
In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. Flows of both heat ( Q ˙ {\displaystyle {\dot {Q}}} ) and work, i.e. W ˙ S {\displaystyle {\dot {W}}_{\text{S}}} (shaft work) and P ( d V / d t ) {\displaystyle P(dV/dt)} (pressure-volume work), across the system boundaries, in general cause changes in the entropy of the system. Transfer as heat entails entropy transfer Q ˙ / T {\displaystyle {\dot {Q}}/T} , where T {\displaystyle T} is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.
To derive a generalized entropy balanced equation, we start with the general balance equation for the change in any extensive quantity θ {\displaystyle \theta } in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that d θ / d t {\displaystyle d\theta /dt} , i.e. the rate of change of θ {\displaystyle \theta } in the system, equals the rate at which θ {\displaystyle \theta } enters the system at the boundaries, minus the rate at which θ {\displaystyle \theta } leaves the system across the system boundaries, plus the rate at which θ {\displaystyle \theta } is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time t {\displaystyle t} of the extensive quantity entropy S {\displaystyle S} , the entropy balance equation is:
where
If there are multiple heat flows, the term Q ˙ / T {\displaystyle {\dot {Q}}/T} is replaced by ∑ Q ˙ j / T j , {\textstyle \sum {\dot {Q}}_{j}/T_{j},} where Q ˙ j {\displaystyle {\dot {Q}}_{j}} is the heat flow and T j {\displaystyle T_{j}} is the temperature at the j {\displaystyle j} th heat flow port into the system.
The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term S ˙ gen {\displaystyle {\dot {S}}_{\text{gen}}} is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation" since it specifies that S ˙ gen ≥ 0 {\displaystyle {\dot {S}}_{\text{gen}}\geq 0} , with zero for reversible processes or greater than zero for irreversible ones.
For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.
For the expansion (or compression) of an ideal gas from an initial volume V 0 {\displaystyle V_{0}} and pressure P 0 {\displaystyle P_{0}} to a final volume V {\displaystyle V} and pressure P {\displaystyle P} at any constant temperature, the change in entropy is given by:
Here n {\displaystyle n} is the amount of gas (in moles) and R {\displaystyle R} is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.
For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T 0 {\displaystyle T_{0}} to a final temperature T {\displaystyle T} , the entropy change is
provided that the constant-pressure molar heat capacity (or specific heat) CP is constant and that no phase transition occurs in this temperature interval.
Similarly at constant volume, the entropy change is
where the constant-volume molar heat capacity Cv is constant and there is no phase change.
At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.
Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is
Similarly if the temperature and pressure of an ideal gas both vary,
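A minimal sketch of both two-step results for an ideal gas, with illustrative heat capacities and end states (the values are assumptions chosen only for the example):

# Sketch: combined entropy change of an ideal gas when temperature and volume
# (or temperature and pressure) both change, composed from the two-step paths
# described above; the two results illustrate the two formulas independently.
import math

n, R = 1.0, 8.314
Cv, Cp = 1.5 * R, 2.5 * R        # monatomic ideal gas molar heat capacities

T0, T = 300.0, 450.0
V0, V = 1.0, 2.0
P0, P = 100.0e3, 75.0e3

dS_TV = n * Cv * math.log(T / T0) + n * R * math.log(V / V0)
dS_TP = n * Cp * math.log(T / T0) - n * R * math.log(P / P0)
print(dS_TV, dS_TP)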
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (melting) of a solid to a liquid at the melting point Tm, the entropy of fusion is
Similarly, for vaporization of a liquid to a gas at the boiling point Tb, the entropy of vaporization is
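As a short worked example (using rounded literature values for water, which are not given in the source), both transition entropies follow directly from the enthalpy change and the transition temperature:

# Sketch: entropy of fusion and vaporization of water from the corresponding
# enthalpy changes and transition temperatures (rounded literature values).
dH_fus, T_m = 6.01e3, 273.15        # J/mol, K
dH_vap, T_b = 40.7e3, 373.15        # J/mol, K

print(dH_fus / T_m)    # about 22 J/(mol K)
print(dH_vap / T_b)    # about 109 J/(mol K)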
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.
The following is a list of additional definitions of entropy from a collection of textbooks:
In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.
Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" in the system is given by:
Similarly, the total amount of "order" in the system is given by:
In which CD is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, CI is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and CO is the "order" capacity of the system.
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both".
It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced.
As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorized to lead to the heat death of the universe.
A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states X 0 {\displaystyle X_{0}} and X 1 {\displaystyle X_{1}} such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X {\displaystyle X} is defined as the largest number λ {\displaystyle \lambda } such that X {\displaystyle X} is adiabatically accessible from a composite state consisting of an amount λ {\displaystyle \lambda } in the state X 1 {\displaystyle X_{1}} and a complementary amount, ( 1 − λ ) {\displaystyle (1-\lambda )} , in the state X 0 {\displaystyle X_{0}} . A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: It is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.
In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy",
where ρ is the density matrix, and Tr is the trace operator.
This upholds the correspondence principle, because in the classical limit, when the phases between the basis states used for the classical probabilities are purely random, this expression is equivalent to the familiar classical definition of entropy,
i.e. in such a basis the density matrix is diagonal.
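A brief sketch of the computation for a single qubit, using an example density matrix chosen for illustration (not from the source); the entropy is expressed in units of kB:

# Sketch: von Neumann entropy S = -Tr(rho ln rho), computed from the
# eigenvalues of a density matrix; here in units of k_B for a single qubit.
import numpy as np

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])           # an example density matrix (Tr = 1)

eigenvalues = np.linalg.eigvalsh(rho)
S = -sum(p * np.log(p) for p in eigenvalues if p > 1e-12)
print(S)   # lies between 0 (pure state) and ln 2 (maximally mixed qubit)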
Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.
I thought of calling it "information", but the word was overly used, so I decided to call it "uncertainty". [...] Von Neumann told me, "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.
Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals
When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities p i {\displaystyle p_{i}} so that
where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
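A minimal sketch of this definition in bits, with made-up message probabilities (the distributions are illustrative only):

# Sketch: Shannon entropy of a discrete message source in bits; for equally
# probable messages it is simply log2 of the number of messages.
import math

def shannon_entropy_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy_bits([0.25] * 4))                   # 2.0 bits (4 equiprobable messages)
print(shannon_entropy_bits([0.5, 0.25, 0.125, 0.125]))    # 1.75 bits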
Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If W {\displaystyle W} is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is p = 1 / W {\displaystyle p=1/W} . The Shannon entropy (in nats) is
and if entropy is measured in units of k {\displaystyle k} per nat, then the entropy is given by
which is the Boltzmann entropy formula, where k {\displaystyle k} is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the H {\displaystyle H} function of information theory and using Shannon's other term, "uncertainty", instead.
The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system (with particle number N and volume V being constants) and uses the definition of temperature in terms of entropy, while limiting energy exchange to heat ( d U → d Q {\displaystyle dU\rightarrow dQ} ).
The resulting relation describes how entropy changes d S {\displaystyle dS} when a small amount of energy d Q {\displaystyle dQ} is introduced into the system at a certain temperature T {\displaystyle T} .
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero – due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allows the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.
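A hedged sketch of this procedure, replacing measured heat-capacity data with a made-up smooth Cp(T) curve and a simple trapezoidal integration (everything here is an assumption for illustration, not real data):

# Sketch: numerically integrating dS = dQ/T = (C_p(T)/T) dT from near absolute
# zero to about 298 K to obtain a calorimetric entropy; C_p(T) is a made-up
# smooth function standing in for tabulated heat-capacity measurements.
import math

def Cp(T):
    # hypothetical heat capacity curve, J/(mol K); real data would be tabulated
    return 50.0 * (T / 298.15) ** 3 / (1.0 + (T / 298.15) ** 3)

T_values = [0.5 + i * 0.5 for i in range(596)]     # 0.5 K to 298 K in 0.5 K steps
S = 0.0
for T_low, T_high in zip(T_values[:-1], T_values[1:]):
    # trapezoidal rule applied to C_p(T)/T
    S += 0.5 * (Cp(T_low) / T_low + Cp(T_high) / T_high) * (T_high - T_low)
print(S)   # estimated absolute (calorimetric) entropy near 298 K, J/(mol K)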
Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions.
Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimization.
Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.
Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.
If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).
The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.
In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position. | [
{
"paragraph_id": 0,
"text": "Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behavior, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI).",
"title": ""
},
{
"paragraph_id": 4,
"text": "In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever \"caloric\" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century \"Newtonian hypothesis\" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, \"no change occurs in the condition of the working body\".",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German), of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1865, Clausius named the concept of \"the differential of a quantity which depends on the configuration of the system\", entropy (Entropie) after the Greek word for 'transformation'. He gave \"transformational content\" (Verwandlungsinhalt) as a synonym, paralleling his \"thermal and ergonal content\" (Wärme- und Werkinhalt) as the name of U {\\displaystyle U} , but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly \"analogous in their physical significance\". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation').",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "In more detail, Clausius explained his choice of \"entropy\" as a name as follows:",
"title": "Etymology"
},
{
"paragraph_id": 10,
"text": "I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word \"transformation\". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.",
"title": "Etymology"
},
{
"paragraph_id": 11,
"text": "Leon Cooper added that in this way \"he succeeded in coining a word that meant the same thing to everybody: nothing\".",
"title": "Etymology"
},
{
"paragraph_id": 12,
"text": "Any method involving the notion of entropy, the very existence of which depends on the second law of thermodynamics, will doubtless seem to many far-fetched, and may repel beginners as obscure and difficult of comprehension.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 13,
"text": "Willard Gibbs, Graphical Methods in the Thermodynamics of Fluids",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 14,
"text": "The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system – modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 15,
"text": "Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium; these are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has not only a particular volume but also a specific entropy. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 16,
"text": "Total entropy may be conserved during a reversible process. The entropy change d S {\\textstyle dS} of the system (not including the surroundings) is well-defined as heat δ Q rev {\\textstyle \\delta Q_{\\text{rev}}} transferred to the system divided by the system temperature T {\\textstyle T} , d S = δ Q rev T {\\textstyle dS={\\frac {\\delta Q_{\\text{rev}}}{T}}} . A reversible process is a quasistatic one that deviates only infinitesimally from thermodynamic equilibrium and avoids friction or other dissipation. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible, total entropy increases, and the potential for maximum work to be done in the process is also lost. For example, in the Carnot cycle, while the heat flow from the hot reservoir to the cold reservoir represents an increase in entropy in the cold reservoir, the work output, if reversibly and perfectly stored in some energy storage mechanism, represents a decrease in entropy that could be used to operate the heat engine in reverse and return to the previous state; thus the total entropy change may still be zero at all times if the entire process is reversible. An irreversible process increases the total entropy of the system and surroundings.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 17,
"text": "The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle that is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle, heat QH is absorbed isothermally at temperature TH from a 'hot' reservoir (in the isothermal expansion stage) and given up isothermally as heat QC to a 'cold' reservoir at TC (in the isothermal compression stage). According to Carnot's principle or theorem, work from a heat engine with two thermal reservoirs can be produced only when there is a temperature difference between these reservoirs, and for reversible engines which are mostly and equally efficient among all heat engines for a given thermal reservoir pair, the work is a function of the reservoir temperatures and the heat absorbed to the engine QH (heat engine work output = heat engine efficiency × heat to the engine, where the efficiency is a function of the reservoir temperatures for reversible heat engines). Carnot did not distinguish between QH and QC, since he was using the incorrect hypothesis that caloric theory was valid, and hence heat was conserved (the incorrect assumption that QH and QC were equal in magnitude) when, in fact, the magnitude of QH is greater than the magnitude of QC. Through the efforts of Clausius and Kelvin, it is now known that the work done by a reversible heat engine is the product of the Carnot efficiency (it is the efficiency of all reversible heat engines with the same thermal reservoir pairs according to the Carnot's theorem) and the heat absorbed from the hot reservoir:",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 18,
"text": "Here W {\\displaystyle W} is work done by the Carnot heat engine, Q H {\\displaystyle Q_{\\text{H}}} is heat to the engine from the hot reservoir, and − T C T H Q H {\\displaystyle -{\\frac {T_{\\text{C}}}{T_{\\text{H}}}}Q_{\\text{H}}} is heat to the cold reservoir from the engine. To derive the Carnot efficiency, which is 1 − TC/TH (a number less than one), Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale. It is also known that the net work W produced by the system in one cycle is the net heat absorbed, which is the sum (or difference of the magnitudes) of the heat QH > 0 absorbed from the hot reservoir and the waste heat QC < 0 given off to the cold reservoir:",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 19,
"text": "Since the latter is valid over the entire cycle, this gave Clausius the hint that at each stage of the cycle, work and heat would not be equal, but rather their difference would be the change of a state function that would vanish upon completion of the cycle. The state function was called the internal energy, that is central to the first law of thermodynamics.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 20,
"text": "Now equating (1) and (2) gives, for the engine per Carnot cycle,",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 21,
"text": "This implies that there is a function of state whose change is Q/T and this state function is conserved over a complete Carnot cycle, like other state function such as the internal energy. Clausius called this state function entropy. One can see that entropy was discovered through mathematics rather than through laboratory experimental results. It is a mathematical construct and has no easy physical analogy. This makes the concept somewhat obscure or abstract, akin to how the concept of energy arose. This equation shows an entropy change per Carnot cycle is zero. In fact, an entropy change in the both thermal reservoirs per Carnot cycle is also zero since that change is simply expressed by reverting the sign of each term in the equation (3) according to the fact that, for example, for heat transfer from the hot reservoir to the engine, the engine receives the heat while the hot reservoir loses the same amount of the heat;",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 22,
"text": "where we denote an entropy change for a thermal reservoir by ΔSr,i = - Qi/Ti, for i as either H (Hot reservoir) or C (Cold reservoir), by considering the above-mentioned signal convention of heat for the engine.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 23,
"text": "Clausius then asked what would happen if less work is produced by the system than that predicted by Carnot's principle for the same thermal reservoir pair and the same heat transfer from the hot reservoir to the engine QH. In this case, the right-hand side of the equation (1) would be the upper bound of the work output by the system, and the equation would now be converted into an inequality",
"title": "Definitions and descriptions"
},
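The inequality referred to here would bound the work output by the Carnot value of equation (1); written out (again a reconstruction):

\[
W < \left(1 - \frac{T_{\text{C}}}{T_{\text{H}}}\right) Q_{\text{H}}
\]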
{
"paragraph_id": 24,
"text": "When the equation (2) is used to express the work as a net or total heat exchanged in a cycle, we get",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 25,
"text": "or",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 26,
"text": "by considering the sign convention of heat where QH > 0 is heat that is from the hot reservoir and is absorbed by the engine and QC < 0 is the waste heat given off to the cold reservoir from the engine. So, more heat is given up to the cold reservoir than in the Carnot cycle. The above inequality Q H + Q C < ( 1 − T C T H ) Q H {\\displaystyle Q_{\\text{H}}+Q_{\\text{C}}<\\left(1-{\\frac {T_{\\text{C}}}{T_{\\text{H}}}}\\right)Q_{\\text{H}}} can be written as",
"title": "Definitions and descriptions"
},
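Rearranging the quoted inequality and dividing by the reservoir temperatures gives, plausibly:

\[
\frac{Q_{\text{H}}}{T_{\text{H}}} + \frac{Q_{\text{C}}}{T_{\text{C}}} < 0
\]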
{
"paragraph_id": 27,
"text": "If we, again, denote an entropy change for a thermal reservoir by ΔSr,i = - Qi/Ti, for i as either H (Hot reservoir) or C (Cold reservoir), by considering the abovementioned signal convention of heat for the engine, then",
"title": "Definitions and descriptions"
},
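A form consistent with the preceding sentence and the stated sign convention is:

\[
\Delta S_{r,\text{H}} + \Delta S_{r,\text{C}}
= -\frac{Q_{\text{H}}}{T_{\text{H}}} - \frac{Q_{\text{C}}}{T_{\text{C}}} > 0
\]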
{
"paragraph_id": 28,
"text": "or",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 29,
"text": "telling that the magnitude of the entropy earned by the cold reservoir is greater than the entropy lost by the hot reservoir. The net entropy change in the engine per its thermodynamic cycle is zero, so the net entropy change in the engine and both the thermal reservoirs per cycle increases if work produced by the engine is less than the work achieved by a Carnot engine in the equation (1).",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 30,
"text": "The Carnot cycle and Carnot efficiency as shown in the equation (1) are useful because they define the upper bound of the possible work output and the efficiency of any classical thermodynamic heat engine. Other cycles, such as the Otto cycle, Diesel cycle and Brayton cycle, can be analyzed from the standpoint of the Carnot cycle. Any machine or cyclic process that converts heat to work and is claimed to produce an efficiency greater than the Carnot efficiency is not viable because it violates the second law of thermodynamics.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 31,
"text": "For very small numbers of particles in the system, statistical thermodynamics must be used. The efficiency of devices such as photovoltaic cells requires an analysis from the standpoint of quantum mechanics.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 32,
"text": "The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 33,
"text": "While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 34,
"text": "According to the Clausius equality, for a reversible cyclic process: ∮ δ Q rev T = 0 {\\textstyle \\oint {\\frac {\\delta Q_{\\text{rev}}}{T}}=0} . This means the line integral ∫ L δ Q rev T {\\textstyle \\int _{L}{\\frac {\\delta Q_{\\text{rev}}}{T}}} is path-independent.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 35,
"text": "So we can define a state function S called entropy, which satisfies d S = δ Q rev T {\\textstyle dS={\\frac {\\delta Q_{\\text{rev}}}{T}}} .",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 36,
"text": "To find the entropy difference between any two states of a system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from, and the entropy change of, the surroundings is different.",
"title": "Definitions and descriptions"
},
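The path-independence just described can be checked numerically. The following sketch (example values assumed, not taken from the article) compares a reversible isothermal expansion of an ideal gas with a free expansion between the same two states: the system's entropy change is identical, while the surroundings differ.

```python
# Illustration of the paragraph above: the entropy change of the SYSTEM is the
# same for a reversible isothermal expansion and for a free (Joule) expansion
# between the same two states, while the change in the SURROUNDINGS differs.
# All numbers are assumed example values.
import math

R = 8.314          # J/(mol*K), molar gas constant
n = 1.0            # mol of ideal gas
T = 300.0          # K, constant temperature
V1, V2 = 1.0, 2.0  # initial and final volumes (only the ratio matters)

dS_system = n * R * math.log(V2 / V1)   # same for both paths (state function)

# Reversible path: heat q_rev = T * dS flows in from the surroundings.
dS_surr_reversible = -dS_system
# Free expansion into vacuum: no heat or work exchanged with the surroundings.
dS_surr_irreversible = 0.0

print(f"dS(system)                 = {dS_system:.3f} J/K")
print(f"dS(universe), reversible   = {dS_system + dS_surr_reversible:.3f} J/K")
print(f"dS(universe), irreversible = {dS_system + dS_surr_irreversible:.3f} J/K")
```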
{
"paragraph_id": 37,
"text": "We can only obtain the change of entropy by integrating the above formula. To obtain the absolute value of the entropy, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 38,
"text": "From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up energy ΔE, and its entropy falls by ΔS, a quantity at least TR ΔS of that energy must be given up to the system's surroundings as heat (TR is the temperature of the system's external surroundings). Otherwise the process cannot go forward. In classical thermodynamics, the entropy of a system is defined only if it is in physical thermodynamic equilibrium. (But chemical equilibrium is not required: the entropy of a mixture of two moles of hydrogen and one mole of oxygen at 1 bar pressure and 298 K is well-defined.)",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 39,
"text": "The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 40,
"text": "The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and velocity of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of \"disorder\" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 41,
"text": "The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K) in the International System of Units (or kg⋅m⋅s⋅K in terms of base units). The entropy of a substance is usually given as an intensive property – either entropy per unit mass (SI unit: J⋅K⋅kg) or entropy per unit amount of substance (SI unit: J⋅K⋅mol).",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 42,
"text": "Specifically, entropy is a logarithmic measure of the number of system states with significant probability of being occupied:",
"title": "Definitions and descriptions"
},
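The formula referred to here is presumably the Gibbs entropy expression; a standard way to write it is:

\[
S = -k_{\mathrm{B}} \sum_{i} p_{i} \ln p_{i}
\]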
{
"paragraph_id": 43,
"text": "( p i {\\displaystyle p_{i}} is the probability that the system is in i {\\displaystyle i} th state, usually given by the Boltzmann distribution; if states are defined in a continuous manner, the summation is replaced by an integral over all possible states) or, equivalently, the expected value of the logarithm of the probability that a microstate is occupied",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 44,
"text": "where kB is the Boltzmann constant, equal to 1.38065×10 J/K. The summation is over all the possible microstates of the system, and pi is the probability that the system is in the i-th microstate. This definition assumes that the basis set of states has been picked so that there is no information on their relative phases. In a different basis set, the more general expression is",
"title": "Definitions and descriptions"
},
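The "more general expression" in an arbitrary basis is presumably the density-matrix form described in the next paragraph; a standard way to write it is:

\[
S = -k_{\mathrm{B}} \operatorname{Tr}\!\left(\widehat{\rho}\,\ln \widehat{\rho}\right)
\]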
{
"paragraph_id": 45,
"text": "where ρ ^ {\\displaystyle {\\widehat {\\rho }}} is the density matrix, Tr {\\displaystyle \\operatorname {Tr} } is trace, and ln {\\displaystyle \\ln } is the matrix logarithm. This density matrix formulation is not needed in cases of thermal equilibrium so long as the basis states are chosen to be energy eigenstates. For most practical purposes, this can be taken as the fundamental definition of entropy since all other formulas for S can be mathematically derived from it, but not vice versa.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 46,
"text": "In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, among system microstates of the same energy (degenerate microstates) each microstate is assumed to be populated with equal probability; this assumption is usually justified for an isolated system in equilibrium. Then for an isolated system pi = 1/Ω, where Ω is the number of microstates whose energy equals the system's energy, and the previous equation reduces to",
"title": "Definitions and descriptions"
},
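With pi = 1/Ω the sum collapses, and the "previous equation" plausibly reduces to the Boltzmann form:

\[
S = -k_{\mathrm{B}} \sum_{i=1}^{\Omega} \frac{1}{\Omega} \ln \frac{1}{\Omega} = k_{\mathrm{B}} \ln \Omega
\]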
{
"paragraph_id": 47,
"text": "In thermodynamics, such a system is one in which the volume, number of molecules, and internal energy are fixed (the microcanonical ensemble).",
"title": "Definitions and descriptions"
},
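A small numerical check of the reduction above: for Ω equally probable microstates, the Gibbs sum and kB ln Ω coincide. The value of Ω below is an arbitrary illustrative choice.

```python
# Minimal sketch: Gibbs entropy of a uniform distribution over Omega microstates
# equals k_B * ln(Omega). Omega here is an arbitrary, made-up example value.
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant

def gibbs_entropy(probabilities):
    """Gibbs entropy in J/K for a discrete probability distribution."""
    return -k_B * sum(p * math.log(p) for p in probabilities if p > 0.0)

Omega = 1_000_000                    # hypothetical microstate count
uniform = [1.0 / Omega] * Omega      # equal a priori probabilities

print(gibbs_entropy(uniform))        # ~1.907e-22 J/K
print(k_B * math.log(Omega))         # same value, k_B * ln(Omega)
```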
{
"paragraph_id": 48,
"text": "For a given thermodynamic system, the excess entropy is defined as the entropy minus that of an ideal gas at the same density and temperature, a quantity that is always negative because an ideal gas is maximally disordered. This concept plays an important role in liquid-state theory. For instance, Rosenfeld's excess-entropy scaling principle states that reduced transport coefficients throughout the two-dimensional phase diagram are functions uniquely determined by the excess entropy.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 49,
"text": "The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 50,
"text": "The interpretative model has a central role in determining entropy. The qualifier \"for a given set of macroscopic variables\" above has deep implications: if two observers use different sets of macroscopic variables, they see different entropies. For example, if observer A uses the variables U, V and W, and observer B uses U, V, W, X, then, by changing X, observer B can cause an effect that looks like a violation of the second law of thermodynamics to observer A. In other words: the set of macroscopic variables one chooses must include everything that may change in the experiment, otherwise one might see decreasing entropy.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 51,
"text": "Entropy can be defined for any Markov processes with reversible dynamics and the detailed balance property.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 52,
"text": "In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 53,
"text": "Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 54,
"text": "In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 55,
"text": "As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 56,
"text": "However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the \"universe\" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 57,
"text": "Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits \"perpetual motion\" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 58,
"text": "Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 59,
"text": "One dictionary definition of entropy is that it is \"a measure of thermal energy per unit temperature that is not available for useful work\" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 60,
"text": "A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 61,
"text": "Proofs of equivalence between the definition of entropy in statistical mechanics (the Gibbs entropy formula S = − k B ∑ i p i log p i {\\textstyle S=-k_{\\mathrm {B} }\\sum _{i}p_{i}\\log p_{i}} ) and in classical thermodynamics ( d S = δ Q rev T {\\textstyle dS={\\frac {\\delta Q_{\\text{rev}}}{T}}} together with the fundamental thermodynamic relation) are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalized Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average U = ⟨ E i ⟩ {\\displaystyle U=\\left\\langle E_{i}\\right\\rangle } . Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 62,
"text": "Furthermore, it has been shown that the definitions of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under the following postulates:",
"title": "Definitions and descriptions"
},
{
"paragraph_id": 63,
"text": "The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.",
"title": "Second law of thermodynamics"
},
{
"paragraph_id": 64,
"text": "It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.",
"title": "Second law of thermodynamics"
},
{
"paragraph_id": 65,
"text": "In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T {\\textstyle T} absorbing an infinitesimal amount of heat δ q {\\textstyle \\delta q} in a reversible way, is given by δ q / T {\\textstyle \\delta q/T} . More explicitly, an energy T R S {\\textstyle T_{R}S} is not available to do useful work, where T R {\\textstyle T_{R}} is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.",
"title": "Second law of thermodynamics"
},
{
"paragraph_id": 66,
"text": "Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.",
"title": "Second law of thermodynamics"
},
{
"paragraph_id": 67,
"text": "The applicability of a second law of thermodynamics is limited to systems in or sufficiently near equilibrium state, so that they have defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that entropy density is locally defined as an intensive quantity. For such systems, there may apply a principle of maximum time rate of entropy production. It states that such a system may evolve to a steady state that maximizes its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.",
"title": "Second law of thermodynamics"
},
{
"paragraph_id": 68,
"text": "The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy U {\\displaystyle U} to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure p {\\displaystyle p} bears on the volume V {\\displaystyle V} as the only external parameter, this relation is:",
"title": "Applications"
},
{
"paragraph_id": 69,
"text": "Since both internal energy and entropy are monotonic functions of temperature T {\\displaystyle T} , implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist).",
"title": "Applications"
},
{
"paragraph_id": 70,
"text": "The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.",
"title": "Applications"
},
{
"paragraph_id": 71,
"text": "Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system – the combination of a subsystem under study and its surroundings – increases during all spontaneous chemical and physical processes. The Clausius equation of δ q rev / T = Δ S {\\displaystyle \\delta q_{\\text{rev}}/T=\\Delta S} introduces the measurement of entropy change, Δ S {\\displaystyle \\Delta S} . Entropy change describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems – always from hotter to cooler spontaneously.",
"title": "Applications"
},
{
"paragraph_id": 72,
"text": "The thermodynamic entropy therefore has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).",
"title": "Applications"
},
{
"paragraph_id": 73,
"text": "Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg⋅K). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol⋅K.",
"title": "Applications"
},
{
"paragraph_id": 74,
"text": "Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q rev / T {\\textstyle q_{\\text{rev}}/T} constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.",
"title": "Applications"
},
{
"paragraph_id": 75,
"text": "Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, Δ S {\\displaystyle \\Delta S} must be incorporated in an expression that includes both the system and its surroundings, Δ S universe = Δ S surroundings + Δ S system {\\displaystyle \\Delta S_{\\text{universe}}=\\Delta S_{\\text{surroundings}}+\\Delta S_{\\text{system}}} . This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: Δ G {\\displaystyle \\Delta G} [the Gibbs free energy change of the system] = Δ H {\\displaystyle =\\Delta H} [the enthalpy change] − T Δ S {\\displaystyle -T\\,\\Delta S} [the entropy change].",
"title": "Applications"
},
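The Gibbs free energy criterion above can be illustrated with a short calculation; the enthalpy and entropy changes used here are made-up round numbers, not data for any particular reaction.

```python
# Illustrative use of Delta_G = Delta_H - T * Delta_S from the paragraph above.
def gibbs_free_energy_change(delta_H, delta_S, T):
    """Return Delta_G in J/mol given Delta_H (J/mol), Delta_S (J/(mol*K)), T (K)."""
    return delta_H - T * delta_S

delta_H = -50_000.0   # J/mol, assumed exothermic reaction
delta_S = -100.0      # J/(mol*K), assumed decrease in system entropy

for T in (200.0, 400.0, 600.0):
    dG = gibbs_free_energy_change(delta_H, delta_S, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:5.0f} K: Delta_G = {dG:9.0f} J/mol -> {verdict}")
```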
{
"paragraph_id": 76,
"text": "A 2011 study in Science (journal) estimated the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The author's estimate that human kind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (entropically compressed) information in 1986, to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (entropically compressed) information in 1986, to 65 (entropically compressed) exabytes in 2007.",
"title": "Applications"
},
{
"paragraph_id": 77,
"text": "In chemical engineering, the principles of thermodynamics are commonly applied to \"open systems\", i.e. those in which heat, work, and mass flow across the system boundary. Flows of both heat ( Q ˙ {\\displaystyle {\\dot {Q}}} ) and work, i.e. W ˙ S {\\displaystyle {\\dot {W}}_{\\text{S}}} (shaft work) and P ( d V / d t ) {\\displaystyle P(dV/dt)} (pressure-volume work), across the system boundaries, in general cause changes in the entropy of the system. Transfer as heat entails entropy transfer Q ˙ / T {\\displaystyle {\\dot {Q}}/T} , where T {\\displaystyle T} is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.",
"title": "Applications"
},
{
"paragraph_id": 78,
"text": "To derive a generalized entropy balanced equation, we start with the general balance equation for the change in any extensive quantity θ {\\displaystyle \\theta } in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that d θ / d t {\\displaystyle d\\theta /dt} , i.e. the rate of change of θ {\\displaystyle \\theta } in the system, equals the rate at which θ {\\displaystyle \\theta } enters the system at the boundaries, minus the rate at which θ {\\displaystyle \\theta } leaves the system across the system boundaries, plus the rate at which θ {\\displaystyle \\theta } is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time t {\\displaystyle t} of the extensive quantity entropy S {\\displaystyle S} , the entropy balance equation is:",
"title": "Applications"
},
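One common way to write the entropy balance just described, with Ṁk denoting the mass flow rate through port k and Ŝk the entropy per unit mass of that stream (notation assumed here, not quoted from the article):

\[
\frac{dS}{dt} = \sum_{k} \dot{M}_{k} \hat{S}_{k} + \frac{\dot{Q}}{T} + \dot{S}_{\text{gen}},
\qquad \dot{S}_{\text{gen}} \ge 0
\]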
{
"paragraph_id": 79,
"text": "where",
"title": "Applications"
},
{
"paragraph_id": 80,
"text": "If there are multiple heat flows, the term Q ˙ / T {\\displaystyle {\\dot {Q}}/T} is replaced by ∑ Q ˙ j / T j , {\\textstyle \\sum {\\dot {Q}}_{j}/T_{j},} where Q ˙ j {\\displaystyle {\\dot {Q}}_{j}} is the heat flow and T j {\\displaystyle T_{j}} is the temperature at the j {\\displaystyle j} th heat flow port into the system.",
"title": "Applications"
},
{
"paragraph_id": 81,
"text": "The nomenclature \"entropy balance\" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term S ˙ gen {\\displaystyle {\\dot {S}}_{\\text{gen}}} is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the \"entropy generation equation\" since it specifies that S ˙ gen ≥ 0 {\\displaystyle {\\dot {S}}_{\\text{gen}}\\geq 0} , with zero for reversible processes or greater than zero for irreversible ones.",
"title": "Applications"
},
{
"paragraph_id": 82,
"text": "For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.",
"title": "Entropy change formulas for simple processes"
},
{
"paragraph_id": 83,
"text": "For the expansion (or compression) of an ideal gas from an initial volume V 0 {\\displaystyle V_{0}} and pressure P 0 {\\displaystyle P_{0}} to a final volume V {\\displaystyle V} and pressure P {\\displaystyle P} at any constant temperature, the change in entropy is given by:",
"title": "Entropy change formulas for simple processes"
},
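For an ideal gas at constant temperature, the entropy change described above is usually written as (a reconstruction consistent with the following paragraph):

\[
\Delta S = n R \ln \frac{V}{V_{0}} = -\, n R \ln \frac{P}{P_{0}}
\]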
{
"paragraph_id": 84,
"text": "Here n {\\displaystyle n} is the amount of gas (in moles) and R {\\displaystyle R} is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.",
"title": "Entropy change formulas for simple processes"
},
{
"paragraph_id": 85,
"text": "For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T 0 {\\displaystyle T_{0}} to a final temperature T {\\displaystyle T} , the entropy change is",
"title": "Entropy change formulas for simple processes"
},
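The constant-pressure result referred to above is plausibly:

\[
\Delta S = n C_{P} \ln \frac{T}{T_{0}}
\]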
{
"paragraph_id": 86,
"text": "provided that the constant-pressure molar heat capacity (or specific heat) CP is constant and that no phase transition occurs in this temperature interval.",
"title": "Entropy change formulas for simple processes"
},
{
"paragraph_id": 87,
"text": "Similarly at constant volume, the entropy change is",
"title": "Entropy change formulas for simple processes"
},
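The corresponding constant-volume form, consistent with the following paragraph, is plausibly:

\[
\Delta S = n C_{v} \ln \frac{T}{T_{0}}
\]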
{
"paragraph_id": 88,
"text": "where the constant-volume molar heat capacity Cv is constant and there is no phase change.",
"title": "Entropy change formulas for simple processes"
},
{
"paragraph_id": 89,
"text": "At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.",
"title": "Entropy change formulas for simple processes"
},
{
"paragraph_id": 90,
"text": "Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is",
"title": "Entropy change formulas for simple processes"
},
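Combining the two steps for an ideal gas (heating at constant volume, then isothermal expansion), the total change referred to above is plausibly:

\[
\Delta S = n C_{v} \ln \frac{T}{T_{0}} + n R \ln \frac{V}{V_{0}}
\]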
{
"paragraph_id": 91,
"text": "Similarly if the temperature and pressure of an ideal gas both vary,",
"title": "Entropy change formulas for simple processes"
},
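For simultaneous changes of temperature and pressure, the analogous form (again a reconstruction) is:

\[
\Delta S = n C_{P} \ln \frac{T}{T_{0}} - n R \ln \frac{P}{P_{0}}
\]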
{
"paragraph_id": 92,
"text": "Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (melting) of a solid to a liquid at the melting point Tm, the entropy of fusion is",
"title": "Entropy change formulas for simple processes"
},
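The entropy of fusion mentioned above is presumably:

\[
\Delta S_{\text{fus}} = \frac{\Delta H_{\text{fus}}}{T_{\text{m}}}
\]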
{
"paragraph_id": 93,
"text": "Similarly, for vaporization of a liquid to a gas at the boiling point Tb, the entropy of vaporization is",
"title": "Entropy change formulas for simple processes"
},
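Likewise, the entropy of vaporization is presumably:

\[
\Delta S_{\text{vap}} = \frac{\Delta H_{\text{vap}}}{T_{\text{b}}}
\]

As a rough worked example (approximate textbook values, not taken from the article), water with ΔHvap ≈ 40.7 kJ/mol and Tb ≈ 373 K gives ΔSvap ≈ 109 J/(mol·K).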
{
"paragraph_id": 94,
"text": "As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 95,
"text": "The following is a list of additional definitions of entropy from a collection of textbooks:",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 96,
"text": "In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 97,
"text": "Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of \"molecular disorder\" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of \"disorder\" in the system is given by:",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 98,
"text": "Similarly, the total amount of \"order\" in the system is given by:",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 99,
"text": "In which CD is the \"disorder\" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, CI is the \"information\" capacity of the system, an expression similar to Shannon's channel capacity, and CO is the \"order\" capacity of the system.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 100,
"text": "The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or \"spreading\" of the total energy of each constituent of a system over its particular quantized energy levels.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 101,
"text": "Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that \"spontaneous changes are always accompanied by a dispersal of energy or matter and often both\".",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 102,
"text": "It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a \"loss\" that can never be replaced.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 103,
"text": "As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorized to lead to the heat death of the universe.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 104,
"text": "A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states X 0 {\\displaystyle X_{0}} and X 1 {\\displaystyle X_{1}} such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X {\\displaystyle X} is defined as the largest number λ {\\displaystyle \\lambda } such that X {\\displaystyle X} is adiabatically accessible from a composite state consisting of an amount λ {\\displaystyle \\lambda } in the state X 1 {\\displaystyle X_{1}} and a complementary amount, ( 1 − λ ) {\\displaystyle (1-\\lambda )} , in the state X 0 {\\displaystyle X_{0}} . A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: It is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 105,
"text": "In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as \"von Neumann entropy\",",
"title": "Approaches to understanding entropy"
},
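A standard way to write the von Neumann entropy referred to above (conventions differ on whether the Boltzmann constant is included as a prefactor):

\[
S = -k_{\mathrm{B}} \operatorname{Tr}\left(\rho \ln \rho\right)
\]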
{
"paragraph_id": 106,
"text": "where ρ is the density matrix, and Tr is the trace operator.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 107,
"text": "This upholds the correspondence principle, because in the classical limit, when the phases between the basis states used for the classical probabilities are purely random, this expression is equivalent to the familiar classical definition of entropy,",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 108,
"text": "i.e. in such a basis the density matrix is diagonal.",
"title": "Approaches to understanding entropy"
},
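As a concrete numerical sketch, the von Neumann entropy of a small density matrix can be computed from its eigenvalues; the matrix below is a made-up example, and the result is reported in units of kB.

```python
# Toy example of S = -Tr(rho ln rho) for a 2x2 density matrix (assumed values).
import numpy as np

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])        # Hermitian, trace 1, positive semidefinite

eigvals = np.linalg.eigvalsh(rho)     # real eigenvalues of the density matrix
S = -sum(p * np.log(p) for p in eigvals if p > 1e-12)
print(f"von Neumann entropy = {S:.4f} (in units of k_B)")   # ~0.4165
```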
{
"paragraph_id": 109,
"text": "Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 110,
"text": "I thought of calling it \"information\", but the word was overly used, so I decided to call it \"uncertainty\". [...] Von Neumann told me, \"You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 111,
"text": "Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 112,
"text": "When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities p i {\\displaystyle p_{i}} so that",
"title": "Approaches to understanding entropy"
},
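The discrete-probability definition referred to here is presumably the usual Shannon form:

\[
H = -\sum_{i} p_{i} \log p_{i}
\]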
{
"paragraph_id": 113,
"text": "where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 114,
"text": "In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.",
"title": "Approaches to understanding entropy"
},
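The "binary questions" reading of Shannon entropy can be illustrated directly; the message probabilities below are invented example values.

```python
# Shannon entropy in bits for a discrete message source (made-up probabilities).
import math

def shannon_entropy_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

uniform_4 = [0.25, 0.25, 0.25, 0.25]    # four equally likely messages
skewed_4  = [0.70, 0.15, 0.10, 0.05]    # same alphabet, unequal probabilities

print(shannon_entropy_bits(uniform_4))  # 2.0 bits: two yes/no questions suffice
print(shannon_entropy_bits(skewed_4))   # ~1.32 bits: less average information
```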
{
"paragraph_id": 115,
"text": "Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If W {\\displaystyle W} is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is p = 1 / W {\\displaystyle p=1/W} . The Shannon entropy (in nats) is",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 116,
"text": "and if entropy is measured in units of k {\\displaystyle k} per nat, then the entropy is given by",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 117,
"text": "which is the Boltzmann entropy formula, where k {\\displaystyle k} is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the H {\\displaystyle H} function of information theory and using Shannon's other term, \"uncertainty\", instead.",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 118,
"text": "The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system (with particle number N and volume V being constants) and uses the definition of temperature in terms of entropy, while limiting energy exchange to heat ( d U → d Q {\\displaystyle dU\\rightarrow dQ} ).",
"title": "Approaches to understanding entropy"
},
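With energy exchange limited to heat, the definition of temperature in terms of entropy gives, plausibly, the measurement relation used below:

\[
\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{N,V}
\qquad\Longrightarrow\qquad
dS = \frac{\delta Q}{T}
\]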
{
"paragraph_id": 119,
"text": "The resulting relation describes how entropy changes d S {\\displaystyle dS} when a small amount of energy d Q {\\displaystyle dQ} is introduced into the system at a certain temperature T {\\displaystyle T} .",
"title": "Approaches to understanding entropy"
},
{
"paragraph_id": 120,
"text": "The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero – due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allows the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.",
"title": "Approaches to understanding entropy"
},
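A sketch of the integration step described above; the heat-capacity curve is an invented placeholder rather than measured data for any real substance.

```python
# Calorimetric entropy: integrate C_p(T)/T from near absolute zero up to 298 K.
import numpy as np

T = np.linspace(5.0, 298.15, 400)              # K, measurement temperatures
C_p = 3.0e-4 * T**3 / (1.0 + (T / 40.0)**2)    # J/(mol*K), fake Debye-like shape

integrand = C_p / T
# Trapezoidal rule for S(298 K) - S(~0 K), i.e. the calorimetric entropy.
S_cal = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))
print(f"Calorimetric entropy ≈ {S_cal:.1f} J/(mol*K)")
```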
{
"paragraph_id": 121,
"text": "Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 122,
"text": "Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 123,
"text": "Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimization.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 124,
"text": "Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 125,
"text": "Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 126,
"text": "If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 127,
"text": "The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an \"entropy gap\" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 128,
"text": "Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 129,
"text": "Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.",
"title": "Interdisciplinary applications"
},
{
"paragraph_id": 130,
"text": "In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.",
"title": "Interdisciplinary applications"
}
]
| Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication. Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible. The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation. Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behavior, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI). | 2001-11-20T06:01:12Z | 2023-12-22T10:31:24Z | [
"Template:Short description",
"Template:Hatgrp",
"Template:Thermodynamics sidebar",
"Template:Colend",
"Template:Cite book",
"Template:ISBN",
"Template:Authority control",
"Template:Main",
"Template:Lang",
"Template:Quote box",
"Template:Snd",
"Template:Reflist",
"Template:Use dmy dates",
"Template:NumBlk",
"Template:Rp",
"Template:Cite encyclopedia",
"Template:Wikibooks",
"Template:EntropySegments",
"Template:Complex systems",
"Template:EquationNote",
"Template:Conjugate variables (thermodynamics)",
"Template:Mvar",
"Template:Ordered list",
"Template:Cite journal",
"Template:Modern physics",
"Template:Val",
"Template:See also",
"Template:Cite web",
"Template:Wikiquote",
"Template:Energy footer",
"Template:Statistical mechanics topics",
"Template:Cn",
"Template:Citation needed",
"Template:Wiktionary",
"Template:Infobox physical quantity",
"Template:Math",
"Template:Colbegin",
"Template:Complex systems topics"
]
| https://en.wikipedia.org/wiki/Entropy |
9,892 | Expert | An expert is somebody who has a broad and deep understanding and competence in terms of knowledge, skill and experience through practice and education in a particular field or area of study. Informally, an expert is someone widely recognized as a reliable source of technique or skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status by peers or the public in a specific well-distinguished domain. An expert, more generally, is a person with extensive knowledge or ability based on research, experience, or occupation and in a particular area of study. Experts are called in for advice on their respective subject, but they do not always agree on the particulars of a field of study. An expert can be believed, by virtue of credentials, training, education, profession, publication or experience, to have special knowledge of a subject beyond that of the average person, sufficient that others may officially (and legally) rely upon the individual's opinion on that topic. Historically, an expert was referred to as a sage. The individual was usually a profound thinker distinguished for wisdom and sound judgment.
In specific fields, the definition of expert is well established by consensus and therefore it is not always necessary for individuals to have a professional or academic qualification for them to be accepted as an expert. In this respect, a shepherd with fifty years of experience tending flocks would be widely recognized as having complete expertise in the use and training of sheep dogs and the care of sheep. Another example from computer science is that an expert system may be taught by a human and thereafter considered an expert, often outperforming human beings at particular tasks. In law, an expert witness must be recognized by argument and authority.
Research in this area attempts to understand the relation between expert knowledge, skills and personal characteristics and exceptional performance. Some researchers have investigated the cognitive structures and processes of experts. The fundamental aim of this research is to describe what it is that experts know and how they use their knowledge to achieve performance that most people assume requires extreme or extraordinary ability. Studies have investigated the factors that enable experts to be fast and accurate.
Expertise is the characteristics, skills, and knowledge of a person (that is, an expert) or of a system that distinguish experts from novices and less experienced people. In many domains there are objective measures of performance capable of distinguishing experts from novices: expert chess players will almost always win games against recreational chess players; expert medical specialists are more likely to diagnose a disease correctly; etc.
The word expertise is used to refer also to expert determination, where an expert is invited to decide a disputed issue. The decision may be binding or advisory, according to the agreement between the parties in dispute.
There are two academic approaches to the understanding and study of expertise. The first understands expertise as an emergent property of communities of practice. In this view expertise is socially constructed; tools for thinking and scripts for action are jointly constructed within social groups enabling that group jointly to define and acquire expertise in some domain.
In the second view, expertise is a characteristic of individuals and is a consequence of the human capacity for extensive adaptation to physical and social environments. Many accounts of the development of expertise emphasize that it comes about through long periods of deliberate practice. In many domains, estimates of around 10 years of deliberate practice are common. Recent research on expertise emphasizes the nurture side of the nature-versus-nurture argument. Some factors that do not fit the nature-nurture dichotomy are biological but not genetic, such as starting age, handedness, and season of birth.
In the field of education there is a potential "expert blind spot" (see also Dunning–Kruger effect) in newly practicing educators who are experts in their content area. This is based on the "expert blind spot hypothesis" researched by Mitchell Nathan and Andrew Petrosino. Newly practicing educators with advanced subject-area expertise of an educational content area tend to use the formalities and analysis methods of their particular area of expertise as a major guiding factor of student instruction and knowledge development, rather than being guided by student learning and developmental needs that are prevalent among novice learners.
The blind spot metaphor refers to the physiological blind spot in human vision: just as the visual system fills in what it cannot see, educators' perceptions of learners and their circumstances are strongly shaped by their own expectations. Beginning practicing educators tend to overlook the importance of novice levels of prior knowledge and other factors involved in adjusting and adapting pedagogy for learner understanding. This expert blind spot is in part due to an assumption that novices' cognitive schemata are less elaborate, interconnected, and accessible than experts' and that their pedagogical reasoning skills are less well developed. Essential knowledge of subject matter for practicing educators consists of overlapping knowledge domains: subject matter knowledge and pedagogical content knowledge. Pedagogical content knowledge consists of an understanding of how to represent certain concepts in ways appropriate to the learner contexts, including abilities and interests. The expert blind spot is a pedagogical phenomenon that is typically overcome through educators' experience with instructing learners over time.
In line with the socially constructed view of expertise, expertise can also be understood as a form of power; that is, experts have the ability to influence others as a result of their defined social status. By a similar token, a fear of experts can arise from fear of an intellectual elite's power. In earlier periods of history, simply being able to read made one part of an intellectual elite. The introduction of the printing press in Europe during the fifteenth century and the diffusion of printed matter contributed to higher literacy rates and wider access to the once-rarefied knowledge of academia. The subsequent spread of education and learning changed society, and initiated an era of widespread education whose elite would now instead be those who produced the written content itself for consumption, in education and all other spheres.
Plato's "Noble Lie", concerns expertise. Plato did not believe most people were clever enough to look after their own and society's best interest, so the few clever people of the world needed to lead the rest of the flock. Therefore, the idea was born that only the elite should know the truth in its complete form and the rulers, Plato said, must tell the people of the city "the noble lie" to keep them passive and content, without the risk of upheaval and unrest.
In contemporary society, doctors and scientists, for example, are considered to be experts in that they hold a body of dominant knowledge that is, on the whole, inaccessible to the layman. However, this inaccessibility and perhaps even mystery that surrounds expertise does not cause the layman to disregard the opinion of the experts on account of the unknown. Instead, the complete opposite occurs: members of the public believe in and highly value the opinions of medical professionals and the findings of scientific research, despite not understanding them.
A number of computational models have been developed in cognitive science to explain the development from novice to expert. In particular, Herbert A. Simon and Kevin Gilmartin proposed a model of learning in chess called MAPP (Memory-Aided Pattern Recognizer). Based on simulations, they estimated that about 50,000 chunks (units of memory) are necessary to become an expert, and hence the many years needed to reach this level. More recently, the CHREST model (Chunk Hierarchy and REtrieval STructures) has simulated in detail a number of phenomena in chess expertise (eye movements, performance in a variety of memory tasks, development from novice to expert) and in other domains.
An important feature of expert performance seems to be the way in which experts are able to rapidly retrieve complex configurations of information from long-term memory. They recognize situations because they have meaning. It is perhaps this central concern with meaning and how it attaches to situations which provides an important link between the individual and social approaches to the development of expertise. Work on "Skilled Memory and Expertise" by Anders Ericsson and James J. Staszewski confronts the paradox of expertise and claims that people not only acquire content knowledge as they practice cognitive skills, they also develop mechanisms that enable them to use a large and familiar knowledge base efficiently.
Work on expert systems (computer software designed to provide an answer to a problem, or clarify uncertainties where normally one or more human experts would need to be consulted) typically is grounded on the premise that expertise is based on acquired repertoires of rules and frameworks for decision making which can be elicited as the basis for computer supported judgment and decision-making. However, there is increasing evidence that expertise does not work in this fashion. Rather, experts recognize situations based on experience of many prior situations. They are in consequence able to make rapid decisions in complex and dynamic situations.
In a critique of the expert systems literature, Dreyfus & Dreyfus suggest:
If one asks an expert for the rules he or she is using, one will, in effect, force the expert to regress to the level of a beginner and state the rules learned in school. Thus, instead of using rules he or she no longer remembers, as the knowledge engineers suppose, the expert is forced to remember rules he or she no longer uses. ... No amount of rules and facts can capture the knowledge an expert has when he or she has stored experience of the actual outcomes of tens of thousands of situations.
The role of long-term memory in the skilled memory effect was first articulated by Chase and Simon in their classic studies of chess expertise. They asserted that organized patterns of information stored in long-term memory (chunks) mediated experts' rapid encoding and superior retention. Their study revealed that all subjects retrieved about the same number of chunks, but the size of the chunks varied with subjects' prior experience. Experts' chunks contained more individual pieces than those of novices. This research did not investigate how experts find, distinguish, and retrieve the right chunks from the vast number they hold without a lengthy search of long-term memory.
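The arithmetic behind that finding can be made concrete with a small sketch. The following toy Python snippet is only an illustration of the chunking account, not Chase and Simon's actual model; the capacity of seven chunks and the chunk sizes are assumed values chosen for demonstration.

# Toy illustration (not Chase and Simon's model): both "subjects" retrieve the same
# number of chunks from long-term memory, but the expert's chunks bundle more
# individual pieces, so more items are reproduced overall. Numbers are invented.

SHORT_TERM_CAPACITY = 7  # assumed number of chunks retrievable at once

def items_recalled(avg_items_per_chunk: float, chunks_retrieved: int = SHORT_TERM_CAPACITY) -> float:
    # Total recall = chunks retrieved x average items packed into each chunk.
    return chunks_retrieved * avg_items_per_chunk

novice = items_recalled(avg_items_per_chunk=2.0)  # hypothetical novice: small chunks
expert = items_recalled(avg_items_per_chunk=4.5)  # hypothetical expert: larger, domain-specific chunks

print(f"Novice recalls ~{novice:.0f} items; expert recalls ~{expert:.0f} items "
      f"from the same {SHORT_TERM_CAPACITY} chunks.")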
Skilled memory enables experts to rapidly encode, store, and retrieve information within the domain of their expertise and thereby circumvent the capacity limitations that typically constrain novice performance. For example, it explains experts' ability to recall large amounts of material displayed for only brief study intervals, provided that the material comes from their domain of expertise. When unfamiliar material (not from their domain of expertise) is presented to experts, their recall is no better than that of novices.
The first principle of skilled memory, the meaningful encoding principle, states that experts exploit prior knowledge to durably encode information needed to perform a familiar task successfully. Experts form more elaborate and accessible memory representations than novices. The elaborate semantic memory network creates meaningful memory codes that create multiple potential cues and avenues for retrieval.
The second principle, the retrieval structure principle, states that experts develop memory mechanisms called retrieval structures to facilitate the retrieval of information stored in long-term memory. These mechanisms operate in a fashion consistent with the meaningful encoding principle to provide cues that can later be regenerated to retrieve the stored information efficiently without a lengthy search.
The third principle, the speed-up principle, states that long-term memory encoding and retrieval operations speed up with practice, so that their speed and accuracy approach the speed and accuracy of short-term memory storage and retrieval.
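A retrieval structure can be pictured, very loosely, as an index that maps regenerable cues directly onto stored chunks. The Python sketch below is a toy contrast between indexed and unindexed retrieval under that assumption; the cue labels and chunk contents are hypothetical and are not drawn from the skilled-memory literature.

# Toy contrast: retrieving a stored chunk with and without a retrieval structure.
# All cue labels and chunk contents are invented for illustration.

long_term_store = [
    ("back_rank", "pattern: back-rank mate threat"),
    ("rook_endgame", "procedure: rook-and-pawn endgame technique"),
    ("opening_trap", "pattern: common opening trap"),
]

def unstructured_retrieve(label):
    # Without a retrieval structure: scan long-term memory item by item.
    for stored_label, chunk in long_term_store:
        if stored_label == label:
            return chunk
    return None

# With a retrieval structure: cues regenerated at recall time index the store directly.
retrieval_structure = {label: chunk for label, chunk in long_term_store}

def structured_retrieve(cue):
    return retrieval_structure.get(cue)

assert structured_retrieve("back_rank") == unstructured_retrieve("back_rank")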
Examples of skilled memory research described in the Ericsson and Staszewski study include:
Much of the research regarding expertise involves the studies of how experts and novices differ in solving problems. Mathematics and physics are common domains for these studies.
One of the most cited works in this area examines how experts (PhD students in physics) and novices (undergraduate students who had completed one semester of mechanics) categorize and represent physics problems. They found that novices sort problems into categories based upon surface features (e.g., keywords in the problem statement or visual configurations of the objects depicted). Experts, however, categorize problems based upon their deep structures (i.e., the main physics principle used to solve the problem).
Their findings also suggest that while the schemas of both novices and experts are activated by the same features of a problem statement, the experts' schemas contain more procedural knowledge, which aids in determining which principle to apply, while novices' schemas contain mostly declarative knowledge, which does not aid in determining methods for solution.
Relative to a specific field, an expert has:
Marie-Line Germain developed a psychometric measure of perception of employee expertise called the Generalized Expertise Measure. She defined a behavioral dimension in experts, in addition to the dimensions suggested by Swanson and Holton. Her 16-item scale contains objective expertise items and subjective expertise items. Objective items were named Evidence-Based items. Subjective items (the remaining 11 items of the measure) were named Self-Enhancement items because of their behavioral component.
Scholars in rhetoric have also turned their attention to the concept of the expert. Considered an appeal to ethos or "the personal character of the speaker", established expertise allows a speaker to make statements regarding special topics of which the audience may be ignorant. In other words, the expert enjoys the deference of the audience's judgment and can appeal to authority where a non-expert cannot.
In The Rhetoric of Expertise, E. Johanna Hartelius defines two basic modes of expertise: autonomous and attributed expertise. While an autonomous expert can "possess expert knowledge without recognition from other people," attributed expertise is "a performance that may or may not indicate genuine knowledge." With these two categories, Hartelius isolates the rhetorical problems faced by experts: just as someone with autonomous expertise may not possess the skill to persuade people to hold their points of view, someone with merely attributed expertise may be persuasive but lack the actual knowledge pertaining to a given subject. The problem faced by audiences follows from the problem facing experts: when faced with competing claims of expertise, what resources do non-experts have to evaluate claims put before them?
Hartelius and other scholars have also noted the challenges that projects such as Wikipedia pose to how experts have traditionally constructed their authority. In "Wikipedia and the Emergence of Dialogic Expertise", she highlights Wikipedia as an example of the "dialogic expertise" made possible by collaborative digital spaces. Predicated upon the notion that "truth emerges from dialogue", Wikipedia challenges traditional expertise both because anyone can edit it and because no single person, regardless of their credentials, can end a discussion by fiat. In other words, the community, rather than single individuals, direct the course of discussion. The production of knowledge, then, as a process of dialogue and argumentation, becomes an inherently rhetorical activity.
Hartelius calls attention to two competing norm systems of expertise: “network norms of dialogic collaboration” and “deferential norms of socially sanctioned professionalism”, with Wikipedia as evidence of the first. Drawing on a Bakhtinian framework, Hartelius posits that Wikipedia is an example of an epistemic network that is driven by the view that individuals’ ideas clash with one another so as to generate expertise collaboratively. Hartelius compares Wikipedia's methodology of open-ended discussions of topics to that of Bakhtin's theory of speech communication, where genuine dialogue is considered a live event, which is continuously open to new additions and participants. Hartelius acknowledges that knowledge, experience, training, skill, and qualification are important dimensions of expertise but posits that the concept is more complex than sociologists and psychologists suggest. Arguing that expertise is rhetorical, then, Hartelius explains that expertise "is not simply about one person's skills being different from another's. It is also fundamentally contingent on a struggle for ownership and legitimacy." Effective communication is an element of expertise just as knowledge is. Rather than excluding one another, substance and communicative style are complementary. Hartelius further suggests that Wikipedia's dialogic construction of expertise illustrates both the instrumental and the constitutive dimensions of rhetoric; instrumentally as it challenges traditional encyclopedias and constitutively as a function of its knowledge production. Reviewing the historical development of the encyclopedic project, Hartelius argues that changes in traditional encyclopedias have led to changes in traditional expertise. Wikipedia's use of hyperlinks to connect one topic to another depends on, and develops, electronic interactivity, meaning that Wikipedia's way of knowing is dialogic. Dialogic expertise, then, emerges from multiple interactions between utterances within the discourse community. The ongoing dialogue between contributors on Wikipedia not only results in the emergence of truth; it also explicates the topics one can be an expert on. As Hartelius explains, "the very act of presenting information about topics that are not included in traditional encyclopedias is a construction of new expertise." While Wikipedia insists that contributors must only publish preexisting knowledge, the dynamics behind dialogic expertise create new information nonetheless. Knowledge is thus produced as a function of dialogue. According to Hartelius, dialogic expertise has emerged on Wikipedia not only because of its interactive structure but also because of the site's hortative discourse, which is not found in traditional encyclopedias. By Wikipedia's hortative discourse, Hartelius means various encouragements to edit certain topics and instructions on how to do so that appear on the site. One further reason for the emergence of dialogic expertise on Wikipedia is the site's community pages, which function as a techne, explicating Wikipedia's expert methodology.
Building on Hartelius, Damien Pfister developed the concept of "networked expertise". Noting that Wikipedia employs a "many to many" rather than a "one to one" model of communication, he notes how expertise likewise shifts to become a quality of a group rather than an individual. With the information traditionally associated with individual experts now stored within a text produced by a collective, knowing about something is less important than knowing how to find something. As he puts it, "With the internet, the historical power of subject matter expertise is eroded: the archival nature of the Web means that what and how to information is readily available." The rhetorical authority previously afforded to subject matter expertise, then, is given to those with the procedural knowledge of how to find information called for by a situation.
An expert differs from the specialist in that a specialist has to be able to solve a problem and an expert has to know its solution. The opposite of an expert is generally known as a layperson, while someone who occupies a middle grade of understanding is generally known as a technician and often employed to assist experts. A person may well be an expert in one field and a layperson in many other fields. The concepts of experts and expertise are debated within the field of epistemology under the general heading of expert knowledge. In contrast, the opposite of a specialist would be a generalist or polymath.
The term is widely used informally, with people being described as 'experts' in order to bolster the relative value of their opinion, when no objective criteria for their expertise are available. The term crank is likewise used to disparage opinions. Academic elitism arises when experts become convinced that only their opinion is useful, sometimes on matters beyond their personal expertise.
In contrast to an expert, a novice (known colloquially as a newbie or 'greenhorn') is any person who is new to a science, field of study, activity, or social cause and who is undergoing training in order to meet the normal requirements of being regarded as a mature and equal participant.
"Expert" is also being mistakenly interchanged with the term "authority" in new media. An expert can be an authority if through relationships to people and technology, that expert is allowed to control access to his expertise. However, a person who merely wields authority is not by right an expert. In new media, users are being misled by the term "authority". Many sites and search engines such as Google and Technorati use the term "authority" to denote the link value and traffic to a particular topic. However, this authority only measures populist information. It in no way assures that the author of that site or blog is an expert.
An expert is not to be confused with a professional. A professional is someone who gets paid to do something. An amateur is the opposite of a professional, not the opposite of an expert.
Some characteristics of the development of an expert have been found to include
Mark Twain defined an expert as "an ordinary fellow from another town". Will Rogers described an expert as "A man fifty miles from home with a briefcase." Danish scientist and Nobel laureate Niels Bohr defined an expert as "A person that has made every possible mistake within his or her field." Malcolm Gladwell describes expertise as a matter of practicing the correct way for a total of around 10,000 hours.
| 2001-02-06T22:42:17Z | 2023-12-05T16:24:36Z |
| https://en.wikipedia.org/wiki/Expert |
9,895 | Economy of Afghanistan | The economy of Afghanistan is listed as the 124th largest in the world in terms of nominal gross domestic product (GDP), and 102nd largest in the world in terms of purchasing power parity (PPP). With a population of around 41 million people, Afghanistan's GDP (nominal) stands at $14.58 billion as of 2021, amounting to a GDP per capita of $363.7 (according to a World Bank report). Its annual exports exceed $2 billion, with agricultural, mineral and textile products accounting for 94% of total exports. The nation's total external debt is $1.4 billion as of 2022.
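GDP per capita here is simply nominal GDP divided by population. A rough, hedged check of that arithmetic, assuming the population figure behind the World Bank estimate is close to 40 million:

    \text{GDP per capita} \approx \frac{14.58 \times 10^{9}\ \text{USD}}{4.01 \times 10^{7}\ \text{people}} \approx 364\ \text{USD per person}

which matches the cited $363.7; dividing by a full 41 million people would give closer to $356.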
The Afghan economy continues to improve due to the influx of expatriates, the establishment of more trade routes with neighboring and regional countries, and the expansion of the nation's agriculture, energy and mining sectors. Billions of dollars in assistance from expatriates and the international community increased as political stability improved after NATO became involved in Afghanistan.
Despite holding over one trillion dollars in proven untapped mineral deposits, Afghanistan remains one of the least developed countries in the world. Its unemployment rate is over 23% and about half of its population lives below the poverty line. The main factor behind this has been the continuous war in the country, which deterred business investors and left much of the population fighting one another instead of catching up with the rest of the world. Afghanistan has long sought foreign investment in order to improve its economy. The population of Afghanistan increased by more than 50% between 2001 and 2014, while its GDP grew eightfold. After the U.S. withdrawal from Afghanistan and the Taliban's return to power in 2021, the Biden administration decided to confiscate or withhold $9.5 billion worth of assets belonging to Afghanistan's central bank to stop the Taliban from accessing them.
The official currency of Afghanistan is the afghani (AFN), which has an exchange rate of around 70 afghanis to 1 United States dollar. The country has a central bank called Da Afghanistan Bank (DAB). A number of local banks also operate in the country, including the Afghanistan International Bank, Azizi Bank, New Kabul Bank and Pashtany Bank.
When Afghanistan was ruled by Emir Abdur Rahman Khan (1880–1901) and his son Habibullah Khan (1901–1919), a great deal of commerce was controlled by the government. These monarchs were eager to develop the stature of government and the country's military capability, and so attempted to raise money by the imposition of state monopolies on the sale of commodities and high taxes. This slowed the long-term development of Afghanistan during that period. Western technologies and manufacturing methods were introduced at the command of the Afghan ruler, but in general only according to the logistical requirements of the growing army. An emphasis was placed on the manufacture of weapons and other military material. This process was in the hands of a small number of foreign experts invited to Kabul by the Afghan kings. Otherwise, it was not possible for non-Afghans, particularly westerners, to set up large-scale enterprises in Afghanistan during that period.
In the post-independence period, DAB strongly financed the cultivation of cotton; at one point, the Spinzar Cotton Company in Kunduz Province was one of the largest providers of cotton in the world, most of which was exported to the Soviet Union. Fruits were mainly exported to British-controlled India.
The first prominent plan to develop Afghanistan's economy in modern times was the Helmand Valley Authority project of 1952, modeled on the Tennessee Valley Authority in the United States, which was expected to be of primary economic importance. Glenn Foster, an American contractor working in Afghanistan in the 1950s, stated this about the Afghan people:
Even though there are masses of people, the country seems able to feed them all. Although their diet may not be abundant, you don't see the hunger that you do in some countries....
Afghanistan began facing severe economic hardship when the 1979 Soviet invasion and the ensuing civil war destroyed much of the country's limited infrastructure and disrupted normal patterns of economic activity. The country eventually went from a traditional economy to a centrally planned economy, which lasted until 2002, when it was replaced by a free-market economy. Gross domestic product has fallen substantially since the 1980s due to disruption of trade and transport as well as loss of labor and capital. Continuing internal strife severely hampered domestic efforts to rebuild the nation or provide ways for the international community to help.
According to the International Monetary Fund, the Afghan economy grew 20% in the fiscal year ending in March 2004, after expanding 30% in the previous 12 months. The growth was mainly attributed to United Nations assistance. Billions of dollars in international aid entered Afghanistan from 2002 to 2021. A GDP of $4 billion in fiscal year 2003 was recalculated by the IMF to $6.1 billion, after adding proceeds from opium production. Mean graduate pay was $0.56 per man-hour in 2010. The country expects to be self-sufficient in wheat, rice, poultry and dairy production by 2026.
The recent reestablishment of the Taliban government led to a temporary suspension of international development aid to Afghanistan. The World Bank and International Monetary Fund also halted payments during that period. In this regard, the Taliban's spiritual leader Hibatullah Akhundzada stated, "The economy of a country is built when its people work together and do not rely on foreign aid[.]" The Biden administration froze about $9 billion in assets belonging to the DAB, a move intended to block the Taliban from accessing the money. Recent droughts, earthquakes and floods in the country have further worsened the economic situation for many residents. The Ministry of Finance collected over $2 billion in revenue in 2022.
The GDP of Afghanistan is estimated to have dropped by 20% following the Taliban's return to power. After months of free fall, the Afghan economy then began stabilizing as a result of the Taliban's restrictions on smuggled imports, limits on banking transactions, and UN aid. In 2023, the Afghan economy began showing signs of revival, accompanied by stable exchange rates, low inflation, stable revenue collection, and rising exports. In the third quarter of 2023, the afghani was the best-performing currency in the world, climbing over 9% against the US dollar.
Agriculture remains Afghanistan’s most important source of employment: 60-80 percent of Afghanistan’s population works in this sector, although it accounts for less than a third of GDP due to insufficient irrigation, drought, lack of market access, and other structural impediments. Most Afghan farmers are primarily subsistence farmers.
Afghanistan produced in 2018:
In addition to smaller productions of other agricultural products.
Afghanistan produces around 1.5 million tons of fresh fruits annually, which could be increased significantly. It is known for producing some of the finest fruits, especially pomegranates and grapes as well as sweet melons and mulberries. Other fruits grown in the country include apples, apricots, cherries, figs, kiwi, oranges, peaches, pears, persimmons, plums, and strawberries. Farming is entirely organic and steadily increasing. There are over 5,000 greenhouses in the country.
The northern and western Afghan provinces are long known for pistachio cultivation. In recent years, farmers in the southern provinces began growing American pistachio trees. Provinces in the east of the country, particularly Khost and Paktia, are famous for pine nuts. The northern and central provinces are also famous for almonds and walnuts. The Bamyan Province in central Afghanistan is known for growing superior quality potatoes, of which 370,000 tons were produced in 2020. Nangarhar, Kunar and Laghman are the only provinces in the country where large farms of grapefruits, lemons, limes, and oranges can be found. Nangarhar also has farms of dates, peanuts, olives, and sugarcane. Cultivation of these products has spread to other provinces of the country. Other agricultural products such as avocados, bananas and pineapples have recently been planted in the provinces of Balkh, Helmand, Nangarhar, and Paktia.
Afghanistan is listed as the 54th-largest vegetable-producing country. Most of its vegetables are grown for domestic consumption and include beans, broccoli, cabbage, carrots, cauliflower, chickpeas, coriander, corn, cucumbers, eggplants, leeks, lettuce, okra, onions, peppers, potatoes, pumpkins, radishes, rhubarb, spinach, tomatoes, turnips, and zucchini. Wheat and cereal production is Afghanistan's traditional agricultural mainstay. The nation is nearing self-sufficiency in grain production; it requires an additional 1 to 3 million tons of wheat to become self-sufficient, a target expected to be reached in the near future.
Livestock in Afghanistan mainly include cattle, sheep, and goats. Poultry farming is widespread in the warmer parts of the country. The Habib Hassam Poultry Complex is located in Jalalabad.
Arable land in Afghanistan is reported to be over 7.5 million hectares. Wheat production stood at about 5 million tonnes in 2015, nurseries covered 119,000 hectares of land, and grape production was around 615,000 tonnes. Cotton production was reported to have jumped to 500,000 tons. Around 3,200 ha (7,900 acres) of farmland in Afghanistan is used to cultivate saffron, mostly in the west, north and south of the country. Sugarcane is currently grown on 1,750 ha (4,300 acres) of land, and asafoetida on nearly 980 ha (2,400 acres).
According to a 2010 report, only about 2.1% (or 1,350,000 ha (3,300,000 acres)) of Afghanistan is forested. This could be significantly increased by planting trees, including on the non-rocky hills and mountains that trap underground water. In recent years, steps have been taken to plant trees in urban areas across Afghanistan, and even the Taliban spiritual leader has called for planting more trees. Felling has been made illegal nationwide.
Afghanistan is landlocked, and its citizens have no direct access to an ocean. The country has many lakes, ponds, reservoirs, rivers, springs, and streams, which make it suitable for fish farming. Historically, fish constituted only a small part of the Afghan diet because of the unavailability of modern fish farms, and fishing took place only in lakes and rivers, particularly the Amu, Helmand and Kabul rivers. Consumption of fish has increased sharply due to the establishment of many fish farms; there are over 2,600 of them in the country. The largest are at the national reservoirs, which supply fish eggs to smaller fish farms.
Afghanistan's geographical location makes it an important transit country for regional trade. The Lapis Lazuli corridor connects Afghanistan with Turkmenistan and ultimately extends to Europe. Other such trade routes connect Afghanistan with neighboring Iran, Pakistan, Tajikistan and Uzbekistan. The country also trades directly with China and India via air corridors. It has four international airports: Kabul International Airport in the capital city; Mazar-e Sharif International Airport in the north of the country; Herat International Airport in the west; and Ahmad Shah Baba International Airport in Kandahar. It also has about 24 domestic airports. The major airlines of the country include Ariana Afghan Airlines and Kam Air. Its national rail network is slowly expanding to connect Central Asia with Pakistan and Iran. In addition to Central Asia, imported goods also enter by rail from neighboring Iran and China.
The Afghanistan–Pakistan Transit Trade Agreement (APTTA) allows Afghan and Pakistani cargo trucks to transit goods through both nations. This revised, US-sponsored agreement also allows Afghan trucks to transport exports to India via Pakistan up to the Wagah crossing point. There are over a dozen official border crossing points around Afghanistan. They include Abu Nasir Port in Farah Province, Ai-Khanoum in Takhar Province, Angur Ada in Paktika Province, Aqina in Faryab Province, Dand-aw-Patan in Paktia Province, Ghulam Khan in Khost Province, Hairatan in Balkh Province, Ishkashim in Badakhshan Province, Islam Qala in Herat Province, Sher Khan Bandar in Kunduz Province, Torghundi in Herat Province, Torkham in Nangarhar Province, Spin Boldak in Kandahar Province, and Zaranj in Nimruz Province. The country also has legal access to two major seaports in Pakistan, the Gwadar Port in Balochistan and Port Qasim in Sindh, as well as to major seaports in Iran, including Bandar Abbas on the Persian Gulf and the Chabahar Port on the Gulf of Oman.
Afghanistan is endowed with a wealth of natural resources, including extensive deposits of barite, chromite, coal, copper, gold, gemstones, iron ore, lead, lithium, marble, natural gas, petroleum, salt, sulfur, talc, uranium, and zinc. Rare-earth elements can be found all over the country. In 2006, a U.S. Geological Survey assessment estimated that Afghanistan has as much as 1,000 billion cubic metres (36 trillion cubic feet) of natural gas and 570 million cubic metres (3.6 billion barrels) of oil and condensate reserves. According to a 2007 assessment, Afghanistan has significant amounts of undiscovered non-fuel mineral resources. Geologists have also found indications of abundant deposits of colored stones and gemstones, including emerald, garnet, kunzite, lapis lazuli, peridot, ruby, sapphire, spinel, and tourmaline.
It is claimed that Afghanistan has at least $1 trillion in untapped mineral deposits. A memo from the Pentagon stated that Afghanistan could become the "Saudi Arabia of lithium". Some believe that the untapped minerals are worth up to $3 trillion. The Khanashin carbonatites in the Helmand Province of the country have an estimated 1 million metric tonnes of rare earth elements.
Afghanistan currently has a copper mining deal with the China Metallurgical Group Corporation, which involves a $2.8 billion investment by China and an annual income of about $400 million for the Afghan government. The country's Ainak copper mine, located in Logar Province, is one of the biggest in the world, estimated to hold at least 11 million tonnes, or US$33 billion worth, of copper.
The previous government signed a 30-year contract with the investment group Centar and its operating company, Afghan Gold and Minerals Co., to explore and develop a copper mining operation in Balkhab District in Sar-e Pol Province, as well as a gold mining operation in Badakhshan Province. The copper contract involved a $56 million investment and the gold contract a $22 million investment.
Another recently announced resource is the Hajigak iron mine, located 210 km (130 mi) west of Kabul, which is believed to hold an estimated 1.8 billion to 2 billion metric tons of iron ore, the mineral used to make steel. The country also has a number of coal mines.
An important resource for Afghanistan in the past has been natural gas, which was first tapped in 1967. During the 1980s, gas sales accounted for $300 million a year in export revenues (56% of the total). About 90% of these exports went to the Soviet Union to pay for imports and debts. However, during the withdrawal of Soviet troops in 1989, the natural gas fields were capped to prevent sabotage by criminals. Gas production dropped from a high of 8.2 million cubic metres (290 million cubic feet) per day in the 1980s to a low of about 600,000 cubic metres (21 million cubic feet) in 2001. Production of natural gas was restored during the Karzai administration in 2010.
It is predicted that, by pumping out its own oil reserves, Afghanistan will no longer need to import oil products after 2026. The Karzai administration and the China National Petroleum Corporation (CNPC) originally signed a contract for the development of three oil fields in the northern provinces of Sar-e Pol, Jowzjan and Faryab, and CNPC was later reported to be extracting 240,000 cubic metres (1.5 million barrels) of oil annually. In early 2023, the Xinjiang Central Asia Petroleum and Gas Company signed a similar contract with the Islamic Emirate of Afghanistan. Russia has also expressed interest in supplying oil and gas to Afghanistan.
Afghanistan embarked on a modest economic development program in the 1930s. The government founded banks; introduced paper money; established a university; expanded primary, secondary, and technical schools; and sent students abroad for education. In 1952 it created the Helmand Valley Authority to manage the economic development of the Helmand and Arghandab valleys through irrigation and land development, a scheme which remains one of the country's most important capital resources.
In 1956, the government promulgated the first in a long series of ambitious development plans. By the late 1970s, these had achieved only mixed results due to flaws in the planning process as well as inadequate funding and a shortage of the skilled managers and technicians needed for implementation.
Da Afghanistan Bank serves as the central bank of the nation. The afghani (AFN) is the national currency, with an exchange rate of around 70 afghanis to 1 US dollar. There are over a dozen different banks operating in the country, including Afghanistan International Bank, Kabul Bank, Azizi Bank, Pashtany Bank, Standard Chartered Bank, and First Micro Finance Bank. Cash is still widely used for most transactions. A new law on private investment provides three- to seven-year tax holidays to eligible companies and a four-year exemption from export tariffs and duties. Improvements to the business-enabling environment have resulted in more than $1.5 billion in telecom investment and created more than 100,000 jobs since 2003.
Afghanistan is a member of the ECO, OIC, SAARC, and WTO, and has observer status in the SCO. It seeks to complete the so-called New Silk Road trade project, which aims to connect South Asia with Central Asia and the Middle East. This would allow Afghanistan to collect substantial fees from trade passing through the country, including from the Trans-Afghanistan Pipeline.
Ongoing national megaprojects include the Qosh Tepa Canal project in the north of the country and the New Kabul City. Other, smaller development projects include the Qatar Township in Kabul, Aino Mena in Kandahar and the Ghazi Amanullah Khan Town east of Jalalabad. Similar projects are also found in Herat in the west, Mazar-e-Sharif in the north, Khost in the east, and in other cities.
There are as many as 5,000 factories in Afghanistan. Most are locally owned, while others involve foreign investors. They produce construction materials, furniture, household items, apparel, food, beverages, pharmaceutical products, and other goods. The country imports roughly $500 million worth of textile goods from other countries. It exported about $168 million worth of cotton in 2022. Afghan handwoven rugs are among the country's most popular exports. Other products include hand-crafted antique replicas as well as leather and furs. Afghanistan is the third-largest exporter of cashmere.
After the Islamic Emirate of Afghanistan returned to power, the country suffered from a major liquidity crisis and a shortage of banknotes. Because outside donors severely cut funding to support Afghanistan's health, education, and other essential sectors, many Afghans lost their incomes. Under the assessment system of the World Food Programme (WFP), almost 20 million people suffered either level-3 "crisis" or level-4 "emergency" levels of food insecurity. The crisis's impact on women and girls was especially severe. Officials under the new Islamic Emirate continue to provide communication services to areas that lacked them. The government collected 61 billion afghanis in tariffs in 2022, which increased to 76 billion in 2023, and the country continues to attract foreign investors.
Tourism in Afghanistan was at its peak in 1977, when many tourists from around the world, including from as far away as Europe and North America, visited the country. That ended with the start of the April 1978 Saur Revolution. However, tourism is gradually increasing again, despite the country's reputation as one of the most dangerous in the world. Between 4,000 and 20,000 foreign tourists visit Afghanistan every year, and as many as 371,000 Afghans visited different parts of the country in 2022. Tourists are advised to avoid areas where armed criminals may operate.
Ariana, Flydubai and Kam Air all provide flight services between Dubai International Airport and Kabul International Airport. The city of Kabul has many guest houses and hotels, including the Kabul Serena Hotel, the Hotel Inter-Continental Kabul, the Safi Landmark Hotel, and the Kabul Star Hotel. A small number of guest houses and hotels are also available in other cities such as Kandahar, Herat, Mazar-i-Sharif, Jalalabad, Bamyan, and Fayezabad. For those wanting to travel by road, there are bus terminals with mosques, Afghan-style restaurants and small shops in the major cities.
Tourists visit a number of notable places across Afghanistan.
The following table shows the main economic indicators for 2002–2020 (with IMF staff estimates for 2021–2026). The annual unemployment rate is taken from the World Bank, although the International Monetary Fund considers these figures unreliable.
Gross national saving: 22.7% of GDP (2017)
GDP - composition by sector:
note: data excludes opium production
GDP - composition by end use:
Household income or consumption by percentage share:
Agriculture - products: wheat, milk, grapes, vegetables, potatoes, watermelons, melons, rice, onions, apples
Industries: small-scale production of bricks, textiles, soap, furniture, shoes, fertilizer, apparel, food-products, non-alcoholic beverages, mineral water, cement; handwoven carpets; natural gas, coal, copper
Industrial production growth rate: -1.9% (2016) country comparison to the world: 181
Labor force: 8.478 million (2017) country comparison to the world: 58
Labor force - by occupation: agriculture 44.3%, industry 18.1%, services 37.6% (2017)
Population below poverty line: 54.5% (2017)
Budget:
Taxes and other revenues: 11.2% (of GDP) (2017) country comparison to the world: 210
Exports: $2 billion (2022) country comparison to the world: 164
Exports - commodities: gold, grapes, opium, fruits and nuts, insect resins, cotton, handwoven carpets, soapstone, scrap metal (2019)
Exports - partners: United Arab Emirates 45%, Pakistan 24%, India 22%, China 1% (2019)
Imports: $7 billion (2022) country comparison to the world: 125
Imports - commodities: wheat flours, broadcasting equipment, refined petroleum, rolled tobacco, aircraft parts, synthetic fabrics (2019)
Imports - partners: United Arab Emirates 23%, Pakistan 17%, India 13%, China 9%, United States 9%, Uzbekistan 7%, Kazakhstan 6% (2019)
Reserves of foreign exchange and gold: $7.187 billion (2017) country comparison to the world: 85
Current account balance: $1.014 billion (2017) country comparison to the world: 49
Currency: Afghani (AFN)
Exchange rates: 67 afghanis to 1 US dollar (2023)
Fiscal year: 21 December - 20 December
Energy in Afghanistan is provided mainly by hydropower, followed by fossil fuels and solar power. The nation currently generates over 600 megawatts (MW) of electricity from its several hydroelectric plants, as well as from fossil fuels and solar panels. Over 670 MW more is imported from neighboring Iran, Tajikistan, Turkmenistan and Uzbekistan. Da Afghanistan Breshna Sherkat (DABS) is the national electricity provider.
The price of electricity is 2.5 afghanis per kilowatt-hour in Kabul Province, 4 afghanis in Herat Province, and around 6 afghanis in Balkh Province. The government wants to use the nation's coal reserves to produce additional electricity. The CASA-1000 project will also add 300 MW of electricity to the national grid.
Due to the large influx of expats from neighboring Pakistan and Iran, the nation may require as much as 7,000 MW of electricity in the coming years. The Afghan National Development Strategy has identified renewable energy alternatives, such as wind and solar energy, as high-value power sources to develop. A number of major solar and wind farms already exist in the country, with more under development.
An elf (pl. elves) is a type of humanoid supernatural being in Germanic folklore. Elves appear especially in North Germanic mythology, being mentioned in the Icelandic Poetic Edda and Snorri Sturluson's Prose Edda.
In medieval Germanic-speaking cultures, elves generally seem to have been thought of as beings with magical powers and supernatural beauty, ambivalent towards everyday people and capable of either helping or hindering them. However, the details of these beliefs have varied considerably over time and space and have flourished in both pre-Christian and Christian cultures.
Sometimes elves are, like dwarfs, associated with craftsmanship. Wayland the Smith embodies this feature. He is known under many names, depending on the language in which the stories were distributed: Völund in Old Norse, Wēland in Anglo-Saxon and Wieland in German. The story of Wayland is also found in the Prose Edda.
The word elf is found throughout the Germanic languages and seems originally to have meant 'white being'. However, reconstructing the early concept of an elf depends largely on texts written by Christians, in Old and Middle English, medieval German, and Old Norse. These associate elves variously with the gods of Norse mythology, with causing illness, with magic, and with beauty and seduction.
After the medieval period, the word elf tended to become less common throughout the Germanic languages, losing out to alternative native terms like Zwerg ('dwarf') in German and huldra ('hidden being') in North Germanic languages, and to loan-words like fairy (borrowed from French into most of the Germanic languages). Still, belief in elves persisted in the early modern period, particularly in Scotland and Scandinavia, where elves were thought of as magically powerful people living, usually invisibly, alongside everyday human communities. They continued to be associated with causing illnesses and with sexual threats. For example, several early modern ballads in the British Isles and Scandinavia, originating in the medieval period, describe elves attempting to seduce or abduct human characters.
With urbanisation and industrialisation in the nineteenth and twentieth centuries, belief in elves declined rapidly (though Iceland has some claim to continued popular belief in elves). However, elves started to be prominent in the literature and art of educated elites from the early modern period onwards. These literary elves were imagined as tiny, playful beings, with William Shakespeare's A Midsummer Night's Dream being a key development of this idea. In the eighteenth century, German Romantic writers were influenced by this notion of the elf and re-imported the English word elf into the German language.
From the Romantic idea of elves came the elves of popular culture that emerged in the nineteenth and twentieth centuries. The "Christmas elves" of contemporary popular culture are a relatively recent creation, popularized during the late nineteenth century in the United States. Elves entered the twentieth-century high fantasy genre in the wake of works published by authors such as J. R. R. Tolkien; these re-popularised the idea of elves as human-sized and humanlike beings. Elves remain a prominent feature of fantasy media today.
Elves have in many times and places been believed to be real beings. Where enough people have believed in the reality of elves that those beliefs then had real effects in the world, they can be understood as part of people's worldview, and as a social reality: a thing which, like the exchange value of a dollar bill or the sense of pride stirred up by a national flag, is real because of people's beliefs rather than as an objective reality. Accordingly, beliefs about elves and their social functions have varied over time and space.
Even in the twenty-first century, fantasy stories about elves have been argued both to reflect and to shape their audiences' understanding of the real world, and traditions about Santa Claus and his elves relate to Christmas.
Over time, people have attempted to demythologise or rationalise beliefs in elves in various ways.
Beliefs about elves have their origins before the conversion to Christianity and associated Christianization of northwest Europe. For this reason, belief in elves has, from the Middle Ages through into recent scholarship, often been labelled "pagan" and a "superstition." However, almost all surviving textual sources about elves were produced by Christians (whether Anglo-Saxon monks, medieval Icelandic poets, early modern ballad-singers, nineteenth-century folklore collectors, or even twentieth-century fantasy authors). Attested beliefs about elves, therefore, need to be understood as part of Germanic-speakers' Christian culture and not merely a relic of their pre-Christian religion. Accordingly, investigating the relationship between beliefs in elves and Christian cosmology has been a preoccupation of scholarship about elves both in early times and modern research.
Historically, people have taken three main approaches to integrating elves into Christian cosmology, all of which are found widely across time and space.
Some nineteenth- and twentieth-century scholars attempted to rationalise beliefs in elves as folk memories of lost indigenous peoples. Since belief in supernatural beings is ubiquitous in human cultures, scholars no longer believe such explanations are valid. Research has shown, however, that stories about elves have often been used as a way for people to think metaphorically about real-life ethnic others.
Scholars have at times also tried to explain beliefs in elves as being inspired by people suffering certain kinds of illnesses (such as Williams syndrome). Elves were certainly often seen as a cause of illness, and indeed the English word oaf seems to have originated as a form of elf: the word elf came to mean 'changeling left by an elf' and then, because changelings were noted for their failure to thrive, to its modern sense 'a fool, a stupid person; a large, clumsy man or boy'. However, it again seems unlikely that the origin of beliefs in elves itself is to be explained by people's encounters with objectively real people affected by disease.
The English word elf is from the Old English word most often attested as ælf (whose plural would have been *ælfe). Although this word took a variety of forms in different Old English dialects, these converged on the form elf during the Middle English period. During the Old English period, separate forms were used for female elves (such as ælfen, putatively from Proto-Germanic *ɑlβ(i)innjō), but during the Middle English period the word elf routinely came to include female beings.
The Old English forms are cognates – linguistic siblings stemming from a common origin – with medieval Germanic terms such as Old Norse alfr ('elf'; plural alfar), Old High German alp ('evil spirit'; pl. alpî, elpî; feminine elbe), Burgundian *alfs ('elf'), and Middle Low German alf ('evil spirit'). These words must come from Proto-Germanic, the ancestor-language of the attested Germanic languages; the Proto-Germanic forms are reconstructed as *ɑlβi-z and *ɑlβɑ-z.
Germanic *ɑlβi-z~*ɑlβɑ-z is generally agreed to be a cognate with Latin albus ('(matt) white'), Old Irish ailbhín ('flock'), Ancient Greek ἀλφός (alphós; 'whiteness, white leprosy'), and Albanian elb ('barley'); and the Germanic word for 'swan' reconstructed as *albit- (compare Modern Icelandic álpt) is often thought to be derived from it. These all come from a Proto-Indo-European root *h₂elbʰ-, and seem to be connected by the idea of whiteness. The Germanic word presumably originally meant 'white one', perhaps as a euphemism. Jakob Grimm thought whiteness implied positive moral connotations, and, noting Snorri Sturluson's ljósálfar, suggested that elves were divinities of light. This is not necessarily the case, however. For example, because the cognates suggest matt white rather than shining white, and because in medieval Scandinavian texts whiteness is associated with beauty, Alaric Hall has suggested that elves may have been called 'the white people' because whiteness was associated with (specifically feminine) beauty. Some scholars have argued that the names Albion and Alps may also be related (possibly through Celtic).
A completely different etymology, making elf a cognate with the Ṛbhus, semi-divine craftsmen in Indian mythology, was suggested by Adalbert Kuhn in 1855. In this case, *ɑlβi-z would connote the meaning 'skillful, inventive, clever', and could be a cognate with Latin labor, in the sense of 'creative work'. While often mentioned, this etymology is not widely accepted.
Throughout the medieval Germanic languages, elf was one of the nouns used in personal names, almost invariably as a first element. These names may have been influenced by Celtic names beginning in Albio- such as Albiorix.
Personal names provide the only evidence for elf in Gothic, which must have had the word *albs (plural *albeis). The most famous name of this kind is Alboin. Old English names in elf- include the cognate of Alboin, Ælfwine (literally "elf-friend", m.), Ælfric ("elf-powerful", m.), Ælfweard ("elf-guardian", m.), and Ælfwaru ("elf-care", f.). A widespread survivor of these in modern English is Alfred (Old English Ælfrēd, "elf-advice"). Also surviving are the English surname Elgar (Ælfgar, "elf-spear") and the name of St Alphege (Ælfhēah, "elf-tall"). German examples are Alberich, Alphart and Alphere (father of Walter of Aquitaine), and Icelandic examples include Álfhildur. These names suggest that elves were positively regarded in early Germanic culture. Of the many words for supernatural beings in Germanic languages, the only ones regularly used in personal names are elf and words denoting pagan gods, suggesting that elves were considered similar to gods.
In later Old Icelandic, alfr ("elf") and the personal name which in Common Germanic had been *Aþa(l)wulfaz both coincidentally became álfr~Álfr.
Elves appear in some place names, though it is difficult to be sure how many genuinely refer to elves, since a number of other words, including personal names, can appear similar to elf. The clearest English examples are Elveden ("elves' hill", Suffolk) and Elvendon ("elves' valley", Oxfordshire); other examples may be Eldon Hill ("elves' hill", Derbyshire) and Alden Valley ("elves' valley", Lancashire). These seem to associate elves fairly consistently with woods and valleys.
The earliest surviving manuscripts mentioning elves in any Germanic language are from Anglo-Saxon England. Medieval English evidence has, therefore, attracted quite extensive research and debate. In Old English, elves are most often mentioned in medical texts which attest to the belief that elves might afflict humans and livestock with illnesses: apparently mostly sharp, internal pains and mental disorders. The most famous of the medical texts is the metrical charm Wið færstice ("against a stabbing pain"), from the tenth-century compilation Lacnunga, but most of the attestations are in the tenth-century Bald's Leechbook and Leechbook III. This tradition continues into later English-language traditions too: elves continue to appear in Middle English medical texts.
Belief in elves causing illnesses remained prominent in early modern Scotland, where elves were viewed as supernaturally powerful people who lived invisibly alongside everyday rural people. Thus, elves were often mentioned in the early modern Scottish witchcraft trials: many witnesses in the trials believed themselves to have been given healing powers or to know of people or animals made sick by elves. Throughout these sources, elves are sometimes associated with the succubus-like supernatural being called the mare.
While they may have been thought to cause diseases with magical weapons, elves are more clearly associated in Old English with a kind of magic denoted by Old English sīden and sīdsa, a cognate with the Old Norse seiðr, and also paralleled in the Old Irish Serglige Con Culainn. By the fourteenth century, they were also associated with the arcane practice of alchemy.
In one or two Old English medical texts, elves might be envisaged as inflicting illnesses with projectiles. In the twentieth century, scholars often labelled the illnesses elves caused as "elf-shot", but work from the 1990s onwards showed that the medieval evidence for elves' being thought to cause illnesses in this way is slender; debate about its significance is ongoing.
The noun elf-shot is first attested in a Scots poem, "Rowlis Cursing," from around 1500, where "elf schot" is listed among a range of curses to be inflicted on some chicken thieves. The term may not always have denoted an actual projectile: shot could mean "a sharp pain" as well as "projectile." But in early modern Scotland, elf-schot and other terms like elf-arrowhead are sometimes used of neolithic arrow-heads, apparently thought to have been made by elves. In a few witchcraft trials, people attest that these arrow-heads were used in healing rituals and occasionally alleged that witches (and perhaps elves) used them to injure people and cattle. Compare with the following excerpt from a 1749–50 ode by William Collins:
There every herd, by sad experience, knows
How, winged with fate, their elf-shot arrows fly,
When the sick ewe her summer food forgoes,
Or, stretched on earth, the heart-smit heifers lie.
Because of elves' association with illness, in the twentieth century, most scholars imagined that elves in the Anglo-Saxon tradition were small, invisible, demonic beings, causing illnesses with arrows. This was encouraged by the idea that "elf-shot" is depicted in the Eadwine Psalter, in an image which became well known in this connection. However, this is now thought to be a misunderstanding: the image proves to be a conventional illustration of God's arrows and Christian demons. Rather, twenty-first century scholarship suggests that Anglo-Saxon elves, like elves in Scandinavia or the Irish Aos Sí, were regarded as people.
Like words for gods and men, the word elf is used in personal names where words for monsters and demons are not. Just as álfar is associated with Æsir in Old Norse, the Old English Wið færstice associates elves with ēse; whatever this word meant by the tenth century, etymologically it denoted pagan gods. In Old English, the plural ylfe (attested in Beowulf) is grammatically an ethnonym (a word for an ethnic group), suggesting that elves were seen as people. As well as appearing in medical texts, the Old English word ælf and its feminine derivative ælbinne were used in glosses to translate Latin words for nymphs. This fits well with the word ælfscȳne, which meant "elf-beautiful" and is attested describing the seductively beautiful Biblical heroines Sarah and Judith.
Likewise, in Middle English and early modern Scottish evidence, while still appearing as causes of harm and danger, elves appear clearly as humanlike beings. They became associated with medieval chivalric romance traditions of fairies and particularly with the idea of a Fairy Queen. A propensity to seduce or rape people becomes increasingly prominent in the source material. Around the fifteenth century, evidence starts to appear for the belief that elves might steal human babies and replace them with changelings.
By the end of the medieval period, elf was increasingly being supplanted by the French loan-word fairy. An example is Geoffrey Chaucer's satirical tale Sir Thopas, where the title character sets out on a quest for the "elf-queen", who dwells in the "countree of the Faerie".
Evidence for elf beliefs in medieval Scandinavia outside Iceland is sparse, but the Icelandic evidence is uniquely rich. For a long time, views about elves in Old Norse mythology were defined by Snorri Sturluson's Prose Edda, which talks about svartálfar, dökkálfar and ljósálfar ("black elves", "dark elves", and "light elves"). For example, Snorri recounts how the svartálfar create new blond hair for Thor's wife Sif after Loki had shorn off Sif's long hair. However, these terms are attested only in the Prose Edda and texts based on it. It is now agreed that they reflect traditions of dwarves, demons, and angels, partly showing Snorri's "paganisation" of a Christian cosmology learned from the Elucidarius, a popular digest of Christian thought.
Scholars of Old Norse mythology now focus on references to elves in Old Norse poetry, particularly the Elder Edda. The only character explicitly identified as an elf in classical Eddaic poetry, if any, is Völundr, the protagonist of Völundarkviða. However, elves are frequently mentioned in the alliterating phrase Æsir ok Álfar ('Æsir and elves') and its variants. This was a well-established poetic formula, indicating a strong tradition of associating elves with the group of gods known as the Æsir, or even suggesting that the elves and Æsir were one and the same. The pairing is paralleled in the Old English poem Wið færstice and in the Germanic personal name system; moreover, in Skaldic verse the word elf is used in the same way as words for gods. Sigvatr Þórðarson's skaldic travelogue Austrfaravísur, composed around 1020, mentions an álfablót ('elves' sacrifice') in Edskogen in what is now southern Sweden. There does not seem to have been any clear-cut distinction between humans and gods; like the Æsir, then, elves were presumably thought of as being humanlike and existing in opposition to the giants. Many commentators have also (or instead) argued for conceptual overlap between elves and dwarves in Old Norse mythology, which may fit with trends in the medieval German evidence.
There are hints that the god Freyr was associated with elves. In particular, Álfheimr (literally "elf-world") is mentioned as being given to Freyr in Grímnismál. Snorri Sturluson identified Freyr as one of the Vanir. However, the term Vanir is rare in Eddaic verse, very rare in Skaldic verse, and is not generally thought to appear in other Germanic languages. Given the link between Freyr and the elves, it has therefore long been suspected that álfar and Vanir are, more or less, different words for the same group of beings. However, this is not uniformly accepted.
A kenning (poetic metaphor) for the sun, álfröðull (literally "elf disc"), is of uncertain meaning but to some scholars suggests a close link between elves and the sun.
Although the relevant words are of slightly uncertain meaning, it seems fairly clear that Völundr is described as one of the elves in Völundarkviða. As his most prominent deed in the poem is to rape Böðvildr, the poem associates elves with being a sexual threat to maidens. The same idea is present in two post-classical Eddaic poems, which are also influenced by chivalric romance or Breton lais, Kötludraumur and Gullkársljóð. The idea also occurs in later traditions in Scandinavia and beyond, so it may be an early attestation of a prominent tradition. Elves also appear in a couple of verse spells, including the Bergen rune-charm from among the Bryggen inscriptions.
The appearance of elves in sagas is closely defined by genre. The Sagas of Icelanders, Bishops' sagas, and contemporary sagas, whose portrayal of the supernatural is generally restrained, rarely mention álfar, and then only in passing. But although limited, these texts provide some of the best evidence for the presence of elves in everyday beliefs in medieval Scandinavia. They include a fleeting mention of elves seen out riding in 1168 (in Sturlunga saga); mention of an álfablót ("elves' sacrifice") in Kormáks saga; and the existence of the euphemism ganga álfrek ('go to drive away the elves') for "going to the toilet" in Eyrbyggja saga.
The Kings' sagas include a rather elliptical but widely studied account of an early Swedish king being worshipped after his death and being called Ólafr Geirstaðaálfr ('Ólafr the elf of Geirstaðir'), and a demonic elf at the beginning of Norna-Gests þáttr.
The legendary sagas tend to focus on elves as legendary ancestors or on heroes' sexual relations with elf-women. Mention of the land of Álfheimr is found in Heimskringla, while Þorsteins saga Víkingssonar recounts a line of local kings who ruled over Álfheim and who, since they had elven blood, were said to be more beautiful than most men. According to Hrólfs saga kraka, Hrolfr Kraki's half-sister Skuld was the half-elven child of King Helgi and an elf-woman (álfkona). Skuld was skilled in witchcraft (seiðr). Accounts of Skuld in earlier sources, however, do not include this material. The Þiðreks saga version of the Nibelungen (Niflungar) describes Högni as the son of a human queen and an elf, but no such lineage is reported in the Eddas, Völsunga saga, or the Nibelungenlied. The relatively few mentions of elves in the chivalric sagas tend to be whimsical.
In his Rerum Danicarum fragmenta (1596) written mostly in Latin with some Old Danish and Old Icelandic passages, Arngrímur Jónsson explains the Scandinavian and Icelandic belief in elves (called Allffuafolch). Both Continental Scandinavia and Iceland have a scattering of mentions of elves in medical texts, sometimes in Latin and sometimes in the form of amulets, where elves are viewed as a possible cause of illness. Most of them have Low German connections.
The Old High German word alp is attested only in a small number of glosses. It is defined by the Althochdeutsches Wörterbuch as a "nature-god or nature-demon, equated with the Fauns of Classical mythology ... regarded as eerie, ferocious beings ... As the mare he messes around with women". Accordingly, the German word Alpdruck (literally "elf-oppression") means "nightmare". There is also evidence associating elves with illness, specifically epilepsy.
In a similar vein, elves are in Middle High German most often associated with deceiving or bewildering people in a phrase that occurs so often it would appear to be proverbial: die elben/der alp trieget mich ("the elves/elf are/is deceiving me"). The same pattern holds in Early Modern German. This deception sometimes shows the seductive side apparent in English and Scandinavian material: most famously, the early thirteenth-century Heinrich von Morungen's fifth Minnesang begins "Von den elben wirt entsehen vil manic man / Sô bin ich von grôzer liebe entsên" ("full many a man is bewitched by elves / thus I too am bewitched by great love"). Elbe was also used in this period to translate words for nymphs.
In later medieval prayers, elves appear as a threatening, even demonic, force. For example, some prayers invoke God's help against nocturnal attacks by Alpe. Correspondingly, in the early modern period, elves are described in north Germany doing the evil bidding of witches; Martin Luther believed his mother to have been afflicted in this way.
As in Old Norse, however, there are few characters identified as elves. It seems likely that in the German-speaking world, elves were to a significant extent conflated with dwarves (Middle High German: getwerc). Thus, some dwarves that appear in German heroic poetry have been seen as relating to elves. In particular, nineteenth-century scholars tended to think that the dwarf Alberich, whose name etymologically means "elf-powerful," was influenced by early traditions of elves.
From around the Late Middle Ages, the word elf began to be used in English as a term loosely synonymous with the French loan-word fairy; in elite art and literature, at least, it also became associated with diminutive supernatural beings like Puck, hobgoblins, Robin Goodfellow, the English and Scots brownie, and the Northumbrian English hob.
However, in Scotland and parts of northern England near the Scottish border, beliefs in elves remained prominent into the nineteenth century. James VI of Scotland and Robert Kirk discussed elves seriously; elf beliefs are prominently attested in the Scottish witchcraft trials, particularly the trial of Issobel Gowdie; and related stories also appear in folktales. There is a significant corpus of ballads narrating stories about elves, such as Thomas the Rhymer, where a man meets a female elf; Tam Lin, The Elfin Knight, and Lady Isabel and the Elf-Knight, in which an Elf-Knight rapes, seduces, or abducts a woman; and The Queen of Elfland's Nourice, in which a woman is abducted to be a wet-nurse to the elf queen's baby but is promised that she may return home once the child is weaned.
In Scandinavian folklore, many humanlike supernatural beings are attested, which might be thought of as elves and partly originate in medieval Scandinavian beliefs. However, the characteristics and names of these beings have varied widely across time and space, and they cannot be neatly categorised. These beings are sometimes known by words descended directly from the Old Norse álfr. However, in modern languages, traditional terms related to álfr have tended to be replaced with other terms. Things are further complicated because, when referring to the elves of Old Norse mythology, scholars have adopted new forms based directly on the Old Norse word álfr; terminology therefore varies between the main modern standard languages of Scandinavia.
The elves of Norse mythology have survived into folklore mainly as females, living in hills and mounds of stones. The Swedish älvor were stunningly beautiful girls who lived in the forest with an elven king.
The elves could be seen dancing over meadows, particularly at night and on misty mornings. They left a circle where they had danced, called älvdanser (elf dances) or älvringar (elf circles), and to urinate in one was thought to cause venereal diseases. Typically, elf circles were fairy rings consisting of a ring of small mushrooms, but there was also another kind of elf circle. In the words of the local historian Anne Marie Hellström:
... on lake shores, where the forest met the lake, you could find elf circles. They were round places where the grass had been flattened like a floor. Elves had danced there. By Lake Tisnaren, I have seen one of those. It could be dangerous, and one could become ill if one had trodden over such a place or if one destroyed anything there.
If a human watched the dance of the elves, he would discover that even though only a few hours seemed to have passed, many years had passed in the real world. Humans being invited or lured to the elf dance is a common motif transferred from older Scandinavian ballads.
Elves were not exclusively young and beautiful. In the Swedish folktale Little Rosa and Long Leda, an elvish woman (älvakvinna) arrives in the end and saves the heroine, Little Rose, on the condition that the king's cattle no longer graze on her hill. She is described as a beautiful old woman and by her aspect people saw that she belonged to the subterraneans.
Elves have a prominent place in several closely related ballads, which must have originated in the Middle Ages but are first attested in the early modern period. Many of these ballads are first attested in Karen Brahes Folio, a Danish manuscript from the 1570s, but they circulated widely in Scandinavia and northern Britain. Because they were learned by heart, they sometimes preserve mentions of elves even though that term had become archaic in everyday usage. They have therefore played a major role in transmitting traditional ideas about elves in post-medieval cultures. Indeed, some of the early modern ballads are still quite widely known, whether through school syllabuses or contemporary folk music. They therefore give people an unusual degree of access to ideas of elves from older traditional culture.
The ballads are characterised by sexual encounters between everyday people and humanlike beings referred to in at least some variants as elves (the same characters also appear as mermen, dwarves, and other kinds of supernatural beings). The elves pose a threat to the everyday community by trying to lure people into the elves' world. The most famous example is Elveskud and its many variants (paralleled in English as Clerk Colvill), where a woman from the elf world tries to tempt a young knight to join her in dancing, or to live among the elves; in some versions he refuses, and in some he accepts, but in either case he dies, tragically. As in Elveskud, sometimes the everyday person is a man and the elf a woman, as also in Elvehøj (much the same story as Elveskud, but with a happy ending), Herr Magnus og Bjærgtrolden, Herr Tønne af Alsø, Herr Bøsmer i elvehjem, or the Northern British Thomas the Rhymer. Sometimes the everyday person is a woman and the elf a man, as in the northern British Tam Lin, The Elfin Knight, and Lady Isabel and the Elf-Knight, in which the Elf-Knight bears away Isabel to murder her, or the Scandinavian Harpans kraft. In The Queen of Elfland's Nourice, a woman is abducted to be a wet nurse to the elf-queen's baby, but promised that she might return home once the child is weaned.
In folk stories, Scandinavian elves often play the role of disease spirits. The most common, though also the most harmless, case was various irritating skin rashes, which were called älvablåst (elven puff) and could be cured by a forceful counter-blow (a handy pair of bellows was most useful for this purpose). Skålgropar, a particular kind of petroglyph (pictogram on a rock) found in Scandinavia, were known in older times as älvkvarnar (elven mills), because it was believed elves had used them. One could appease the elves by offering a treat (preferably butter) placed into an elven mill.
In order to protect themselves and their livestock against malevolent elves, Scandinavians could use a so-called Elf cross (Alfkors, Älvkors or Ellakors), which was carved into buildings or other objects. It existed in two shapes: one was a pentagram, which was still frequently used in early 20th-century Sweden, painted or carved onto doors, walls, and household utensils to protect against elves. The second form was an ordinary cross carved onto a round or oblong silver plate. This second kind of elf cross was worn as a pendant on a necklace, and to have sufficient magic it had to be forged during three evenings with silver, from nine different sources of inherited silver. In some locations it also had to be placed on the altar of a church for three consecutive Sundays.
In Iceland, expressing belief in the huldufólk ("hidden people"), elves that dwell in rock formations, is still relatively common. Even when Icelanders do not explicitly express their belief, they are often reluctant to express disbelief. A 2006 and 2007 study by the University of Iceland's Faculty of Social Sciences revealed that many would not rule out the existence of elves and ghosts, a result similar to a 1974 survey by Erlendur Haraldsson. The lead researcher of the 2006–2007 study, Terry Gunnell, stated: "Icelanders seem much more open to phenomena like dreaming the future, forebodings, ghosts and elves than other nations". Whether significant numbers of Icelandic people do believe in elves or not, elves are certainly prominent in national discourses. They occur most often in oral narratives and news reporting in which they disrupt house- and road-building. In the analysis of Valdimar Tr. Hafstein, "narratives about the insurrections of elves demonstrate supernatural sanction against development and urbanization; that is to say, the supernaturals protect and enforce religious values and traditional rural culture. The elves fend off, with more or less success, the attacks, and advances of modern technology, palpable in the bulldozer." Elves are also prominent, in similar roles, in contemporary Icelandic literature.
Folk stories about elves told in the nineteenth century are still told in modern Denmark and Sweden, but they now feature ethnic minorities in place of elves, in an essentially racist discourse. In an ethnically fairly homogeneous medieval countryside, supernatural beings provided the Other through which everyday people created their identities; in cosmopolitan industrial contexts, ethnic minorities or immigrants are used in storytelling to similar effect.
Early modern Europe saw the emergence for the first time of a distinctive elite culture: while the Reformation encouraged new skepticism and opposition to traditional beliefs, subsequent Romanticism encouraged the fetishisation of such beliefs by intellectual elites. The effects of this on writing about elves are most apparent in England and Germany, with developments in each country influencing the other. In Scandinavia, the Romantic movement was also prominent, and literary writing was the main context for continued use of the word elf, except in fossilised words for illnesses. However, oral traditions about beings like elves remained prominent in Scandinavia into the early twentieth century.
Elves entered early modern elite culture most clearly in the literature of Elizabethan England. Here Edmund Spenser's Faerie Queene (1590–) used fairy and elf interchangeably of human-sized beings, but they are complex, imaginary and allegorical figures. Spenser also presented his own explanation of the origins of the Elfe and Elfin kynd, claiming that they were created by Prometheus. Likewise, William Shakespeare, in a speech in Romeo and Juliet (1592), has an "elf-lock" (tangled hair) caused by Queen Mab, who is referred to as "the fairies' midwife". Meanwhile, A Midsummer Night's Dream promoted the idea that elves were diminutive and ethereal. The influence of Shakespeare and Michael Drayton made the use of elf and fairy for very small beings the norm, and had a lasting effect seen in fairy tales about elves collected in the modern period.
Early modern English notions of elves became influential in eighteenth-century Germany. The Modern German Elf (m) and Elfe (f) were introduced as loan-words from English in the 1740s and were prominent in Christoph Martin Wieland's 1764 translation of A Midsummer Night's Dream.
As German Romanticism got underway and writers started to seek authentic folklore, Jacob Grimm rejected Elf as a recent Anglicism, and promoted the reuse of the old form Elb (plural Elbe or Elben). In the same vein, Johann Gottfried Herder translated the Danish ballad Elveskud in his 1778 collection of folk songs, Stimmen der Völker in Liedern, as "Erlkönigs Tochter" ("The Erl-king's Daughter"; it appears that Herder introduced the term Erlkönig into German through a mis-Germanisation of the Danish word for elf). This in turn inspired Goethe's poem Der Erlkönig. However, Goethe added another new meaning, as the German word "Erle" does not mean "elf", but "black alder" - the poem about the Erlenkönig is set in the area of an alder quarry in the Saale valley in Thuringia. Goethe's poem then took on a life of its own, inspiring the Romantic concept of the Erlking, which was influential on literary images of elves from the nineteenth century on.
In Scandinavia too, in the nineteenth century, traditions of elves were adapted to include small, insect-winged fairies. These are often called "elves" (älvor in modern Swedish, alfer in Danish, álfar in Icelandic), although the more formal translation in Danish is feer. Thus, the alf found in the fairy tale The Elf of the Rose by Danish author Hans Christian Andersen is so tiny he can have a rose blossom for home, and "wings that reached from his shoulders to his feet". Yet Andersen also wrote about elvere in The Elfin Hill. The elves in this story are more like those of traditional Danish folklore, who were beautiful females, living in hills and boulders, capable of dancing a man to death. Like the huldra in Norway and Sweden, they are hollow when seen from the back.
English and German literary traditions both influenced the British Victorian image of elves, which appeared in illustrations as tiny men and women with pointed ears and stocking caps. An example is Andrew Lang's fairy tale Princess Nobody (1884), illustrated by Richard Doyle, where fairies are tiny people with butterfly wings, while elves are small people with red stocking caps. These conceptions remained prominent in twentieth-century children's literature, for example Enid Blyton's The Faraway Tree series, and were influenced by German Romantic literature. Accordingly, in the Brothers Grimm fairy tale Die Wichtelmänner (literally, "the little men"), the title protagonists are two tiny naked men who help a shoemaker in his work. Even though Wichtelmänner are akin to beings such as kobolds, dwarves and brownies, the tale was translated into English by Margaret Hunt in 1884 as The Elves and the Shoemaker. This shows how the meaning of elf had changed, and the translation was in itself influential: the usage is echoed, for example, in the house-elf of J. K. Rowling's Harry Potter stories. In his turn, J. R. R. Tolkien recommended using the older German form Elb in translations of his works, as recorded in his Guide to the Names in The Lord of the Rings (1967). Elb, Elben was consequently introduced in the 1972 German translation of The Lord of the Rings, repopularising the form in German.
With industrialisation and mass education, traditional folklore about elves waned; however, as the phenomenon of popular culture emerged, elves were re-imagined, in large part based on Romantic literary depictions and associated medievalism.
As American Christmas traditions crystallized in the nineteenth century, the 1823 poem "A Visit from St. Nicholas" (widely known as "'Twas the Night before Christmas") characterized St Nicholas himself as "a right jolly old elf." However, it was his little helpers, inspired partly by folktales like The Elves and the Shoemaker, who became known as "Santa's elves"; the processes through which this came about are not well-understood, but one key figure was a Christmas-related publication by the German-American cartoonist Thomas Nast. Thus in the US, Canada, UK, and Ireland, the modern children's folklore of Santa Claus typically includes small, nimble, green-clad elves with pointy ears, long noses, and pointy hats, as Santa's helpers. They make the toys in a workshop located in the North Pole. The role of elves as Santa's helpers has continued to be popular, as evidenced by the success of the popular Christmas movie Elf.
The fantasy genre in the twentieth century grew out of nineteenth-century Romanticism, in which nineteenth-century scholars such as Andrew Lang and the Grimm brothers collected fairy stories from folklore and in some cases retold them freely.
A pioneering work of the fantasy genre was The King of Elfland's Daughter, a 1924 novel by Lord Dunsany. The Elves of Middle-earth played a central role in Tolkien's legendarium, notably The Hobbit and The Lord of the Rings; this legendarium was enormously influential on subsequent fantasy writing. Tolkien's writing had such influence that in the 1960s and afterwards, elves speaking an elvish language similar to those in Tolkien's novels became staple non-human characters in high fantasy works and in fantasy role-playing games. Tolkien also appears to be the first author to have introduced the notion that elves are immortal. Post-Tolkien fantasy elves (which feature not only in novels but also in role-playing games such as Dungeons & Dragons) are often portrayed as being wiser and more beautiful than humans, with sharper senses and perceptions as well. They are said to be gifted in magic, mentally sharp and lovers of nature, art, and song. They are often skilled archers. A hallmark of many fantasy elves is their pointed ears.
In works where elves are the main characters, such as The Silmarillion or Wendy and Richard Pini's comic book series Elfquest, elves exhibit a similar range of behaviour to a human cast, distinguished largely by their superhuman physical powers. However, where narratives are more human-centered, as in The Lord of the Rings, elves tend to sustain their role as powerful, sometimes threatening, outsiders. Despite the obvious fictionality of fantasy novels and games, scholars have found that elves in these works continue to have a subtle role in shaping the real-life identities of their audiences. For example, elves can function to encode real-world racial others in video games, or to influence gender norms through literature.
Beliefs in humanlike supernatural beings are widespread in human cultures, and many such beings may be referred to as elves in English.
Elfish beings appear to have been a common characteristic within Indo-European mythologies. In the Celtic-speaking regions of north-west Europe, the beings most similar to elves are generally referred to with the Gaelic term Aos Sí. The equivalent term in modern Welsh is Tylwyth Teg. In the Romance-speaking world, beings comparable to elves are widely known by words derived from Latin fata ('fate'), which came into English as fairy. This word became partly synonymous with elf by the early modern period. Other names also abound, however, such as the Sicilian Donas de fuera ('ladies from outside'), or French bonnes dames ('good ladies'). In the Finnic-speaking world, the term usually thought most closely equivalent to elf is haltija (in Finnish) or haldaja (Estonian). Meanwhile, an example of an equivalent in the Slavic-speaking world is the vila (plural vile) of Serbo-Croatian (and, partly, Slovene) folklore. Elves bear some resemblances to the satyrs of Greek mythology, who were also regarded as woodland-dwelling mischief-makers.
Some scholarship draws parallels between the Arabian tradition of jinn and the elves of medieval Germanic-language cultures. Some of the comparisons are quite precise: for example, the root of the word jinn was used in medieval Arabic terms for madness and possession in similar ways to the Old English word ylfig, which was derived from elf and also denoted prophetic states of mind implicitly associated with elfish possession.
Khmer culture in Cambodia includes the Mrenh kongveal, elfish beings associated with guarding animals.
In the animistic precolonial beliefs of the Philippines, the world can be divided into the material world and the spirit world. All objects, animate or inanimate, have a spirit called anito. Non-human anito are known as diwata, usually euphemistically referred to as dili ingon nato ('those unlike us'). They inhabit natural features like mountains, forests, old trees, caves, reefs, etc., as well as personify abstract concepts and natural phenomena. They are similar to elves in that they can be helpful or hateful but are usually indifferent to mortals. They can be mischievous and cause unintentional harm to humans, but they can also deliberately cause illnesses and misfortunes when disrespected or angered. Spanish colonizers equated them with elves and fairy folklore.
Orang bunian are supernatural beings in Malaysian, Bruneian and Indonesian folklore, invisible to most humans except those with spiritual sight. While the term is often translated as "elves", it literally translates to "hidden people" or "whistling people". Their appearance is nearly identical to humans dressed in an ancient Southeast Asian style.
In Māori culture, Patupaiarehe are beings similar to European elves and fairies.
{
"paragraph_id": 0,
"text": "An elf (pl. elves) is a type of humanoid supernatural being in Germanic folklore. Elves appear especially in North Germanic mythology, being mentioned in the Icelandic Poetic Edda and Snorri Sturluson's Prose Edda.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In medieval Germanic-speaking cultures, elves generally seem to have been thought of as beings with magical powers and supernatural beauty, ambivalent towards everyday people and capable of either helping or hindering them. However, the details of these beliefs have varied considerably over time and space and have flourished in both pre-Christian and Christian cultures.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Sometimes elves are, like dwarfs, associated with craftmanship. Wayland the Smith embodies this feature. He is known under many names, depending on the language in which the stories were distributed. The names include Völund in Old Norse, Wēland in Anglo-Saxon and Wieland in German. The story of Wayland is also to be found in the Prose Edda.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word elf is found throughout the Germanic languages and seems originally to have meant 'white being'. However, reconstructing the early concept of an elf depends largely on texts written by Christians, in Old and Middle English, medieval German, and Old Norse. These associate elves variously with the gods of Norse mythology, with causing illness, with magic, and with beauty and seduction.",
"title": ""
},
{
"paragraph_id": 4,
"text": "After the medieval period, the word elf tended to become less common throughout the Germanic languages, losing out to alternative native terms like Zwerg ('dwarf') in German and huldra ('hidden being') in North Germanic languages, and to loan-words like fairy (borrowed from French into most of the Germanic languages). Still, belief in elves persisted in the early modern period, particularly in Scotland and Scandinavia, where elves were thought of as magically powerful people living, usually invisibly, alongside everyday human communities. They continued to be associated with causing illnesses and with sexual threats. For example, several early modern ballads in the British Isles and Scandinavia, originating in the medieval period, describe elves attempting to seduce or abduct human characters.",
"title": ""
},
{
"paragraph_id": 5,
"text": "With urbanisation and industrialisation in the nineteenth and twentieth centuries, belief in elves declined rapidly (though Iceland has some claim to continued popular belief in elves). However, elves started to be prominent in the literature and art of educated elites from the early modern period onwards. These literary elves were imagined as tiny, playful beings, with William Shakespeare's A Midsummer Night's Dream being a key development of this idea. In the eighteenth century, German Romantic writers were influenced by this notion of the elf and re-imported the English word elf into the German language.",
"title": ""
},
{
"paragraph_id": 6,
"text": "From the Romantic idea of elves came the elves of popular culture that emerged in the nineteenth and twentieth centuries. The \"Christmas elves\" of contemporary popular culture are a relatively recent creation, popularized during the late nineteenth century in the United States. Elves entered the twentieth-century high fantasy genre in the wake of works published by authors such as J. R. R. Tolkien; these re-popularised the idea of elves as human-sized and humanlike beings. Elves remain a prominent feature of fantasy media today.",
"title": ""
},
{
"paragraph_id": 7,
"text": "Elves have in many times and places been believed to be real beings. Where enough people have believed in the reality of elves that those beliefs then had real effects in the world, they can be understood as part of people's worldview, and as a social reality: a thing which, like the exchange value of a dollar bill or the sense of pride stirred up by a national flag, is real because of people's beliefs rather than as an objective reality. Accordingly, beliefs about elves and their social functions have varied over time and space.",
"title": "Relationship with reality"
},
{
"paragraph_id": 8,
"text": "Even in the twenty-first century, fantasy stories about elves have been argued both to reflect and to shape their audiences' understanding of the real world, and traditions about Santa Claus and his elves relate to Christmas.",
"title": "Relationship with reality"
},
{
"paragraph_id": 9,
"text": "Over time, people have attempted to demythologise or rationalise beliefs in elves in various ways.",
"title": "Relationship with reality"
},
{
"paragraph_id": 10,
"text": "Beliefs about elves have their origins before the conversion to Christianity and associated Christianization of northwest Europe. For this reason, belief in elves has, from the Middle Ages through into recent scholarship, often been labelled \"pagan\" and a \"superstition.\" However, almost all surviving textual sources about elves were produced by Christians (whether Anglo-Saxon monks, medieval Icelandic poets, early modern ballad-singers, nineteenth-century folklore collectors, or even twentieth-century fantasy authors). Attested beliefs about elves, therefore, need to be understood as part of Germanic-speakers' Christian culture and not merely a relic of their pre-Christian religion. Accordingly, investigating the relationship between beliefs in elves and Christian cosmology has been a preoccupation of scholarship about elves both in early times and modern research.",
"title": "Relationship with reality"
},
{
"paragraph_id": 11,
"text": "Historically, people have taken three main approaches to integrate elves into Christian cosmology, all of which are found widely across time and space:",
"title": "Relationship with reality"
},
{
"paragraph_id": 12,
"text": "Some nineteenth- and twentieth-century scholars attempted to rationalise beliefs in elves as folk memories of lost indigenous peoples. Since belief in supernatural beings is ubiquitous in human cultures, scholars no longer believe such explanations are valid. Research has shown, however, that stories about elves have often been used as a way for people to think metaphorically about real-life ethnic others.",
"title": "Relationship with reality"
},
{
"paragraph_id": 13,
"text": "Scholars have at times also tried to explain beliefs in elves as being inspired by people suffering certain kinds of illnesses (such as Williams syndrome). Elves were certainly often seen as a cause of illness, and indeed the English word oaf seems to have originated as a form of elf: the word elf came to mean 'changeling left by an elf' and then, because changelings were noted for their failure to thrive, to its modern sense 'a fool, a stupid person; a large, clumsy man or boy'. However, it again seems unlikely that the origin of beliefs in elves itself is to be explained by people's encounters with objectively real people affected by disease.",
"title": "Relationship with reality"
},
{
"paragraph_id": 14,
"text": "The English word elf is from the Old English word most often attested as ælf (whose plural would have been *ælfe). Although this word took a variety of forms in different Old English dialects, these converged on the form elf during the Middle English period. During the Old English period, separate forms were used for female elves (such as ælfen, putatively from Proto-Germanic *ɑlβ(i)innjō), but during the Middle English period the word elf routinely came to include female beings.",
"title": "Etymology"
},
{
"paragraph_id": 15,
"text": "The Old English forms are cognates – linguistic siblings stemming from a common origin – with medieval Germanic terms such as Old Norse alfr ('elf'; plural alfar), Old High German alp ('evil spirit'; pl. alpî, elpî; feminine elbe), Burgundian *alfs ('elf'), and Middle Low German alf ('evil spirit'). These words must come from Proto-Germanic, the ancestor-language of the attested Germanic languages; the Proto-Germanic forms are reconstructed as *ɑlβi-z and *ɑlβɑ-z.",
"title": "Etymology"
},
{
"paragraph_id": 16,
"text": "Germanic *ɑlβi-z~*ɑlβɑ-z is generally agreed to be a cognate with Latin albus ('(matt) white'), Old Irish ailbhín ('flock'), Ancient Greek ἀλφός (alphós; 'whiteness, white leprosy';), and Albanian elb ('barley'); and the Germanic word for 'swan' reconstructed as *albit- (compare Modern Icelandic álpt) is often thought to be derived from it. These all come from a Proto-Indo-European root *h₂elbʰ-, and seem to be connected by the idea of whiteness. The Germanic word presumably originally meant 'white one', perhaps as a euphemism. Jakob Grimm thought whiteness implied positive moral connotations, and, noting Snorri Sturluson's ljósálfar, suggested that elves were divinities of light. This is not necessarily the case, however. For example, because the cognates suggest matt white rather than shining white, and because in medieval Scandinavian texts whiteness is associated with beauty, Alaric Hall has suggested that elves may have been called 'the white people' because whiteness was associated with (specifically feminine) beauty. Some scholars have argued that the names Albion and Alps may also be related (possibly through Celtic).",
"title": "Etymology"
},
{
"paragraph_id": 17,
"text": "A completely different etymology, making elf a cognate with the Ṛbhus, semi-divine craftsmen in Indian mythology, was suggested by Adalbert Kuhn in 1855. In this case, *ɑlβi-z would connote the meaning 'skillful, inventive, clever', and could be a cognate with Latin labor, in the sense of 'creative work'. While often mentioned, this etymology is not widely accepted.",
"title": "Etymology"
},
{
"paragraph_id": 18,
"text": "",
"title": "Etymology"
},
{
"paragraph_id": 19,
"text": "Throughout the medieval Germanic languages, elf was one of the nouns used in personal names, almost invariably as a first element. These names may have been influenced by Celtic names beginning in Albio- such as Albiorix.",
"title": "Etymology"
},
{
"paragraph_id": 20,
"text": "Personal names provide the only evidence for elf in Gothic, which must have had the word *albs (plural *albeis). The most famous name of this kind is Alboin. Old English names in elf- include the cognate of Alboin Ælfwine (literally \"elf-friend\", m.), Ælfric (\"elf-powerful\", m.), Ælfweard (\"elf-guardian\", m.), and Ælfwaru (\"elf-care\", f.). A widespread survivor of these in modern English is Alfred (Old English Ælfrēd, \"elf-advice\"). Also surviving are the English surname Elgar (Ælfgar, \"elf-spear\") and the name of St Alphege (Ælfhēah, \"elf-tall\"). German examples are Alberich, Alphart and Alphere (father of Walter of Aquitaine) and Icelandic examples include Álfhildur. These names suggest that elves were positively regarded in early Germanic culture. Of the many words for supernatural beings in Germanic languages, the only ones regularly used in personal names are elf and words denoting pagan gods, suggesting that elves were considered similar to gods.",
"title": "Etymology"
},
{
"paragraph_id": 21,
"text": "In later Old Icelandic, alfr (\"elf\") and the personal name which in Common Germanic had been *Aþa(l)wulfaz both coincidentally became álfr~Álfr.",
"title": "Etymology"
},
{
"paragraph_id": 22,
"text": "Elves appear in some place names, though it is difficult to be sure how many of other words, including personal names, can appear similar to elf. The clearest English examples are Elveden (\"elves' hill\", Suffolk) and Elvendon (\"elves' valley\", Oxfordshire); other examples may be Eldon Hill (\"Elves' hill\", Derbyshire); and Alden Valley (\"elves' valley\", Lancashire). These seem to associate elves fairly consistently with woods and valleys.",
"title": "Etymology"
},
{
"paragraph_id": 23,
"text": "The earliest surviving manuscripts mentioning elves in any Germanic language are from Anglo-Saxon England. Medieval English evidence has, therefore, attracted quite extensive research and debate. In Old English, elves are most often mentioned in medical texts which attest to the belief that elves might afflict humans and livestock with illnesses: apparently mostly sharp, internal pains and mental disorders. The most famous of the medical texts is the metrical charm Wið færstice (\"against a stabbing pain\"), from the tenth-century compilation Lacnunga, but most of the attestations are in the tenth-century Bald's Leechbook and Leechbook III. This tradition continues into later English-language traditions too: elves continue to appear in Middle English medical texts.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 24,
"text": "Belief in elves causing illnesses remained prominent in early modern Scotland, where elves were viewed as supernaturally powerful people who lived invisibly alongside everyday rural people. Thus, elves were often mentioned in the early modern Scottish witchcraft trials: many witnesses in the trials believed themselves to have been given healing powers or to know of people or animals made sick by elves. Throughout these sources, elves are sometimes associated with the succubus-like supernatural being called the mare.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 25,
"text": "While they may have been thought to cause diseases with magical weapons, elves are more clearly associated in Old English with a kind of magic denoted by Old English sīden and sīdsa, a cognate with the Old Norse seiðr, and also paralleled in the Old Irish Serglige Con Culainn. By the fourteenth century, they were also associated with the arcane practice of alchemy.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 26,
"text": "In one or two Old English medical texts, elves might be envisaged as inflicting illnesses with projectiles. In the twentieth century, scholars often labelled the illnesses elves caused as \"elf-shot\", but work from the 1990s onwards showed that the medieval evidence for elves' being thought to cause illnesses in this way is slender; debate about its significance is ongoing.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 27,
"text": "The noun elf-shot is first attested in a Scots poem, \"Rowlis Cursing,\" from around 1500, where \"elf schot\" is listed among a range of curses to be inflicted on some chicken thieves. The term may not always have denoted an actual projectile: shot could mean \"a sharp pain\" as well as \"projectile.\" But in early modern Scotland, elf-schot and other terms like elf-arrowhead are sometimes used of neolithic arrow-heads, apparently thought to have been made by elves. In a few witchcraft trials, people attest that these arrow-heads were used in healing rituals and occasionally alleged that witches (and perhaps elves) used them to injure people and cattle. Compare with the following excerpt from a 1749–50 ode by William Collins:",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 28,
"text": "There every herd, by sad experience, knows How, winged with fate, their elf-shot arrows fly, When the sick ewe her summer food forgoes, Or, stretched on earth, the heart-smit heifers lie.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 29,
"text": "Because of elves' association with illness, in the twentieth century, most scholars imagined that elves in the Anglo-Saxon tradition were small, invisible, demonic beings, causing illnesses with arrows. This was encouraged by the idea that \"elf-shot\" is depicted in the Eadwine Psalter, in an image which became well known in this connection. However, this is now thought to be a misunderstanding: the image proves to be a conventional illustration of God's arrows and Christian demons. Rather, twenty-first century scholarship suggests that Anglo-Saxon elves, like elves in Scandinavia or the Irish Aos Sí, were regarded as people.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 30,
"text": "Like words for gods and men, the word elf is used in personal names where words for monsters and demons are not. Just as álfar is associated with Æsir in Old Norse, the Old English Wið færstice associates elves with ēse; whatever this word meant by the tenth century, etymologically it denoted pagan gods. In Old English, the plural ylfe (attested in Beowulf) is grammatically an ethnonym (a word for an ethnic group), suggesting that elves were seen as people. As well as appearing in medical texts, the Old English word ælf and its feminine derivative ælbinne were used in glosses to translate Latin words for nymphs. This fits well with the word ælfscȳne, which meant \"elf-beautiful\" and is attested describing the seductively beautiful Biblical heroines Sarah and Judith.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 31,
"text": "Likewise, in Middle English and early modern Scottish evidence, while still appearing as causes of harm and danger, elves appear clearly as humanlike beings. They became associated with medieval chivalric romance traditions of fairies and particularly with the idea of a Fairy Queen. A propensity to seduce or rape people becomes increasingly prominent in the source material. Around the fifteenth century, evidence starts to appear for the belief that elves might steal human babies and replace them with changelings.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 32,
"text": "By the end of the medieval period, elf was increasingly being supplanted by the French loan-word fairy. An example is Geoffrey Chaucer's satirical tale Sir Thopas, where the title character sets out in a quest for the \"elf-queen\", who dwells in the \"countree of the Faerie\".",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 33,
"text": "Evidence for elf beliefs in medieval Scandinavia outside Iceland is sparse, but the Icelandic evidence is uniquely rich. For a long time, views about elves in Old Norse mythology were defined by Snorri Sturluson's Prose Edda, which talks about svartálfar, dökkálfar and ljósálfar (\"black elves\", \"dark elves\", and \"light elves\"). For example, Snorri recounts how the svartálfar create new blond hair for Thor's wife Sif after Loki had shorn off Sif's long hair. However, these terms are attested only in the Prose Edda and texts based on it. It is now agreed that they reflect traditions of dwarves, demons, and angels, partly showing Snorri's \"paganisation\" of a Christian cosmology learned from the Elucidarius, a popular digest of Christian thought.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 34,
"text": "Scholars of Old Norse mythology now focus on references to elves in Old Norse poetry, particularly the Elder Edda. The only character explicitly identified as an elf in classical Eddaic poetry, if any, is Völundr, the protagonist of Völundarkviða. However, elves are frequently mentioned in the alliterating phrase Æsir ok Álfar ('Æsir and elves') and its variants. This was a well-established poetic formula, indicating a strong tradition of associating elves with the group of gods known as the Æsir, or even suggesting that the elves and Æsir were one and the same. The pairing is paralleled in the Old English poem Wið færstice and in the Germanic personal name system; moreover, in Skaldic verse the word elf is used in the same way as words for gods. Sigvatr Þórðarson's skaldic travelogue Austrfaravísur, composed around 1020, mentions an álfablót ('elves' sacrifice') in Edskogen in what is now southern Sweden. There does not seem to have been any clear-cut distinction between humans and gods; like the Æsir, then, elves were presumably thought of as being humanlike and existing in opposition to the giants. Many commentators have also (or instead) argued for conceptual overlap between elves and dwarves in Old Norse mythology, which may fit with trends in the medieval German evidence.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 35,
"text": "There are hints that the god Freyr was associated with elves. In particular, Álfheimr (literally \"elf-world\") is mentioned as being given to Freyr in Grímnismál. Snorri Sturluson identified Freyr as one of the Vanir. However, the term Vanir is rare in Eddaic verse, very rare in Skaldic verse, and is not generally thought to appear in other Germanic languages. Given the link between Freyr and the elves, it has therefore long been suspected that álfar and Vanir are, more or less, different words for the same group of beings. However, this is not uniformly accepted.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 36,
"text": "A kenning (poetic metaphor) for the sun, álfröðull (literally \"elf disc\"), is of uncertain meaning but is to some suggestive of a close link between elves and the sun.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 37,
"text": "Although the relevant words are of slightly uncertain meaning, it seems fairly clear that Völundr is described as one of the elves in Völundarkviða. As his most prominent deed in the poem is to rape Böðvildr, the poem associates elves with being a sexual threat to maidens. The same idea is present in two post-classical Eddaic poems, which are also influenced by chivalric romance or Breton lais, Kötludraumur and Gullkársljóð. The idea also occurs in later traditions in Scandinavia and beyond, so it may be an early attestation of a prominent tradition. Elves also appear in a couple of verse spells, including the Bergen rune-charm from among the Bryggen inscriptions.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 38,
"text": "The appearance of elves in sagas is closely defined by genre. The Sagas of Icelanders, Bishops' sagas, and contemporary sagas, whose portrayal of the supernatural is generally restrained, rarely mention álfar, and then only in passing. But although limited, these texts provide some of the best evidence for the presence of elves in everyday beliefs in medieval Scandinavia. They include a fleeting mention of elves seen out riding in 1168 (in Sturlunga saga); mention of an álfablót (\"elves' sacrifice\") in Kormáks saga; and the existence of the euphemism ganga álfrek ('go to drive away the elves') for \"going to the toilet\" in Eyrbyggja saga.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 39,
"text": "The Kings' sagas include a rather elliptical but widely studied account of an early Swedish king being worshipped after his death and being called Ólafr Geirstaðaálfr ('Ólafr the elf of Geirstaðir'), and a demonic elf at the beginning of Norna-Gests þáttr.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 40,
"text": "The legendary sagas tend to focus on elves as legendary ancestors or on heroes' sexual relations with elf-women. Mention of the land of Álfheimr is found in Heimskringla while Þorsteins saga Víkingssonar recounts a line of local kings who ruled over Álfheim, who since they had elven blood were said to be more beautiful than most men. According to Hrólfs saga kraka, Hrolfr Kraki's half-sister Skuld was the half-elven child of King Helgi and an elf-woman (álfkona). Skuld was skilled in witchcraft (seiðr). Accounts of Skuld in earlier sources, however, do not include this material. The Þiðreks saga version of the Nibelungen (Niflungar) describes Högni as the son of a human queen and an elf, but no such lineage is reported in the Eddas, Völsunga saga, or the Nibelungenlied. The relatively few mentions of elves in the chivalric sagas tend even to be whimsical.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 41,
"text": "In his Rerum Danicarum fragmenta (1596) written mostly in Latin with some Old Danish and Old Icelandic passages, Arngrímur Jónsson explains the Scandinavian and Icelandic belief in elves (called Allffuafolch). Both Continental Scandinavia and Iceland have a scattering of mentions of elves in medical texts, sometimes in Latin and sometimes in the form of amulets, where elves are viewed as a possible cause of illness. Most of them have Low German connections.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 42,
"text": "The Old High German word alp is attested only in a small number of glosses. It is defined by the Althochdeutsches Wörterbuch as a \"nature-god or nature-demon, equated with the Fauns of Classical mythology ... regarded as eerie, ferocious beings ... As the mare he messes around with women\". Accordingly, the German word Alpdruck (literally \"elf-oppression\") means \"nightmare\". There is also evidence associating elves with illness, specifically epilepsy.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 43,
"text": "In a similar vein, elves are in Middle High German most often associated with deceiving or bewildering people in a phrase that occurs so often it would appear to be proverbial: die elben/der alp trieget mich (\"the elves/elf are/is deceiving me\"). The same pattern holds in Early Modern German. This deception sometimes shows the seductive side apparent in English and Scandinavian material: most famously, the early thirteenth-century Heinrich von Morungen's fifth Minnesang begins \"Von den elben wirt entsehen vil manic man / Sô bin ich von grôzer liebe entsên\" (\"full many a man is bewitched by elves / thus I too am bewitched by great love\"). Elbe was also used in this period to translate words for nymphs.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 44,
"text": "In later medieval prayers, Elves appear as a threatening, even demonic, force. For example, some prayers invoke God's help against nocturnal attacks by Alpe. Correspondingly, in the early modern period, elves are described in north Germany doing the evil bidding of witches; Martin Luther believed his mother to have been afflicted in this way.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 45,
"text": "As in Old Norse, however, there are few characters identified as elves. It seems likely that in the German-speaking world, elves were to a significant extent conflated with dwarves (Middle High German: getwerc). Thus, some dwarves that appear in German heroic poetry have been seen as relating to elves. In particular, nineteenth-century scholars tended to think that the dwarf Alberich, whose name etymologically means \"elf-powerful,\" was influenced by early traditions of elves.",
"title": "In medieval texts and post-medieval folk belief"
},
{
"paragraph_id": 46,
"text": "From around the Late Middle Ages, the word elf began to be used in English as a term loosely synonymous with the French loan-word fairy; in elite art and literature, at least, it also became associated with diminutive supernatural beings like Puck, hobgoblins, Robin Goodfellow, the English and Scots brownie, and the Northumbrian English hob.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 47,
"text": "However, in Scotland and parts of northern England near the Scottish border, beliefs in elves remained prominent into the nineteenth century. James VI of Scotland and Robert Kirk discussed elves seriously; elf beliefs are prominently attested in the Scottish witchcraft trials, particularly the trial of Issobel Gowdie; and related stories also appear in folktales, There is a significant corpus of ballads narrating stories about elves, such as Thomas the Rhymer, where a man meets a female elf; Tam Lin, The Elfin Knight, and Lady Isabel and the Elf-Knight, in which an Elf-Knight rapes, seduces, or abducts a woman; and The Queen of Elfland's Nourice, a woman is abducted to be a wet-nurse to the elf queen's baby, but promised that she might return home once the child is weaned.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 48,
"text": "In Scandinavian folklore, many humanlike supernatural beings are attested, which might be thought of as elves and partly originate in medieval Scandinavian beliefs. However, the characteristics and names of these beings have varied widely across time and space, and they cannot be neatly categorised. These beings are sometimes known by words descended directly from the Old Norse álfr. However, in modern languages, traditional terms related to álfr have tended to be replaced with other terms. Things are further complicated because when referring to the elves of Old Norse mythology, scholars have adopted new forms based directly on the Old Norse word álfr. The following table summarises the situation in the main modern standard languages of Scandinavia.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 49,
"text": "The elves of Norse mythology have survived into folklore mainly as females, living in hills and mounds of stones. The Swedish älvor were stunningly beautiful girls who lived in the forest with an elven king.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 50,
"text": "The elves could be seen dancing over meadows, particularly at night and on misty mornings. They left a circle where they had danced, called älvdanser (elf dances) or älvringar (elf circles), and to urinate in one was thought to cause venereal diseases. Typically, elf circles were fairy rings consisting of a ring of small mushrooms, but there was also another kind of elf circle. In the words of the local historian Anne Marie Hellström:",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 51,
"text": "... on lake shores, where the forest met the lake, you could find elf circles. They were round places where the grass had been flattened like a floor. Elves had danced there. By Lake Tisnaren, I have seen one of those. It could be dangerous, and one could become ill if one had trodden over such a place or if one destroyed anything there.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 52,
"text": "If a human watched the dance of the elves, he would discover that even though only a few hours seemed to have passed, many years had passed in the real world. Humans being invited or lured to the elf dance is a common motif transferred from older Scandinavian ballads.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 53,
"text": "Elves were not exclusively young and beautiful. In the Swedish folktale Little Rosa and Long Leda, an elvish woman (älvakvinna) arrives in the end and saves the heroine, Little Rose, on the condition that the king's cattle no longer graze on her hill. She is described as a beautiful old woman and by her aspect people saw that she belonged to the subterraneans.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 54,
"text": "Elves have a prominent place in several closely related ballads, which must have originated in the Middle Ages but are first attested in the early modern period. Many of these ballads are first attested in Karen Brahes Folio, a Danish manuscript from the 1570s, but they circulated widely in Scandinavia and northern Britain. They sometimes mention elves because they were learned by heart, even though that term had become archaic in everyday usage. They have therefore played a major role in transmitting traditional ideas about elves in post-medieval cultures. Indeed, some of the early modern ballads are still quite widely known, whether through school syllabuses or contemporary folk music. They, therefore, give people an unusual degree of access to ideas of elves from older traditional culture.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 55,
"text": "The ballads are characterised by sexual encounters between everyday people and humanlike beings referred to in at least some variants as elves (the same characters also appear as mermen, dwarves, and other kinds of supernatural beings). The elves pose a threat to the everyday community by lure people into the elves' world. The most famous example is Elveskud and its many variants (paralleled in English as Clerk Colvill), where a woman from the elf world tries to tempt a young knight to join her in dancing, or to live among the elves; in some versions he refuses, and in some he accepts, but in either case he dies, tragically. As in Elveskud, sometimes the everyday person is a man and the elf a woman, as also in Elvehøj (much the same story as Elveskud, but with a happy ending), Herr Magnus og Bjærgtrolden, Herr Tønne af Alsø, Herr Bøsmer i elvehjem, or the Northern British Thomas the Rhymer. Sometimes the everyday person is a woman, and the elf is a man, as in the northern British Tam Lin, The Elfin Knight, and Lady Isabel and the Elf-Knight, in which the Elf-Knight bears away Isabel to murder her, or the Scandinavian Harpans kraft. In The Queen of Elfland's Nourice, a woman is abducted to be a wet nurse to the elf-queen's baby, but promised that she might return home once the child is weaned.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 56,
"text": "In folk stories, Scandinavian elves often play the role of disease spirits. The most common, though the also most harmless case was various irritating skin rashes, which were called älvablåst (elven puff) and could be cured by a forceful counter-blow (a handy pair of bellows was most useful for this purpose). Skålgropar, a particular kind of petroglyph (pictogram on a rock) found in Scandinavia, were known in older times as älvkvarnar (elven mills), because it was believed elves had used them. One could appease the elves by offering a treat (preferably butter) placed into an elven mill.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 57,
"text": "In order to protect themselves and their livestock against malevolent elves, Scandinavians could use a so-called Elf cross (Alfkors, Älvkors or Ellakors), which was carved into buildings or other objects. It existed in two shapes, one was a pentagram, and it was still frequently used in early 20th-century Sweden as painted or carved onto doors, walls, and household utensils to protect against elves. The second form was an ordinary cross carved onto a round or oblong silver plate. This second kind of elf cross was worn as a pendant in a necklace, and to have sufficient magic, it had to be forged during three evenings with silver, from nine different sources of inherited silver. In some locations it also had to be on the altar of a church for three consecutive Sundays.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 58,
"text": "In Iceland, expressing belief in the huldufólk (\"hidden people\"), elves that dwell in rock formations, is still relatively common. Even when Icelanders do not explicitly express their belief, they are often reluctant to express disbelief. A 2006 and 2007 study by the University of Iceland's Faculty of Social Sciences revealed that many would not rule out the existence of elves and ghosts, a result similar to a 1974 survey by Erlendur Haraldsson. The lead researcher of the 2006–2007 study, Terry Gunnell, stated: \"Icelanders seem much more open to phenomena like dreaming the future, forebodings, ghosts and elves than other nations\". Whether significant numbers of Icelandic people do believe in elves or not, elves are certainly prominent in national discourses. They occur most often in oral narratives and news reporting in which they disrupt house- and road-building. In the analysis of Valdimar Tr. Hafstein, \"narratives about the insurrections of elves demonstrate supernatural sanction against development and urbanization; that is to say, the supernaturals protect and enforce religious values and traditional rural culture. The elves fend off, with more or less success, the attacks, and advances of modern technology, palpable in the bulldozer.\" Elves are also prominent, in similar roles, in contemporary Icelandic literature.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 59,
"text": "Folk stories told in the nineteenth century about elves are still told in modern Denmark and Sweden. Still, they now feature ethnic minorities in place of elves in essentially racist discourse. In an ethnically fairly homogeneous medieval countryside, supernatural beings provided the Other through which everyday people created their identities; in cosmopolitan industrial contexts, ethnic minorities or immigrants are used in storytelling to similar effect.",
"title": "Post-medieval folklore"
},
{
"paragraph_id": 60,
"text": "Early modern Europe saw the emergence for the first time of a distinctive elite culture: while the Reformation encouraged new skepticism and opposition to traditional beliefs, subsequent Romanticism encouraged the fetishisation of such beliefs by intellectual elites. The effects of this on writing about elves are most apparent in England and Germany, with developments in each country influencing the other. In Scandinavia, the Romantic movement was also prominent, and literary writing was the main context for continued use of the word elf, except in fossilised words for illnesses. However, oral traditions about beings like elves remained prominent in Scandinavia into the early twentieth century.",
"title": "Post-medieval elite culture"
},
{
"paragraph_id": 61,
"text": "Elves entered early modern elite culture most clearly in the literature of Elizabethan England. Here Edmund Spenser's Faerie Queene (1590–) used fairy and elf interchangeably of human-sized beings, but they are complex, imaginary and allegorical figures. Spenser also presented his own explanation of the origins of the Elfe and Elfin kynd, claiming that they were created by Prometheus. Likewise, William Shakespeare, in a speech in Romeo and Juliet (1592) has an \"elf-lock\" (tangled hair) being caused by Queen Mab, who is referred to as \"the fairies' midwife\". Meanwhile, A Midsummer Night's Dream promoted the idea that elves were diminutive and ethereal. The influence of Shakespeare and Michael Drayton made the use of elf and fairy for very small beings the norm, and had a lasting effect seen in fairy tales about elves, collected in the modern period.",
"title": "Post-medieval elite culture"
},
{
"paragraph_id": 62,
"text": "Early modern English notions of elves became influential in eighteenth-century Germany. The Modern German Elf (m) and Elfe (f) was introduced as a loan-word from English in the 1740s and was prominent in Christoph Martin Wieland's 1764 translation of A Midsummer Night's Dream.",
"title": "Post-medieval elite culture"
},
{
"paragraph_id": 63,
"text": "As German Romanticism got underway and writers started to seek authentic folklore, Jacob Grimm rejected Elf as a recent Anglicism, and promoted the reuse of the old form Elb (plural Elbe or Elben). In the same vein, Johann Gottfried Herder translated the Danish ballad Elveskud in his 1778 collection of folk songs, Stimmen der Völker in Liedern, as \"Erlkönigs Tochter\" (\"The Erl-king's Daughter\"; it appears that Herder introduced the term Erlkönig into German through a mis-Germanisation of the Danish word for elf). This in turn inspired Goethe's poem Der Erlkönig. However, Goethe added another new meaning, as the German word \"Erle\" does not mean \"elf\", but \"black alder\" - the poem about the Erlenkönig is set in the area of an alder quarry in the Saale valley in Thuringia. Goethe's poem then took on a life of its own, inspiring the Romantic concept of the Erlking, which was influential on literary images of elves from the nineteenth century on.",
"title": "Post-medieval elite culture"
},
{
"paragraph_id": 64,
"text": "In Scandinavia too, in the nineteenth century, traditions of elves were adapted to include small, insect-winged fairies. These are often called \"elves\" (älvor in modern Swedish, alfer in Danish, álfar in Icelandic), although the more formal translation in Danish is feer. Thus, the alf found in the fairy tale The Elf of the Rose by Danish author Hans Christian Andersen is so tiny he can have a rose blossom for home, and \"wings that reached from his shoulders to his feet\". Yet Andersen also wrote about elvere in The Elfin Hill. The elves in this story are more alike those of traditional Danish folklore, who were beautiful females, living in hills and boulders, capable of dancing a man to death. Like the huldra in Norway and Sweden, they are hollow when seen from the back.",
"title": "Post-medieval elite culture"
},
{
"paragraph_id": 65,
"text": "English and German literary traditions both influenced the British Victorian image of elves, which appeared in illustrations as tiny men and women with pointed ears and stocking caps. An example is Andrew Lang's fairy tale Princess Nobody (1884), illustrated by Richard Doyle, where fairies are tiny people with butterfly wings. In contrast, elves are small people with red stocking caps. These conceptions remained prominent in twentieth-century children's literature, for example Enid Blyton's The Faraway Tree series, and were influenced by German Romantic literature. Accordingly, in the Brothers Grimm fairy tale Die Wichtelmänner (literally, \"the little men\"), the title protagonists are two tiny naked men who help a shoemaker in his work. Even though Wichtelmänner are akin to beings such as kobolds, dwarves and brownies, the tale was translated into English by Margaret Hunt in 1884 as The Elves and the Shoemaker. This shows how the meanings of elf had changed and was in itself influential: the usage is echoed, for example, in the house-elf of J. K. Rowling's Harry Potter stories. In his turn, J. R. R. Tolkien recommended using the older German form Elb in translations of his works, as recorded in his Guide to the Names in The Lord of the Rings (1967). Elb, Elben was consequently introduced in 1972 German translation of The Lord of the Rings, repopularising the form in German.",
"title": "Post-medieval elite culture"
},
{
"paragraph_id": 66,
"text": "With industrialisation and mass education, traditional folklore about elves waned; however, as the phenomenon of popular culture emerged, elves were re-imagined, in large part based on Romantic literary depictions and associated medievalism.",
"title": "In popular culture"
},
{
"paragraph_id": 67,
"text": "As American Christmas traditions crystallized in the nineteenth century, the 1823 poem \"A Visit from St. Nicholas\" (widely known as \"'Twas the Night before Christmas\") characterized St Nicholas himself as \"a right jolly old elf.\" However, it was his little helpers, inspired partly by folktales like The Elves and the Shoemaker, who became known as \"Santa's elves\"; the processes through which this came about are not well-understood, but one key figure was a Christmas-related publication by the German-American cartoonist Thomas Nast. Thus in the US, Canada, UK, and Ireland, the modern children's folklore of Santa Claus typically includes small, nimble, green-clad elves with pointy ears, long noses, and pointy hats, as Santa's helpers. They make the toys in a workshop located in the North Pole. The role of elves as Santa's helpers has continued to be popular, as evidenced by the success of the popular Christmas movie Elf.",
"title": "In popular culture"
},
{
"paragraph_id": 68,
"text": "The fantasy genre in the twentieth century grew out of nineteenth-century Romanticism, in which nineteenth-century scholars such as Andrew Lang and the Grimm brothers collected fairy stories from folklore and in some cases retold them freely.",
"title": "In popular culture"
},
{
"paragraph_id": 69,
"text": "A pioneering work of the fantasy genre was The King of Elfland's Daughter, a 1924 novel by Lord Dunsany. The Elves of Middle-earth played a central role in Tolkien's legendarium, notably The Hobbit and The Lord of the Rings; this legendarium was enormously influential on subsequent fantasy writing. Tolkien's writing had such influence that in the 1960s and afterwards, elves speaking an elvish language similar to those in Tolkien's novels became staple non-human characters in high fantasy works and in fantasy role-playing games. Tolkien also appears to be the first author to have introduced the notion that elves are immortal. Post-Tolkien fantasy elves (which feature not only in novels but also in role-playing games such as Dungeons & Dragons) are often portrayed as being wiser and more beautiful than humans, with sharper senses and perceptions as well. They are said to be gifted in magic, mentally sharp and lovers of nature, art, and song. They are often skilled archers. A hallmark of many fantasy elves is their pointed ears.",
"title": "In popular culture"
},
{
"paragraph_id": 70,
"text": "In works where elves are the main characters, such as The Silmarillion or Wendy and Richard Pini's comic book series Elfquest, elves exhibit a similar range of behaviour to a human cast, distinguished largely by their superhuman physical powers. However, where narratives are more human-centered, as in The Lord of the Rings, elves tend to sustain their role as powerful, sometimes threatening, outsiders. Despite the obvious fictionality of fantasy novels and games, scholars have found that elves in these works continue to have a subtle role in shaping the real-life identities of their audiences. For example, elves can function to encode real-world racial others in video games, or to influence gender norms through literature.",
"title": "In popular culture"
},
{
"paragraph_id": 71,
"text": "Beliefs in humanlike supernatural beings are widespread in human cultures, and many such beings may be referred to as elves in English.",
"title": "Equivalents in non-Germanic traditions"
},
{
"paragraph_id": 72,
"text": "Elfish beings appear to have been a common characteristic within Indo-European mythologies. In the Celtic-speaking regions of north-west Europe, the beings most similar to elves are generally referred to with the Gaelic term Aos Sí. The equivalent term in modern Welsh is Tylwyth Teg. In the Romance-speaking world, beings comparable to elves are widely known by words derived from Latin fata ('fate'), which came into English as fairy. This word became partly synonymous with elf by the early modern period. Other names also abound, however, such as the Sicilian Donas de fuera ('ladies from outside'), or French bonnes dames ('good ladies'). In the Finnic-speaking world, the term usually thought most closely equivalent to elf is haltija (in Finnish) or haldaja (Estonian). Meanwhile, an example of an equivalent in the Slavic-speaking world is the vila (plural vile) of Serbo-Croatian (and, partly, Slovene) folklore. Elves bear some resemblances to the satyrs of Greek mythology, who were also regarded as woodland-dwelling mischief-makers.",
"title": "Equivalents in non-Germanic traditions"
},
{
"paragraph_id": 73,
"text": "Some scholarship draws parallels between the Arabian tradition of jinn with the elves of medieval Germanic-language cultures. Some of the comparisons are quite precise: for example, the root of the word jinn was used in medieval Arabic terms for madness and possession in similar ways to the Old English word ylfig, which was derived from elf and also denoted prophetic states of mind implicitly associated with elfish possession.",
"title": "Equivalents in non-Germanic traditions"
},
{
"paragraph_id": 74,
"text": "Khmer culture in Cambodia includes the Mrenh kongveal, elfish beings associated with guarding animals.",
"title": "Equivalents in non-Germanic traditions"
},
{
"paragraph_id": 75,
"text": "In the animistic precolonial beliefs of the Philippines, the world can be divided into the material world and the spirit world. All objects, animate or inanimate, have a spirit called anito. Non-human anito are known as diwata, usually euphemistically referred to as dili ingon nato ('those unlike us'). They inhabit natural features like mountains, forests, old trees, caves, reefs, etc., as well as personify abstract concepts and natural phenomena. They are similar to elves in that they can be helpful or hateful but are usually indifferent to mortals. They can be mischievous and cause unintentional harm to humans, but they can also deliberately cause illnesses and misfortunes when disrespected or angered. Spanish colonizers equated them with elves and fairy folklore.",
"title": "Equivalents in non-Germanic traditions"
},
{
"paragraph_id": 76,
"text": "Orang bunian are supernatural beings in Malaysian, Bruneian and Indonesian folklore, invisible to most humans except those with spiritual sight. While the term is often translated as \"elves\", it literally translates to \"hidden people\" or \"whistling people\". Their appearance is nearly identical to humans dressed in an ancient Southeast Asian style.",
"title": "Equivalents in non-Germanic traditions"
},
{
"paragraph_id": 77,
"text": "In Māori culture, Patupaiarehe are beings similar to European elves and fairies.",
"title": "Equivalents in non-Germanic traditions"
}
]
| An elf is a type of humanoid supernatural being in Germanic folklore. Elves appear especially in North Germanic mythology, being mentioned in the Icelandic Poetic Edda and Snorri Sturluson's Prose Edda. In medieval Germanic-speaking cultures, elves generally seem to have been thought of as beings with magical powers and supernatural beauty, ambivalent towards everyday people and capable of either helping or hindering them. However, the details of these beliefs have varied considerably over time and space and have flourished in both pre-Christian and Christian cultures. Sometimes elves are, like dwarfs, associated with craftsmanship. Wayland the Smith embodies this feature. He is known under many names, depending on the language in which the stories were distributed. The names include Völund in Old Norse, Wēland in Anglo-Saxon and Wieland in German. The story of Wayland is also to be found in the Prose Edda. The word elf is found throughout the Germanic languages and seems originally to have meant 'white being'. However, reconstructing the early concept of an elf depends largely on texts written by Christians, in Old and Middle English, medieval German, and Old Norse. These associate elves variously with the gods of Norse mythology, with causing illness, with magic, and with beauty and seduction. After the medieval period, the word elf tended to become less common throughout the Germanic languages, losing out to alternative native terms like Zwerg ('dwarf') in German and huldra in North Germanic languages, and to loan-words like fairy. Still, belief in elves persisted in the early modern period, particularly in Scotland and Scandinavia, where elves were thought of as magically powerful people living, usually invisibly, alongside everyday human communities. They continued to be associated with causing illnesses and with sexual threats. For example, several early modern ballads in the British Isles and Scandinavia, originating in the medieval period, describe elves attempting to seduce or abduct human characters. With urbanisation and industrialisation in the nineteenth and twentieth centuries, belief in elves declined rapidly. However, elves started to be prominent in the literature and art of educated elites from the early modern period onwards. These literary elves were imagined as tiny, playful beings, with William Shakespeare's A Midsummer Night's Dream being a key development of this idea. In the eighteenth century, German Romantic writers were influenced by this notion of the elf and re-imported the English word elf into the German language. From the Romantic idea of elves came the elves of popular culture that emerged in the nineteenth and twentieth centuries. The "Christmas elves" of contemporary popular culture are a relatively recent creation, popularized during the late nineteenth century in the United States. Elves entered the twentieth-century high fantasy genre in the wake of works published by authors such as J. R. R. Tolkien; these re-popularised the idea of elves as human-sized and humanlike beings. Elves remain a prominent feature of fantasy media today. | 2001-11-05T14:57:48Z | 2023-12-29T09:29:50Z | [
"Template:Lang-gmh",
"Template:Dead link",
"Template:Citation",
"Template:Sprotected2",
"Template:Clear",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite journal",
"Template:Good article",
"Template:Plural abbr",
"Template:Cite thesis",
"Template:Sister project links",
"Template:Redirect",
"Template:Pp-semi-indef",
"Template:Nbsp",
"Template:Lang-de",
"Template:Refbegin",
"Template:Quote",
"Template:Cite web",
"Template:Refend",
"Template:Norse mythology",
"Template:Authority control",
"Template:Use British English",
"Template:Harvp",
"Template:Webarchive",
"Template:Scandinavian folklore",
"Template:Anglo-SaxonPaganism",
"Template:Cite book",
"Template:Cite dictionary",
"Template:Elves",
"Template:Use dmy dates",
"Template:Sfnp",
"Template:Lang",
"Template:Anchor",
"Template:See also",
"Template:Short description",
"Template:About",
"Template:Blockquote",
"Template:Main",
"Template:Fairies"
]
| https://en.wikipedia.org/wiki/Elf |
9,897 | Evil | Evil, or badness, in a general sense, is defined as the opposite or absence of good. It can be an extremely broad concept, although in everyday usage it is often more narrowly used to talk about profound wickedness and acts against the common good. It is generally seen as taking multiple possible forms, such as the form of personal moral evil commonly associated with the word, or impersonal natural evil (as in the case of natural disasters or illnesses), and in religious thought, the form of the demonic or supernatural/eternal. While some religions, world views, and philosophies focus on "good versus evil", others deny evil's existence and usefulness in describing people.
Evil can denote profound immorality, but typically not without some basis in the understanding of the human condition, where strife and suffering (cf. Hinduism) are the true roots of evil. In certain religious contexts, evil has been described as a supernatural force. Definitions of evil vary, as does the analysis of its motives. Elements that are commonly associated with personal forms of evil involve unbalanced behavior including anger, revenge, hatred, psychological trauma, expediency, selfishness, ignorance, destruction and neglect.
In some forms of thought, evil is also sometimes perceived as the dualistic antagonistic binary opposite to good, in which good should prevail and evil should be defeated. In cultures with Buddhist spiritual influence, both good and evil are perceived as part of an antagonistic duality that itself must be overcome through achieving Nirvana. The ethical questions regarding good and evil are subsumed into three major areas of study: meta-ethics concerning the nature of good and evil, normative ethics concerning how we ought to behave, and applied ethics concerning particular moral issues. While the term is applied to events and conditions without agency, the forms of evil addressed in this article presume one or more evildoers.
The modern English word evil (Old English yfel) and its cognates such as the German Übel and Dutch euvel are widely considered to come from a Proto-Germanic reconstructed form of *ubilaz, comparable to the Hittite huwapp- ultimately from the Proto-Indo-European form *wap- and suffixed zero-grade form *up-elo-. Other later Germanic forms include Middle English evel, ifel, ufel, Old Frisian evel (adjective and noun), Old Saxon ubil, Old High German ubil, and Gothic ubils.
The root meaning of the word is of obscure origin though shown to be akin to modern German übel (noun: Übel, although the noun evil is normally translated as "das Böse") with the basic idea of social or religious transgression.
As with Buddhism, in Confucianism or Taoism there is no direct analogue to the way good and evil are opposed, although reference to demonic influence is common in Chinese folk religion. Confucianism's primary concern is with correct social relationships and the behavior appropriate to the learned or superior man. Thus evil would correspond to wrong behavior. Still less does it map onto Taoism, in spite of the centrality of dualism in that system, but the opposite of the cardinal virtues of Taoism (compassion, moderation, and humility) can be inferred to be the analogue of evil in it.
In response to the practices of Nazi Germany, Hannah Arendt concluded that "the problem of evil would be the fundamental problem of postwar intellectual life in Europe", although such a focus did not come to fruition.
Baruch Spinoza states
Spinoza assumes a quasi-mathematical style and states these further propositions which he purports to prove or demonstrate from the above definitions in part IV of his Ethics:
Carl Jung, in his book Answer to Job and elsewhere, depicted evil as the dark side of God. People tend to believe evil is something external to them, because they project their shadow onto others. Jung interpreted the story of Jesus as an account of God facing his own shadow.
In 2007, Philip Zimbardo suggested that people may act in evil ways as a result of a collective identity. This hypothesis, based on his previous experience from the Stanford prison experiment, was published in the book The Lucifer Effect: Understanding How Good People Turn Evil.
In 1961, Stanley Milgram began an experiment to help explain how thousands of ordinary, non-deviant, people could have reconciled themselves to a role in the Holocaust. Participants were led to believe they were assisting in an unrelated experiment in which they had to inflict electric shocks on another person. The experiment unexpectedly found that most could be led to inflict the electric shocks, including shocks that would have been fatal if they had been real. The participants tended to be uncomfortable and reluctant in the role. Nearly all stopped at some point to question the experiment, but most continued after being reassured.
A 2014 re-assessment of Milgram's work argued that the results should be interpreted with the "engaged followership" model: that people are not simply obeying the orders of a leader, but instead are willing to continue the experiment because of their desire to support the scientific goals of the leader and because of a lack of identification with the learner. Thomas Blass argues that the experiment explains how people can be complicit in roles such as "the dispassionate bureaucrat who may have shipped Jews to Auschwitz with the same degree of routinization as potatoes to Bremerhaven". However, like James Waller, he argues that it cannot explain an event like the Holocaust. Unlike the perpetrators of the Holocaust, the participants in Milgram's experiment were reassured that their actions would cause little harm and had little time to contemplate their actions.
The Baháʼí Faith asserts that evil is non-existent and that it is a concept reflecting lack of good, just as cold is the state of no heat, darkness is the state of no light, forgetfulness the lacking of memory, ignorance the lacking of knowledge. All of these are states of lacking and have no real existence.
Thus, evil does not exist and is relative to man. `Abdu'l-Bahá, son of the founder of the religion, in Some Answered Questions states:
"Nevertheless a doubt occurs to the mind—that is, scorpions and serpents are poisonous. Are they good or evil, for they are existing beings? Yes, a scorpion is evil in relation to man; a serpent is evil in relation to man; but in relation to themselves they are not evil, for their poison is their weapon, and by their sting they defend themselves."
Thus, evil is more of an intellectual concept than a true reality. Since God is good, and upon creating creation he confirmed it by saying it is Good (Genesis 1:31), evil cannot have a true reality.
Christian theology draws its concept of evil from the Old and New Testaments. The Christian Bible exercises "the dominant influence upon ideas about God and evil in the Western world." In the Old Testament, evil is understood to be an opposition to God as well as something unsuitable or inferior such as the leader of the fallen angels, Satan. In the New Testament the Greek word poneros is used to indicate unsuitability, while kakos is used to refer to opposition to God in the human realm. Officially, the Catholic Church extracts its understanding of evil from its canonical antiquity and the Dominican theologian, Thomas Aquinas, who in Summa Theologica defines evil as the absence or privation of good. French-American theologian Henri Blocher describes evil, when viewed as a theological concept, as an "unjustifiable reality. In common parlance, evil is 'something' that occurs in the experience that ought not to be."
There is no concept of absolute evil in Islam, as a fundamental universal principle that is independent from and equal with good in a dualistic sense. Although the Quran mentions the biblical forbidden tree, it never refers to it as the 'tree of knowledge of good and evil'. Within Islam, it is considered essential to believe that all comes from God, whether it is perceived as good or bad by individuals; and things that are perceived as evil or bad are either natural events (natural disasters or illnesses) or caused by humanity's free will. Rather, it is the behavior of beings with free will, when they disobey God's orders, harming others or putting themselves over God or others, that is considered to be evil. Evil does not necessarily refer to an ontological or moral category, but often to harm, to the intention and consequence of an action, or to unlawful actions. Unproductive actions, or those that do not produce benefits, are also thought of as evil.
A typical understanding of evil is reflected by Al-Ash`ari, the founder of Asharism. Accordingly, qualifying something as evil depends on the circumstances of the observer. An event or an action itself is neutral, but it receives its qualification from God. Since God is omnipotent and nothing can exist outside of God's power, God's will determines whether or not something is evil.
In Judaism and Jewish theology, the existence of evil is presented as part of the idea of free will: if humans were created to be perfect, always and only doing good, being good would not mean much. For Jewish theology, it is important for humans to have the ability to choose the path of goodness, even in the face of temptation and yetzer hara (the inclination to do evil).
Evil in the religion of ancient Egypt is known as Isfet, "disorder/violence". It is the opposite of Maat, "order", and embodied by the serpent god Apep, who routinely attempts to kill the sun god Ra and is stopped by nearly every other deity. Isfet is not a primordial force, but the consequence of free will and an individual's struggle against the non-existence embodied by Apep, as evidenced by the fact that it was born from Ra's umbilical cord instead of being recorded in the religion's creation myths.
The primal duality in Buddhism is between suffering and enlightenment, so the good vs. evil splitting has no direct analogue in it. One may infer from the general teachings of the Buddha that the catalogued causes of suffering are what correspond in this belief system to 'evil'.
Practically this can refer to 1) the three selfish emotions—desire, hate and delusion; and 2) to their expression in physical and verbal actions. Specifically, evil means whatever harms or obstructs the causes for happiness in this life, a better rebirth, liberation from samsara, and the true and complete enlightenment of a buddha (samyaksambodhi).
"What is evil? Killing is evil, lying is evil, slandering is evil, abuse is evil, gossip is evil: envy is evil, hatred is evil, to cling to false doctrine is evil; all these things are evil. And what is the root of evil? Desire is the root of evil, illusion is the root of evil." Gautama Siddhartha, the founder of Buddhism, 563–483 BC.
In Hinduism, the concept of Dharma or righteousness clearly divides the world into good and evil, and clearly explains that wars have to be waged sometimes to establish and protect Dharma, this war is called Dharmayuddha. This division of good and evil is of major importance in both the Hindu epics of Ramayana and Mahabharata. The main emphasis in Hinduism is on bad action, rather than bad people. The Hindu holy text, the Bhagavad Gita, speaks of the balance of good and evil. When this balance goes off, divine incarnations come to help to restore this balance.
In adherence to the core principle of spiritual evolution, the Sikh idea of evil changes depending on one's position on the path to liberation. At the beginning stages of spiritual growth, good and evil may seem neatly separated. Once one's spirit evolves to the point where it sees most clearly, the idea of evil vanishes and the truth is revealed. In his writings Guru Arjan explains that, because God is the source of all things, what we believe to be evil must too come from God. And because God is ultimately a source of absolute good, nothing truly evil can originate from God.
Sikhism, like many other religions, does incorporate a list of "vices" from which suffering, corruption, and abject negativity arise. These are known as the Five Thieves, called such due to their propensity to cloud the mind and lead one astray from the prosecution of righteous action. These are:
One who gives in to the temptations of the Five Thieves is known as "Manmukh", or someone who lives selfishly and without virtue. Inversely, the "Gurmukh", who thrive in their reverence toward divine knowledge, rise above vice via the practice of the high virtues of Sikhism. These are:
A fundamental question is whether there is a universal, transcendent definition of evil, or whether one's definition of evil is determined by one's social or cultural background. C. S. Lewis, in The Abolition of Man, maintained that there are certain acts that are universally considered evil, such as rape and murder. However, the rape of women, by men, is found in every society, and there are more societies that see at least some versions of it, such as marital rape or punitive rape, as normative than there are societies that see all rape as non-normative (a crime). In nearly all societies, killing except for defense or duty is seen as murder. Yet the definition of defense and duty varies from one society to another. Social deviance is not uniformly defined across different cultures, and is not, in all circumstances, necessarily an aspect of evil.
Defining evil is complicated by its multiple, often ambiguous, common usages: evil is used to describe the whole range of suffering, including that caused by nature, and it is also used to describe the full range of human immorality from the "evil of genocide to the evil of malicious gossip". It is sometimes thought of as the generic opposite of good. Marcus Singer asserts that these common connotations must be set aside as overgeneralized ideas that do not sufficiently describe the nature of evil.
In contemporary philosophy, there are two basic concepts of evil: a broad concept and a narrow concept. A broad concept defines evil simply as any and all pain and suffering: "any bad state of affairs, wrongful action, or character flaw". Yet, it is also asserted that evil cannot be correctly understood "(as some of the utilitarians once thought) [on] a simple hedonic scale on which pleasure appears as a plus, and pain as a minus". This is because pain is necessary for survival. Renowned orthopedist and missionary to lepers, Dr. Paul Brand explains that leprosy attacks the nerve cells that feel pain resulting in no more pain for the leper, which leads to ever increasing, often catastrophic, damage to the body of the leper. Congenital insensitivity to pain (CIP), also known as congenital analgesia, is a neurological disorder that prevents feeling pain. It "leads to ... bone fractures, multiple scars, osteomyelitis, joint deformities, and limb amputation ... Mental retardation is common. Death from hyperpyrexia occurs within the first 3 years of life in almost 20% of the patients." Few with the disorder are able to live into adulthood. Evil cannot be simply defined as all pain and its connected suffering because, as Marcus Singer says: "If something is really evil, it can't be necessary, and if it is really necessary, it can't be evil".
The narrow concept of evil involves moral condemnation, therefore it is ascribed only to moral agents and their actions. This eliminates natural disasters and animal suffering from consideration as evil: according to Claudia Card, "When not guided by moral agents, forces of nature are neither "goods" nor "evils". They just are. Their "agency" routinely produces consequences vital to some forms of life and lethal to others". The narrow definition of evil "picks out only the most morally despicable sorts of actions, characters, events, etc. Evil [in this sense] ... is the worst possible term of opprobrium imaginable”. Eve Garrard suggests that evil describes "particularly horrifying kinds of action which we feel are to be contrasted with more ordinary kinds of wrongdoing, as when for example we might say 'that action wasn't just wrong, it was positively evil'. The implication is that there is a qualitative, and not merely quantitative, difference between evil acts and other wrongful ones; evil acts are not just very bad or wrongful acts, but rather ones possessing some specially horrific quality". In this context, the concept of evil is one element in a full nexus of moral concepts.
Views on the nature of evil belong to the branch of philosophy known as ethics—which in modern philosophy is subsumed into three major areas of study:
There is debate on how useful the term "evil" is, since it is often associated with spirits and the devil. Some see the term as useless because they say it lacks any real ability to explain what it names. There is also real danger of the harm that being labeled "evil" can do when used in moral, political, and legal contexts. Those who support the usefulness of the term say there is a secular view of evil that offers plausible analyses without reference to the supernatural. Garrard and Russell argue that evil is as useful an explanation as any moral concept. Garrard adds that evil actions result from a particular kind of motivation, such as taking pleasure in the suffering of others, and this distinctive motivation provides a partial explanation even if it does not provide a complete explanation. Most theorists agree use of the term evil can be harmful but disagree over what response that requires. Some argue it is "more dangerous to ignore evil than to try to understand it".
Those who support the usefulness of the term, such as Eve Garrard and David McNaughton, argue that the term evil "captures a distinct part of our moral phenomenology, specifically, 'collect[ing] together those wrongful actions to which we have ... a response of moral horror'." Claudia Card asserts it is only by understanding the nature of evil that we can preserve humanitarian values and prevent evil in the future. If evils are the worst sorts of moral wrongs, social policy should focus limited energy and resources on reducing evil over other wrongs. Card asserts that by categorizing certain actions and practices as evil, we are better able to recognize and guard against responding to evil with more evil which will "interrupt cycles of hostility generated by past evils".
One school of thought holds that no person is evil and that only acts may be properly considered evil. Some theorists define an evil action simply as a kind of action an evil person performs. But just as many theorists believe that an evil character is one who is inclined toward evil acts. Luke Russell argues that both evil actions and evil feelings are necessary to identify a person as evil, while Daniel Haybron argues that evil feelings and evil motivations are necessary.
American psychiatrist M. Scott Peck describes evil as a kind of personal "militant ignorance". According to Peck, an evil person is consistently self-deceiving, deceives others, psychologically projects his or her evil onto very specific targets, hates, abuses power, and lies incessantly. Evil people are unable to think from the viewpoint of their victim. Peck considers those he calls evil to be attempting to escape and hide from their own conscience (through self-deception) and views this as being quite distinct from the apparent absence of conscience evident in sociopaths. He also considers that certain institutions may be evil, using the My Lai Massacre to illustrate. By this definition, acts of criminal and state terrorism would also be considered evil.
Martin Luther argued that there are cases where a little evil is a positive good. He wrote, "Seek out the society of your boon companions, drink, play, talk bawdy, and amuse yourself. One must sometimes commit a sin out of hate and contempt for the Devil, so as not to give him the chance to make one scrupulous over mere nothings ... "
The international relations theories of realism and neorealism, sometimes called realpolitik advise politicians to explicitly ban absolute moral and ethical considerations from international politics, and to focus on self-interest, political survival, and power politics, which they hold to be more accurate in explaining a world they view as explicitly amoral and dangerous. Political realists usually justify their perspectives by stating that morals and politics should be separated as two unrelated things, as exerting authority often involves doing something not moral. Machiavelli wrote: "there will be traits considered good that, if followed, will lead to ruin, while other traits, considered vices which if practiced achieve security and well being for the prince."
Notes
Further reading | [
{
"paragraph_id": 0,
"text": "Evil, or badness, in a general sense, is defined as the opposite or absence of good. It can be an extremely broad concept, although in everyday usage it is often more narrowly used to talk about profound wickedness and against common good. It is generally seen as taking multiple possible forms, such as the form of personal moral evil commonly associated with the word, or impersonal natural evil (as in the case of natural disasters or illnesses), and in religious thought, the form of the demonic or supernatural/eternal. While some religions, world views, and philosophies focus on \"good versus evil\", others deny evil's existence and usefulness in describing people.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Evil can denote profound immorality, but typically not without some basis in the understanding of the human condition, where strife and suffering (cf. Hinduism) are the true roots of evil. In certain religious contexts, evil has been described as a supernatural force. Definitions of evil vary, as does the analysis of its motives. Elements that are commonly associated with personal forms of evil involve unbalanced behavior including anger, revenge, hatred, psychological trauma, expediency, selfishness, ignorance, destruction and neglect.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In some forms of thought, evil is also sometimes perceived as the dualistic antagonistic binary opposite to good, in which good should prevail and evil should be defeated. In cultures with Buddhist spiritual influence, both good and evil are perceived as part of an antagonistic duality that itself must be overcome through achieving Nirvana. The ethical questions regarding good and evil are subsumed into three major areas of study: meta-ethics concerning the nature of good and evil, normative ethics concerning how we ought to behave, and applied ethics concerning particular moral issues. While the term is applied to events and conditions without agency, the forms of evil addressed in this article presume one or more evildoers.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The modern English word evil (Old English yfel) and its cognates such as the German Übel and Dutch euvel are widely considered to come from a Proto-Germanic reconstructed form of *ubilaz, comparable to the Hittite huwapp- ultimately from the Proto-Indo-European form *wap- and suffixed zero-grade form *up-elo-. Other later Germanic forms include Middle English evel, ifel, ufel, Old Frisian evel (adjective and noun), Old Saxon ubil, Old High German ubil, and Gothic ubils.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "The root meaning of the word is of obscure origin though shown to be akin to modern German übel (noun: Übel, although the noun evil is normally translated as \"das Böse\") with the basic idea of social or religious transgression.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "As with Buddhism, in Confucianism or Taoism there is no direct analogue to the way good and evil are opposed although reference to demonic influence is common in Chinese folk religion. Confucianism's primary concern is with correct social relationships and the behavior appropriate to the learned or superior man. Thus evil would correspond to wrong behavior. Still less does it map into Taoism, in spite of the centrality of dualism in that system, but the opposite of the cardinal virtues of Taoism, compassion, moderation, and humility can be inferred to be the analogue of evil in it.",
"title": "Chinese moral philosophy"
},
{
"paragraph_id": 6,
"text": "In response to the practices of Nazi Germany, Hannah Arendt concluded that \"the problem of evil would be the fundamental problem of postwar intellectual life in Europe\", although such a focus did not come to fruition.",
"title": "European philosophy"
},
{
"paragraph_id": 7,
"text": "Baruch Spinoza states",
"title": "European philosophy"
},
{
"paragraph_id": 8,
"text": "Spinoza assumes a quasi-mathematical style and states these further propositions which he purports to prove or demonstrate from the above definitions in part IV of his Ethics:",
"title": "European philosophy"
},
{
"paragraph_id": 9,
"text": "Carl Jung, in his book Answer to Job and elsewhere, depicted evil as the dark side of God. People tend to believe evil is something external to them, because they project their shadow onto others. Jung interpreted the story of Jesus as an account of God facing his own shadow.",
"title": "Psychology"
},
{
"paragraph_id": 10,
"text": "In 2007, Philip Zimbardo suggested that people may act in evil ways as a result of a collective identity. This hypothesis, based on his previous experience from the Stanford prison experiment, was published in the book The Lucifer Effect: Understanding How Good People Turn Evil.",
"title": "Psychology"
},
{
"paragraph_id": 11,
"text": "In 1961, Stanley Milgram began an experiment to help explain how thousands of ordinary, non-deviant, people could have reconciled themselves to a role in the Holocaust. Participants were led to believe they were assisting in an unrelated experiment in which they had to inflict electric shocks on another person. The experiment unexpectedly found that most could be led to inflict the electric shocks, including shocks that would have been fatal if they had been real. The participants tended to be uncomfortable and reluctant in the role. Nearly all stopped at some point to question the experiment, but most continued after being reassured.",
"title": "Psychology"
},
{
"paragraph_id": 12,
"text": "A 2014 re-assessment of Milgram's work argued that the results should be interpreted with the \"engaged followership\" model: that people are not simply obeying the orders of a leader, but instead are willing to continue the experiment because of their desire to support the scientific goals of the leader and because of a lack of identification with the learner. Thomas Blass argues that the experiment explains how people can be complicit in roles such as \"the dispassionate bureaucrat who may have shipped Jews to Auschwitz with the same degree of routinization as potatoes to Bremerhaven\". However, like James Waller, he argues that it cannot explain an event like the Holocaust. Unlike the perpetrators of the Holocaust, the participants in Milgram's experiment were reassured that their actions would cause little harm and had little time to contemplate their actions.",
"title": "Psychology"
},
{
"paragraph_id": 13,
"text": "The Baháʼí Faith asserts that evil is non-existent and that it is a concept reflecting lack of good, just as cold is the state of no heat, darkness is the state of no light, forgetfulness the lacking of memory, ignorance the lacking of knowledge. All of these are states of lacking and have no real existence.",
"title": "Religions"
},
{
"paragraph_id": 14,
"text": "Thus, evil does not exist and is relative to man. `Abdu'l-Bahá, son of the founder of the religion, in Some Answered Questions states:",
"title": "Religions"
},
{
"paragraph_id": 15,
"text": "\"Nevertheless a doubt occurs to the mind—that is, scorpions and serpents are poisonous. Are they good or evil, for they are existing beings? Yes, a scorpion is evil in relation to man; a serpent is evil in relation to man; but in relation to themselves they are not evil, for their poison is their weapon, and by their sting they defend themselves.\"",
"title": "Religions"
},
{
"paragraph_id": 16,
"text": "Thus, evil is more of an intellectual concept than a true reality. Since God is good, and upon creating creation he confirmed it by saying it is Good (Genesis 1:31) evil cannot have a true reality.",
"title": "Religions"
},
{
"paragraph_id": 17,
"text": "Christian theology draws its concept of evil from the Old and New Testaments. The Christian Bible exercises \"the dominant influence upon ideas about God and evil in the Western world.\" In the Old Testament, evil is understood to be an opposition to God as well as something unsuitable or inferior such as the leader of the fallen angels Satan In the New Testament the Greek word poneros is used to indicate unsuitability, while kakos is used to refer to opposition to God in the human realm. Officially, the Catholic Church extracts its understanding of evil from its canonical antiquity and the Dominican theologian, Thomas Aquinas, who in Summa Theologica defines evil as the absence or privation of good. French-American theologian Henri Blocher describes evil, when viewed as a theological concept, as an \"unjustifiable reality. In common parlance, evil is 'something' that occurs in the experience that ought not to be.\"",
"title": "Religions"
},
{
"paragraph_id": 18,
"text": "There is no concept of absolute evil in Islam, as a fundamental universal principle that is independent from and equal with good in a dualistic sense. Although the Quran mentions the biblical forbidden tree, it never refers to it as the 'tree of knowledge of good and evil'. Within Islam, it is considered essential to believe that all comes from God, whether it is perceived as good or bad by individuals; and things that are perceived as evil or bad are either natural events (natural disasters or illnesses) or caused by humanity's free will. Much more the behavior of beings with free will, then they disobey God's orders, harming others or putting themselves over God or others, is considered to be evil. Evil does not necessarily refer to evil as an ontological or moral category, but often to harm or as the intention and consequence of an action, but also to unlawful actions. Unproductive actions or those who do not produce benefits are also thought of as evil.",
"title": "Religions"
},
{
"paragraph_id": 19,
"text": "A typical understanding of evil is reflected by Al-Ash`ari founder of Asharism. Accordingly, qualifying something as evil depends on the circumstances of the observer. An event or an action itself is neutral, but it receives its qualification by God. Since God is omnipotent and nothing can exist outside of God's power, God's will determine, whether or not something is evil.",
"title": "Religions"
},
{
"paragraph_id": 20,
"text": "In Judaism and Jewish theology, the existence of evil is presented as part of the idea of free will: if humans were created to be perfect, always and only doing good, being good would not mean much. For Jewish theology, it is important for humans to have the ability to choose the path of goodness, even in the face of temptation and yetzer hara (the inclination to do evil).",
"title": "Religions"
},
{
"paragraph_id": 21,
"text": "Evil in the religion of ancient Egypt is known as Isfet, \"disorder/violence\". It is the opposite of Maat, \"order\", and embodied by the serpent god Apep, who routinely attempts to kill the sun god Ra and is stopped by nearly every other deity. Isfet is not a primordial force, but the consequence of free will and an individual's struggle against the non-existence embodied by Apep, as evidenced by the fact that it was born from Ra's umbilical cord instead of being recorded in the religion's creation myths.",
"title": "Religions"
},
{
"paragraph_id": 22,
"text": "The primal duality in Buddhism is between suffering and enlightenment, so the good vs. evil splitting has no direct analogue in it. One may infer from the general teachings of the Buddha that the catalogued causes of suffering are what correspond in this belief system to 'evil'.",
"title": "Religions"
},
{
"paragraph_id": 23,
"text": "Practically this can refer to 1) the three selfish emotions—desire, hate and delusion; and 2) to their expression in physical and verbal actions. Specifically, evil means whatever harms or obstructs the causes for happiness in this life, a better rebirth, liberation from samsara, and the true and complete enlightenment of a buddha (samyaksambodhi).",
"title": "Religions"
},
{
"paragraph_id": 24,
"text": "\"What is evil? Killing is evil, lying is evil, slandering is evil, abuse is evil, gossip is evil: envy is evil, hatred is evil, to cling to false doctrine is evil; all these things are evil. And what is the root of evil? Desire is the root of evil, illusion is the root of evil.\" Gautama Siddhartha, the founder of Buddhism, 563–483 BC.",
"title": "Religions"
},
{
"paragraph_id": 25,
"text": "In Hinduism, the concept of Dharma or righteousness clearly divides the world into good and evil, and clearly explains that wars have to be waged sometimes to establish and protect Dharma, this war is called Dharmayuddha. This division of good and evil is of major importance in both the Hindu epics of Ramayana and Mahabharata. The main emphasis in Hinduism is on bad action, rather than bad people. The Hindu holy text, the Bhagavad Gita, speaks of the balance of good and evil. When this balance goes off, divine incarnations come to help to restore this balance.",
"title": "Religions"
},
{
"paragraph_id": 26,
"text": "In adherence to the core principle of spiritual evolution, the Sikh idea of evil changes depending on one's position on the path to liberation. At the beginning stages of spiritual growth, good and evil may seem neatly separated. Once one's spirit evolves to the point where it sees most clearly, the idea of evil vanishes and the truth is revealed. In his writings Guru Arjan explains that, because God is the source of all things, what we believe to be evil must too come from God. And because God is ultimately a source of absolute good, nothing truly evil can originate from God.",
"title": "Religions"
},
{
"paragraph_id": 27,
"text": "Sikhism, like many other religions, does incorporate a list of \"vices\" from which suffering, corruption, and abject negativity arise. These are known as the Five Thieves, called such due to their propensity to cloud the mind and lead one astray from the prosecution of righteous action. These are:",
"title": "Religions"
},
{
"paragraph_id": 28,
"text": "One who gives in to the temptations of the Five Thieves is known as \"Manmukh\", or someone who lives selfishly and without virtue. Inversely, the \"Gurmukh, who thrive in their reverence toward divine knowledge, rise above vice via the practice of the high virtues of Sikhism. These are:",
"title": "Religions"
},
{
"paragraph_id": 29,
"text": "A fundamental question is whether there is a universal, transcendent definition of evil, or whether one's definition of evil is determined by one's social or cultural background. C. S. Lewis, in The Abolition of Man, maintained that there are certain acts that are universally considered evil, such as rape and murder. However, the rape of women, by men, is found in every society, and there are more societies that see at least some versions of it, such as marital rape or punitive rape, as normative than there are societies that see all rape as non-normative (a crime). In nearly all societies, killing except for defense or duty is seen as murder. Yet the definition of defense and duty varies from one society to another. Social deviance is not uniformly defined across different cultures, and is not, in all circumstances, necessarily an aspect of evil.",
"title": "Question of a universal definition"
},
{
"paragraph_id": 30,
"text": "Defining evil is complicated by its multiple, often ambiguous, common usages: evil is used to describe the whole range of suffering, including that caused by nature, and it is also used to describe the full range of human immorality from the \"evil of genocide to the evil of malicious gossip\". It is sometimes thought of as the generic opposite of good. Marcus Singer asserts that these common connotations must be set aside as overgeneralized ideas that do not sufficiently describe the nature of evil.",
"title": "Question of a universal definition"
},
{
"paragraph_id": 31,
"text": "In contemporary philosophy, there are two basic concepts of evil: a broad concept and a narrow concept. A broad concept defines evil simply as any and all pain and suffering: \"any bad state of affairs, wrongful action, or character flaw\". Yet, it is also asserted that evil cannot be correctly understood \"(as some of the utilitarians once thought) [on] a simple hedonic scale on which pleasure appears as a plus, and pain as a minus\". This is because pain is necessary for survival. Renowned orthopedist and missionary to lepers, Dr. Paul Brand explains that leprosy attacks the nerve cells that feel pain resulting in no more pain for the leper, which leads to ever increasing, often catastrophic, damage to the body of the leper. Congenital insensitivity to pain (CIP), also known as congenital analgesia, is a neurological disorder that prevents feeling pain. It \"leads to ... bone fractures, multiple scars, osteomyelitis, joint deformities, and limb amputation ... Mental retardation is common. Death from hyperpyrexia occurs within the first 3 years of life in almost 20% of the patients.\" Few with the disorder are able to live into adulthood. Evil cannot be simply defined as all pain and its connected suffering because, as Marcus Singer says: \"If something is really evil, it can't be necessary, and if it is really necessary, it can't be evil\".",
"title": "Question of a universal definition"
},
{
"paragraph_id": 32,
"text": "The narrow concept of evil involves moral condemnation, therefore it is ascribed only to moral agents and their actions. This eliminates natural disasters and animal suffering from consideration as evil: according to Claudia Card, \"When not guided by moral agents, forces of nature are neither \"goods\" nor \"evils\". They just are. Their \"agency\" routinely produces consequences vital to some forms of life and lethal to others\". The narrow definition of evil \"picks out only the most morally despicable sorts of actions, characters, events, etc. Evil [in this sense] ... is the worst possible term of opprobrium imaginable”. Eve Garrard suggests that evil describes \"particularly horrifying kinds of action which we feel are to be contrasted with more ordinary kinds of wrongdoing, as when for example we might say 'that action wasn't just wrong, it was positively evil'. The implication is that there is a qualitative, and not merely quantitative, difference between evil acts and other wrongful ones; evil acts are not just very bad or wrongful acts, but rather ones possessing some specially horrific quality\". In this context, the concept of evil is one element in a full nexus of moral concepts.",
"title": "Question of a universal definition"
},
{
"paragraph_id": 33,
"text": "Views on the nature of evil belong to the branch of philosophy known as ethics—which in modern philosophy is subsumed into three major areas of study:",
"title": "Philosophical questions"
},
{
"paragraph_id": 34,
"text": "There is debate on how useful the term \"evil\" is, since it is often associated with spirits and the devil. Some see the term as useless because they say it lacks any real ability to explain what it names. There is also real danger of the harm that being labeled \"evil\" can do when used in moral, political, and legal contexts. Those who support the usefulness of the term say there is a secular view of evil that offers plausible analyses without reference to the supernatural. Garrard and Russell argue that evil is as useful an explanation as any moral concept. Garrard adds that evil actions result from a particular kind of motivation, such as taking pleasure in the suffering of others, and this distinctive motivation provides a partial explanation even if it does not provide a complete explanation. Most theorists agree use of the term evil can be harmful but disagree over what response that requires. Some argue it is \"more dangerous to ignore evil than to try to understand it\".",
"title": "Philosophical questions"
},
{
"paragraph_id": 35,
"text": "Those who support the usefulness of the term, such as Eve Garrard and David McNaughton, argue that the term evil \"captures a distinct part of our moral phenomenology, specifically, 'collect[ing] together those wrongful actions to which we have ... a response of moral horror'.\" Claudia Card asserts it is only by understanding the nature of evil that we can preserve humanitarian values and prevent evil in the future. If evils are the worst sorts of moral wrongs, social policy should focus limited energy and resources on reducing evil over other wrongs. Card asserts that by categorizing certain actions and practices as evil, we are better able to recognize and guard against responding to evil with more evil which will \"interrupt cycles of hostility generated by past evils\".",
"title": "Philosophical questions"
},
{
"paragraph_id": 36,
"text": "One school of thought holds that no person is evil and that only acts may be properly considered evil. Some theorists define an evil action simply as a kind of action an evil person performs. But just as many theorists believe that an evil character is one who is inclined toward evil acts. Luke Russell argues that both evil actions and evil feelings are necessary to identify a person as evil, while Daniel Haybron argues that evil feelings and evil motivations are necessary.",
"title": "Philosophical questions"
},
{
"paragraph_id": 37,
"text": "American psychiatrist M. Scott Peck describes evil as a kind of personal \"militant ignorance\". According to Peck, an evil person is consistently self-deceiving, deceives others, psychologically projects his or her evil onto very specific targets, hates, abuses power, and lies incessantly. Evil people are unable to think from the viewpoint of their victim. Peck considers those he calls evil to be attempting to escape and hide from their own conscience (through self-deception) and views this as being quite distinct from the apparent absence of conscience evident in sociopaths. He also considers that certain institutions may be evil, using the My Lai Massacre to illustrate. By this definition, acts of criminal and state terrorism would also be considered evil.",
"title": "Philosophical questions"
},
{
"paragraph_id": 38,
"text": "Martin Luther argued that there are cases where a little evil is a positive good. He wrote, \"Seek out the society of your boon companions, drink, play, talk bawdy, and amuse yourself. One must sometimes commit a sin out of hate and contempt for the Devil, so as not to give him the chance to make one scrupulous over mere nothings ... \"",
"title": "Philosophical questions"
},
{
"paragraph_id": 39,
"text": "The international relations theories of realism and neorealism, sometimes called realpolitik advise politicians to explicitly ban absolute moral and ethical considerations from international politics, and to focus on self-interest, political survival, and power politics, which they hold to be more accurate in explaining a world they view as explicitly amoral and dangerous. Political realists usually justify their perspectives by stating that morals and politics should be separated as two unrelated things, as exerting authority often involves doing something not moral. Machiavelli wrote: \"there will be traits considered good that, if followed, will lead to ruin, while other traits, considered vices which if practiced achieve security and well being for the prince.\"",
"title": "Philosophical questions"
},
{
"paragraph_id": 40,
"text": "Notes",
"title": "References"
},
{
"paragraph_id": 41,
"text": "Further reading",
"title": "References"
}
]
| Evil, or badness, in a general sense, is defined as the opposite or absence of good. It can be an extremely broad concept, although in everyday usage it is often more narrowly used to talk about profound wickedness and against common good. It is generally seen as taking multiple possible forms, such as the form of personal moral evil commonly associated with the word, or impersonal natural evil, and in religious thought, the form of the demonic or supernatural/eternal. While some religions, world views, and philosophies focus on "good versus evil", others deny evil's existence and usefulness in describing people. Evil can denote profound immorality, but typically not without some basis in the understanding of the human condition, where strife and suffering are the true roots of evil. In certain religious contexts, evil has been described as a supernatural force. Definitions of evil vary, as does the analysis of its motives. Elements that are commonly associated with personal forms of evil involve unbalanced behavior including anger, revenge, hatred, psychological trauma, expediency, selfishness, ignorance, destruction and neglect. In some forms of thought, evil is also sometimes perceived as the dualistic antagonistic binary opposite to good, in which good should prevail and evil should be defeated. In cultures with Buddhist spiritual influence, both good and evil are perceived as part of an antagonistic duality that itself must be overcome through achieving Nirvana. The ethical questions regarding good and evil are subsumed into three major areas of study: meta-ethics concerning the nature of good and evil, normative ethics concerning how we ought to behave, and applied ethics concerning particular moral issues. While the term is applied to events and conditions without agency, the forms of evil addressed in this article presume one or more evildoers. | 2001-10-23T15:40:59Z | 2023-12-28T23:43:57Z | [
"Template:Webarchive",
"Template:ISBN",
"Template:ISBN?",
"Template:Ethics",
"Template:Other uses",
"Template:Cite web",
"Template:Cite journal",
"Template:Hamartiology",
"Template:PIE",
"Template:Blockquote",
"Template:Commons category",
"Template:Rp",
"Template:Cite news",
"Template:Reflist",
"Template:Pp",
"Template:Collist",
"Template:SEP",
"Template:Portal",
"Template:In Our Time",
"Template:Cite book",
"Template:Wiktionary",
"Template:Short description",
"Template:Cn",
"Template:Wikiquote",
"Template:Good and evil",
"Template:Main",
"Template:See also",
"Template:See",
"Template:Lang",
"Template:Citation needed"
]
| https://en.wikipedia.org/wiki/Evil |
9,901 | Epistle to the Hebrews | The Epistle to the Hebrews (Ancient Greek: Πρὸς Ἑβραίους, romanized: Pros Hebraious, lit. 'to the Hebrews') is one of the books of the New Testament.
The text does not mention the name of its author, but was traditionally attributed to Paul the Apostle. Most of the Ancient Greek manuscripts, the Old Syriac Peshitto and some of the Old Latin manuscripts have the epistle to the Hebrews among Paul's letters. However, doubt on Pauline authorship in the Roman Church is reported by Eusebius. Modern biblical scholarship considers its authorship unknown, written in deliberate imitation of the style of Paul, with some contending that it was authored by Priscilla and Aquila or Silas.
Scholars of Greek consider its writing to be more polished and eloquent than any other book of the New Testament, and "the very carefully composed and studied Greek of Hebrews is not Paul's spontaneous, volatile contextual Greek". The book has earned the reputation of being a masterpiece. It has also been described as an intricate New Testament book. Some scholars believe it was written for Jewish Christians who lived in Jerusalem. Its essential purpose was to exhort Christians to persevere in the face of persecution. At this time, certain believers were considering turning back to Judaism and to the Jewish system of law to escape being persecuted for believing Christ to be the messiah. The theme of the epistle is the teaching of the person of Christ and his role as mediator between God and humanity.
According to traditional scholarship, the author of the Epistle to the Hebrews, following in the footsteps of Paul, argued that Jewish Law had played a legitimate role in the past but was superseded by a New Covenant for the Gentiles (cf. Romans 7:1–6; Galatians 3:23–25; Hebrews 8, 10). However, a growing number of scholars note that the terms Gentile, Christian and Christianity are not present in the text and posit that Hebrews was written for a Jewish audience, and is best seen as a debate between Jewish followers of Jesus and mainstream Judaism. In tone, and detail, Hebrews goes beyond Paul and attempts a more complex, nuanced, and openly adversarial definition of the relationship. The epistle opens with an exaltation of Jesus as "the radiance of God's glory, the express image of his being, and upholding all things by his powerful word" (Hebrews 1:1–3). The epistle presents Jesus with the titles "pioneer" or "forerunner", "Son" and "Son of God", "priest" and "high priest". The epistle casts Jesus as both exalted Son and High Priest, a unique dual Christology.
Hebrews uses Old Testament quotations interpreted in light of first-century rabbinical Judaism. New Testament and Second Temple Judaism scholar Eric Mason argues that the conceptual background of the priestly Christology of the Epistle to the Hebrews closely parallels presentations of the messianic priest and Melchizedek in the Qumran scrolls. In both Hebrews and Qumran, a priestly figure is discussed in the context of a Davidic figure; in both cases a divine decree appoints the priests to their eschatological duty; both priestly figures offer an eschatological sacrifice of atonement. Although the author of Hebrews was not directly influenced by Qumran's "Messiah of Aaron", these and other conceptions did provide "a precedent... to conceive Jesus similarly as a priest making atonement and eternal intercession in the heavenly sanctuary".
By the end of the first century there was no consensus on the author's identity. Clement of Rome, Barnabas, Paul the Apostle, and other names were proposed. Others later suggested Luke the Evangelist, Apollos, or his teacher Priscilla as possible authors.
In the 3rd century, Origen wrote of the letter:
In the epistle entitled To The Hebrews the diction does not exhibit the characteristic roughness of speech or phraseology admitted by the Apostle [Paul] himself, the construction of the sentences is closer to the Greek usage, as anyone capable of recognising differences of style would agree. On the other hand the matter of the epistle is wonderful, and quite equal to the Apostle's acknowledged writings: the truth of this would be admitted by anyone who has read the Apostle carefully... If I were asked my personal opinion, I would say that the matter is the Apostle's but the phraseology and construction are those of someone who remembered the Apostle's teaching and wrote his own interpretation of what his master had said. So if any church regards this epistle as Paul's, it should be commended for so doing, for the primitive Church had every justification for handing it down as his. Who wrote the epistle is known to God alone: the accounts that have reached us suggest that it was either Clement, who became Bishop of Rome, or Luke, who wrote the gospel and the Acts.
Matthew J. Thomas argues that Origen was not denying Paul's authorship of Hebrews in that quote, but that he was only meaning that Paul would have employed an amanuensis to compose the letter. He points out that in other writings and quotations of Hebrews, Origen describes Paul as the author of the letter.
In the 4th century, Jerome and Augustine of Hippo supported Paul's authorship: the Church largely agreed to include Hebrews as the fourteenth letter of Paul, and affirmed this authorship until the Reformation. Scholars argued that in the 13th chapter of Hebrews, Timothy is referred to as a companion. Timothy was Paul's missionary companion in the same way Jesus sent disciples out in pairs. The writer also states that he wrote the letter from "Italy", which also at the time fits Paul. The difference in style is explained as simply an adjustment to a distinct audience, to the Jewish Christians who were being persecuted and pressured to go back to traditional Judaism.
Many scholars now believe that the author was one of Paul's pupils or associates, citing stylistic differences between Hebrews and the other Pauline epistles. Recent scholarship has favored the idea that the author was probably a leader of a predominantly Jewish congregation to whom they were writing.
Because of its anonymity, it had some trouble being accepted as part of the Christian canon, being classed with the Antilegomena. Eventually it was accepted as Scripture because of its sound theology, eloquent presentation, and other intrinsic factors. In antiquity, certain circles began to ascribe it to Paul in an attempt to provide the anonymous work with an explicit apostolic pedigree.
The original King James Version of the Bible titled the work "The Epistle of Paul the Apostle to the Hebrews". However, the KJV's attribution to Paul was only a guess, and is currently disputed by recent research. Its vastly different style, different theological focus, different spiritual experience and different Greek vocabulary are all believed to make Paul's authorship of Hebrews increasingly indefensible. At present, modern scholarship does not ascribe Hebrews to Paul.
A.J. Gordon ascribes the authorship of Hebrews to Priscilla, writing that "It is evident that the Holy Spirit made this woman Priscilla a teacher of teachers". Priscillan authorship was later proposed by Adolf von Harnack in 1900, and Harnack's reasoning won the support of prominent Bible scholars of the early twentieth century. Harnack believes the letter was written in Rome – not to the Church, but to the inner circle. In setting forth his evidence for Priscillan authorship, he finds it amazing that the name of the author was blotted out by the earliest tradition. Citing Hebrews 13, he says it was written by a person of "high standing and apostolic teacher of equal rank with Timothy". If Luke, Clement, Barnabas, or Apollos had written it, Harnack believes their names would not have been obliterated.
Donald Guthrie's commentary The Letter to the Hebrews (1983) mentions Priscilla by name as a suggested author.
Believing the author to have been Priscilla, Ruth Hoppin posits that the name was omitted either to suppress its female authorship, or to protect the letter itself from suppression.
Also convinced that Priscilla was the author of Hebrews, Gilbert Bilezikian, professor of biblical studies at Wheaton College, remarks on "the conspiracy of anonymity in the ancient church," and reasons: "The lack of any firm data concerning the identity of the author in the extant writings of the church suggests a deliberate blackout more than a case of collective loss of memory."
Despite some theories of Hebrews being authored by Priscilla, a majority of scholars hold that the author was presumably male, since he refers to himself using a masculine participle in 11:32: "would fail me to tell".
Bob Anderson (2023) and others consider its author to be Silas. Anderson gives ten reasons for thinking this.
The use of tabernacle terminology in Hebrews has been used to date the epistle before the destruction of the temple, the idea being that knowing about the destruction of both Jerusalem and the temple would have influenced the development of the author's overall argument. Therefore, the most probable date for its composition is the second half of the year 63 or the beginning of 64, according to the Catholic Encyclopedia.
The text itself, for example, makes a contrast between the resurrected Christ "in heaven" "who serves in the sanctuary, the true tabernacle set up by the Lord" and the version on earth, where "there are already priests who offer the gifts prescribed by the law. They serve at a sanctuary that is a copy and shadow of what is in heaven." (NIV version)
Despite this, some scholars, such as Harold Attridge and Ellen Aitken, hold to a later date of composition, between 70 and 100 AD.
Scholars have suggested that Hebrews is part of an internal New Testament debate between the extreme Judaizers (who argued that non-Jews must convert to Judaism before they can receive the Holy Spirit of Jesus' New Covenant) and the extreme antinomians (who argued that Jews must reject God's commandments and that Jewish law was no longer in effect). James and Paul represent the moderates of each faction, respectively, and Peter may have served as moderator.
It sets before the Jew the claims of Christianity – to bring the Jew to the full realization of the relation of Judaism to Christianity, to make clear that Christ has fulfilled those temporary and provisional institutions, and has thus abolished them. This view is commonly referred to as supersessionism. According to the theology of supersessionism, the church replaces Israel, and thus the church takes the place of Israel as the people of God. The dominant interpretation in modern Hebrews scholarship has been that the epistle contains an implicit supersessionist claim (that the Levitical sacrifices and the Levitical priests have been replaced/superseded by Christ's sacrifice). Per Bibliowicz, Hebrews scholars may be divided into those that are supportive-sympathetic to the epistle's theological message, those that are critical of the epistle's supersessionary message, and those attempting a middle ground.
Due to the importance of Hebrews for the formation of future Christian attitudes toward Jews and Judaism, a distinction must be made between the author's intent and the way in which the text was interpreted by future generations. The impact of the deployment and implementation of supersession theology is difficult to convey and grasp. The implementation of this theological claim eventually led to the negation and disenfranchisement of the Jewish followers of Jesus, and later, of all Jews.
Those to whom Hebrews is written seem to have begun to doubt whether Jesus could really be the Messiah for whom they were waiting, because they believed the Messiah prophesied in the Hebrew Scriptures was to come as a militant king and destroy the enemies of his people. In contrast, Jesus came as a man of no social standing who was slandered, arrested and condemned by the Jewish leaders and who suffered and was crucified by the Romans. Although he was seen resurrected, he still left the earth and his people, who now faced persecution rather than victory. The Book of Hebrews argues that the Hebrew Scriptures also foretold that the Messiah would be a priest (although of a different sort than the traditional Levitical priests) and Jesus came to fulfill this role, as a sacrificial offering to God, to atone for sins. His role of a king is yet to come, and so those who follow him should be patient and not be surprised that they suffer for now.
Some scholars today believe the document was written to prevent apostasy. Some have interpreted apostasy to mean a number of different things, such as a group of Christians in one sect leaving for another, more conservative sect of which the author disapproves. Some have seen apostasy as a move from the Christian assembly to pagan ritual. In light of a possibly Jewish-Christian audience, the apostasy in this sense may be in regard to Jewish Christians leaving the Christian assembly to return to the Jewish synagogue. The author writes, "Let us hold fast to our confession". The epistle has been viewed as a long, rhetorical argument for having confidence in the new way to God revealed in Jesus Christ.
The book could be argued to affirm special creation. It says that God by his Son, Jesus Christ, made the worlds. "God [...] hath in these last days spoken unto us by his Son [...] by whom also he made the worlds". The epistle also emphasizes the importance of faith. "Through faith we understand that the worlds were framed by the word of God, so that things which are seen were not made of things which do appear".
...the Epistle opens with the solemn announcement of the superiority of the New Testament Revelation by the Son over Old Testament Revelation by the prophets. It then proves and explains from the Scriptures the superiority of this New Covenant over the Old by the comparison of the Son with the angels as mediators of the Old Covenant, with Moses and Joshua as the founders of the Old Covenant, and finally, by opposing the high-priesthood of Christ after the order of Melchisedech to the Levitical priesthood after the order of Aaron.
Hebrews is a very consciously "literary" document. The purity of its Greek was noted by Clement of Alexandria, according to Eusebius, and Origen of Alexandria asserted that every competent judge must recognize a great difference between this epistle and those of Paul.
The letter consists of two strands: an expositional or doctrinal strand, and a hortatory or strongly urging strand which punctuates the exposition parenthetically at key points as warnings to the readers.
Hebrews does not fit the form of a traditional Hellenistic epistle, lacking a proper prescript. Modern scholars generally believe this book was originally a sermon or homily, although possibly modified after it was delivered to include the travel plans, greetings and closing.
Hebrews contains many references to the Old Testament – specifically to the Septuagint text.
The Epistle to the Hebrews is notable for the manner in which it expresses the divine nature of Christ. As A.C. Purdy summarized for The Interpreter's Bible:
We may sum up our author's Christology negatively by saying that he has nothing to do with the older Hebrew messianic hopes of a coming Son of David, who would be a divinely empowered human leader to bring in the kingdom of God on earth; and that while he still employs the figure of a militant, apocalyptic king [...] who will come again [...], this is not of the essence of his thought about Christ. Positively, our author presents Christ as divine in nature, and solves any possible objection to a divine being who participates in human experience, especially in the experience of death, by the priestly analogy. He seems quite unconscious of the logical difficulties of his position proceeding from the assumption that Christ is both divine and human, at least human in experience although hardly in nature.
Mikeal Parsons has commented:
If the humanity of Jesus is an important theme for Hebrews, how much more is Jesus' deity. While this theme of exaltation is asserted 'in many and various ways' we shall content ourselves by considering how the writer addresses this theme by asserting Jesus' superiority to a) angels, and b) Moses. The first chapter of Hebrews stresses the superiority of the Son to the angels. The very name 'Son' indicates superiority. This exaltation theme, in which the Son is contrasted with the angels (1:4), is expanded in the following string of OT quotations (1:5–13). While some have understood the catena as referring primarily to Christ's pre-existence, it is more likely that the verses should be understood, 'as a Christological hymn which traces the entire Christ event, including the pre-existence, earthly life, and exaltation of Christ'. The overall structure of the catena seems to point to exaltation as the underlying motif... At least it may be concluded that the superiority of the Son is demonstrated by this comparison/contrast with angels.
Peter Rhea Jones has reminded us that 'Moses is not merely one of the figures compared unfavourably to Jesus'; but rather, 'Moses and Jesus are yoked throughout the entirety of the epistle'. Allowing that Moses is much more than a 'whipping boy' for the author, the fact remains that the figure Moses is utilized as a basis for Christology. While there are several references to Moses, only two will be needed to demonstrate Jesus' superiority. The first passage to be considered is Hebrews 3:1–6. D'Angelo and others regard the larger context of this passage (3:1–4:16) to be the superiority of Christ's message to the Law. While the comparison between Jesus and the angels is based on a number of OT citations, the comparison of Jesus and Moses turns on a single verse, Nu. 12:7. Like the angels (1:14), Moses was a servant who witnessed, as it were, to the Son. In other words, 'faithful Sonship is superior to faithful servantship'. The Son is once again exalted. The exaltation theme finds expression in a more opaque way at 11:26. Here in the famous chapter on faith in which Moses is said to count 'abuse suffered for the Christ greater wealth than the treasures of Egypt'. The portrait of Moses drawn here is that of a martyr, and a Christian martyr at that. In effect, Moses joins that great cloud of witnesses who looked to Jesus as pioneer and perfecter of faith. Once again, Christ's superiority is asserted, this time over Moses and the entire Mosaic epoch.
In summary, the writer [of Hebrews] stressed the Sonship of Jesus and expressed it in a three-stage Christology of pre-existence, humanity, and exaltation.
Online translations of the Epistle to the Hebrews:
Other: | [
{
"paragraph_id": 0,
"text": "The Epistle to the Hebrews (Ancient Greek: Πρὸς Ἑβραίους, romanized: Pros Hebraious, lit. 'to the Hebrews') is one of the books of the New Testament.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The text does not mention the name of its author, but was traditionally attributed to Paul the Apostle. Most of the Ancient Greek manuscripts, the Old Syriac Peshitto and some of the Old Latin manuscripts have the epistle to the Hebrews among Paul's letters. However, doubt on Pauline authorship in the Roman Church is reported by Eusebius. Modern biblical scholarship considers its authorship unknown, written in deliberate imitation of the style of Paul, with some contending that it was authored by Priscilla and Aquila or Silas.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Scholars of Greek consider its writing to be more polished and eloquent than any other book of the New Testament, and \"the very carefully composed and studied Greek of Hebrews is not Paul's spontaneous, volatile contextual Greek\". The book has earned the reputation of being a masterpiece. It has also been described as an intricate New Testament book. Some scholars believe it was written for Jewish Christians who lived in Jerusalem. Its essential purpose was to exhort Christians to persevere in the face of persecution. At this time, certain believers were considering turning back to Judaism and to the Jewish system of law to escape being persecuted for believing Christ to be the messiah. The theme of the epistle is the teaching of the person of Christ and his role as mediator between God and humanity.",
"title": ""
},
{
"paragraph_id": 3,
"text": "According to traditional scholarship, the author of the Epistle to the Hebrews, following in the footsteps of Paul, argued that Jewish Law had played a legitimate role in the past but was superseded by a New Covenant for the Gentiles (cf. Romans 7:1–6; Galatians 3:23–25; Hebrews 8, 10). However, a growing number of scholars note that the terms Gentile, Christian and Christianity are not present in the text and posit that Hebrews was written for a Jewish audience, and is best seen as a debate between Jewish followers of Jesus and mainstream Judaism. In tone, and detail, Hebrews goes beyond Paul and attempts a more complex, nuanced, and openly adversarial definition of the relationship. The epistle opens with an exaltation of Jesus as \"the radiance of God's glory, the express image of his being, and upholding all things by his powerful word\" (Hebrews 1:1–3). The epistle presents Jesus with the titles \"pioneer\" or \"forerunner\", \"Son\" and \"Son of God\", \"priest\" and \"high priest\". The epistle casts Jesus as both exalted Son and High Priest, a unique dual Christology.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Hebrews uses Old Testament quotations interpreted in light of first-century rabbinical Judaism. New Testament and Second Temple Judaism scholar Eric Mason argues that the conceptual background of the priestly Christology of the Epistle to the Hebrews closely parallels presentations of the messianic priest and Melchizedek in the Qumran scrolls. In both Hebrews and Qumran, a priestly figure is discussed in the context of a Davidic figure; in both cases a divine decree appoints the priests to their eschatological duty; both priestly figures offer an eschatological sacrifice of atonement. Although the author of Hebrews was not directly influenced by Qumran's \"Messiah of Aaron\", these and other conceptions did provide \"a precedent... to conceive Jesus similarly as a priest making atonement and eternal intercession in the heavenly sanctuary\".",
"title": "Composition"
},
{
"paragraph_id": 5,
"text": "By the end of the first century there was no consensus on the author's identity. Clement of Rome, Barnabas, Paul the Apostle, and other names were proposed. Others later suggested Luke the Evangelist, Apollos, or his teacher Priscilla as possible authors.",
"title": "Composition"
},
{
"paragraph_id": 6,
"text": "In the 3rd century, Origen wrote of the letter:",
"title": "Composition"
},
{
"paragraph_id": 7,
"text": "In the epistle entitled To The Hebrews the diction does not exhibit the characteristic roughness of speech or phraseology admitted by the Apostle [Paul] himself, the construction of the sentences is closer to the Greek usage, as anyone capable of recognising differences of style would agree. On the other hand the matter of the epistle is wonderful, and quite equal to the Apostle's acknowledged writings: the truth of this would be admitted by anyone who has read the Apostle carefully... If I were asked my personal opinion, I would say that the matter is the Apostle's but the phraseology and construction are those of someone who remembered the Apostle's teaching and wrote his own interpretation of what his master had said. So if any church regards this epistle as Paul's, it should be commended for so doing, for the primitive Church had every justification for handing it down as his. Who wrote the epistle is known to God alone: the accounts that have reached us suggest that it was either Clement, who became Bishop of Rome, or Luke, who wrote the gospel and the Acts.",
"title": "Composition"
},
{
"paragraph_id": 8,
"text": "Matthew J. Thomas argues that Origen was not denying Paul's authorship of Hebrews in that quote, but that he was only meaning that Paul would have employed an amanuensis to compose the letter. He points out that in other writings and quotations of Hebrews, Origen describes Paul as the author of the letter.",
"title": "Composition"
},
{
"paragraph_id": 9,
"text": "In the 4th century, Jerome and Augustine of Hippo supported Paul's authorship: the Church largely agreed to include Hebrews as the fourteenth letter of Paul, and affirmed this authorship until the Reformation. Scholars argued that in the 13th chapter of Hebrews, Timothy is referred to as a companion. Timothy was Paul's missionary companion in the same way Jesus sent disciples out in pairs. The writer also states that he wrote the letter from \"Italy\", which also at the time fits Paul. The difference in style is explained as simply an adjustment to a distinct audience, to the Jewish Christians who were being persecuted and pressured to go back to traditional Judaism.",
"title": "Composition"
},
{
"paragraph_id": 10,
"text": "Many scholars now believe that the author was one of Paul's pupils or associates, citing stylistic differences between Hebrews and the other Pauline epistles. Recent scholarship has favored the idea that the author was probably a leader of a predominantly Jewish congregation to whom they were writing.",
"title": "Composition"
},
{
"paragraph_id": 11,
"text": "Because of its anonymity, it had some trouble being accepted as part of the Christian canon, being classed with the Antilegomena. Eventually it was accepted as Scripture because of its sound theology, eloquent presentation, and other intrinsic factors. In antiquity, certain circles began to ascribe it to Paul in an attempt to provide the anonymous work with an explicit apostolic pedigree.",
"title": "Composition"
},
{
"paragraph_id": 12,
"text": "The original King James Version of the Bible titled the work \"The Epistle of Paul the Apostle to the Hebrews\". However, the KJV's attribution to Paul was only a guess, and is currently disputed by recent research. Its vastly different style, different theological focus, different spiritual experience and different Greek vocabulary are all believed to make Paul's authorship of Hebrews increasingly indefensible. At present, modern scholarship does not ascribe Hebrews to Paul.",
"title": "Composition"
},
{
"paragraph_id": 13,
"text": "A.J. Gordon ascribes the authorship of Hebrews to Priscilla, writing that \"It is evident that the Holy Spirit made this woman Priscilla a teacher of teachers\". Later proposed by Adolf von Harnack in 1900, Harnack's reasoning won the support of prominent Bible scholars of the early twentieth century. Harnack believes the letter was written in Rome – not to the Church, but to the inner circle. In setting forth his evidence for Priscillan authorship, he finds it amazing that the name of the author was blotted out by the earliest tradition. Citing Hebrews 13, he says it was written by a person of \"high standing and apostolic teacher of equal rank with Timothy\". If Luke, Clement, Barnabas, or Apollos had written it, Harnack believes their names would not have been obliterated.",
"title": "Composition"
},
{
"paragraph_id": 14,
"text": "Donald Guthrie's commentary The Letter to the Hebrews (1983) mentions Priscilla by name as a suggested author.",
"title": "Composition"
},
{
"paragraph_id": 15,
"text": "Believing the author to have been Priscilla, Ruth Hoppin posits that the name was omitted either to suppress its female authorship, or to protect the letter itself from suppression.",
"title": "Composition"
},
{
"paragraph_id": 16,
"text": "Also convinced that Priscilla was the author of Hebrews, Gilbert Bilezikian, professor of biblical studies at Wheaton College, remarks on \"the conspiracy of anonymity in the ancient church,\" and reasons: \"The lack of any firm data concerning the identity of the author in the extant writings of the church suggests a deliberate blackout more than a case of collective loss of memory.\"",
"title": "Composition"
},
{
"paragraph_id": 17,
"text": "Despite some theories of Hebrews being authored by Priscilla, a majority of scholars hold that the author was presumably male, since he refers to himself using a masculine participle in 11:32: \"would fail me to tell\".",
"title": "Composition"
},
{
"paragraph_id": 18,
"text": "Bob Anderson (2023) and others consider its author to be Silas. Anderson gives ten reasons for thinking this.",
"title": "Composition"
},
{
"paragraph_id": 19,
"text": "The use of tabernacle terminology in Hebrews has been used to date the epistle before the destruction of the temple, the idea being that knowing about the destruction of both Jerusalem and the temple would have influenced the development of the author's overall argument. Therefore, the most probable date for its composition is the second half of the year 63 or the beginning of 64, according to the Catholic Encyclopedia.",
"title": "Composition"
},
{
"paragraph_id": 20,
"text": "The text itself, for example, makes a contrast between the resurrected Christ \"in heaven\" \"who serves in the sanctuary, the true tabernacle set up by the Lord\" and the version on earth, where \"there are already priests who offer the gifts prescribed by the law. They serve at a sanctuary that is a copy and shadow of what is in heaven.\" (NIV version)",
"title": "Composition"
},
{
"paragraph_id": 21,
"text": "Despite this, some scholars, such as Harold Attridge and Ellen Aitken, hold to a later date of composition, between 70 and 100 AD.",
"title": "Composition"
},
{
"paragraph_id": 22,
"text": "Scholars have suggested that Hebrews is part of an internal New Testament debate between the extreme Judaizers (who argued that non-Jews must convert to Judaism before they can receive the Holy Spirit of Jesus' New Covenant) versus the extreme antinomians (who argued that Jews must reject God's commandments and that Jewish law was no longer in effect). James and Paul represent the moderates of each faction, respectively, and Peter may have served as moderator.",
"title": "Audience"
},
{
"paragraph_id": 23,
"text": "It sets before the Jew the claims of Christianity – to bring the Jew to the full realization of the relation of Judaism to Christianity, to make clear that Christ has fulfilled those temporary and provisional institutions, and has thus abolished them. This view is commonly referred to as supersessionism. According to the theology of supersessionism, the church replaces Israel, and thus the church takes the place of Israel as the people of God. The dominant interpretation in modern Hebrews scholarship has been that the epistle contains an implicit supersessionist claim (that the Levitical sacrifices and the Levitical priests have been replaced/superseded by Christ's sacrifice). Per Bibliowicz, Hebrews scholars may be divided into those that are supportive-sympathetic to the epistle's theological message, those that are critical of the epistle's supersessionary message, and those attempting a middle ground.",
"title": "Audience"
},
{
"paragraph_id": 24,
"text": "Due to the importance of Hebrews for the formation of future Christian attitudes toward Jews and Judaism, a distinction must be made between the author's intent and the way in which the text was interpreted by future generations. The impact of the deployment and implementation of supersession theology is difficult to convey and grasp. The implementation of this theological claim eventually led to the negation and disenfranchisement of the Jewish followers of Jesus, and later, of all Jews.",
"title": "Audience"
},
{
"paragraph_id": 25,
"text": "Those to whom Hebrews is written seem to have begun to doubt whether Jesus could really be the Messiah for whom they were waiting, because they believed the Messiah prophesied in the Hebrew Scriptures was to come as a militant king and destroy the enemies of his people. In contrast, Jesus came as a man of no social standing who was slandered, arrested and condemned by the Jewish leaders and who suffered and was crucified by the Romans. Although he was seen resurrected, he still left the earth and his people, who now faced persecution rather than victory. The Book of Hebrews argues that the Hebrew Scriptures also foretold that the Messiah would be a priest (although of a different sort than the traditional Levitical priests) and Jesus came to fulfill this role, as a sacrificial offering to God, to atone for sins. His role of a king is yet to come, and so those who follow him should be patient and not be surprised that they suffer for now.",
"title": "Purpose for writing"
},
{
"paragraph_id": 26,
"text": "Some scholars today believe the document was written to prevent apostasy. Some have interpreted apostasy to mean a number of different things, such as a group of Christians in one sect leaving for another more conservative sect, one of which the author disapproves. Some have seen apostasy as a move from the Christian assembly to pagan ritual. In light of a possibly Jewish-Christian audience, the apostasy in this sense may be in regard to Jewish Christians leaving the Christian assembly to return to the Jewish synagogue. The author writes, \"Let us hold fast to our confession\". The epistle has been viewed as a long, rhetorical argument for having confidence in the new way to God revealed in Jesus Christ.",
"title": "Purpose for writing"
},
{
"paragraph_id": 27,
"text": "The book could be argued to affirm special creation. It says that God by his Son, Jesus Christ, made the worlds. \"God [...] hath in these last days spoken unto us by his Son [...] by whom also he made the worlds\". The epistle also emphasizes the importance of faith. \"Through faith we understand that the worlds were framed by the word of God, so that things which are seen were not made of things which do appear\".",
"title": "Purpose for writing"
},
{
"paragraph_id": 28,
"text": "...the Epistle opens with the solemn announcement of the superiority of the New Testament Revelation by the Son over Old Testament Revelation by the prophets. It then proves and explains from the Scriptures the superiority of this New Covenant over the Old by the comparison of the Son with the angels as mediators of the Old Covenant, with Moses and Joshua as the founders of the Old Covenant, and finally, by opposing the high-priesthood of Christ after the order of Melchisedech to the Levitical priesthood after the order of Aaron.",
"title": "Purpose for writing"
},
{
"paragraph_id": 29,
"text": "Hebrews is a very consciously \"literary\" document. The purity of its Greek was noted by Clement of Alexandria, according to Eusebius, and Origen of Alexandria asserted that every competent judge must recognize a great difference between this epistle and those of Paul.",
"title": "Style"
},
{
"paragraph_id": 30,
"text": "The letter consists of two strands: an expositional or doctrinal strand, and a hortatory or strongly urging strand which punctuates the exposition parenthetically at key points as warnings to the readers.",
"title": "Style"
},
{
"paragraph_id": 31,
"text": "Hebrews does not fit the form of a traditional Hellenistic epistle, lacking a proper prescript. Modern scholars generally believe this book was originally a sermon or homily, although possibly modified after it was delivered to include the travel plans, greetings and closing.",
"title": "Style"
},
{
"paragraph_id": 32,
"text": "Hebrews contains many references to the Old Testament – specifically to the Septuagint text.",
"title": "Style"
},
{
"paragraph_id": 33,
"text": "The Epistle to the Hebrews is notable for the manner in which it expresses the divine nature of Christ. As A.C. Purdy summarized for The Interpreter's Bible:",
"title": "Christology"
},
{
"paragraph_id": 34,
"text": "We may sum up our author's Christology negatively by saying that he has nothing to do with the older Hebrew messianic hopes of a coming Son of David, who would be a divinely empowered human leader to bring in the kingdom of God on earth; and that while he still employs the figure of a militant, apocalyptic king [...] who will come again [...], this is not of the essence of his thought about Christ. Positively, our author presents Christ as divine in nature, and solves any possible objection to a divine being who participates in human experience, especially in the experience of death, by the priestly analogy. He seems quite unconscious of the logical difficulties of his position proceeding from the assumption that Christ is both divine and human, at least human in experience although hardly in nature.",
"title": "Christology"
},
{
"paragraph_id": 35,
"text": "Mikeal Parsons has commented:",
"title": "Christology"
},
{
"paragraph_id": 36,
"text": "If the humanity of Jesus is an important theme for Hebrews, how much more is Jesus' deity. While this theme of exaltation is asserted 'in many and various ways' we shall content ourselves by considering how the writer addresses this theme by asserting Jesus' superiority to a) angels, and b) Moses. The first chapter of Hebrews stresses the superiority of the Son to the angels. The very name 'Son' indicates superiority. This exaltation theme, in which the Son is contrasted with the angels (1:4), is expanded in the following string of OT quotations (1:5–13). While some have understood the catena as referring primarily to Christ's pre-existence, it is more likely that the verses should be understood, 'as a Christological hymn which traces the entire Christ event, including the pre-existence, earthly life, and exaltation of Christ'. The overall structure of the catena seems to point to exaltation as the underlying motif... At least it may be concluded that the superiority of the Son is demonstrated by this comparison/contrast with angels.",
"title": "Christology"
},
{
"paragraph_id": 37,
"text": "Peter Rhea Jones has reminded us that 'Moses is not merely one of the figures compared unfavourably to Jesus'; but rather, 'Moses and Jesus are yoked throughout the entirety of the epistle'. Allowing that Moses is much more than a 'whipping boy' for the author, the fact remains that the figure Moses is utilized as a basis for Christology. While there are several references to Moses, only two will be needed to demonstrate Jesus' superiority. The first passage to be considered is Hebrews 3:1–6. D'Angelo and others regard the larger context of this passage (3:1–4:16) to be the superiority of Christ's message to the Law. While the comparison between Jesus and the angels is based on a number of OT citations, the comparison of Jesus and Moses turns on a single verse, Nu. 12:7. Like the angels (1:14), Moses was a servant who witnessed, as it were, to the Son. In other words, 'faithful Sonship is superior to faithful servantship'. The Son is once again exalted. The exaltation theme finds expression in a more opaque way at 11:26. Here in the famous chapter on faith in which Moses is said to count 'abuse suffered for the Christ greater wealth than the treasures of Egypt'. The portrait of Moses drawn here is that of a martyr, and a Christian martyr at that. In effect, Moses joins that great cloud of witnesses who looked to Jesus as pioneer and perfecter of faith. Once again, Christ's superiority is asserted, this time over Moses and the entire Mosaic epoch.",
"title": "Christology"
},
{
"paragraph_id": 38,
"text": "In summary, the writer [of Hebrews] stressed the Sonship of Jesus and expressed it in a three-stage Christology of pre-existence, humanity, and exaltation.",
"title": "Christology"
},
{
"paragraph_id": 39,
"text": "Online translations of the Epistle to the Hebrews:",
"title": "External links"
},
{
"paragraph_id": 40,
"text": "Other:",
"title": "External links"
}
]
| The Epistle to the Hebrews is one of the books of the New Testament. The text does not mention the name of its author, but was traditionally attributed to Paul the Apostle. Most of the Ancient Greek manuscripts, the Old Syriac Peshitto and some of the Old Latin manuscripts have the epistle to the Hebrews among Paul's letters. However, doubt on Pauline authorship in the Roman Church is reported by Eusebius. Modern biblical scholarship considers its authorship unknown, written in deliberate imitation of the style of Paul, with some contending that it was authored by Priscilla and Aquila or Silas. Scholars of Greek consider its writing to be more polished and eloquent than any other book of the New Testament, and "the very carefully composed and studied Greek of Hebrews is not Paul's spontaneous, volatile contextual Greek". The book has earned the reputation of being a masterpiece. It has also been described as an intricate New Testament book. Some scholars believe it was written for Jewish Christians who lived in Jerusalem. Its essential purpose was to exhort Christians to persevere in the face of persecution. At this time, certain believers were considering turning back to Judaism and to the Jewish system of law to escape being persecuted for believing Christ to be the messiah. The theme of the epistle is the teaching of the person of Christ and his role as mediator between God and humanity. According to traditional scholarship, the author of the Epistle to the Hebrews, following in the footsteps of Paul, argued that Jewish Law had played a legitimate role in the past but was superseded by a New Covenant for the Gentiles. However, a growing number of scholars note that the terms Gentile, Christian and Christianity are not present in the text and posit that Hebrews was written for a Jewish audience, and is best seen as a debate between Jewish followers of Jesus and mainstream Judaism. In tone, and detail, Hebrews goes beyond Paul and attempts a more complex, nuanced, and openly adversarial definition of the relationship. The epistle opens with an exaltation of Jesus as "the radiance of God's glory, the express image of his being, and upholding all things by his powerful word". The epistle presents Jesus with the titles "pioneer" or "forerunner", "Son" and "Son of God", "priest" and "high priest". The epistle casts Jesus as both exalted Son and High Priest, a unique dual Christology. | 2001-10-06T20:56:44Z | 2023-12-31T22:06:35Z | [
"Template:Short description",
"Template:Overcited",
"Template:Notelist",
"Template:S-hou",
"Template:Books of the New Testament",
"Template:Lang-grc",
"Template:ISBN",
"Template:ISSN",
"Template:Wikiquote",
"Template:Who",
"Template:Bibleverse",
"Template:Webarchive",
"Template:Wikisource",
"Template:S-start",
"Template:Blockquote",
"Template:Cite EB1911",
"Template:S-bef",
"Template:Sfn",
"Template:Cite web",
"Template:Librivox book",
"Template:S-end",
"Template:Authority control",
"Template:Rp",
"Template:S-ttl",
"Template:Epistle to the Hebrews",
"Template:Books of the Bible",
"Template:Efn",
"Template:Main",
"Template:Cite book",
"Template:See also",
"Template:Reflist",
"Template:Doi",
"Template:Cite journal",
"Template:Snd",
"Template:S-aft"
]
| https://en.wikipedia.org/wiki/Epistle_to_the_Hebrews |
9,902 | Esther | Esther (originally Hadassah) is the eponymous heroine of the Book of Esther. The story the book tells is as follows: Ahasuerus, the king of the Persian Achaemenid Empire, falls in love with the beautiful Jewish woman Esther and makes her his Queen. His grand vizier, Haman, is offended by Esther's cousin and guardian, Mordecai, who refuses to prostrate himself before Haman. Haman plots to have all the Jews in Persia killed, and convinces Ahasuerus to permit him to do so. However, Esther foils the plan by revealing Haman's eradication plans to Ahasuerus, who then has Haman executed and grants permission to the Jews to kill their enemies.
The Book of Esther provides the traditional explanation for the Jewish holiday of Purim, celebrated on the date given in the story for when Haman's order was to go into effect, which is the day that the Jews killed their enemies after the plan was reversed. The book exists in two related forms: a shorter Biblical Hebrew-sourced version found in Jewish and Protestant Bibles, and a longer Koine Greek-sourced version found in Catholic and Orthodox Bibles.
When she is introduced, in Esther 2:7, she is first referred to by the Hebrew name Hadassah. This name is absent from the early Greek manuscripts, although present in the targumic texts, and was probably added to the Hebrew text in the 2nd century CE at the earliest to stress the heroine's Jewishness. The name "Esther" probably derives from the name of the Babylonian goddess Ishtar or from the Persian word cognate with the English word "star" (implying an association with Ishtar) though some scholars contend it is related to the Persian words for "woman" or "myrtle".
In the third year of the reign of King Ahasuerus of Persia the king banishes his queen, Vashti, and seeks a new queen. Beautiful maidens gather together at the harem in the citadel of Susa under the authority of the eunuch Hegai.
Esther, a cousin of Mordecai, was a member of the Jewish community in the Exilic Period who claimed as an ancestor Kish, a Benjamite who had been taken from Jerusalem into captivity. She was the orphaned daughter of Mordecai's uncle, another Benjamite named Abihail. Upon the king's orders, Esther is taken to the palace where Hegai prepares her to meet the king. Even as she advances to the highest position of the harem, perfumed with gold and myrrh and allocated certain foods and servants, she is under strict instructions from Mordecai, who meets with her each day, to conceal her Jewish origins. The king falls in love with her and makes her his Queen.
Following Esther's coronation, Mordecai learns of an assassination plot by Bigthan and Teresh to kill King Ahasuerus. Mordecai tells Esther, who tells the king in the name of Mordecai, and he is saved. This act of great service to the king is recorded in the Annals of the Kingdom.
After Mordecai saves the king's life, Haman the Agagite is made Ahasuerus' highest adviser, and orders that everyone bow down to him. When Mordecai (who had stationed himself in the street to advise Esther) refuses to bow to him, Haman pays King Ahasuerus 10,000 silver talents for the right to exterminate all of the Jews in Ahasuerus' kingdom. Haman casts lots, Purim, using supernatural means, and sees that the thirteenth day of the Month of Adar is a fortuitous day for the genocide. Using the seal of the king, in the name of the king, Haman sends an order to the provinces of the kingdom to allow the extermination of the Jews on the thirteenth of Adar. When Mordecai learns of this, he tells Esther to reveal to the king that she is Jewish and ask that he repeal the order. Esther hesitates, saying that she could be put to death if she goes to the king without being summoned; nevertheless, Mordecai urges her to try. Esther asks that the entire Jewish community fast and pray for three days before she goes to see the king; Mordecai agrees.
On the third day, Esther goes to the courtyard in front of the king's palace, and she is welcomed by the king, who stretches out his scepter for her to touch, and offers her anything she wants "up to half of the kingdom". Esther invites the king and Haman to a banquet she has prepared for the next day. She tells the king she will reveal her request at the banquet. During the banquet, the king repeats his offer again, whereupon Esther invites both the king and Haman to a banquet she is making on the following day as well.
Seeing that he is in favor with the king and queen, Haman takes counsel from his wife and friends to build a gallows upon which to hang Mordecai; as he is in their good favors, he believes he will be granted his wish to hang Mordecai the very next day. After building the gallows, Haman goes to the palace in the middle of the night to wait for the earliest moment he can see the king.
That evening, the king, unable to sleep, asks that the Annals of the Kingdom be read to him so that he will become drowsy. The book miraculously opens to the page telling of Mordecai's great service, and the king asks if he had already received a reward. When his attendants answer in the negative, Ahasuerus is suddenly distracted and demands to know who is standing in the palace courtyard in the middle of the night. The attendants answer that it is Haman. Ahasuerus invites Haman into his room. Haman, instead of requesting that Mordecai be hanged, is ordered to take Mordecai through the streets of the capital on the Royal Horse wearing the royal robes. Haman is also instructed to yell, "This is what shall be done to the man whom the king wishes to honor!"
After spending the entire day honoring Mordecai, Haman rushes to Esther's second banquet, where Ahasuerus is already waiting. Ahasuerus repeats his offer to Esther of anything "up to half of the kingdom". Esther tells Ahasuerus that while she appreciates the offer, she must put before him a more basic issue: she explains that there is a person plotting to kill her and her entire people, and that this person's intentions are to harm the king and the kingdom. When Ahasuerus asks who this person is, Esther points to Haman and names him. Upon hearing this, an enraged Ahasuerus goes out to the garden to calm down and consider the situation.
While Ahasuerus is in the garden, Haman throws himself at Esther's feet asking for mercy. Upon returning from the garden, the king is further enraged. As it was the custom to eat on reclining couches, it appears to the king as if Haman is attacking Esther. He orders Haman to be removed from his sight. While Haman is being led out, Harvona, a civil servant, tells the king that Haman had built a gallows for Mordecai, "who had saved the king's life". In response, the king says "Hang him (Haman) on it".
After Haman is put to death, Ahasuerus gives Haman's estate to Esther. Esther tells the king about Mordecai being her relative, and the king makes Mordecai his adviser. When Esther asks the king to revoke the order exterminating the Jews, the king is initially hesitant, saying that an order issued by the king cannot be repealed. Ahasuerus allows Esther and Mordecai to draft another order, with the seal of the king and in the name of the king, to allow the Jewish people to defend themselves and fight with their oppressors on the thirteenth day of Adar.
On the thirteenth day of Adar, the same day that Haman had set for them to be killed, the Jews defend themselves in all parts of the kingdom and rest on the fourteenth day of Adar. The fourteenth day of Adar is celebrated with the giving of charity, exchanging foodstuffs, and feasting. In Susa, the Jews of the capital were given another day to kill their oppressors; they rested and celebrated on the fifteenth day of Adar, again giving charity, exchanging foodstuffs, and feasting as well.
The Jews established an annual feast, the feast of Purim, in memory of their deliverance. Haman having set the date of the thirteenth of Adar to commence his campaign against the Jews, this determined the date of the festival of Purim.
Although the details of the setting are entirely plausible and the story may even have some basis in actual events, there is general agreement among scholars that the book of Esther is a work of fiction. Persian kings did not marry outside of seven Persian noble families, making it unlikely that there was a Jewish queen Esther. Further, the name Ahasuerus can be translated to Xerxes, as both derive from the Persian Khshayārsha. Ahasuerus as described in the Book of Esther is usually identified in modern sources as Xerxes I, who ruled between 486 and 465 BCE, as it is to this monarch that the events described in Esther are thought to fit most closely. Xerxes I's queen was Amestris, further highlighting the fictitious nature of the story.
Some scholars speculate that the story was created to justify the Jewish appropriation of an originally non-Jewish feast. The festival which the book explains is Purim, a name explained as meaning "lot", from the Babylonian word puru. One popular theory says the festival has its origins in a historicized Babylonian myth or ritual in which Mordecai and Esther represent the Babylonian gods Marduk and Ishtar, while others trace the ritual to the Persian New Year, and scholars have surveyed other theories in their works. Some scholars have defended the story as real history, but the attempt to find a historical kernel to the narrative "is likely to be futile".
The Book of Esther begins by portraying Esther as beautiful and obedient, though a relatively passive figure. Throughout the story, she evolves into a character who takes a decisive role in her own future and that of her people. According to Sidnie White Crawford, "Esther's position in a male court mirrors that of the Jews in a Gentile world, with the threat of danger ever present below the seemingly calm surface." Esther is compared to Daniel in that both represent a "type" for Jews living in Diaspora, and hoping to live a successful life in an alien environment.
According to Susan Zaeske, by virtue of the fact that Esther used only rhetoric to convince the king to save her people, the story of Esther is a "rhetoric of exile and empowerment that, for millennia, has notably shaped the discourse of marginalized peoples such as Jews, women, and African Americans", persuading those who have power over them.
Modern day Persian Jews are called "Esther's Children". A building venerated as being the Tomb of Esther and Mordechai is located in Hamadan, Iran, although the village of Kfar Bar'am in northern Israel also claims to be the burial place of Queen Esther.
Throughout history, many artists have created paintings depicting Esther. Notable early portrayals include the Heilspiegel Altarpiece by Konrad Witz and Esther Before Ahasuerus by Tintoretto (1546–47, Royal Collection) which show Esther appearing before the king to beg mercy for the Jews, despite the punishment for appearing without being summoned being death. This scene became one of the most commonly depicted parts of the story.
Esther's faint had not often been depicted in art before Tintoretto. It is shown in the series of cassone scenes of the Life of Esther attributed variously to Sandro Botticelli and Filippino Lippi from the 1470s. In other cassone depictions, for example by Filippino Lippi, Esther's readiness to show herself before the court is contrasted to Vashti's refusal to expose herself to the public assembly.
Esther was regarded in Catholic theology as a typological forerunner of the Virgin Mary in her role as intercessor. Her regal election parallels Mary's Assumption, and as she becomes queen of Persia, Mary becomes queen of heaven; Mary's epithet as 'stella maris' parallels Esther as a 'star', and both figure as sponsors of the humble before the powerful. Contemporary viewers would likely have recognized a similarity between the faint and the common motif of the Swoon of the Virgin, seen in many depictions of the Crucifixion of Jesus. Esther's fainting became a popular subject in the Baroque painting of the following century. A notable Baroque example is Esther Before Ahasuerus by Artemisia Gentileschi.
Esther is commemorated as a matriarch in the Calendar of Saints of the Lutheran Church–Missouri Synod on May 24.
Esther is recognized as a saint in the Eastern Orthodox Church, commemorated on the Sunday before Christmas. "The Septuagint edition of Esther contains six parts (totaling 107 verses) not found in the Hebrew Bible. Although these interpretations originally may have been composed in Hebrew, they survive only in Greek texts. Because the Hebrew Bible's version of Esther's story contains neither prayers nor even a single reference to God, Greek redactors apparently felt compelled to give the tale a more explicit religious orientation, alluding to "God" or the "Lord" fifty times." These additions to Esther in the Apocrypha were added approximately in the second or first century BCE.
The story of Esther is also referenced in chapter 28 of 1 Meqabyan, a book considered canonical in the Ethiopian Orthodox Tewahedo Church. | [
{
"paragraph_id": 0,
"text": "Esther (originally Hadassah) is the eponymous heroine of the Book of Esther. The story the book tells is as follows: Ahasuerus, the king of the Persian Achaemenid Empire, falls in love with the beautiful Jewish woman Esther and makes her his Queen. His grand vizier, Haman, is offended by Esther's cousin and guardian, Mordecai, who refuses to prostrate himself before Haman. Haman plots to have all the Jews in Persia killed, and convinces Ahasuerus to permit him to do so. However, Esther foils the plan by revealing Haman's eradication plans to Ahasuerus, who then has Haman executed and grants permission to the Jews to kill their enemies.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Book of Esther provides the traditional explanation for the Jewish holiday of Purim, celebrated on the date given in the story for when Haman's order was to go into effect, which is the day that the Jews killed their enemies after the plan was reversed. The book exists in two related forms: a shorter Biblical Hebrew-sourced version found in Jewish and Protestant Bibles, and a longer Koine Greek-sourced version found in Catholic and Orthodox Bibles.",
"title": ""
},
{
"paragraph_id": 2,
"text": "When she is introduced, in Esther 2:7, she is first referred to by the Hebrew name Hadassah. This name is absent from the early Greek manuscripts, although present in the targumic texts, and was probably added to the Hebrew text in the 2nd century CE at the earliest to stress the heroine's Jewishness. The name \"Esther\" probably derives from the name of the Babylonian goddess Ishtar or from the Persian word cognate with the English word \"star\" (implying an association with Ishtar) though some scholars contend it is related to the Persian words for \"woman\" or \"myrtle\".",
"title": "Name"
},
{
"paragraph_id": 3,
"text": "In the third year of the reign of King Ahasuerus of Persia the king banishes his queen, Vashti, and seeks a new queen. Beautiful maidens gather together at the harem in the citadel of Susa under the authority of the eunuch Hegai.",
"title": "Narrative"
},
{
"paragraph_id": 4,
"text": "Esther, a cousin of Mordecai, was a member of the Jewish community in the Exilic Period who claimed as an ancestor Kish, a Benjamite who had been taken from Jerusalem into captivity. She was the orphaned daughter of Mordecai's uncle, another Benjamite named Abihail. Upon the king's orders, Esther is taken to the palace where Hegai prepares her to meet the king. Even as she advances to the highest position of the harem, perfumed with gold and myrrh and allocated certain foods and servants, she is under strict instructions from Mordecai, who meets with her each day, to conceal her Jewish origins. The king falls in love with her and makes her his Queen.",
"title": "Narrative"
},
{
"paragraph_id": 5,
"text": "Following Esther's coronation, Mordecai learns of an assassination plot by Bigthan and Teresh to kill King Ahasuerus. Mordecai tells Esther, who tells the king in the name of Mordecai, and he is saved. This act of great service to the king is recorded in the Annals of the Kingdom.",
"title": "Narrative"
},
{
"paragraph_id": 6,
"text": "After Mordecai saves the king's life, Haman the Agagite is made Ahasuerus' highest adviser, and orders that everyone bow down to him. When Mordecai (who had stationed himself in the street to advise Esther) refuses to bow to him, Haman pays King Ahasuerus 10,000 silver talents for the right to exterminate all of the Jews in Ahasuerus' kingdom. Haman casts lots, Purim, using supernatural means, and sees that the thirteenth day of the Month of Adar is a fortuitous day for the genocide. Using the seal of the king, in the name of the king, Haman sends an order to the provinces of the kingdom to allow the extermination of the Jews on the thirteenth of Adar. When Mordecai learns of this, he tells Esther to reveal to the king that she is Jewish and ask that he repeal the order. Esther hesitates, saying that she could be put to death if she goes to the king without being summoned; nevertheless, Mordecai urges her to try. Esther asks that the entire Jewish community fast and pray for three days before she goes to see the king; Mordecai agrees.",
"title": "Narrative"
},
{
"paragraph_id": 7,
"text": "On the third day, Esther goes to the courtyard in front of the king's palace, and she is welcomed by the king, who stretches out his scepter for her to touch, and offers her anything she wants \"up to half of the kingdom\". Esther invites the king and Haman to a banquet she has prepared for the next day. She tells the king she will reveal her request at the banquet. During the banquet, the king repeats his offer again, whereupon Esther invites both the king and Haman to a banquet she is making on the following day as well.",
"title": "Narrative"
},
{
"paragraph_id": 8,
"text": "Seeing that he is in favor with the king and queen, Haman takes counsel from his wife and friends to build a gallows upon which to hang Mordecai; as he is in their good favors, he believes he will be granted his wish to hang Mordecai the very next day. After building the gallows, Haman goes to the palace in the middle of the night to wait for the earliest moment he can see the king.",
"title": "Narrative"
},
{
"paragraph_id": 9,
"text": "That evening, the king, unable to sleep, asks that the Annals of the Kingdom be read to him so that he will become drowsy. The book miraculously opens to the page telling of Mordecai's great service, and the king asks if he had already received a reward. When his attendants answer in the negative, Ahasuerus is suddenly distracted and demands to know who is standing in the palace courtyard in the middle of the night. The attendants answer that it is Haman. Ahasuerus invites Haman into his room. Haman, instead of requesting that Mordecai be hanged, is ordered to take Mordecai through the streets of the capital on the Royal Horse wearing the royal robes. Haman is also instructed to yell, \"This is what shall be done to the man whom the king wishes to honor!\"",
"title": "Narrative"
},
{
"paragraph_id": 10,
"text": "After spending the entire day honoring Mordecai, Haman rushes to Esther's second banquet, where Ahasuerus is already waiting. Ahasuerus repeats his offer to Esther of anything \"up to half of the kingdom\". Esther tells Ahasuerus that while she appreciates the offer, she must put before him a more basic issue: she explains that there is a person plotting to kill her and her entire people, and that this person's intentions are to harm the king and the kingdom. When Ahasuerus asks who this person is, Esther points to Haman and names him. Upon hearing this, an enraged Ahasuerus goes out to the garden to calm down and consider the situation.",
"title": "Narrative"
},
{
"paragraph_id": 11,
"text": "While Ahasuerus is in the garden, Haman throws himself at Esther's feet asking for mercy. Upon returning from the garden, the king is further enraged. As it was the custom to eat on reclining couches, it appears to the king as if Haman is attacking Esther. He orders Haman to be removed from his sight. While Haman is being led out, Harvona, a civil servant, tells the king that Haman had built a gallows for Mordecai, \"who had saved the king's life\". In response, the king says \"Hang him (Haman) on it\".",
"title": "Narrative"
},
{
"paragraph_id": 12,
"text": "After Haman is put to death, Ahasuerus gives Haman's estate to Esther. Esther tells the king about Mordecai being her relative, and the king makes Mordecai his adviser. When Esther asks the king to revoke the order exterminating the Jews, the king is initially hesitant, saying that an order issued by the king cannot be repealed. Ahasuerus allows Esther and Mordecai to draft another order, with the seal of the king and in the name of the king, to allow the Jewish people to defend themselves and fight with their oppressors on the thirteenth day of Adar.",
"title": "Narrative"
},
{
"paragraph_id": 13,
"text": "On the thirteenth day of Adar, the same day that Haman had set for them to be killed, the Jews defend themselves in all parts of the kingdom and rest on the fourteenth day of Adar. The fourteenth day of Adar is celebrated with the giving of charity, exchanging foodstuffs, and feasting. In Susa, the Jews of the capital were given another day to kill their oppressors; they rested and celebrated on the fifteenth day of Adar, again giving charity, exchanging foodstuffs, and feasting as well.",
"title": "Narrative"
},
{
"paragraph_id": 14,
"text": "The Jews established an annual feast, the feast of Purim, in memory of their deliverance. Haman having set the date of the thirteenth of Adar to commence his campaign against the Jews, this determined the date of the festival of Purim.",
"title": "Narrative"
},
{
"paragraph_id": 15,
"text": "Although the details of the setting are entirely plausible and the story may even have some basis in actual events, there is general agreement among scholars that the book of Esther is a work of fiction. Persian kings did not marry outside of seven Persian noble families, making it unlikely that there was a Jewish queen Esther. Further, the name Ahasuerus can be translated to Xerxes, as both derive from the Persian Khshayārsha. Ahasuerus as described in the Book of Esther is usually identified in modern sources as Xerxes I, who ruled between 486 and 465 BCE, as it is to this monarch that the events described in Esther are thought to fit most closely. Xerxes I's queen was Amestris, further highlighting the fictitious nature of the story.",
"title": "Historicity"
},
{
"paragraph_id": 16,
"text": "Some scholars speculate that the story was created to justify the Jewish appropriation of an originally non-Jewish feast. The festival which the book explains is Purim, a name explained as meaning \"lot\", from the Babylonian word puru. One popular theory says the festival has its origins in a historicized Babylonian myth or ritual in which Mordecai and Esther represent the Babylonian gods Marduk and Ishtar, while others trace the ritual to the Persian New Year, and scholars have surveyed other theories in their works. Some scholars have defended the story as real history, but the attempt to find a historical kernel to the narrative \"is likely to be futile\".",
"title": "Historicity"
},
{
"paragraph_id": 17,
"text": "The Book of Esther begins by portraying Esther as beautiful and obedient, though a relatively passive figure. Throughout the story, she evolves into a character who takes a decisive role in her own future and that of her people. According to Sidnie White Crawford, \"Esther's position in a male court mirrors that of the Jews in a Gentile world, with the threat of danger ever present below the seemingly calm surface.\" Esther is compared to Daniel in that both represent a \"type\" for Jews living in Diaspora, and hoping to live a successful life in an alien environment.",
"title": "Interpretations"
},
{
"paragraph_id": 18,
"text": "According to Susan Zaeske, by virtue of the fact that Esther used only rhetoric to convince the king to save her people, the story of Esther is a \"rhetoric of exile and empowerment that, for millennia, has notably shaped the discourse of marginalized peoples such as Jews, women, and African Americans\", persuading those who have power over them.",
"title": "Interpretations"
},
{
"paragraph_id": 19,
"text": "Modern day Persian Jews are called \"Esther's Children\". A building venerated as being the Tomb of Esther and Mordechai is located in Hamadan, Iran, although the village of Kfar Bar'am in northern Israel also claims to be the burial place of Queen Esther.",
"title": "Persian culture"
},
{
"paragraph_id": 20,
"text": "Throughout history, many artists have created paintings depicting Esther. Notable early portrayals include the Heilspiegel Altarpiece by Konrad Witz and Esther Before Ahasuerus by Tintoretto (1546–47, Royal Collection) which show Esther appearing before the king to beg mercy for the Jews, despite the punishment for appearing without being summoned being death. This scene became one of the most commonly depicted parts of the story.",
"title": "Artistic Depictions of Esther"
},
{
"paragraph_id": 21,
"text": "Esther's faint had not often been depicted in art before Tintoretto. It is shown in the series of cassone scenes of the Life of Esther attributed variously to Sandro Botticelli and Filippino Lippi from the 1470s. In other cassone depictions, for example by Filippino Lippi, Esther's readiness to show herself before the court is contrasted to Vashti's refusal to expose herself to the public assembly.",
"title": "Artistic Depictions of Esther"
},
{
"paragraph_id": 22,
"text": "Esther was regarded in Catholic theology as a typological forerunner of the Virgin Mary in her role as intercessor. Her regal election parallels Mary's Assumption, and as she becomes queen of Persia, Mary becomes queen of heaven; Mary's epithet as 'stella maris' parallels Esther as a 'star', and both figure as sponsors of the humble before the powerful. Contemporary viewers would likely have recognized a similarity between the faint and the common motif of the Swoon of the Virgin, seen in many depictions of the Crucifixion of Jesus. Esther's fainting became a popular subject in the Baroque painting of the following century. A notable Baroque example is Esther Before Ahasuerus by Artemisia Gentileschi.",
"title": "Artistic Depictions of Esther"
},
{
"paragraph_id": 23,
"text": "Esther is commemorated as a matriarch in the Calendar of Saints of the Lutheran Church–Missouri Synod on May 24.",
"title": "In Christianity"
},
{
"paragraph_id": 24,
"text": "Esther is recognized as a saint in the Eastern Orthodox Church, commemorated on the Sunday before Christmas. \"The Septuagint edition of Esther contains six parts (totaling 107 verses) not found in the Hebrew Bible. Although these interpretations originally may have been composed in Hebrew, they survive only in Greek texts. Because the Hebrew Bible's version of Esther's story contains neither prayers nor even a single reference to God, Greek redactors apparently felt compelled to give the tale a more explicit religious orientation, alluding to \"God\" or the \"Lord\" fifty times.\" These additions to Esther in the Apocrypha were added approximately in the second or first century BCE.",
"title": "In Christianity"
},
{
"paragraph_id": 25,
"text": "The story of Esther is also referenced in chapter 28 of 1 Meqabyan, a book considered canonical in the Ethiopian Orthodox Tewahedo Church.",
"title": "In Christianity"
}
]
| Esther is the eponymous heroine of the Book of Esther. The story the book tells is as follows: Ahasuerus, the king of the Persian Achaemenid Empire, falls in love with the beautiful Jewish woman Esther and makes her his Queen. His grand vizier, Haman, is offended by Esther's cousin and guardian, Mordecai, who refuses to prostrate himself before Haman. Haman plots to have all the Jews in Persia killed, and convinces Ahasuerus to permit him to do so. However, Esther foils the plan by revealing Haman's eradication plans to Ahasuerus, who then has Haman executed and grants permission to the Jews to kill their enemies. The Book of Esther provides the traditional explanation for the Jewish holiday of Purim, celebrated on the date given in the story for when Haman's order was to go into effect, which is the day that the Jews killed their enemies after the plan was reversed. The book exists in two related forms: a shorter Biblical Hebrew-sourced version found in Jewish and Protestant Bibles, and a longer Koine Greek-sourced version found in Catholic and Orthodox Bibles. | 2001-10-07T03:07:56Z | 2023-11-07T15:38:40Z | [
"Template:See also",
"Template:Circa",
"Template:Cite encyclopedia",
"Template:Citation",
"Template:Short description",
"Template:Efn",
"Template:Main",
"Template:Reflist",
"Template:Cite web",
"Template:Sfn whitelist",
"Template:Use dmy dates",
"Template:Sfn",
"Template:Prophets of the Tanakh",
"Template:Purim Footer",
"Template:Further",
"Template:Refbegin",
"Template:Cite news",
"Template:Refend",
"Template:Infobox person",
"Template:Wide image",
"Template:Use Oxford spelling",
"Template:Book of Esther",
"Template:Cite magazine",
"Template:About",
"Template:Citation needed",
"Template:Notelist",
"Template:Commons category",
"Template:Authority control",
"Template:Cite book",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/Esther |
9,903 | Entamoeba | Entamoeba is a genus of Amoebozoa found as internal parasites or commensals of animals. In 1875, Fedor Lösch described the first proven case of amoebic dysentery in St. Petersburg, Russia. He referred to the amoeba he observed microscopically as Amoeba coli; however, it is not clear whether he was using this as a descriptive term or intended it as a formal taxonomic name. The genus Entamoeba was defined by Casagrandi and Barbagallo for the species Entamoeba coli, which is known to be a commensal organism. Lösch's organism was renamed Entamoeba histolytica by Fritz Schaudinn in 1903; he later died, in 1906, from a self-inflicted infection when studying this amoeba. For a time during the first half of the 20th century the entire genus Entamoeba was transferred to Endamoeba, a genus of amoebas infecting invertebrates about which little is known. This move was reversed by the International Commission on Zoological Nomenclature in the late 1950s, and Entamoeba has stayed 'stable' ever since.
Several species are found in humans and animals. Entamoeba histolytica is the pathogen responsible for invasive 'amoebiasis' (which includes amoebic dysentery and amoebic liver abscesses). Others such as Entamoeba coli (not to be confused with Escherichia coli) and Entamoeba dispar are harmless. With the exception of Entamoeba gingivalis, which lives in the mouth, and E. moshkovskii, which is frequently isolated from river and lake sediments, all Entamoeba species are found in the intestines of the animals they infect. Entamoeba invadens is a species that can cause a disease similar to that caused by E. histolytica, but in reptiles. In contrast to other species, E. invadens forms cysts in vitro in the absence of bacteria and is used as a model system to study this aspect of the life cycle. Many other species of Entamoeba have been described, and it is likely that many others remain to be found.
Entamoeba cells are small, with a single nucleus and typically a single lobose pseudopod taking the form of a clear anterior bulge. They have a simple life cycle. The trophozoite (feeding-dividing form) is approximately 10-20 μm in diameter and feeds primarily on bacteria. It divides by simple binary fission to form two smaller daughter cells. Almost all species form cysts, the stage involved in transmission (the exception is Entamoeba gingivalis). Depending on the species, these can have one, four or eight nuclei and are variable in size; these characteristics help in species identification.
Entamoeba belongs to the Archamoebae, which like many other anaerobic eukaryotes have reduced mitochondria. This group also includes Endolimax and Iodamoeba, which also live in animal intestines and are similar in appearance to Entamoeba, although this may partly be due to convergence. Also in this group are the free-living amoebo-flagellates of the genus Mastigamoeba and related genera. Certain other genera of symbiotic amoebae, such as Endamoeba, might prove to be synonyms of Entamoeba but this is still unclear.
Studying Entamoeba invadens, David Biron of the Weizmann Institute of Science and coworkers found that about one third of the cells are unable to separate unaided and recruit a neighboring amoeba (dubbed the "midwife") to complete the fission. He writes:
They also reported a similar behavior in Dictyostelium.
Since E. histolytica does not form cysts in the absence of bacteria, E. invadens has come to be used as a model for encystation studies, as it will form cysts under axenic growth conditions, which simplifies analysis. After inducing encystation in E. invadens, DNA replication increases initially and then slows down. On completion of encystation, predominantly tetra-nucleate cysts are formed along with some uni-, bi- and tri-nucleate cysts.
Uninucleated trophozoites convert into cysts in a process called encystation. The number of nuclei in the cyst varies from 1 to 8 among species and is one of the characteristics used to tell species apart. Of the species already mentioned, Entamoeba coli forms cysts with 8 nuclei while the others form tetra-nucleated cysts. Since E. histolytica does not form cysts in vitro in the absence of bacteria, it is not possible to study the differentiation process in detail in that species. Instead the differentiation process is studied using E. invadens, a reptilian parasite that causes a very similar disease to E. histolytica and which can be induced to encyst in vitro. Until recently there was no genetic transfection vector available for this organism and detailed study at the cellular level was not possible. However, recently a transfection vector was developed and the transfection conditions for E. invadens were optimised which should enhance the research possibilities at the molecular level of the differentiation process.
In sexually reproducing eukaryotes, homologous recombination (HR) ordinarily occurs during meiosis. The meiosis-specific recombinase, Dmc1, is required for efficient meiotic HR, and Dmc1 is expressed in E. histolytica. The purified Dmc1 from E. histolytica forms presynaptic filaments and catalyzes ATP-dependent homologous DNA pairing and DNA strand exchange over at least several thousand base pairs. The DNA pairing and strand exchange reactions are enhanced by the eukaryotic meiosis-specific recombination accessory factor (heterodimer) Hop2-Mnd1. These processes are central to meiotic recombination, suggesting that E. histolytica undergoes meiosis.
Studies of E. invadens found that, during the conversion from the tetraploid uninucleate trophozoite to the tetranucleate cyst, homologous recombination is enhanced. Expression of genes with functions related to the major steps of meiotic recombination also increased during encystations. These findings in E. invadens, combined with evidence from studies of E. histolytica indicate the presence of meiosis in the Entamoeba. | [
{
"paragraph_id": 0,
"text": "Entamoeba is a genus of Amoebozoa found as internal parasites or commensals of animals. In 1875, Fedor Lösch described the first proven case of amoebic dysentery in St. Petersburg, Russia. He referred to the amoeba he observed microscopically as Amoeba coli; however, it is not clear whether he was using this as a descriptive term or intended it as a formal taxonomic name. The genus Entamoeba was defined by Casagrandi and Barbagallo for the species Entamoeba coli, which is known to be a commensal organism. Lösch's organism was renamed Entamoeba histolytica by Fritz Schaudinn in 1903; he later died, in 1906, from a self-inflicted infection when studying this amoeba. For a time during the first half of the 20th century the entire genus Entamoeba was transferred to Endamoeba, a genus of amoebas infecting invertebrates about which little is known. This move was reversed by the International Commission on Zoological Nomenclature in the late 1950s, and Entamoeba has stayed 'stable' ever since.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Several species are found in humans and animals. Entamoeba histolytica is the pathogen responsible for invasive 'amoebiasis' (which includes amoebic dysentery and amoebic liver abscesses). Others such as Entamoeba coli (not to be confused with Escherichia coli) and Entamoeba dispar are harmless. With the exception of Entamoeba gingivalis, which lives in the mouth, and E. moshkovskii, which is frequently isolated from river and lake sediments, all Entamoeba species are found in the intestines of the animals they infect. Entamoeba invadens is a species that can cause a disease similar to that caused by E. histolytica, but in reptiles. In contrast to other species, E. invadens forms cysts in vitro in the absence of bacteria and is used as a model system to study this aspect of the life cycle. Many other species of Entamoeba have been described, and it is likely that many others remain to be found.",
"title": "Species"
},
{
"paragraph_id": 2,
"text": "Entamoeba cells are small, with a single nucleus and typically a single lobose pseudopod taking the form of a clear anterior bulge. They have a simple life cycle. The trophozoite (feeding-dividing form) is approximately 10-20 μm in diameter and feeds primarily on bacteria. It divides by simple binary fission to form two smaller daughter cells. Almost all species form cysts, the stage involved in transmission (the exception is Entamoeba gingivalis). Depending on the species, these can have one, four or eight nuclei and are variable in size; these characteristics help in species identification.",
"title": "Structure"
},
{
"paragraph_id": 3,
"text": "Entamoeba belongs to the Archamoebae, which like many other anaerobic eukaryotes have reduced mitochondria. This group also includes Endolimax and Iodamoeba, which also live in animal intestines and are similar in appearance to Entamoeba, although this may partly be due to convergence. Also in this group are the free-living amoebo-flagellates of the genus Mastigamoeba and related genera. Certain other genera of symbiotic amoebae, such as Endamoeba, might prove to be synonyms of Entamoeba but this is still unclear.",
"title": "Classification"
},
{
"paragraph_id": 4,
"text": "Studying Entamoeba invadens, David Biron of the Weizmann Institute of Science and coworkers found that about one third of the cells are unable to separate unaided and recruit a neighboring amoeba (dubbed the \"midwife\") to complete the fission. He writes:",
"title": "Culture"
},
{
"paragraph_id": 5,
"text": "They also reported a similar behavior in Dictyostelium.",
"title": "Culture"
},
{
"paragraph_id": 6,
"text": "Since E. histolytica does not form cysts in the absence of bacteria, E. invadens has come to be used as a model for encystation studies, as it will form cysts under axenic growth conditions, which simplifies analysis. After inducing encystation in E. invadens, DNA replication increases initially and then slows down. On completion of encystation, predominantly tetra-nucleate cysts are formed along with some uni-, bi- and tri-nucleate cysts.",
"title": "Culture"
},
{
"paragraph_id": 7,
"text": "Uninucleated trophozoites convert into cysts in a process called encystation. The number of nuclei in the cyst varies from 1 to 8 among species and is one of the characteristics used to tell species apart. Of the species already mentioned, Entamoeba coli forms cysts with 8 nuclei while the others form tetra-nucleated cysts. Since E. histolytica does not form cysts in vitro in the absence of bacteria, it is not possible to study the differentiation process in detail in that species. Instead the differentiation process is studied using E. invadens, a reptilian parasite that causes a very similar disease to E. histolytica and which can be induced to encyst in vitro. Until recently there was no genetic transfection vector available for this organism and detailed study at the cellular level was not possible. However, recently a transfection vector was developed and the transfection conditions for E. invadens were optimised which should enhance the research possibilities at the molecular level of the differentiation process.",
"title": "Differentiation and cell biology"
},
{
"paragraph_id": 8,
"text": "In sexually reproducing eukaryotes, homologous recombination (HR) ordinarily occurs during meiosis. The meiosis-specific recombinase, Dmc1, is required for efficient meiotic HR, and Dmc1 is expressed in E. histolytica. The purified Dmc1 from E. histolytica forms presynaptic filaments and catalyzes ATP-dependent homologous DNA pairing and DNA strand exchange over at least several thousand base pairs. The DNA pairing and strand exchange reactions are enhanced by the eukaryotic meiosis-specific recombination accessory factor (heterodimer) Hop2-Mnd1. These processes are central to meiotic recombination, suggesting that E. histolytica undergoes meiosis.",
"title": "Meiosis"
},
{
"paragraph_id": 9,
"text": "Studies of E. invadens found that, during the conversion from the tetraploid uninucleate trophozoite to the tetranucleate cyst, homologous recombination is enhanced. Expression of genes with functions related to the major steps of meiotic recombination also increased during encystations. These findings in E. invadens, combined with evidence from studies of E. histolytica indicate the presence of meiosis in the Entamoeba.",
"title": "Meiosis"
}
]
| Entamoeba is a genus of Amoebozoa found as internal parasites or commensals of animals. In 1875, Fedor Lösch described the first proven case of amoebic dysentery in St. Petersburg, Russia. He referred to the amoeba he observed microscopically as Amoeba coli; however, it is not clear whether he was using this as a descriptive term or intended it as a formal taxonomic name. The genus Entamoeba was defined by Casagrandi and Barbagallo for the species Entamoeba coli, which is known to be a commensal organism. Lösch's organism was renamed Entamoeba histolytica by Fritz Schaudinn in 1903; he later died, in 1906, from a self-inflicted infection when studying this amoeba. For a time during the first half of the 20th century the entire genus Entamoeba was transferred to Endamoeba, a genus of amoebas infecting invertebrates about which little is known. This move was reversed by the International Commission on Zoological Nomenclature in the late 1950s, and Entamoeba has stayed 'stable' ever since. | 2002-02-25T15:43:11Z | 2023-10-16T20:12:03Z | [
"Template:Short description",
"Template:Reflist",
"Template:Cite journal",
"Template:Wikispecies",
"Template:Distinguish",
"Template:Automatic taxobox",
"Template:Commons category",
"Template:Amoebozoa",
"Template:Taxonbar"
]
| https://en.wikipedia.org/wiki/Entamoeba |
9,904 | England national football team | The England national football team have represented England in international football since the first international match in 1872. It is controlled by The Football Association (FA), the governing body for football in England, which is affiliated with UEFA and comes under the global jurisdiction of world football's governing body FIFA. England competes in the three major international tournaments contested by European nations: the FIFA World Cup, the UEFA European Championship and the UEFA Nations League.
England is the joint-oldest national team in football, having played in the world's first international football match in 1872, against Scotland. England's home ground is Wembley Stadium, London, and its training headquarters is at St George's Park, Burton upon Trent. Gareth Southgate is the current manager of the team.
England won the 1966 World Cup final on home soil, making it one of eight nations to have won the World Cup. They have qualified for the World Cup sixteen times, with their best other performances being fourth place in both 1990 and 2018. England has never won the European Championship, with their best performance to date being runners-up in 2020. As a constituent country of the United Kingdom, England is not a member of the International Olympic Committee and so does not compete at the Olympic Games. England is currently the only team to have won the World Cup at senior level, but not their major continental title, and the only non-sovereign entity to have won the World Cup.
The England men's national football team is the joint-oldest in the world; it was formed at the same time as Scotland. A representative match between England and Scotland was played on 5 March 1870, having been organised by the Football Association. A return fixture was organised by representatives of Scottish football teams on 30 November 1872. This match, played at Hamilton Crescent in Scotland, is viewed as the first official international football match, because the two teams were independently selected and operated, rather than being the work of a single football association. Over the next 40 years, England played exclusively with the other three Home Nations—Scotland, Wales and Ireland—in the British Home Championship.
At first, England had no permanent home stadium. They joined FIFA in 1906 and played their first games against countries other than the Home Nations on a tour of Central Europe in 1908. Wembley Stadium was opened in 1923 and became their home ground. The relationship between England and FIFA became strained, and this resulted in their departure from FIFA in 1928, before they rejoined in 1946. As a result, they did not compete in a World Cup until 1950, in which they were beaten 1–0 by the United States, failing to get past the first round in one of the most embarrassing defeats in the team's history.
Their first defeat on home soil to a foreign team was a 2–0 loss to Ireland on 21 September 1949 at Goodison Park. A 6–3 loss to Hungary in 1953 was their second defeat by a foreign team at Wembley. In the return match in Budapest, Hungary won 7–1. This stands as England's largest ever defeat. After the game, a bewildered Syd Owen said, "it was like playing men from outer space". In the 1954 FIFA World Cup, England reached the quarter-finals for the first time, and lost 4–2 to reigning champions Uruguay.
Although Walter Winterbottom was appointed as England's first full-time manager in 1946, the team was still picked by a committee until Alf Ramsey took over in 1963. The 1966 FIFA World Cup was hosted in England and Ramsey guided England to victory with a 4–2 win against West Germany after extra time in the final, during which Geoff Hurst scored a hat-trick. In UEFA Euro 1968, the team reached the semi-finals for the first time, being eliminated by Yugoslavia.
England qualified automatically for the 1970 World Cup in Mexico as reigning champions, and reached the quarter-finals, where they were knocked out by West Germany. England had been 2–0 up, but were eventually beaten 3–2 after extra time. They then failed to qualify for the 1974 World Cup, leading to Ramsey's dismissal by the FA.
Following Ramsey's dismissal, Joe Mercer took immediate temporary charge of England for a seven-match spell until Don Revie was appointed as new permanent manager in 1974. Under Revie, the team underperformed and failed to qualify for either UEFA Euro 1976 or the 1978 World Cup. Revie resigned in 1977 and was replaced by Ron Greenwood, under whom performances improved. The team qualified for Euro 1980 without losing any of their games, but exited in the group stage of the final tournament. They also qualified for the 1982 World Cup in Spain; despite not losing a game, they were eliminated at the second group stage.
Bobby Robson managed England from 1982 to 1990. Although the team failed to qualify for UEFA Euro 1984, they reached the quarter-finals of the 1986 World Cup, losing 2–1 to Argentina in a game made famous by two highly contrasting goals scored by Diego Maradona – the first being blatantly knocked in by his hand, prompting his "Hand of God" remark, the second being an outstandingly skilful individual goal, involving high speed dribbling past several opponents. England striker Gary Lineker finished as the tournament's top scorer with six goals.
England went on to lose every match at UEFA Euro 1988. They next achieved their second-best result in the 1990 FIFA World Cup by finishing fourth – losing again to West Germany after a closely contested semi-final that finished 1–1 after extra time, then 3–4 in England's first penalty shoot-out. Despite losing to Italy in the third place play-off, the members of the England team were given bronze medals identical to the Italians'. Due to the team's good performance at the tournament against general expectations, and the emotional nature of the narrow defeat to West Germany, the team were welcomed home as heroes and thousands of people lined the streets for an open-top bus parade.
The 1990s saw four England managers follow Robson, each in the role for a relatively brief period. Graham Taylor was Robson's immediate successor. England failed to win any matches at UEFA Euro 1992, drawing with tournament winners Denmark and later with France, before being eliminated by host nation Sweden. The team then failed to qualify for the 1994 FIFA World Cup after losing a controversial game against the Netherlands in Rotterdam, which resulted in Taylor's resignation. Taylor faced much newspaper criticism during his tenure for his tactics and team selections.
Between 1994 and 1996, Terry Venables took charge of the team. At UEFA Euro 1996, held in England, they equalled their best performance at a European Championship, reaching the semi-finals as they did in 1968, before exiting via another penalty shoot-out loss to Germany. England striker Alan Shearer was the tournament's top scorer with five goals. At Euro 96, the song "Three Lions" by Baddiel, Skinner and The Lightning Seeds became the definitive anthem for fans on the terraces. Venables announced before the tournament that he would resign at the end of it, following investigations into his personal financial activities and ahead of upcoming court cases. Due to the controversy around him, the FA stressed that he was the coach, not the manager, of the team.
Venables' successor, Glenn Hoddle, took the team to the 1998 World Cup — in which England were eliminated in the second round, again by Argentina and again on penalties (after a 2–2 draw). In February 1999, Hoddle was sacked by the FA due to controversial comments he had made about disabled people to a newspaper. Howard Wilkinson took over as caretaker manager for two matches. Kevin Keegan was then appointed as the new permanent manager and took England to UEFA Euro 2000, but the team exited in the group stage and he unexpectedly resigned shortly afterwards.
Peter Taylor was appointed as caretaker manager for one match, before Sven-Göran Eriksson took charge between 2001 and 2006, and was the team's first non-English manager. Although England's players in this era were dubbed a "golden generation" and only lost five competitive matches during Eriksson's tenure, they exited at the quarter-finals of the 2002 FIFA World Cup, UEFA Euro 2004 and the 2006 FIFA World Cup. In January 2006 it was announced that Eriksson would leave the role following that year's World Cup.
Steve McClaren was then appointed as manager, but after failing to qualify for Euro 2008 he was sacked on 22 November 2007 after 18 matches in charge. The following month, he was replaced by a second foreign manager, Italian Fabio Capello. England won all but one of their qualifying games for the 2010 FIFA World Cup, but at the tournament itself, England drew their opening two games; this led to questions about the team's spirit, tactics and ability to handle pressure. They progressed to the next round, where they were beaten 4–1 by Germany, their heaviest defeat in a World Cup finals tournament match. In February 2012, Capello resigned from his role as England manager, following a disagreement with the FA over their request to remove John Terry from team captaincy after accusations of racial abuse concerning the player.
Following Capello's departure, Stuart Pearce was appointed as caretaker manager for one match, after which in May 2012, Roy Hodgson was announced as the new manager, just six weeks before UEFA Euro 2012. England managed to finish top of their group, but exited the Championships in the quarter-finals via a penalty shoot-out against Italy. In the 2014 FIFA World Cup, England were eliminated at the group stage for the first time since the 1958 World Cup. At UEFA Euro 2016, England were eliminated in the round of 16, losing 2–1 to Iceland. Hodgson resigned as manager in June 2016, and just under a month later was replaced by Sam Allardyce. After only 67 days in charge, Allardyce resigned from his managerial post by mutual agreement, after an alleged breach of FA rules, making him the shortest serving permanent England manager.
Gareth Southgate, then the coach of the England under-21 team, was put in temporary charge of the national team until November 2016, before being given the position on a permanent basis. At the 2018 FIFA World Cup, England reached the semi-finals for only the third time. After finishing second in their group, England won on penalties against Colombia in the round of 16 before beating Sweden in the quarter-finals. In the semi-final, they were beaten 2–1 in extra time by Croatia and finished fourth after losing the third place play-off match against Belgium. England striker Harry Kane finished the tournament as top scorer with six goals.
On 14 November 2019, England played their 1000th international match, defeating Montenegro 7–0 at Wembley in a UEFA Euro 2020 qualifying match.
At the delayed UEFA Euro 2020, England reached the final of a major tournament for the first time since 1966, and their first ever European Championship final. After finishing top of a group including Croatia, Scotland and the Czech Republic, the Three Lions went on to defeat Germany, Ukraine and Denmark to reach the final. In the final, held at Wembley, England were defeated by Italy on penalties after a 1–1 draw.
At the 2022 World Cup, England defeated Iran and Wales in the group stage to qualify for the round of 16, where they beat the reigning African champions Senegal 3–0 before being eliminated 2–1 by the reigning world champions France in the quarter-finals. Harry Kane's goal against France was his 53rd for England, equalling the all-time record. He later missed an 84th-minute penalty that would have levelled the match.
The motif of the England national football team is three lions passant guardant, the emblem of King Richard I, who reigned from 1189 to 1199. In 1872, English players wore white jerseys emblazoned with the three lions crest of the Football Association. The lions, often blue, have had minor changes to colour and appearance. Initially topped by a crown, this was removed in 1949 when the FA was given an official coat of arms by the College of Arms; this introduced ten Tudor roses, one for each of the regional branches of the FA. Since 2003, England have topped their logo with a star to recognise their World Cup win in 1966; this was first embroidered onto the left sleeve of the home kit, and a year later was moved to its current position, first on the away shirt.
England's traditional home colours are white shirts, navy blue shorts and white or black socks. The team has periodically worn an all-white kit.
Although England's first away kits were blue, England's traditional away colours are red shirts, white shorts and red socks. In 1996, England's away kit was changed to grey shirts, shorts and socks. This kit was only worn three times, including against Germany in the semi-final of Euro 1996, but the deviation from the traditional red was unpopular with supporters, and the England away kit remained red until 2011, when a navy blue away kit was introduced. The away kit is also sometimes worn during home matches, when a new edition has been released to promote it.
England have occasionally had a third kit. At the 1970 World Cup England wore a third kit with pale blue shirts, shorts and socks against Czechoslovakia. In the summer of 1973 they wore a kit similar to Brazil's, with yellow shirts, yellow socks and blue shorts. For the 1986 World Cup England had a third kit of pale blue, imitating that worn in Mexico 16 years before, and England retained pale blue third kits until 1992, but they were rarely used.
Umbro first agreed to manufacture the kit in 1954 and since then has supplied most of the kits, the exceptions being from 1959 to 1965 with Bukta and 1974–1984 with Admiral. Nike purchased Umbro in 2008 and took over as kit supplier in 2013 following their sale of the Umbro brand.
For the first 50 years of their existence, England played their home matches all around the country. They initially used cricket grounds before later moving on to football club stadiums. The original Empire Stadium was built in Wembley, London, for the British Empire Exhibition.
England played their first match at the stadium in 1924 against Scotland and for the next 27 years Wembley was used as a venue for matches against Scotland only. The stadium later became known simply as Wembley Stadium and it became England's permanent home stadium during the 1950s. In October 2000, the stadium closed its doors, ending with a defeat against Germany.
This stadium was demolished in 2002–03, and work began to rebuild it completely. During this time, England played at venues across the country, though by the time of the 2006 World Cup qualification this had largely settled down to having Manchester United's Old Trafford stadium as the primary venue, with Newcastle United's St. James' Park used on occasions when Old Trafford was unavailable.
Their first match in the new Wembley Stadium was in March 2007 when they drew with Brazil. The stadium is now owned by the Football Association, via its subsidiary Wembley National Stadium Limited.
England's three main rivalries are Scotland, Germany and Argentina. Smaller rivalries with France, Wales and the Republic of Ireland have also been observed.
England's rivalry with Scotland is one of the fiercest international rivalries in existence. It is the oldest international fixture in the world, first played in 1872 at Hamilton Crescent, Glasgow. The history of the British Isles has led to much rivalry between the nations in many forms, and the social and cultural effects of centuries of antagonism and conflict between the two have contributed to the intense nature of the sporting contests. Scottish nationalism has also been a factor in the Scots' desire to defeat England above all other rivals, with Scottish sports journalists traditionally referring to the English as the "Auld Enemy". The footballing rivalry has diminished somewhat since the late 1970s, particularly since the annual fixture stopped in 1989. For England, games against Germany and Argentina are now considered to be more important than the historic rivalry with Scotland.
England's rivalry with Germany is considered to be mainly an English phenomenon—in the run-up to any competition match between the two teams, many UK newspapers will print articles detailing results of previous encounters, such as those in 1966 and 1990. However, this rivalry has diminished significantly in recent years. Most German fans consider the Netherlands or Italy to be their traditional footballing rivals, and as such, usually the rivalry is not taken quite as seriously in Germany as it is in England.
England's rivalry with Argentina is highly competitive. Games between the two teams, even those that are only friendly matches, are often marked by notable and sometimes controversial incidents, such as the "Hand of God" goal in 1986. The rivalry is unusual in that it is an intercontinental one; typically such footballing rivalries exist between bordering nations. England is regarded in Argentina as one of the major rivals of the national football team, matched only by Brazil and Uruguay. The rivalry is reciprocated in England to a lesser extent, where it is locally described as a grudge match, although matches against Germany carry greater significance in popular perception. The rivalry emerged across several games during the latter half of the 20th century, even though as of 2008 the teams had played each other on only 14 occasions in full internationals. The rivalry was intensified, particularly in Argentina, by non-footballing events, especially the 1982 Falklands War between Argentina and the United Kingdom. However, England and Argentina have not met since a friendly in November 2005.
Numerous songs have been released about the England national football team.
All England matches are broadcast with full commentary on talkSPORT and BBC Radio 5 Live. From the 2008–09 season until the 2017–18 season, England's home and away qualifiers, and friendlies both home and away, were broadcast live on ITV Sport (often with the exception of STV, the ITV franchisee in central and northern Scotland). England's away qualifiers for the 2010 World Cup were shown on Setanta Sports until that company's collapse. As a result of Setanta Sports' demise, England's World Cup qualifier in Ukraine on 10 October 2009 was shown in the United Kingdom on a pay-per-view basis via the internet only. This one-off event was the first time an England game had been screened in such a way. The number of subscribers, paying between £4.99 and £11.99 each, was estimated at between 250,000 and 300,000, and the total number of viewers at around 500,000. From 2018 until 2021, Sky Sports broadcast England's Nations League matches and in-season friendlies, while ITV Sport broadcast the European Qualifiers for the Euros and World Cup, along with pre-tournament friendlies (after the Nations League group matches ended), until 2022. In April 2022, Channel 4 won the rights for England matches until June 2024, including 2022–23 UEFA Nations League matches, UEFA Euro 2024 qualifying games, and friendlies. Rights to the 2022 World Cup remained with the BBC and ITV.
The following is a list of match results in the last 12 months, as well as any future matches that have been scheduled.
The following 21 players were named in the squad for the UEFA Euro 2024 qualifying matches against Malta and North Macedonia on 17 and 20 November 2023, respectively.
Caps and goals are correct as of 20 November 2023, after the match against North Macedonia.
The following players have also been called up to the England squad within the last twelve months.
For the all-time record of the national team against opposing nations, see the team's all-time record page.
England first appeared at the 1950 FIFA World Cup, and have subsequently qualified for a total of 16 FIFA World Cup finals tournaments, tied for sixth-best by number of appearances. They are also placed sixth by number of wins, with 32. The national team is one of only eight nations to have won at least one FIFA World Cup title. The England team won their first and only World Cup title in 1966. The tournament was played on home soil, and England defeated West Germany 4–2 in the final. In 1990, England finished in fourth place, losing 2–1 to host nation Italy in the third place play-off, following defeat on penalties, after extra time, to champions West Germany in the semi-final. They also finished in fourth place in 2018, losing 2–0 to Belgium in the third place play-off, following a 2–1 defeat to Croatia, again after extra time, in the semi-final. The team also reached the quarter-final stage in 1954, 1962, 1970, 1986, 2002, 2006 and 2022.
England failed to qualify for the World Cup in 1974, 1978 and 1994. The team's earliest exit in the finals tournament was its elimination in the first round in 1950, 1958 and, most recently, 2014. This was after being defeated in both their opening two matches for the first time, against Italy and Uruguay in Group D. In 1950, four teams remained after the first round, in 1958 eight teams remained and in 2014 sixteen teams remained. In 2010, England suffered its most resounding World Cup defeat, 4–1 to Germany, in the round of 16 stage.
England first entered the UEFA European Championship in 1964, and have since qualified for eleven finals tournaments, tied for fourth-best by number of finals appearances. England's greatest results at the tournament were finishing as runners-up in the 2020 edition (held in 2021), and a third-place finish in 1968. The team also reached the semi-finals in 1996, a tournament they hosted. England additionally reached the quarter-finals on two further occasions, in 2004 and 2012.
England's worst results in the finals tournament to date have been first round eliminations in 1980, 1988, 1992 and 2000, whilst they failed to qualify for the finals in 1964, 1972, 1976, 1984 and 2008.
Eureka, Missouri

38°30′10″N 90°38′42″W (38.502736, -90.645075)
Eureka is a city in St. Louis County and Jefferson County, Missouri, adjacent to Wildwood and Pacific. It is in the extreme southwest of the Greater St. Louis metro area. As of the 2020 census, the city had a population of 11,646. Since 1971, Eureka has been known as the home of the amusement park Six Flags St. Louis.
The area's first known inhabitants were Shawnee Native Americans on the banks of the Meramec River; archaeological artifacts can still be found today as evidence of their past occupation of the area.
The village of Eureka was platted in 1858 along the route of the Pacific Railroad. By 1890, the village consisted of about 100 homes. According to local tradition, railroad workers clearing the way for the track and the next railroad camp saw level land with little to clear and declared "Eureka!", Greek for "I have found it"; thus, Eureka was founded. In 1898, Eureka became home to the St. Louis Children's Industrial Farm, established to give children from St. Louis tenement neighborhoods a chance to experience life in a rural setting. It later became Camp Wyman (now part of Wyman Center) and is one of the oldest camps in the United States. The first high school class in Eureka was held in 1909. Eureka was incorporated as a fourth-class city on April 7, 1954.
Historically, Eureka was wholly within St. Louis County. In September 2019, the city's Board of Aldermen voted to annex two commercial lots located just across the Meramec River in Jefferson County, both at highways 109 and FF: a 72.5-acre tract that houses Kirkwood Materials West, a sand and gravel quarry, and a 75-acre field. On October 1, 2019, the city voted to annex the 549-home Windswept Farms subdivision, then under construction just to the south. Both annexations were voluntary on the part of the owners.
The railroad town of Allenton is a former community on U.S. Route 66, located at what is now the junction of Interstate 44 and Business Loop 44 in western St. Louis County. In 1985, it was annexed by the city of Eureka. The town is currently rural, with adjacent farmland and forested Ozark ridges. The community was declared blighted by St. Louis County in 1973.
According to the United States Census Bureau, the city has a total area of 10.45 square miles (27.07 km²), of which 10.35 square miles (26.81 km²) is land and 0.10 square miles (0.26 km²) is water.
The city of Eureka has suffered multiple floods, the two most catastrophic being in 2015 and 2017. This caused the city and U.S. Army Corps of Engineers to evaluate a dozen strategic options, from the use of levees and walls, to buyouts of high-risk properties, to the restoration of flood plain as water storage. Scientific researchers determined that the flooding was a man-made calamity caused in part by “inaccurate Federal Emergency Management Agency flood frequencies based on the assumption that today’s river will behave as it has in the past greatly underestimating our real flood risk and leading to inappropriate development in floodways and floodplains.”
The December 2015 North American storm complex had a deep impact on Missouri, bringing heavy rain and snow that caused severe flooding. Parts of the state received over 10 in (250 mm) of rainfall. In Eureka, the Eureka Fire Department conducted more than 100 boat rescues of people and several pets from the second stories of homes near the Meramec River.
A strong spring storm system brought multiple rounds of thunderstorms and heavy rain to portions of the Midwest over the weekend of April 29–30, 2017, causing another flooding event. The middle portion of the Mississippi River approached record flooding. The National Weather Service anticipated a 48.5 ft crest at Cape Girardeau, Missouri, on May 5, 2017, which was within 6 inches of the January 2, 2016 crest of 48.86 ft. The first floor of a church flooded with about 48 inches of water, the same amount as in December 2015. Floodwater from the Meramec River covered athletic fields at Eureka High School, encroached on the school's buildings, and ruined the gymnasium floor.
The 2020 United States census counted 11,646 people, 3,486 households, and 2,575 families in Eureka. The population density was 1,053.9 per square mile (406.9/km²). There were 3,740 housing units at an average density of 338.5 per square mile (130.7/km²). The racial makeup was 90.73% (10,566) white, 0.82% (96) black or African-American, 0.12% (14) Native American, 1.57% (183) Asian, 0.05% (6) Pacific Islander, 0.76% (89) from other races, and 5.94% (692) from two or more races. Hispanic or Latino of any race was 2.1% (211) of the population.
Of the 3,486 households, 40.5% had children under the age of 18; 64.5% were married couples living together; 16.1% had a female householder with no husband present. Of all households, 20.1% consisted of individuals and 11.4% had someone living alone who was 65 years of age or older. The average household size was 2.8 and the average family size was 3.4.
26.4% of the population was under the age of 18, 3.9% from 18 to 24, 20.4% from 25 to 44, 24.1% from 45 to 64, and 12.9% were 65 years of age or older. The median age was 39.8 years. For every 100 females, there were 109.5 males. For every 100 females ages 18 and older, there were 98.5 males.
The 2016–2020 five-year American Community Survey estimates show that the median household income was $112,750 (with a margin of error of +/- $13,390) and the median family income was $121,977 (+/- $8,559). Males had a median income of $74,452 (+/- $8,634) versus $47,137 (+/- $8,637) for females. The median income for those above 16 years old was $59,316 (+/- $9,813). Approximately 0.0% of families and 0.6% of the population were below the poverty line, including 0.0% of those under the age of 18 and 0.8% of those ages 65 or over.
As of the 2010 census, there were 10,189 people, 3,474 households, and 2,758 families residing in the city. The population density was 984.4 inhabitants per square mile (380.1/km²). There were 3,683 housing units at an average density of 355.8 per square mile (137.4/km²). The racial makeup of the city was 94.9% White, 0.8% African American, 0.2% Native American, 1.9% Asian, 0.1% Pacific Islander, 0.3% from other races, and 1.7% from two or more races. Hispanic or Latino of any race were 2.0% of the population.
There were 3,474 households, of which 46.9% had children under the age of 18 living with them, 66.2% were married couples living together, 9.3% had a female householder with no husband present, 3.9% had a male householder with no wife present, and 20.6% were non-families. 17.2% of all households were made up of individuals, and 5.9% had someone living alone who was 65 years of age or older. The average household size was 2.87 and the average family size was 3.27.
The median age in the city was 37.1 years. 30.9% of residents were under the age of 18; 6% were between the ages of 18 and 24; 26.6% were from 25 to 44; 26.7% were from 45 to 64, and 9.6% were 65 years of age or older. The gender makeup of the city was 49.6% male and 50.4% female.
As of the 2000 census, there were 7,676 people in the city, organized into 2,487 households and two families. Its population density was 763.7 inhabitants per square mile (294.9/km²). There were 2,622 housing units at an average density of 260.9 per square mile (100.7/km²). The racial makeup of the city was 97.38% White, 0.82% Asian, 0.57% Black or African American, 0.20% Native American, no Pacific Islanders, 0.26% from other races, and 0.77% from two or more races. 1.22% of the population were Hispanic or Latino of any race.
There were 2,487 households, out of which half had children under the age of 18 living with them, 71.6% were married couples living together, 8.2% had a female householder with no husband present, and 17.0% were non-families. 13.8% of all households were made up of individuals, and 4.3% had someone living alone who was 65 years of age or older. The average household size was 2.98 and the average family size was 3.30.
In the city, the population was spread out, with 31.9% under the age of 18, 5.7% from 18 to 24, 34.4% from 25 to 44, 19.5% from 45 to 64, and 8.5% 65 years of age or older. The median age was 34 years. For every 100 females, there were 94.9 males. For every 100 females age 18 and over, there were 89.6 males.
The median income for a household in the city was $74,301, and the median income for a family was $80,625. Males had a median income of $51,799 compared to $33,269 for females. The per capita income for the city was $27,553. 2.2% of the population and 1.3% of families were below the poverty line. Out of the total population, 3.1% of those under the age of 18 and 5.9% of those 65 and older were living below the poverty line.
The Rockwood R-VI School District operates three elementary schools, Lasalle Springs Middle School, and Eureka High School.
The city also contains two private schools, St. Mark's Lutheran Church and School and Most Sacred Heart Church and School.
The city has the Eureka Hills Branch lending library, a branch of the St. Louis County Library. It was moved to a newly built location that opened on June 2, 2021.
Local news coverage for the town and some of its neighbors is provided by the Tri County Journal, the Eureka and Pacific Current NewsMagazine, and the Washington Missourian.
"text": "There were 2,487 households, out of which half have children under the age of 18 living with them, 71.6% were married couples living together, 8.2% had a female householder with no husband present, and 17.0% were non-families. 13.8% of all households were made up of individuals, and 4.3% had someone living alone who was 65 years of age or older. The average household size was 2.98 and the average family size was 3.30.",
"title": "Demographics"
},
{
"paragraph_id": 19,
"text": "In the city, the population was spread out, with 31.9% under the age of 18, 5.7% from 18 to 24, 34.4% from 25 to 44, 19.5% from 45 to 64, and 8.5% 65 years of age or older. The median age was 34 years. For every 100 females, there were 94.9 males. For every 100 females age 18 and over, there were 89.6 males.",
"title": "Demographics"
},
{
"paragraph_id": 20,
"text": "The median income for a household in the city was $74,301, and the median income for a family was $80,625. Males had a median income of $51,799 compared to $33,269 for females. The per capita income for the city was $27,553. 2.2% of the population and 1.3% of families were below the poverty line. Out of the total population, 3.1% of those under the age of 18 and 5.9% of those 65 and older were living below the poverty line.",
"title": "Demographics"
},
{
"paragraph_id": 21,
"text": "Rockwood R-Vi School District operates 3 elementary schools, Lasalle Springs Middle School and Eureka High School.",
"title": "Education"
},
{
"paragraph_id": 22,
"text": "The city also contains two private schools, St. Mark's Lutheran Church and School and Most Sacred Heart Church and School.",
"title": "Education"
},
{
"paragraph_id": 23,
"text": "The city has the Eureka Hills Branch lending library, a branch of the St. Louis County Library. It was moved to a newly built location that opened on June 2, 2021.",
"title": "Education"
},
{
"paragraph_id": 24,
"text": "Local news coverage for the town and some of its neighbors is provided by the Tri County Journal, the Eureka and Pacific Current NewsMagazine, and the Washington Missourian.",
"title": "News media"
}
]
| Eureka is a city in St. Louis County and Jefferson County, Missouri, adjacent to Wildwood and Pacific. It is in the extreme southwest of the Greater St. Louis metro area. As of the 2020 census, the city had a population of 11,646. Since 1971, Eureka has been known as the home of the amusement park Six Flags St. Louis. | 2002-02-25T15:51:15Z | 2023-12-30T16:52:20Z | [
"Template:Sup",
"Template:Cite news",
"Template:St. Louis County, Missouri",
"Template:Short description",
"Template:Cite book",
"Template:Authority control",
"Template:Webarchive",
"Template:For",
"Template:Coord",
"Template:Use mdy dates",
"Template:Convert",
"Template:US Census population",
"Template:Reflist",
"Template:Infobox settlement",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Eureka,_Missouri |
9,908 | Equation of state | In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars.
At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. One example of an equation of state is the ideal gas law, which correlates the densities of gases and liquids to temperature and pressure and is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.
The general form of an equation of state may be written as f(p, V, T) = 0,
where p is the pressure, V the volume, and T the temperature of the system; other state variables may also be used in that form. It is directly related to the Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system.
An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.
Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry.
Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.
Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:
The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676. In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:
Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for n species as:
In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with 0 °C = 273.15 K, giving:
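In modern units this is the familiar ideal gas law pVm = RT, with T the absolute temperature in kelvin. A minimal numerical sketch, not part of the original article and using standard textbook values for the gas constant and molar volume, illustrates the relation:

# Illustrative sketch: the ideal gas law p*Vm = R*(T_C + 273.15), i.e. p*Vm = R*T
# with T in kelvin.  Constants are standard values, not taken from this article.
R = 8.314  # molar gas constant, J/(mol*K)

def ideal_gas_pressure(molar_volume_m3, temp_celsius):
    """Pressure (Pa) of one mole occupying molar_volume_m3 at temp_celsius."""
    T = temp_celsius + 273.15  # convert Celsius to absolute temperature
    return R * T / molar_volume_m3

# One mole at 0 degrees Celsius in 22.414 litres gives roughly atmospheric pressure.
print(ideal_gas_pressure(0.022414, 0.0))  # about 1.013e5 Pa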
In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong.
The van der Waals equation of state can be written as
where a is a parameter describing the attractive energy between particles and b is a parameter describing the volume of the particles.
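Written explicitly for the pressure, the van der Waals equation takes the standard form p = RT/(Vm − b) − a/Vm². The short sketch below is illustrative only; the a and b values for carbon dioxide are commonly quoted literature figures, used here simply to compare the result with the ideal gas law:

# Hedged sketch of the van der Waals equation p = R*T/(Vm - b) - a/Vm**2.
# The a and b values for CO2 are commonly quoted figures, used purely as an illustration.
R = 8.314        # J/(mol*K)
a_co2 = 0.364    # Pa*m^6/mol^2, attraction parameter (illustrative)
b_co2 = 4.27e-5  # m^3/mol, excluded-volume parameter (illustrative)

def vdw_pressure(Vm, T, a=a_co2, b=b_co2):
    """Pressure (Pa) of one mole in molar volume Vm (m^3/mol) at temperature T (K)."""
    return R * T / (Vm - b) - a / Vm ** 2

Vm, T = 0.022414, 273.15
print(vdw_pressure(Vm, T))  # slightly below the ideal-gas value
print(R * T / Vm)           # ideal-gas pressure for comparison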
The classical ideal gas law may be written
In the form shown above, the equation of state is thus
If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows
where ρ is the density, γ = Cp/Cv is the (constant) adiabatic index (ratio of specific heats), e = CvT is the internal energy per unit mass (the "specific internal energy"), Cv is the specific heat capacity at constant volume, and Cp is the specific heat capacity at constant pressure.
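A minimal sketch of evaluating the calorically perfect form p = (γ − 1)ρe follows; it is not part of the original article, and the numbers are typical sea-level values for air used only as an illustration:

# Sketch of the calorically perfect ideal-gas form p = (gamma - 1) * rho * e,
# with e = Cv * T.  Values below are typical sea-level figures for air.
gamma = 1.4   # ratio of specific heats Cp/Cv for air
Cv = 718.0    # specific heat at constant volume, J/(kg*K) (approximate)
rho = 1.225   # density, kg/m^3
T = 288.15    # temperature, K

e = Cv * T                   # specific internal energy, J/kg
p = (gamma - 1.0) * rho * e  # pressure, Pa
print(p)                     # roughly atmospheric pressure (~1.0e5 Pa)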
Since the classical ideal gas law is well suited to most atomic and molecular gases, we now describe the equation of state for elementary particles with mass m and spin s that takes quantum effects into account. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with N particles occupying a volume V at temperature T and pressure p is given by
where kB is the Boltzmann constant and the chemical potential μ(T, N/V) is given by the following implicit function
In the limiting case where exp(μ/kBT) ≪ 1, this equation of state reduces to that of the classical ideal gas. It can be shown that in this limit the above equation of state reduces to
At a fixed number density N/V, decreasing the temperature causes, in a Fermi gas, an increase in pressure above its classical value, implying an effective repulsion between particles (an apparent repulsion due to quantum exchange effects rather than actual interactions, since interaction forces are neglected in an ideal gas), and, in a Bose gas, a decrease in pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on s and ħ.
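A related back-of-the-envelope check, based on the standard statistical-mechanics criterion that a gas is effectively classical when n·λ³ ≪ 1 (λ being the thermal de Broglie wavelength; this criterion is not stated in the text above), shows why ordinary gases are far from the quantum regime:

# Back-of-the-envelope check: the gas is effectively classical when
# n * lambda**3 << 1, with lambda the thermal de Broglie wavelength.
# Helium at room conditions is used purely as an illustration.
import math

h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K

def thermal_wavelength(mass_kg, T):
    """Thermal de Broglie wavelength lambda = h / sqrt(2*pi*m*kB*T)."""
    return h / math.sqrt(2.0 * math.pi * mass_kg * kB * T)

m_he = 6.64e-27   # mass of a helium-4 atom, kg
T = 300.0         # K
p = 101325.0      # Pa
n = p / (kB * T)  # number density from the classical ideal gas law

print(n * thermal_wavelength(m_he, T) ** 3)  # ~3e-6, far below 1: classical regime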
Cubic equations of state are called such because they can be rewritten as a cubic function of Vm. Cubic equations of state originated from the van der Waals equation of state; hence, all cubic equations of state can be considered modified van der Waals equations of state. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state remain highly relevant today, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state.
Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients B, C, D, etc. are functions of temperature only.
where
Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.
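As a rough illustration of how a truncated virial series of the kind described above is used in practice, the sketch below evaluates Z = pVm/(RT) = 1 + B/Vm + C/Vm²; the coefficient values are hypothetical placeholders rather than data for any real fluid:

# Minimal sketch of a virial expansion truncated after the third term:
#   Z = p*Vm/(R*T) = 1 + B(T)/Vm + C(T)/Vm**2
# The coefficients below are placeholders, not data for any real fluid.
R = 8.314  # J/(mol*K)

def compressibility_factor(Vm, B, C):
    """Truncated virial estimate of Z at molar volume Vm (m^3/mol)."""
    return 1.0 + B / Vm + C / Vm ** 2

def pressure(Vm, T, B, C):
    """Pressure (Pa) implied by the truncated virial series."""
    return compressibility_factor(Vm, B, C) * R * T / Vm

B, C = -1.5e-4, 7.0e-9  # hypothetical coefficients, m^3/mol and m^6/mol^2
print(compressibility_factor(0.001, B, C))  # Z < 1: attractive forces dominate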
The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as
Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.
The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.
There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature and density (and, for mixtures, additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid.
Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.
An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.
Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can usually be applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:
with
The reduced density ρr and reduced temperature Tr are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids, including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.
One example of such an equation of state is the form proposed by Span and Wagner.
This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.
When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:
where e is the internal energy per unit mass, γ is an empirically determined constant typically taken to be about 6.1, and p0 is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres).
The equation is stated in this form because the speed of sound in water is given by c² = γ(p + p0)/ρ.
Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) of pressure, which explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).
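The arithmetic behind this incompressibility argument can be made explicit with a short, purely illustrative sketch:

# For an ideal gas at fixed temperature, density is proportional to pressure,
# so a 1-atm change on top of an effective 20,000-atm background barely
# changes the density -- the sense in which water is "incompressible".
p_background = 20000.0          # effective background pressure, atm
p1 = p_background + 1.0         # total pressure at 1 atm external pressure
p2 = p_background + 2.0         # total pressure at 2 atm external pressure

relative_density_change = (p2 - p1) / p1
print(relative_density_change)  # ~5e-5, i.e. about 0.005 percent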
This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
An equation of state for the Morse oscillator has been derived, and it has the following form:
p = Γ1ν + Γ2ν²
where Γ1 is the first-order virial parameter, which depends on the temperature; Γ2 is the second-order virial parameter of the Morse oscillator, which depends on the Morse-oscillator parameters in addition to the absolute temperature; and ν is the fractional volume of the system.
An ultrarelativistic fluid has equation of state
where p is the pressure, ρm is the mass density, and cs is the speed of sound.
The equation of state for an ideal Bose gas is
where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form.
The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives.
The ratio V = ρe/ρ is defined using ρe, the density of the explosive (solid part), and ρ, the density of the detonation products. The parameters A, B, R1, R2 and ω are given by several references. In addition, the initial density (solid part) ρ0, the speed of detonation VD, the Chapman–Jouguet pressure PCJ and the chemical energy per unit volume of the explosive e0 are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below. | [
{
"paragraph_id": 0,
"text": "In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars.",
"title": ""
},
{
"paragraph_id": 1,
"text": "At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. An example of an equation of state correlates densities of gases and liquids to temperatures and pressures, known as the ideal gas law, which is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "The general form of an equation of state may be written as",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "where p {\\displaystyle p} is the pressure, V {\\displaystyle V} the volume, and T {\\displaystyle T} the temperature of the system. Yet also other variables may be used in that form. It is directly related to Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "Boyle's law was one of the earliest formulation of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:",
"title": "Historical background"
},
{
"paragraph_id": 8,
"text": "The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676. In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:",
"title": "Historical background"
},
{
"paragraph_id": 9,
"text": "Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for n {\\displaystyle n} species as:",
"title": "Historical background"
},
{
"paragraph_id": 10,
"text": "In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with 0 ∘ C = 273.15 K {\\displaystyle 0~^{\\circ }\\mathrm {C} =273.15~\\mathrm {K} } , giving:",
"title": "Historical background"
},
{
"paragraph_id": 11,
"text": "In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich-Kwong.",
"title": "Historical background"
},
{
"paragraph_id": 12,
"text": "The van der Waals equation of state can be written as",
"title": "Historical background"
},
{
"paragraph_id": 13,
"text": "where a {\\displaystyle a} is a parameter describing the attractive energy between particles and b {\\displaystyle b} is a parameter describing the volume of the particles.",
"title": "Historical background"
},
{
"paragraph_id": 14,
"text": "The classical ideal gas law may be written",
"title": "Ideal gas law"
},
{
"paragraph_id": 15,
"text": "In the form shown above, the equation of state is thus",
"title": "Ideal gas law"
},
{
"paragraph_id": 16,
"text": "If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows",
"title": "Ideal gas law"
},
{
"paragraph_id": 17,
"text": "where ρ {\\displaystyle \\rho } is the density, γ = C p / C v {\\displaystyle \\gamma =C_{p}/C_{v}} is the (constant) adiabatic index (ratio of specific heats), e = C v T {\\displaystyle e=C_{v}T} is the internal energy per unit mass (the \"specific internal energy\"), C v {\\displaystyle C_{v}} is the specific heat capacity at constant volume, and C p {\\displaystyle C_{p}} is the specific heat capacity at constant pressure.",
"title": "Ideal gas law"
},
{
"paragraph_id": 18,
"text": "Since for atomic and molecular gases, the classical ideal gas law is well suited in most cases, let us describe the equation of state for elementary particles with mass m {\\displaystyle m} and spin s {\\displaystyle s} that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with N {\\displaystyle N} particles occupying a volume V {\\displaystyle V} with temperature T {\\displaystyle T} and pressure p {\\displaystyle p} is given by",
"title": "Ideal gas law"
},
{
"paragraph_id": 19,
"text": "where k B {\\displaystyle k_{\\text{B}}} is the Boltzmann constant and μ ( T , N / V ) {\\displaystyle \\mu (T,N/V)} the chemical potential is given by the following implicit function",
"title": "Ideal gas law"
},
{
"paragraph_id": 20,
"text": "In the limiting case where e μ / ( k B T ) ≪ 1 {\\displaystyle e^{\\mu /(k_{\\text{B}}T)}\\ll 1} , this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit e μ / ( k B T ) ≪ 1 {\\displaystyle e^{\\mu /(k_{\\text{B}}T)}\\ll 1} reduces to",
"title": "Ideal gas law"
},
{
"paragraph_id": 21,
"text": "With a fixed number density N / V {\\displaystyle N/V} , decreasing the temperature causes in Fermi gas, an increase in the value for pressure from its classical value implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects not because of actual interactions between particles since in ideal gas, interactional forces are neglected) and in Bose gas, a decrease in pressure from its classical value implying an effective attraction. The quantum nature of this equation is in it dependence on s and ħ.",
"title": "Ideal gas law"
},
{
"paragraph_id": 22,
"text": "Cubic equations of state are called such because they can be rewritten as a cubic function of V m {\\displaystyle V_{m}} . Cubic equations of state originated from the van der Waals equation of state. Hence, all cubic equations of state can be considered 'modified van der Waals equation of state'. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state are today still highly relevant, e.g. the Peng Robinson equation of state or the Soave Redlich Kwong equation of state.",
"title": "Cubic equations of state"
},
{
"paragraph_id": 23,
"text": "Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients B, C, D, etc. are functions of temperature only.",
"title": "Virial equations of state"
},
{
"paragraph_id": 24,
"text": "where",
"title": "Virial equations of state"
},
{
"paragraph_id": 25,
"text": "Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.",
"title": "Virial equations of state"
},
{
"paragraph_id": 26,
"text": "The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as",
"title": "Virial equations of state"
},
{
"paragraph_id": 27,
"text": "Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.",
"title": "Virial equations of state"
},
{
"paragraph_id": 28,
"text": "The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.",
"title": "Virial equations of state"
},
{
"paragraph_id": 29,
"text": "There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature, density (and for mixtures additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on monomer term describing the Lennard-Jones fluid or the Mie fluid.",
"title": "Physically based equations of state"
},
{
"paragraph_id": 30,
"text": "Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.",
"title": "Physically based equations of state"
},
{
"paragraph_id": 31,
"text": "An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.",
"title": "Physically based equations of state"
},
{
"paragraph_id": 32,
"text": "Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can be usually applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:",
"title": "Multiparameter equations of state"
},
{
"paragraph_id": 33,
"text": "with",
"title": "Multiparameter equations of state"
},
{
"paragraph_id": 34,
"text": "The reduced density ρ r {\\displaystyle \\rho _{r}} and reduced temperature T r {\\displaystyle T_{r}} are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equations of state. Mixture models for multiparameter equations of state exist, as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.",
"title": "Multiparameter equations of state"
},
{
"paragraph_id": 35,
"text": "One example of such an equation of state is the form proposed by Span and Wagner.",
"title": "Multiparameter equations of state"
},
{
"paragraph_id": 36,
"text": "This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.",
"title": "Multiparameter equations of state"
},
{
"paragraph_id": 37,
"text": "When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:",
"title": "List of further equations of state"
},
{
"paragraph_id": 38,
"text": "where e {\\displaystyle e} is the internal energy per unit mass, γ {\\displaystyle \\gamma } is an empirically determined constant typically taken to be about 6.1, and p 0 {\\displaystyle p^{0}} is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres).",
"title": "List of further equations of state"
},
{
"paragraph_id": 39,
"text": "The equation is stated in this form because the speed of sound in water is given by c 2 = γ ( p + p 0 ) / ρ {\\displaystyle c^{2}=\\gamma \\left(p+p^{0}\\right)/\\rho } .",
"title": "List of further equations of state"
},
{
"paragraph_id": 40,
"text": "Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).",
"title": "List of further equations of state"
},
{
"paragraph_id": 41,
"text": "This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.",
"title": "List of further equations of state"
},
{
"paragraph_id": 42,
"text": "An equation of state of Morse oscillator has been derived, and it has the following form:",
"title": "List of further equations of state"
},
{
"paragraph_id": 43,
"text": "p = Γ 1 ν + Γ 2 ν 2 {\\displaystyle p=\\Gamma _{1}\\nu +\\Gamma _{2}\\nu ^{2}}",
"title": "List of further equations of state"
},
{
"paragraph_id": 44,
"text": "Where Γ 1 {\\displaystyle \\Gamma _{1}} is the first order virial parameter and it depends on the temperature, Γ 2 {\\displaystyle \\Gamma _{2}} is the second order virial parameter of Morse oscillator and it depends on the parameters of Morse oscillator in addition to the absolute temperature. ν {\\displaystyle \\nu } is the fractional volume of the system.",
"title": "List of further equations of state"
},
{
"paragraph_id": 45,
"text": "An ultrarelativistic fluid has equation of state",
"title": "List of further equations of state"
},
{
"paragraph_id": 46,
"text": "where p {\\displaystyle p} is the pressure, ρ m {\\displaystyle \\rho _{m}} is the mass density, and c s {\\displaystyle c_{s}} is the speed of sound.",
"title": "List of further equations of state"
},
{
"paragraph_id": 47,
"text": "The equation of state for an ideal Bose gas is",
"title": "List of further equations of state"
},
{
"paragraph_id": 48,
"text": "where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form.",
"title": "List of further equations of state"
},
{
"paragraph_id": 49,
"text": "The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives.",
"title": "List of further equations of state"
},
{
"paragraph_id": 50,
"text": "The ratio V = ρ e / ρ {\\displaystyle V=\\rho _{e}/\\rho } is defined by using ρ e {\\displaystyle \\rho _{e}} , which is the density of the explosive (solid part) and ρ {\\displaystyle \\rho } , which is the density of the detonation products. The parameters A {\\displaystyle A} , B {\\displaystyle B} , R 1 {\\displaystyle R_{1}} , R 2 {\\displaystyle R_{2}} and ω {\\displaystyle \\omega } are given by several references. In addition, the initial density (solid part) ρ 0 {\\displaystyle \\rho _{0}} , speed of detonation V D {\\displaystyle V_{D}} , Chapman–Jouguet pressure P C J {\\displaystyle P_{CJ}} and the chemical energy per unit volume of the explosive e 0 {\\displaystyle e_{0}} are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below.",
"title": "List of further equations of state"
}
]
| In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars. | 2001-08-23T18:56:23Z | 2023-12-21T21:25:32Z | [
"Template:Toclimit",
"Template:About",
"Template:Topics in continuum mechanics",
"Template:Short description",
"Template:Main",
"Template:States of matter",
"Template:Authority control",
"Template:Citation",
"Template:Statistical mechanics topics",
"Template:Thermodynamics",
"Template:Nbsp",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/Equation_of_state |
9,910 | Ecclesiastes | Ecclesiastes (/ɪˌkliːziˈæstiːz/ ih-KLEE-zee-ASS-teez; Biblical Hebrew: קֹהֶלֶת, romanized: Qōheleṯ, Ancient Greek: Ἐκκλησιαστής, romanized: Ekklēsiastēs) is one of the Ketuvim ("Writings") of the Hebrew Bible and part of the Wisdom literature of the Christian Old Testament. The title commonly used in English is a Latin transliteration of the Greek translation of the Hebrew word קֹהֶלֶת (Kohelet, Koheleth, Qoheleth or Qohelet). An unnamed author introduces "The words of Kohelet, son of David, king in Jerusalem" (1:1) and does not use his own voice again until the final verses (12:9–14), where he gives his own thoughts and summarises the statements of Kohelet; the main body of the text is ascribed to Kohelet himself.
Kohelet proclaims (1:2) "Vanity of vanities! All is futile!"; the Hebrew word hevel, "vapor" or "breath", can figuratively mean "insubstantial", "vain", "futile", or "meaningless". Given this, the next verse presents the basic existential question with which the rest of the book is concerned: "What profit hath a man for all his toil, in which he toils under the sun?", expressing that the lives of both wise and foolish people all end in death. In light of this perceived meaninglessness, he suggests that human beings should enjoy the simple pleasures of daily life, such as eating, drinking, and taking enjoyment in one's work, which are gifts from the hand of God. The book concludes with the injunction to "Fear God and keep his commandments; for that is the duty of all of mankind. Since every deed will God bring to judgment, for every hidden act, be it good or evil."
According to rabbinic tradition the book was written by King Solomon (reigned c. 970–931 BCE) in his old age, but the presence of Persian loanwords and Aramaisms point to a date no earlier than about 450 BCE, while the latest possible date for its composition is 180 BCE.
Ecclesiastes is a phonetic transliteration of the Greek word Ἐκκλησιαστής (Ekklesiastes), which in the Septuagint translates the Hebrew name of its stated author, Kohelet (קֹהֶלֶת). The Greek word derives from ekklesia (assembly), as the Hebrew word derives from kahal (assembly), but while the Greek word means 'member of an assembly', the meaning of the original Hebrew word it translates is less certain. As Strong's concordance mentions, it is a female active participle of the verb kahal in its simple (qal) paradigm, a form not used elsewhere in the Bible and which is sometimes understood as active or passive depending on the verb, so that Kohelet would mean '(female) assembler' in the active case (recorded as such by Strong's concordance), and '(female) assembled, member of an assembly' in the passive case (as per the Septuagint translators). According to the majority understanding today, the word is a more general (mishkal, קוֹטֶלֶת) form rather than a literal participle, and the intended meaning of Kohelet in the text is 'someone speaking before an assembly', hence 'Teacher' or 'Preacher' (this was also the position of the Midrash and of Jerome).
Commentators struggle to explain why a man was given an apparently feminine name. According to Isaiah di Trani (also adopted by Simonis), "He authored this work in his old age, when he was weak like a woman, and therefore he received a feminine name". According to Solomon b. Jeroham (also Lorinus, Zirkel), "This is because, even as a woman births and raises children, Qoheleth revealed and organized wisdom". According to Yefet b. Ali (also adopted by Abraham ibn Ezra and Joseph Ibn Kaspi), "He ascribed this activity to his wisdom, and because Wisdom is female, he used a feminine name". This last opinion is accepted by a wide variety of modern scholars, including C. D. Ginsburg.
Ecclesiastes is presented as the biography of "Kohelet" or "Qoheleth"; his story is framed by the voice of the narrator, who refers to Kohelet in the third person, praises his wisdom, but reminds the reader that wisdom has its limitations and is not man's main concern. Kohelet reports what he planned, did, experienced and thought, but his journey to knowledge is, in the end, incomplete; the reader is not only to hear Kohelet's wisdom, but to observe his journey towards understanding and acceptance of life's frustrations and uncertainties: the journey itself is important.
The Jerusalem Bible divides the book into two parts, part one comprising Ecclesiastes 1:4-6:12, part two consisting of chapters 7 to 12, each commencing with a separate prologue.
Few of the many attempts to uncover an underlying structure to Ecclesiastes have met with widespread acceptance; among them, the following is one of the more influential:
Despite the acceptance by some of this structure, there have been many criticisms, such as that of Fox: "[Addison G. Wright's] proposed structure has no more effect on interpretation than a ghost in the attic. A literary or rhetorical structure should not merely 'be there'; it must do something. It should guide readers in recognizing and remembering the author's train of thought."
Verse 1:1 is a superscription, the ancient equivalent of a title page: it introduces the book as "the words of Kohelet, son of David, king in Jerusalem."
Most, though not all, modern commentators regard the epilogue (12:9–14) as an addition by a later scribe. Some have identified certain other statements as further additions intended to make the book more religiously orthodox (e.g., the affirmations of God's justice and the need for piety).
It has been proposed that the text is composed of three distinct voices. The first belongs to Qoheleth as the prophet, the "true voice of wisdom", which speaks in the first person, recounting wisdom through his own experience. The second voice belongs to Qoheleth as the king of Jerusalem, who is more didactic and thus speaks primarily in second-person imperative statements. The third voice is that of the epilogist, who speaks proverbially in the third person. The epilogist is most identified in the book's first and final verses. Kyle R. Greenwood suggests that following this structure, Ecclesiastes should be read as a dialogue between these voices.
The ten-verse introduction in verses 1:2–11 are the words of the frame narrator; they set the mood for what is to follow. Kohelet's message is that all is meaningless. This distinction first appeared in the commentaries of Samuel ibn Tibbon (d. 1230) and Aaron ben Joseph of Constantinople (d. 1320).
After the introduction come the words of Kohelet. As king, he has experienced everything and done everything, but concludes that nothing is ultimately reliable, as death levels all. Kohelet states that the only good is to partake of life in the present, for enjoyment is from the hand of God. Everything is ordered in time and people are subject to time in contrast to God's eternal character. The world is filled with injustice, which only God will adjudicate. God and humans do not belong in the same realm, and it is therefore necessary to have a right attitude before God. People should enjoy, but should not be greedy; no one knows what is good for humanity; righteousness and wisdom escape humanity. Kohelet reflects on the limits of human power: all people face death, and death is better than life, but people should enjoy life when they can, for a time may come when no one can. The world is full of risk: he gives advice on living with risk, both political and economic. Kohelet's words finish with imagery of nature languishing and humanity marching to the grave.
The frame narrator returns with an epilogue: the words of the wise are hard, but they are applied as the shepherd applies goads and pricks to his flock. The ending of the book sums up its message: "Fear God and keep his commandments for God will bring every deed to judgment." Some scholars suggest 12:13–14 were an addition by a more orthodox author than the original writer (that the epilogue was added later was first proposed by Samuel ibn Tibbon); others think it is likely the work of the original author.
The book takes its name from the Greek ekklesiastes, a translation of the title by which the central figure refers to himself: "Kohelet", meaning something like "one who convenes or addresses an assembly". According to rabbinic tradition, Ecclesiastes was written by King Solomon in his old age (an alternative tradition that "Hezekiah and his colleagues wrote Isaiah, Proverbs, the Song of Songs and Ecclesiastes" probably means simply that the book was edited under Hezekiah), but critical scholars have long rejected the idea of a pre-exilic origin. Some in the Christian tradition also thought the book was not written by King Solomon: Gregory of Nyssa wrote that it was written by another Solomon, while Didymus the Blind wrote that it was probably written by several authors. The presence of Persian loanwords and numerous Aramaisms points to a date no earlier than about 450 BCE, while the latest possible date for its composition is 180 BCE, when the Jewish writer Ben Sira quotes from it. The dispute as to whether Ecclesiastes belongs to the Persian or the Hellenistic periods (i.e., the earlier or later part of this period) revolves around the degree of Hellenization (influence of Greek culture and thought) present in the book. Scholars arguing for a Persian date (c. 450–330 BCE) hold that there is a complete lack of Greek influence; those who argue for a Hellenistic date (c. 330–180 BCE) argue that it shows internal evidence of Greek thought and social setting.
Also unresolved is whether the author and narrator of Kohelet are identical. Ecclesiastes regularly switches between third-person quotations of Kohelet and first-person reflections on Kohelet's words, which would indicate the book was written as a commentary on Kohelet's parables rather than a personally-authored repository of his sayings. Some scholars have argued that the third-person narrative structure is an artificial literary device along the lines of Uncle Remus, although the description of the Kohelet in 12:8–14 seems to favour a historical person whose thoughts are presented by the narrator. It has been argued, however, that the question has no theological importance; one scholar (Roland Murphy) has commented that Kohelet himself would have regarded the time and ingenuity put into interpreting his book as "one more example of the futility of human effort".
Ecclesiastes has taken its literary form from the Middle Eastern tradition of the fictional autobiography, in which a character, often a king, relates his experiences and draws lessons from them, often self-critical: Kohelet likewise identifies himself as a king, speaks of his search for wisdom, relates his conclusions, and recognises his limitations. The book belongs to the category of wisdom literature, the body of biblical writings which give advice on life, together with reflections on its problems and meanings—other examples include the Book of Job, Proverbs, and some of the Psalms. Ecclesiastes differs from the other biblical Wisdom books in being deeply skeptical of the usefulness of wisdom itself. Ecclesiastes in turn influenced the deuterocanonical works, Wisdom of Solomon and Sirach, both of which contain vocal rejections of the Ecclesiastical philosophy of futility.
Wisdom was a popular genre in the ancient world, where it was cultivated in scribal circles and directed towards young men who would take up careers in high officialdom and royal courts; there is strong evidence that some of these books, or at least sayings and teachings, were translated into Hebrew and influenced the Book of Proverbs, and the author of Ecclesiastes was probably familiar with examples from Egypt and Mesopotamia. He may also have been influenced by Greek philosophy, specifically the schools of Stoicism, which held that all things are fated, and Epicureanism, which held that happiness was best pursued through the quiet cultivation of life's simpler pleasures.
The presence of Ecclesiastes in the Bible is something of a puzzle, as the common themes of the Hebrew canon—a God who reveals and redeems, who elects and cares for a chosen people—are absent from it, which suggests that Kohelet had lost his faith in his old age. Understanding the book was a topic of the earliest recorded discussions (the hypothetical Council of Jamnia in the 1st century CE). One argument advanced at that time was that the name of Solomon carried enough authority to ensure its inclusion; however, other works which appeared with Solomon's name were excluded despite being more orthodox than Ecclesiastes. Another was that the words of the epilogue, in which the reader is told to fear God and keep his commands, made it orthodox; but all later attempts to find anything in the rest of the book that would reflect this orthodoxy have failed. A modern suggestion treats the book as a dialogue in which different statements belong to different voices, with Kohelet himself answering and refuting unorthodox opinions, but there are no explicit markers for this in the book, as there are (for example) in the Book of Job.
Yet another suggestion is that Ecclesiastes is simply the most extreme example of a tradition of skepticism, but none of the proposed examples match Ecclesiastes for a sustained denial of faith and doubt in the goodness of God. Martin A. Shields, in his 2006 book The End of Wisdom: A Reappraisal of the Historical and Canonical Function of Ecclesiastes, summarized that "In short, we do not know why or how this book found its way into such esteemed company".
Scholars disagree about the themes of Ecclesiastes: whether it is positive and life-affirming, or deeply pessimistic; whether it is coherent or incoherent, insightful or confused, orthodox or heterodox; whether the ultimate message of the book is to copy Kohelet, "the wise man," or to avoid his errors. At times, Kohelet raises deep questions; he "doubted every aspect of religion, from the very ideal of righteousness, to the by now traditional idea of divine justice for individuals". Some passages of Ecclesiastes seem to contradict other portions of the Hebrew Bible, and even itself. The Talmud even suggests that the rabbis considered censoring Ecclesiastes due to its seeming contradictions. One suggestion for resolving the contradictions is to read the book as the record of Kohelet's quest for knowledge: opposing judgments (e.g., "the dead are better off than the living" (4:2) vs. "a living dog is better off than a dead lion" (9:4)) are therefore provisional, and it is only at the conclusion that the verdict is delivered (11–12:7). On this reading, Kohelet's sayings are goads, designed to provoke dialogue and reflection in his readers, rather than to reach premature and self-assured conclusions.
The subjects of Ecclesiastes are the pain and frustration engendered by observing and meditating on the distortions and inequities pervading the world, the uselessness of human ambition, and the limitations of worldly wisdom and righteousness. The phrase "under the sun" appears twenty-nine times in connection with these observations; all this coexists with a firm belief in God, whose power, justice and unpredictability are sovereign. History and nature move in cycles, so that all events are predictable and unchangeable, and life, under the Sun, has no meaning or purpose: the wise man and the man who does not study wisdom will both die and be forgotten: man should be reverent (i.e., fear God), but in this life it is best to simply enjoy God's gifts.
In Judaism, Ecclesiastes is read either on Shemini Atzeret (by Yemenites, Italians, some Sephardim, and the mediaeval French Jewish rite) or on the Shabbat of the intermediate days of Sukkot (by Ashkenazim). If there is no intermediate Shabbat of Sukkot, Ashkenazim too read it on Shemini Atzeret (or, in Israel, on the first Shabbat of Sukkot). It is read on Sukkot as a reminder to not get too caught up in the festivities of the holiday and to carry over the happiness of Sukkot to the rest of the year by telling the listeners that, without God, life is meaningless.
The final poem of Kohelet has been interpreted in the Targum, Talmud and Midrash, and by the rabbis Rashi, Rashbam and ibn Ezra, as an allegory of old age.
Ecclesiastes has been cited in the writings of past and current Catholic Church leaders. For example, Doctors of the Church have cited Ecclesiastes. Augustine of Hippo cited Ecclesiastes in Book XX of City of God. Jerome wrote a commentary on Ecclesiastes. Thomas Aquinas cited Ecclesiastes ("The number of fools is infinite.") in his Summa Theologica.
The 20th-century Catholic theologian and cardinal-elect Hans Urs von Balthasar discussed Ecclesiastes in his work on theological aesthetics, The Glory of the Lord. He describes Qoheleth as "a critical transcendentalist avant la lettre", whose God is distant from the world, and whose kairos is a "form of time which is itself empty of meaning". For Balthasar, the role of Ecclesiastes in the Biblical canon is to represent the "final dance on the part of wisdom, [the] conclusion of the ways of man", a logical end-point to the unfolding of human wisdom in the Old Testament that paves the way for the advent of the New.
The book continues to be cited by recent popes, including Pope John Paul II and Pope Francis. Pope John Paul II, in his general audience of October 20, 2004, called the author of Ecclesiastes "an ancient biblical sage" whose description of death "makes frantic clinging to earthly things completely pointless". Pope Francis cited Ecclesiastes in his address on September 9, 2014. Speaking of vain people, he said, "How many Christians live for appearances? Their life seems like a soap bubble."
Ecclesiastes has had a deep influence on Western literature. It contains several phrases that have resonated in British and American culture, such as "eat, drink and be merry", "nothing new under the sun", "a time to be born and a time to die", and "vanity of vanities; all is vanity". American novelist Thomas Wolfe wrote: "[O]f all I have ever seen or learned, that book seems to me the noblest, the wisest, and the most powerful expression of man's life upon this earth—and also the highest flower of poetry, eloquence, and truth. I am not given to dogmatic judgments in the matter of literary creation, but if I had to make one I could say that Ecclesiastes is the greatest single piece of writing I have ever known, and the wisdom expressed in it the most lasting and profound." | [
{
"paragraph_id": 0,
"text": "Ecclesiastes (/ɪˌkliːziˈæstiːz/ ih-KLEE-zee-ASS-teez; Biblical Hebrew: קֹהֶלֶת, romanized: Qōheleṯ, Ancient Greek: Ἐκκλησιαστής, romanized: Ekklēsiastēs) is one of the Ketuvim (\"Writings\") of the Hebrew Bible and part of the Wisdom literature of the Christian Old Testament. The title commonly used in English is a Latin transliteration of the Greek translation of the Hebrew word קֹהֶלֶת (Kohelet, Koheleth, Qoheleth or Qohelet). An unnamed author introduces \"The words of Kohelet, son of David, king in Jerusalem\" (1:1) and does not use his own voice again until the final verses (12:9–14), where he gives his own thoughts and summarises the statements of Kohelet; the main body of the text is ascribed to Kohelet himself.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Kohelet proclaims (1:2) \"Vanity of vanities! All is futile!\"; the Hebrew word hevel, \"vapor\" or \"breath\", can figuratively mean \"insubstantial\", \"vain\", \"futile\", or \"meaningless\". Given this, the next verse presents the basic existential question with which the rest of the book is concerned: \"What profit hath a man for all his toil, in which he toils under the sun?\", expressing that the lives of both wise and foolish people all end in death. In light of this perceived meaninglessness, he suggests that human beings should enjoy the simple pleasures of daily life, such as eating, drinking, and taking enjoyment in one's work, which are gifts from the hand of God. The book concludes with the injunction to \"Fear God and keep his commandments; for that is the duty of all of mankind. Since every deed will God bring to judgment, for every hidden act, be it good or evil.\"",
"title": ""
},
{
"paragraph_id": 2,
"text": "According to rabbinic tradition the book was written by King Solomon (reigned c. 970–931 BCE) in his old age, but the presence of Persian loanwords and Aramaisms point to a date no earlier than about 450 BCE, while the latest possible date for its composition is 180 BCE.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Ecclesiastes is a phonetic transliteration of the Greek word Ἐκκλησιαστής (Ekklesiastes), which in the Septuagint translates the Hebrew name of its stated author, Kohelet (קֹהֶלֶת). The Greek word derives from ekklesia (assembly), as the Hebrew word derives from kahal (assembly), but while the Greek word means 'member of an assembly', the meaning of the original Hebrew word it translates is less certain. As Strong's concordance mentions, it is a female active participle of the verb kahal in its simple (qal) paradigm, a form not used elsewhere in the Bible and which is sometimes understood as active or passive depending on the verb, so that Kohelet would mean '(female) assembler' in the active case (recorded as such by Strong's concordance), and '(female) assembled, member of an assembly' in the passive case (as per the Septuagint translators). According to the majority understanding today, the word is a more general (mishkal, קוֹטֶלֶת) form rather than a literal participle, and the intended meaning of Kohelet in the text is 'someone speaking before an assembly', hence 'Teacher' or 'Preacher' (this was also the position of the Midrash and of Jerome).",
"title": "Title"
},
{
"paragraph_id": 4,
"text": "Commentators struggle to explain why a man was given an apparently feminine name. According to Isaiah di Trani (also adopted by Simonis), \"He authored this work in his old age, when he was weak like a woman, and therefore he received a feminine name\". According to Solomon b. Jeroham (also Lorinus, Zirkel), \"This is because, even as a woman births and raises children, Qoheleth revealed and organized wisdom\". According to Yefet b. Ali (also adopted by Abraham ibn Ezra and Joseph Ibn Kaspi), \"He ascribed this activity to his wisdom, and because Wisdom is female, he used a feminine name\". This last opinion is accepted by a wide variety of modern scholars, including C. D. Ginsburg.",
"title": "Title"
},
{
"paragraph_id": 5,
"text": "Ecclesiastes is presented as the biography of \"Kohelet\" or \"Qoheleth\"; his story is framed by the voice of the narrator, who refers to Kohelet in the third person, praises his wisdom, but reminds the reader that wisdom has its limitations and is not man's main concern. Kohelet reports what he planned, did, experienced and thought, but his journey to knowledge is, in the end, incomplete; the reader is not only to hear Kohelet's wisdom, but to observe his journey towards understanding and acceptance of life's frustrations and uncertainties: the journey itself is important.",
"title": "Structure"
},
{
"paragraph_id": 6,
"text": "The Jerusalem Bible divides the book into two parts, part one comprising Ecclesiastes 1:4-6:12, part two consisting of chapters 7 to 12, each commencing with a separate prologue.",
"title": "Structure"
},
{
"paragraph_id": 7,
"text": "Few of the many attempts to uncover an underlying structure to Ecclesiastes have met with widespread acceptance; among them, the following is one of the more influential:",
"title": "Structure"
},
{
"paragraph_id": 8,
"text": "Despite the acceptance by some of this structure, there have been many criticisms, such as that of Fox: \"[Addison G. Wright's] proposed structure has no more effect on interpretation than a ghost in the attic. A literary or rhetorical structure should not merely 'be there'; it must do something. It should guide readers in recognizing and remembering the author's train of thought.\"",
"title": "Structure"
},
{
"paragraph_id": 9,
"text": "Verse 1:1 is a superscription, the ancient equivalent of a title page: it introduces the book as \"the words of Kohelet, son of David, king in Jerusalem.\"",
"title": "Structure"
},
{
"paragraph_id": 10,
"text": "Most, though not all, modern commentators regard the epilogue (12:9–14) as an addition by a later scribe. Some have identified certain other statements as further additions intended to make the book more religiously orthodox (e.g., the affirmations of God's justice and the need for piety).",
"title": "Structure"
},
{
"paragraph_id": 11,
"text": "It has been proposed that the text is composed of three distinct voices. The first belongs to Qoheleth as the prophet, the \"true voice of wisdom\", which speaks in the first person, recounting wisdom through his own experience. The second voice belongs to Qoheleth as the king of Jerusalem, who is more didactic and thus speaks primarily in second-person imperative statements. The third voice is that of the epilogist, who speaks proverbially in the third person. The epilogist is most identified in the book's first and final verses. Kyle R. Greenwood suggests that following this structure, Ecclesiastes should be read as a dialogue between these voices.",
"title": "Structure"
},
{
"paragraph_id": 12,
"text": "The ten-verse introduction in verses 1:2–11 are the words of the frame narrator; they set the mood for what is to follow. Kohelet's message is that all is meaningless. This distinction first appeared in the commentaries of Samuel ibn Tibbon (d. 1230) and Aaron ben Joseph of Constantinople (d. 1320).",
"title": "Summary"
},
{
"paragraph_id": 13,
"text": "After the introduction come the words of Kohelet. As king, he has experienced everything and done everything, but concludes that nothing is ultimately reliable, as death levels all. Kohelet states that the only good is to partake of life in the present, for enjoyment is from the hand of God. Everything is ordered in time and people are subject to time in contrast to God's eternal character. The world is filled with injustice, which only God will adjudicate. God and humans do not belong in the same realm, and it is therefore necessary to have a right attitude before God. People should enjoy, but should not be greedy; no one knows what is good for humanity; righteousness and wisdom escape humanity. Kohelet reflects on the limits of human power: all people face death, and death is better than life, but people should enjoy life when they can, for a time may come when no one can. The world is full of risk: he gives advice on living with risk, both political and economic. Kohelet's words finish with imagery of nature languishing and humanity marching to the grave.",
"title": "Summary"
},
{
"paragraph_id": 14,
"text": "The frame narrator returns with an epilogue: the words of the wise are hard, but they are applied as the shepherd applies goads and pricks to his flock. The ending of the book sums up its message: \"Fear God and keep his commandments for God will bring every deed to judgment.\" Some scholars suggest 12:13–14 were an addition by a more orthodox author than the original writer (that the epilogue was added later was first proposed by Samuel ibn Tibbon); others think it is likely the work of the original author.",
"title": "Summary"
},
{
"paragraph_id": 15,
"text": "The book takes its name from the Greek ekklesiastes, a translation of the title by which the central figure refers to himself: \"Kohelet\", meaning something like \"one who convenes or addresses an assembly\". According to rabbinic tradition, Ecclesiastes was written by King Solomon in his old age (an alternative tradition that \"Hezekiah and his colleagues wrote Isaiah, Proverbs, the Song of Songs and Ecclesiastes\" probably means simply that the book was edited under Hezekiah), but critical scholars have long rejected the idea of a pre-exilic origin. According to Christian tradition, the book was probably written by another Solomon (Gregory of Nyssa wrote that it was written by another Solomon; Didymus the Blind wrote that it was probably written by several authors). The presence of Persian loanwords and numerous Aramaisms points to a date no earlier than about 450 BCE, while the latest possible date according to those claims for its composition is 180 BCE, when the Jewish writer Ben Sira quotes from it. The dispute as to whether Ecclesiastes belongs to the Persian or the Hellenistic periods (i.e., the earlier or later part of this period) revolves around the degree of Hellenization (influence of Greek culture and thought) present in the book. Scholars arguing for a Persian date (c. 450–330 BCE) hold that there is a complete lack of Greek influence; those who argue for a Hellenistic date (c. 330–180 BCE) argue that it shows internal evidence of Greek thought and social setting.",
"title": "Composition"
},
{
"paragraph_id": 16,
"text": "Also unresolved is whether the author and narrator of Kohelet are identical. Ecclesiastes regularly switches between third-person quotations of Kohelet and first-person reflections on Kohelet's words, which would indicate the book was written as a commentary on Kohelet's parables rather than a personally-authored repository of his sayings. Some scholars have argued that the third-person narrative structure is an artificial literary device along the lines of Uncle Remus, although the description of the Kohelet in 12:8–14 seems to favour a historical person whose thoughts are presented by the narrator. It has been argued, however, that the question has no theological importance; one scholar (Roland Murphy) has commented that Kohelet himself would have regarded the time and ingenuity put into interpreting his book as \"one more example of the futility of human effort\".",
"title": "Composition"
},
{
"paragraph_id": 17,
"text": "Ecclesiastes has taken its literary form from the Middle Eastern tradition of the fictional autobiography, in which a character, often a king, relates his experiences and draws lessons from them, often self-critical: Kohelet likewise identifies himself as a king, speaks of his search for wisdom, relates his conclusions, and recognises his limitations. The book belongs to the category of wisdom literature, the body of biblical writings which give advice on life, together with reflections on its problems and meanings—other examples include the Book of Job, Proverbs, and some of the Psalms. Ecclesiastes differs from the other biblical Wisdom books in being deeply skeptical of the usefulness of wisdom itself. Ecclesiastes in turn influenced the deuterocanonical works, Wisdom of Solomon and Sirach, both of which contain vocal rejections of the Ecclesiastical philosophy of futility.",
"title": "Composition"
},
{
"paragraph_id": 18,
"text": "Wisdom was a popular genre in the ancient world, where it was cultivated in scribal circles and directed towards young men who would take up careers in high officialdom and royal courts; there is strong evidence that some of these books, or at least sayings and teachings, were translated into Hebrew and influenced the Book of Proverbs, and the author of Ecclesiastes was probably familiar with examples from Egypt and Mesopotamia. He may also have been influenced by Greek philosophy, specifically the schools of Stoicism, which held that all things are fated, and Epicureanism, which held that happiness was best pursued through the quiet cultivation of life's simpler pleasures.",
"title": "Composition"
},
{
"paragraph_id": 19,
"text": "The presence of Ecclesiastes in the Bible is something of a puzzle, as the common themes of the Hebrew canon—a God who reveals and redeems, who elects and cares for a chosen people—are absent from it, which suggests that Kohelet had lost his faith in his old age. Understanding the book was a topic of the earliest recorded discussions (the hypothetical Council of Jamnia in the 1st century CE). One argument advanced at that time was that the name of Solomon carried enough authority to ensure its inclusion; however, other works which appeared with Solomon's name were excluded despite being more orthodox than Ecclesiastes. Another was that the words of the epilogue, in which the reader is told to fear God and keep his commands, made it orthodox; but all later attempts to find anything in the rest of the book that would reflect this orthodoxy have failed. A modern suggestion treats the book as a dialogue in which different statements belong to different voices, with Kohelet himself answering and refuting unorthodox opinions, but there are no explicit markers for this in the book, as there are (for example) in the Book of Job.",
"title": "Composition"
},
{
"paragraph_id": 20,
"text": "Yet another suggestion is that Ecclesiastes is simply the most extreme example of a tradition of skepticism, but none of the proposed examples match Ecclesiastes for a sustained denial of faith and doubt in the goodness of God. Martin A. Shields, in his 2006 book The End of Wisdom: A Reappraisal of the Historical and Canonical Function of Ecclesiastes, summarized that \"In short, we do not know why or how this book found its way into such esteemed company\".",
"title": "Composition"
},
{
"paragraph_id": 21,
"text": "Scholars disagree about the themes of Ecclesiastes: whether it is positive and life-affirming, or deeply pessimistic; whether it is coherent or incoherent, insightful or confused, orthodox or heterodox; whether the ultimate message of the book is to copy Kohelet, \"the wise man,\" or to avoid his errors. At times, Kohelet raises deep questions; he \"doubted every aspect of religion, from the very ideal of righteousness, to the by now traditional idea of divine justice for individuals\". Some passages of Ecclesiastes seem to contradict other portions of the Hebrew Bible, and even itself. The Talmud even suggests that the rabbis considered censoring Ecclesiastes due to its seeming contradictions. One suggestion for resolving the contradictions is to read the book as the record of Kohelet's quest for knowledge: opposing judgments (e.g., \"the dead are better off than the living\" (4:2) vs. \"a living dog is better off than a dead lion\" (9:4)) are therefore provisional, and it is only at the conclusion that the verdict is delivered (11–12:7). On this reading, Kohelet's sayings are goads, designed to provoke dialogue and reflection in his readers, rather than to reach premature and self-assured conclusions.",
"title": "Themes"
},
{
"paragraph_id": 22,
"text": "The subjects of Ecclesiastes are the pain and frustration engendered by observing and meditating on the distortions and inequities pervading the world, the uselessness of human ambition, and the limitations of worldly wisdom and righteousness. The phrase \"under the sun\" appears twenty-nine times in connection with these observations; all this coexists with a firm belief in God, whose power, justice and unpredictability are sovereign. History and nature move in cycles, so that all events are predictable and unchangeable, and life, without the Sun, has no meaning or purpose: the wise man and the man who does not study wisdom will both die and be forgotten: man should be reverent (i.e., fear God), but in this life it is best to simply enjoy God's gifts.",
"title": "Themes"
},
{
"paragraph_id": 23,
"text": "In Judaism, Ecclesiastes is read either on Shemini Atzeret (by Yemenites, Italians, some Sephardim, and the mediaeval French Jewish rite) or on the Shabbat of the intermediate days of Sukkot (by Ashkenazim). If there is no intermediate Shabbat of Sukkot, Ashkenazim too read it on Shemini Atzeret (or, in Israel, on the first Shabbat of Sukkot). It is read on Sukkot as a reminder to not get too caught up in the festivities of the holiday and to carry over the happiness of Sukkot to the rest of the year by telling the listeners that, without God, life is meaningless.",
"title": "Usage"
},
{
"paragraph_id": 24,
"text": "The final poem of Kohelet has been interpreted in the Targum, Talmud and Midrash, and by the rabbis Rashi, Rashbam and ibn Ezra, as an allegory of old age.",
"title": "Usage"
},
{
"paragraph_id": 25,
"text": "Ecclesiastes has been cited in the writings of past and current Catholic Church leaders. For example, Doctors of the Church have cited Ecclesiastes. Augustine of Hippo cited Ecclesiastes in Book XX of City of God. Jerome wrote a commentary on Ecclesiastes. Thomas Aquinas cited Ecclesiastes (\"The number of fools is infinite.\") in his Summa Theologica.",
"title": "Usage"
},
{
"paragraph_id": 26,
"text": "The 20th-century Catholic theologian and cardinal-elect Hans Urs von Balthasar discussed Ecclesiastes in his work on theological aesthetics, The Glory of the Lord. He describes Qoheleth as \"a critical transcendentalist avant la lettre\", whose God is distant from the world, and whose kairos is a \"form of time which is itself empty of meaning\". For Balthasar, the role of Ecclesiastes in the Biblical canon is to represent the \"final dance on the part of wisdom, [the] conclusion of the ways of man\", a logical end-point to the unfolding of human wisdom in the Old Testament that paves the way for the advent of the New.",
"title": "Usage"
},
{
"paragraph_id": 27,
"text": "The book continues to be cited by recent popes, including Pope John Paul II and Pope Francis. Pope John Paul II, in his general audience of October 20, 2004, called the author of Ecclesiastes \"an ancient biblical sage\" whose description of death \"makes frantic clinging to earthly things completely pointless\". Pope Francis cited Ecclesiastes in his address on September 9, 2014. Speaking of vain people, he said, \"How many Christians live for appearances? Their life seems like a soap bubble.\"",
"title": "Usage"
},
{
"paragraph_id": 28,
"text": "Ecclesiastes has had a deep influence on Western literature. It contains several phrases that have resonated in British and American culture, such as \"eat, drink and be merry\", \"nothing new under the sun\", \"a time to be born and a time to die\", and \"vanity of vanities; all is vanity\". American novelist Thomas Wolfe wrote: \"[O]f all I have ever seen or learned, that book seems to me the noblest, the wisest, and the most powerful expression of man's life upon this earth—and also the highest flower of poetry, eloquence, and truth. I am not given to dogmatic judgments in the matter of literary creation, but if I had to make one I could say that Ecclesiastes is the greatest single piece of writing I have ever known, and the wisdom expressed in it the most lasting and profound.\"",
"title": "Influence on Western literature"
}
]
| Ecclesiastes is one of the Ketuvim ("Writings") of the Hebrew Bible and part of the Wisdom literature of the Christian Old Testament. The title commonly used in English is a Latin transliteration of the Greek translation of the Hebrew word קֹהֶלֶת. An unnamed author introduces "The words of Kohelet, son of David, king in Jerusalem" (1:1) and does not use his own voice again until the final verses (12:9–14), where he gives his own thoughts and summarises the statements of Kohelet; the main body of the text is ascribed to Kohelet himself. Kohelet proclaims (1:2) "Vanity of vanities! All is futile!"; the Hebrew word hevel, "vapor" or "breath", can figuratively mean "insubstantial", "vain", "futile", or "meaningless". Given this, the next verse presents the basic existential question with which the rest of the book is concerned: "What profit hath a man for all his toil, in which he toils under the sun?", expressing that the lives of both wise and foolish people all end in death. In light of this perceived meaninglessness, he suggests that human beings should enjoy the simple pleasures of daily life, such as eating, drinking, and taking enjoyment in one's work, which are gifts from the hand of God. The book concludes with the injunction to "Fear God and keep his commandments; for that is the duty of all of mankind. Since every deed will God bring to judgment, for every hidden act, be it good or evil." According to rabbinic tradition the book was written by King Solomon in his old age, but the presence of Persian loanwords and Aramaisms point to a date no earlier than about 450 BCE, while the latest possible date for its composition is 180 BCE. | 2001-10-08T05:33:23Z | 2023-12-28T22:07:12Z | [
"Template:Hatgrp",
"Template:Tanakh OT",
"Template:Lang",
"Template:Sfn",
"Template:Cite journal",
"Template:S-ttl",
"Template:Transliteration",
"Template:Librivox book",
"Template:S-bef",
"Template:Wikiquote",
"Template:S-end",
"Template:Books of the Bible",
"Template:Cite book",
"Template:Bibleverse",
"Template:Short description",
"Template:Lang-hbo",
"Template:Efn",
"Template:More citations needed",
"Template:Div col end",
"Template:Cite web",
"Template:Wikisource",
"Template:Div col",
"Template:ISBN",
"Template:S-start",
"Template:Sukkot",
"Template:Solomon",
"Template:Authority control",
"Template:IPAc-en",
"Template:Respell",
"Template:Lang-grc",
"Template:Who",
"Template:S-aft",
"Template:Circa",
"Template:Commons category",
"Template:Em",
"Template:Notelist",
"Template:Reflist",
"Template:S-hou",
"Template:Ecclesiastes"
]
| https://en.wikipedia.org/wiki/Ecclesiastes |
9,911 | Ezekiel | Ezekiel or Ezechiel (/ɪˈziːkiəl/; Hebrew: יְחֶזְקֵאל Yəḥezqēʾl [jə.ħɛzˈqeːl]; in the Septuagint written in Koinē Greek: Ἰεζεκιήλ Iezekiḗl [i.ɛ.zɛ.kiˈel]) is the central protagonist of the Book of Ezekiel in the Hebrew Bible.
In Judaism, Christianity, and Islam, Ezekiel is acknowledged as a Hebrew prophet. In Judaism and Christianity, he is also viewed as the 6th-century BCE author of the Book of Ezekiel, which includes prophecies about the destruction of Jerusalem and the Jews' restoration to the land of Israel.
The name Ezekiel means "God is strong" or "God strengthens".
The author of the Book of Ezekiel presents himself as Ezekiel, the son of Buzi, born into a priestly (kohen) lineage. Apart from identifying himself, the author gives a date for the first divine encounter which he presents: "in the thirtieth year". Ezekiel describes his calling to be a prophet by going into great detail about his encounter with God and four "living creatures" with four wheels that stayed beside the creatures.
According to the Bible, Ezekiel and his wife lived during the Babylonian captivity on the banks of the Kebar Canal in Tel Abib near Nippur with other exiles from the Kingdom of Judah. There is no mention of him having any offspring.
Ezekiel's "thirtieth year" is given as the fifth year of the exile of Judah's king Jehoiachin by the Babylonians, counting the years after the exile in 598 BCE, that is from 597 to 593 BCE. The last recorded prophecy of Ezekiel dates to April 571 BCE, sixteen years after the destruction of Jerusalem in 587 BCE. On the basis of dates given in the Book of Ezekiel, his span of prophecies can be calculated to have occurred over the course of about 22 years, starting in 593 BCE.
The Aramaic Targum on Ezekiel 1:1 and the 2nd-century rabbinic work Seder Olam Rabba (chapter 26) both say that Ezekiel's vision came "in the thirtieth year after Josiah was presented with a Book of the Law discovered in the Temple", the latter taking place about the time of Josiah's reforms in 622 BCE, shortly after the call of Jeremiah to prophetic ministry around 626 BCE. If the "thirtieth year" of Ezekiel 1:1 instead refers to Ezekiel's age, then he was born around 622 BCE and was fifty years old when he had his final vision.
According to Jewish tradition, Ezekiel did not write his own book, the Book of Ezekiel, but rather his prophecies were collected and written by the Great Assembly.
Ezekiel, like Jeremiah, is said by Talmud and Midrash to have been a descendant of Joshua by his marriage with the proselyte and former prostitute Rahab. Some statements found in rabbinic literature posit that Ezekiel was the son of Jeremiah, who was (also) called "Buzi" because he was despised by the Jews.
Ezekiel was said to be already active as a prophet while in the Land of Israel, and he retained this gift when he was exiled with Jehoiachin and the nobles of the country to Babylon. Josephus claims that Nebuchadnezzar of Babylonia's armies exiled three thousand people from Judah, after deposing King Jehoiachin in 598 BCE.
Rava states in the Babylonian Talmud that although Ezekiel describes the appearance of the throne of God (merkabah), this is not because he had seen more than the prophet Isaiah, but rather because the latter was more accustomed to such visions; for the relation of the two prophets is that of a courtier to a peasant, the latter of whom would always describe a royal court more floridly than the former, to whom such things would be familiar. Ezekiel, like all the other prophets, has beheld only a blurred reflection of the divine majesty, just as a poor mirror reflects objects only imperfectly.
According to the midrash Shir HaShirim Rabbah, it was Ezekiel whom the three pious men, Hananiah, Mishael, and Azariah (also called Shadrach, Meshach, and Abednego) asked for advice as to whether they should resist Nebuchadnezzar's command and choose death by fire rather than worship his idol.
At first God revealed to the prophet that they could not hope for a miraculous rescue; whereupon the prophet was greatly grieved, since these three men constituted the "remnant of Judah". But after they had left the house of the prophet, fully determined to sacrifice their lives to God, Ezekiel received this revelation:
When they went out from before Ezekiel, the Holy One blessed be He revealed Himself and said: 'Ezekiel, what do you think, that I will not stand by them? I will certainly stand by them.' That is what is written: "So said the Lord God: Concerning this too, I will acquiesce to the house of Israel" (Ezekiel 36:37). 'But leave them and do not say anything to them. I will leave them to proceed unsuspecting.'
Ezekiel is commemorated as a saint in the liturgical calendar of the Eastern Orthodox Church—and those Eastern Catholic Churches which follow the Byzantine Rite—on July 21 (for those churches which use the traditional Julian Calendar, July 21 currently falls on August 3 of the modern Gregorian Calendar). Ezekiel is commemorated on August 28 on the Calendar of Saints of the Armenian Apostolic Church, and on April 10 in the Roman Martyrology.
Certain Lutheran churches also celebrate his commemoration on July 21.
Saint Bonaventure interpreted Ezekiel's statement about the "closed gate" as a prophecy of the Incarnation: the "gate" signifying the Virgin Mary and the "prince" referring to Jesus. This is one of the readings at Vespers on Great Feasts of the Theotokos in the Eastern Orthodox and Byzantine Catholic Churches. This imagery is also found in the traditional Catholic Christmas hymn "Gaudete" and in a saying by Bonaventure, quoted by Alphonsus Maria de' Liguori: "No one can enter Heaven unless by Mary, as though through a door." The imagery provides the basis for the concept that God gave Mary to humanity as the "Gate of Heaven" (thence the dedication of churches and convents to the Porta Coeli), an idea also laid out in the Salve Regina (Hail Holy Queen) prayer.
John B. Taylor credits the subject with imparting the Biblical understanding of the nature of God.
Ezekiel (Arabic: حزقيال; "Ḥazqiyāl") is recognized as a prophet in Islamic tradition. Although not mentioned by name in the Quran, Muslim scholars, both classical and modern, have included Ezekiel in lists of the prophets of Islam.
The Quran mentions a prophet called Dhū al-Kifl (ذو الكفل). Although Dhu al-Kifl's identity is disputed, he is often identified with Ezekiel. Carsten Niebuhr, in his Reisebeschreibung nach Arabien, says he visited Al Kifl in Iraq, midway between Najaf and Hilla, and said Kifl was the Arabic form of Ezekiel. He further explained in his book that Ezekiel's Tomb was present in Al Kifl and that the Jews came to it on pilgrimage. The name "Dhu al-Kifl" means "Possessor of the Double" or "Possessor of the Fold" (ذو dhū "possessor of, owner of" and الكفل al-kifl "double, folded"). Some Islamic scholars have likened Ezekiel's mission to the description of Dhu al-Kifl. During the exile, the monarchy and state were annihilated, and political and national life were no longer possible. In the absence of a worldly foundation, it became necessary to build a spiritual one, and Ezekiel performed this mission by observing the signs of the time and deducing his doctrines from them. In conformity with the two parts of his book, his personality and his preaching are alike twofold.
Regardless of the identification of Dhu al-Kifl with Ezekiel, Muslims have viewed Ezekiel as a prophet. Ezekiel appears in all collections of Stories of the Prophets. Muslim exegesis further lists Ezekiel's father as Buzi (Budhi) and Ezekiel is given the title ibn al-‘ajūz, denoting "son of the old (woman)", as his parents are supposed to have been very old when he was born. A tradition, which resembles that of Hannah and Samuel in the Hebrew Bible, states that Ezekiel's mother prayed to God in old age for the birth of an offspring and was given Ezekiel as a gift from God.
The tomb of Ezekiel is a structure within the Al-Nukhailah Mosque complex, located at modern-day south Iraq near Kefil, believed to be the final resting place of Ezekiel. It has been a place of pilgrimage to both Muslims and Jews alike. After the Jewish exodus from Iraq, Jewish activity in the tomb decreased, although a disused synagogue remains in place.
A tomb in the Ergani district of Diyarbakır Province, Turkey, is also believed to be the resting place of the prophet Ezekiel. It is located 5 km from the city centre on a hill called Makam Dağı, which is revered and visited by the local Muslims.
Ezekiel is portrayed by Darrell Dunham in a 1979 episode of the television series Our Jewish Roots (1978–). | [
{
"paragraph_id": 0,
"text": "Ezekiel or Ezechiel (/ɪˈziːkiəl/; Hebrew: יְחֶזְקֵאל Yəḥezqēʾl [jə.ħɛzˈqeːl]; in the Septuagint written in Koinē Greek: Ἰεζεκιήλ Iezekiḗl [i.ɛ.zɛ.kiˈel]) is the central protagonist of the Book of Ezekiel in the Hebrew Bible.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In Judaism, Christianity, and Islam, Ezekiel is acknowledged as a Hebrew prophet. In Judaism and Christianity, he is also viewed as the 6th-century BCE author of the Book of Ezekiel, which includes prophecies about the destruction of Jerusalem and the Jews' restoration to the land of Israel.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The name Ezekiel means \"God is strong\" or \"God strengthens\".",
"title": ""
},
{
"paragraph_id": 3,
"text": "The author of the Book of Ezekiel presents himself as Ezekiel, the son of Buzi, born into a priestly (kohen) lineage. Apart from identifying himself, the author gives a date for the first divine encounter which he presents: \"in the thirtieth year\". Ezekiel describes his calling to be a prophet by going into great detail about his encounter with God and four \"living creatures\" with four wheels that stayed beside the creatures.",
"title": "In the Bible"
},
{
"paragraph_id": 4,
"text": "According to the Bible, Ezekiel and his wife lived during the Babylonian captivity on the banks of the Kebar Canal in Tel Abib near Nippur with other exiles from the Kingdom of Judah. There is no mention of him having any offspring.",
"title": "In the Bible"
},
{
"paragraph_id": 5,
"text": "Ezekiel's \"thirtieth year\" is given as the fifth year of the exile of Judah's king Jehoiachin by the Babylonians, counting the years after the exile in 598 BCE, that is from 597 to 593 BCE. The last recorded prophecy of Ezekiel dates to April 571 BCE, sixteen years after the destruction of Jerusalem in 587 BCE. On the basis of dates given in the Book of Ezekiel, his span of prophecies can be calculated to have occurred over the course of about 22 years, starting in 593 BCE.",
"title": "Chronology"
},
{
"paragraph_id": 6,
"text": "The Aramaic Targum on Ezekiel 1:1 and the 2nd-century rabbinic work Seder Olam Rabba (chapter 26) both say that Ezekiel's vision came \"in the thirtieth year after Josiah was presented with a Book of the Law discovered in the Temple\", the latter taking place about the time of Josiah's reforms in 622 BCE, shortly after the call of Jeremiah to prophetic ministry around 626 BCE. If the \"thirtieth year\" of Ezekiel 1:1 instead refers to Ezekiel's age, then he was born around 622 BCE and was fifty years old when he had his final vision.",
"title": "Chronology"
},
{
"paragraph_id": 7,
"text": "According to Jewish tradition, Ezekiel did not write his own book, the Book of Ezekiel, but rather his prophecies were collected and written by the Great Assembly.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 8,
"text": "Ezekiel, like Jeremiah, is said by Talmud and Midrash to have been a descendant of Joshua by his marriage with the proselyte and former prostitute Rahab. Some statements found in rabbinic literature posit that Ezekiel was the son of Jeremiah, who was (also) called \"Buzi\" because he was despised by the Jews.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 9,
"text": "Ezekiel was said to be already active as a prophet while in the Land of Israel, and he retained this gift when he was exiled with Jehoiachin and the nobles of the country to Babylon. Josephus claims that Nebuchadnezzar of Babylonia's armies exiled three thousand people from Judah, after deposing King Jehoiachin in 598 BCE.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 10,
"text": "Rava states in the Babylonian Talmud that although Ezekiel describes the appearance of the throne of God (merkabah), this is not because he had seen more than the prophet Isaiah, but rather because the latter was more accustomed to such visions; for the relation of the two prophets is that of a courtier to a peasant, the latter of whom would always describe a royal court more floridly than the former, to whom such things would be familiar. Ezekiel, like all the other prophets, has beheld only a blurred reflection of the divine majesty, just as a poor mirror reflects objects only imperfectly.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 11,
"text": "According to the midrash Shir HaShirim Rabbah, it was Ezekiel whom the three pious men, Hananiah, Mishael, and Azariah (also called Shadrach, Meshach, and Abednego) asked for advice as to whether they should resist Nebuchadnezzar's command and choose death by fire rather than worship his idol.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 12,
"text": "At first God revealed to the prophet that they could not hope for a miraculous rescue; whereupon the prophet was greatly grieved, since these three men constituted the \"remnant of Judah\". But after they had left the house of the prophet, fully determined to sacrifice their lives to God, Ezekiel received this revelation:",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 13,
"text": "When they went out from before Ezekiel, the Holy One blessed be He revealed Himself and said: 'Ezekiel, what do you think, that I will not stand by them? I will certainly stand by them.' That is what is written: \"So said the Lord God: Concerning this too, I will acquiesce to the house of Israel\" (Ezekiel 36:37). 'But leave them and do not say anything to them. I will leave them to proceed unsuspecting.'",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 14,
"text": "Ezekiel is commemorated as a saint in the liturgical calendar of the Eastern Orthodox Church—and those Eastern Catholic Churches which follow the Byzantine Rite—on July 21 (for those churches which use the traditional Julian Calendar, July 21 falls on August 5 of the modern Gregorian Calendar). Ezekiel is commemorated on August 28 on the Calendar of Saints of the Armenian Apostolic Church, and on April 10 in the Roman Martyrology.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 15,
"text": "Certain Lutheran churches also celebrate his commemoration on July 21.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 16,
"text": "Saint Bonaventure interpreted Ezekiel's statement about the \"closed gate\" as a prophecy of the Incarnation: the \"gate\" signifying the Virgin Mary and the \"prince\" referring to Jesus. This is one of the readings at Vespers on Great Feasts of the Theotokos in the Eastern Orthodox and Byzantine Catholic Churches. This imagery is also found in the traditional Catholic Christmas hymn \"Gaudete\" and in a saying by Bonaventure, quoted by Alphonsus Maria de' Liguori: \"No one can enter Heaven unless by Mary, as though through a door.\" The imagery provides the basis for the concept that God gave Mary to humanity as the \"Gate of Heaven\" (thence the dedication of churches and convents to the Porta Coeli), an idea also laid out in the Salve Regina (Hail Holy Queen) prayer.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 17,
"text": "John B. Taylor credits the subject with imparting the Biblical understanding of the nature of God.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 18,
"text": "Ezekiel (Arabic: حزقيال; \"Ḥazqiyāl\") is recognized as a prophet in Islamic tradition. Although not mentioned by name in the Quran, Muslim scholars, both classical and modern have included Ezekiel in lists of the prophets of Islam.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 19,
"text": "The Quran mentions a prophet called Dhū al-Kifl (ذو الكفل). Although Dhu al-Kifl's identity is disputed, he is often identified with Ezekiel. Carsten Niebuhr, in his Reisebeschreibung nach Arabian, says he visited Al Kifl in Iraq, midway between Najaf and Hilla and said Kifl was the Arabic form of Ezekiel. He further explained in his book that Ezekiel's Tomb was present in Al Kifl and that the Jews came to it on pilgrimage. The name \"Dhu al-Kifl\" means \"Possessor of the Double\" or \"Possesor of the Fold\" (ذو dhū \"possessor of, owner of\" and الكفل al-kifl \"double, folded\"). Some Islamic scholars have likened Ezekiel's mission to the description of Dhu al-Kifl. During the exile, the monarchy and state were annihilated, and political and national life were no longer possible. In the absence of a worldly foundation, it became necessary to build a spiritual one and Ezekiel performed this mission by observing the signs of the time and deducing his doctrines from them. In conformity with the two parts of his book, his personality and his preaching are alike twofold.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 20,
"text": "Regardless of the identification of Dhu al-Kifl with Ezekiel, Muslims have viewed Ezekiel as a prophet. Ezekiel appears in all collections of Stories of the Prophets. Muslim exegesis further lists Ezekiel's father as Buzi (Budhi) and Ezekiel is given the title ibn al-‘ajūz, denoting \"son of the old (man)\", as his parents are supposed to have been very old when he was born. A tradition, which resembles that of Hannah and Samuel in the Hebrew Bible, states that Ezekiel's mother prayed to God in old age for the birth of an offspring and was given Ezekiel as a gift from God.",
"title": "Extrabiblical traditions"
},
{
"paragraph_id": 21,
"text": "The tomb of Ezekiel is a structure within the Al-Nukhailah Mosque complex, located at modern-day south Iraq near Kefil, believed to be the final resting place of Ezekiel. It has been a place of pilgrimage to both Muslims and Jews alike. After the Jewish exodus from Iraq, Jewish activity in the tomb decreased, although a disused synagogue remains in place.",
"title": "Purported tombs"
},
{
"paragraph_id": 22,
"text": "A tomb in the Ergani district of Diyarbakır Province, Turkey, is also believed to be the resting place of prophet Ezekiel. It is located 5 km from the city centre on a hill, revered and visited by the local Muslims, called Makam Dağı.",
"title": "Purported tombs"
},
{
"paragraph_id": 23,
"text": "Ezekiel is portrayed by Darrell Dunham in a 1979 episode of the television series Our Jewish Roots (1978–).",
"title": "In popular culture"
}
]
| Ezekiel or Ezechiel is the central protagonist of the Book of Ezekiel in the Hebrew Bible. In Judaism, Christianity, and Islam, Ezekiel is acknowledged as a Hebrew prophet. In Judaism and Christianity, he is also viewed as the 6th-century BCE author of the Book of Ezekiel, which includes prophecies about the destruction of Jerusalem and the Jews' restoration to the land of Israel. The name Ezekiel means "God is strong" or "God strengthens". | 2002-02-25T15:43:11Z | 2023-12-06T05:17:56Z | [
"Template:Lang-he",
"Template:Citation",
"Template:Catholic saints",
"Template:Book of Ezekiel",
"Template:Circa",
"Template:Authority control",
"Template:About",
"Template:For",
"Template:Reflist",
"Template:Bibleref",
"Template:Cite web",
"Template:ISBN",
"Template:Prophets of the Tanakh",
"Template:Quote",
"Template:IPA-grc",
"Template:Citation needed",
"Template:Cite book",
"Template:Transliteration",
"Template:Lang",
"Template:Notelist",
"Template:IPAc-en",
"Template:Infobox saint",
"Template:Lang-grc-koi",
"Template:Efn",
"Template:Infobox person",
"Template:Cite journal",
"Template:Short description",
"Template:More citations needed",
"Template:Lang-ar",
"Template:CathEncy",
"Template:IPA-he",
"Template:Commons category-inline"
]
| https://en.wikipedia.org/wiki/Ezekiel |
9,914 | Executable and Linkable Format | In computing, the Executable and Linkable Format (ELF, formerly named Extensible Linking Format), is a common standard file format for executable files, object code, shared libraries, and core dumps. First published in the specification for the application binary interface (ABI) of the Unix operating system version named System V Release 4 (SVR4), and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999, it was chosen as the standard binary file format for Unix and Unix-like systems on x86 processors by the 86open project.
By design, the ELF format is flexible, extensible, and cross-platform. For instance, it supports different endiannesses and address sizes so it does not exclude any particular central processing unit (CPU) or instruction set architecture. This has allowed it to be adopted by many different operating systems on many different hardware platforms.
Each ELF file is made up of one ELF header, followed by file data. The data can include:
The segments contain information that is needed for run time execution of the file, while sections contain important data for linking and relocation. Any byte in the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section.
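To make the byte-ownership rule concrete, here is a minimal C sketch (an illustration only, assuming glibc's <elf.h> definitions, a 64-bit object that has a section header table, and an image already mapped into memory at base; the function name owner_of is hypothetical) that walks the section header table and reports which section, if any, owns a given file offset. Bytes matched by no section are the orphan bytes mentioned above.

#include <elf.h>
#include <stdio.h>

/* Illustrative sketch: report which section of a 64-bit ELF image mapped at
 * 'base' owns the file offset 'off'.  SHT_NOBITS sections (such as .bss)
 * occupy no bytes in the file and are therefore skipped. */
static void owner_of(const unsigned char *base, Elf64_Off off)
{
    const Elf64_Ehdr *eh = (const Elf64_Ehdr *)base;
    const Elf64_Shdr *sh = (const Elf64_Shdr *)(base + eh->e_shoff);

    for (Elf64_Half i = 0; i < eh->e_shnum; i++) {
        if (sh[i].sh_type != SHT_NOBITS &&
            off >= sh[i].sh_offset &&
            off <  sh[i].sh_offset + sh[i].sh_size) {
            printf("offset 0x%llx is owned by section %u\n",
                   (unsigned long long)off, (unsigned)i);
            return;
        }
    }
    printf("offset 0x%llx is owned by no section (an orphan byte)\n",
           (unsigned long long)off);
}

(The binutils utility readelf -S prints the same section header information in readable form.)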
00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|
00000010 02 00 3e 00 01 00 00 00 c5 48 40 00 00 00 00 00 |..>......H@.....|
Example hexdump of ELF file header
The ELF header defines whether to use 32-bit or 64-bit addresses. The header contains three fields that are affected by this setting and offset other fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries respectively.
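As an illustration of the identification bytes visible at the start of the hexdump above, the following minimal C sketch (assuming glibc's <elf.h>; the file name example.elf is a placeholder) checks the ELF magic number and reports the address size and byte order recorded in the e_ident array:

#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char ident[EI_NIDENT];            /* the first 16 bytes of the file */
    FILE *f = fopen("example.elf", "rb");      /* placeholder file name */

    if (!f || fread(ident, 1, EI_NIDENT, f) != EI_NIDENT)
        return 1;
    if (memcmp(ident, ELFMAG, SELFMAG) != 0) { /* 0x7f 'E' 'L' 'F' */
        puts("not an ELF file");
        return 1;
    }

    /* EI_CLASS selects the 32-bit (52-byte header) or 64-bit (64-byte header)
     * layout; EI_DATA selects little- or big-endian field encoding. */
    printf("class: %s\n", ident[EI_CLASS] == ELFCLASS64 ? "64-bit" : "32-bit");
    printf("data:  %s\n", ident[EI_DATA] == ELFDATA2LSB ? "little-endian" : "big-endian");

    fclose(f);
    return 0;
}

Run against the file shown in the hexdump, this sketch would report a 64-bit, little-endian object: the 02 at offset 4 is ELFCLASS64 and the 01 at offset 5 is ELFDATA2LSB.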
The program header table tells the system how to create a process image. It is found at file offset e_phoff, and consists of e_phnum entries, each with size e_phentsize. The layout is slightly different in 32-bit ELF vs 64-bit ELF, because the p_flags are in a different structure location for alignment reasons. Each entry is structured as:
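As a sketch of how these fields fit together, the following C fragment (assuming glibc's <elf.h> and a 64-bit image already mapped into memory at base; the function name list_segments is hypothetical) iterates over the program header table using e_phoff, e_phnum and e_phentsize. The 32-bit layout keeps p_flags near the end of each Elf32_Phdr entry, while the 64-bit layout places it second, which is the alignment difference noted above.

#include <elf.h>
#include <stdio.h>

/* Illustrative sketch: print one line per program header of a 64-bit ELF
 * image that has been mapped into memory at 'base'. */
static void list_segments(const unsigned char *base)
{
    const Elf64_Ehdr *eh = (const Elf64_Ehdr *)base;

    for (Elf64_Half i = 0; i < eh->e_phnum; i++) {
        /* e_phoff is the file offset of the table; e_phentsize is the size of
         * one entry (normally sizeof(Elf64_Phdr)). */
        const Elf64_Phdr *ph = (const Elf64_Phdr *)
            (base + eh->e_phoff + (Elf64_Off)i * eh->e_phentsize);

        printf("segment %2u: type 0x%x  flags %c%c%c  offset 0x%llx  filesz 0x%llx\n",
               (unsigned)i, ph->p_type,
               (ph->p_flags & PF_R) ? 'R' : '-',
               (ph->p_flags & PF_W) ? 'W' : '-',
               (ph->p_flags & PF_X) ? 'X' : '-',
               (unsigned long long)ph->p_offset,
               (unsigned long long)ph->p_filesz);
    }
}

(readelf -l prints the same table, along with the mapping of sections to segments.)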
The ELF format has replaced older executable formats in various environments. It has replaced a.out and COFF formats in Unix-like operating systems:
ELF has also seen some adoption in non-Unix operating systems, such as:
Microsoft Windows also uses the ELF format, but only for its Windows Subsystem for Linux compatibility system.
Some game consoles also use ELF:
Other (operating) systems running on PowerPC that use ELF:
Some operating systems for mobile phones and mobile devices use ELF:
Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known as ELFPack in the underground modding culture. The ELF file format is also used with the Atmel AVR (8-bit), AVR32 and with Texas Instruments MSP430 microcontroller architectures. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced.
The Linux Standard Base (LSB) supplements some of the above specifications for architectures in which it is specified. For example, that is the case for the System V ABI, AMD64 Supplement.
86open was a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture. The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated "Spec 150".
The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be a de facto standard supported by all involved vendors and operating systems.
The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997.
The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch, Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon "maddog" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD, Intel, Linux, NetBSD, SCO and SunSoft.
The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999, and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications.
With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and "declare[d] itself dissolved" on July 25, 1999.
FatELF is an ELF binary-format extension that adds fat binary capabilities. It is aimed for Linux and other Unix-like operating systems. Additionally to the CPU architecture abstraction (byte order, word size, CPU instruction set etc.), there is the potential advantage of software-platform abstraction e.g., binaries which support multiple kernel ABI versions. As of 2021, FatELF has not been integrated into the mainline Linux kernel. | [
{
"paragraph_id": 0,
"text": "In computing, the Executable and Linkable Format (ELF, formerly named Extensible Linking Format), is a common standard file format for executable files, object code, shared libraries, and core dumps. First published in the specification for the application binary interface (ABI) of the Unix operating system version named System V Release 4 (SVR4), and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999, it was chosen as the standard binary file format for Unix and Unix-like systems on x86 processors by the 86open project.",
"title": ""
},
{
"paragraph_id": 1,
"text": "By design, the ELF format is flexible, extensible, and cross-platform. For instance, it supports different endiannesses and address sizes so it does not exclude any particular central processing unit (CPU) or instruction set architecture. This has allowed it to be adopted by many different operating systems on many different hardware platforms.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Each ELF file is made up of one ELF header, followed by file data. The data can include:",
"title": "File layout"
},
{
"paragraph_id": 3,
"text": "The segments contain information that is needed for run time execution of the file, while sections contain important data for linking and relocation. Any byte in the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section.",
"title": "File layout"
},
{
"paragraph_id": 4,
"text": "00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|",
"title": "File layout"
},
{
"paragraph_id": 5,
"text": "00000010 02 00 3e 00 01 00 00 00 c5 48 40 00 00 00 00 00 |..>......H@.....|",
"title": "File layout"
},
{
"paragraph_id": 6,
"text": "Example hexdump of ELF file header",
"title": "File layout"
},
{
"paragraph_id": 7,
"text": "The ELF header defines whether to use 32-bit or 64-bit addresses. The header contains three fields that are affected by this setting and offset other fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries respectively.",
"title": "File layout"
},
{
"paragraph_id": 8,
"text": "The program header table tells the system how to create a process image. It is found at file offset e_phoff, and consists of e_phnum entries, each with size e_phentsize. The layout is slightly different in 32-bit ELF vs 64-bit ELF, because the p_flags are in a different structure location for alignment reasons. Each entry is structured as:",
"title": "File layout"
},
{
"paragraph_id": 9,
"text": "The ELF format has replaced older executable formats in various environments. It has replaced a.out and COFF formats in Unix-like operating systems:",
"title": "Applications"
},
{
"paragraph_id": 10,
"text": "ELF has also seen some adoption in non-Unix operating systems, such as:",
"title": "Applications"
},
{
"paragraph_id": 11,
"text": "Microsoft Windows also uses the ELF format, but only for its Windows Subsystem for Linux compatibility system.",
"title": "Applications"
},
{
"paragraph_id": 12,
"text": "Some game consoles also use ELF:",
"title": "Applications"
},
{
"paragraph_id": 13,
"text": "Other (operating) systems running on PowerPC that use ELF:",
"title": "Applications"
},
{
"paragraph_id": 14,
"text": "Some operating systems for mobile phones and mobile devices use ELF:",
"title": "Applications"
},
{
"paragraph_id": 15,
"text": "Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known as ELFPack in the underground modding culture. The ELF file format is also used with the Atmel AVR (8-bit), AVR32 and with Texas Instruments MSP430 microcontroller architectures. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced.",
"title": "Applications"
},
{
"paragraph_id": 16,
"text": "The Linux Standard Base (LSB) supplements some of the above specifications for architectures in which it is specified. For example, that is the case for the System V ABI, AMD64 Supplement.",
"title": "Specifications"
},
{
"paragraph_id": 17,
"text": "86open was a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture. The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated \"Spec 150\".",
"title": "86open"
},
{
"paragraph_id": 18,
"text": "The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be a de facto standard supported by all involved vendors and operating systems.",
"title": "86open"
},
{
"paragraph_id": 19,
"text": "The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997.",
"title": "86open"
},
{
"paragraph_id": 20,
"text": "The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch, Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon \"maddog\" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD, Intel, Linux, NetBSD, SCO and SunSoft.",
"title": "86open"
},
{
"paragraph_id": 21,
"text": "The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999, and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications.",
"title": "86open"
},
{
"paragraph_id": 22,
"text": "With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and \"declare[d] itself dissolved\" on July 25, 1999.",
"title": "86open"
},
{
"paragraph_id": 23,
"text": "FatELF is an ELF binary-format extension that adds fat binary capabilities. It is aimed for Linux and other Unix-like operating systems. Additionally to the CPU architecture abstraction (byte order, word size, CPU instruction set etc.), there is the potential advantage of software-platform abstraction e.g., binaries which support multiple kernel ABI versions. As of 2021, FatELF has not been integrated into the mainline Linux kernel.",
"title": "FatELF: universal binaries for Linux"
}
]
| In computing, the Executable and Linkable Format is a common standard file format for executable files, object code, shared libraries, and core dumps. First published in the specification for the application binary interface (ABI) of the Unix operating system version named System V Release 4 (SVR4), and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999, it was chosen as the standard binary file format for Unix and Unix-like systems on x86 processors by the 86open project. By design, the ELF format is flexible, extensible, and cross-platform. For instance, it supports different endiannesses and address sizes so it does not exclude any particular central processing unit (CPU) or instruction set architecture. This has allowed it to be adopted by many different operating systems on many different hardware platforms. | 2001-10-08T09:32:44Z | 2023-09-08T07:49:16Z | [
"Template:Cite web",
"Template:Infobox file format",
"Template:Quote box",
"Template:Code",
"Template:Main",
"Template:Prose",
"Template:Webarchive",
"Template:Reflist",
"Template:Cite journal",
"Template:Snd",
"Template:Citation",
"Template:Executables",
"Template:Short description",
"Template:Mono",
"Template:Source?",
"Template:Anchor",
"Template:Div col end",
"Template:Portal",
"Template:Cite book",
"Template:Div col",
"Template:As of"
]
| https://en.wikipedia.org/wiki/Executable_and_Linkable_Format |
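The ELF header layout and the example hexdump given in the record above can be checked programmatically. The following is a minimal sketch (not part of the original article) that reads the e_ident bytes and the remaining ELF64 header fields with Python's standard struct module; it assumes a 64-bit little-endian file, the field names follow the ELF specification, and the file path in the usage comment is only a hypothetical example.

```python
import struct

def read_elf_header(path):
    """Parse the fixed-size header of a 64-bit little-endian ELF file."""
    with open(path, "rb") as f:
        ident = f.read(16)                      # e_ident[EI_NIDENT]
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        ei_class, ei_data = ident[4], ident[5]  # 2 = 64-bit, 1 = little-endian
        if (ei_class, ei_data) != (2, 1):
            raise NotImplementedError("this sketch only handles ELFCLASS64 / little-endian")
        # Remaining 48 bytes of the 64-byte ELF64 header
        (e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags,
         e_ehsize, e_phentsize, e_phnum, e_shentsize, e_shnum, e_shstrndx) = \
            struct.unpack("<HHIQQQIHHHHHH", f.read(48))
    return {"type": e_type, "machine": e_machine, "entry": hex(e_entry),
            "phoff": e_phoff, "phnum": e_phnum, "shoff": e_shoff, "shnum": e_shnum}

# Usage (the path is hypothetical):
# print(read_elf_header("/bin/ls"))
```

Run against the hexdump shown above, such a parser would report a 64-bit, little-endian executable for the x86-64 machine type with its entry point at 0x4048c5.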
9,917 | Explorers Program | The Explorers program is a NASA exploration program that provides flight opportunities for physics, geophysics, heliophysics, and astrophysics investigations from space. Launched in 1958, Explorer 1 was the first spacecraft of the United States to achieve orbit. Over 90 space missions have been launched since. Starting with Explorer 6, it has been operated by NASA, with regular collaboration with a variety of other institutions, including many international partners.
Launchers for the Explorer program have included Juno I, Juno II, various Thor, Scout, Delta and Pegasus launch vehicles, and Falcon 9.
The program has three classes: Medium-Class Explorers (MIDEX), Small Explorers (SMEX), and University-Class Explorers (UNEX), with select Missions of Opportunity operated with other agencies.
The Explorer program began as a U.S. Army proposal (Project Orbiter) to place a "civilian" artificial satellite into orbit during the International Geophysical Year (IGY). Although that proposal was rejected in favor of the U.S. Navy's Project Vanguard, which made the first sub-orbital flight Vanguard TV0 in December 1956, the Soviet Union's launch of Sputnik 1 on 4 October 1957 (and the resulting "Sputnik crisis") and the failure of the Vanguard 1 launch attempt resulted in the Army program being funded to match the Soviet space achievements. Explorer 1 was launched on the Juno I on 1 February 1958, becoming the first U.S. satellite, as well as discovering the Van Allen radiation belt.
Four follow-up satellites of the Explorer series were launched by the Juno I launch vehicle in 1958, of which Explorer 3 and Explorer 4 were successful, while Explorer 2 and Explorer 5 failed to reach orbit. The Juno I vehicle was replaced by the Juno II in 1959.
With the establishment of NASA in 1958, the Explorer program was transferred to NASA from the U.S. Army. NASA continued to use the name for an ongoing series of relatively small space missions, typically an artificial satellite with a specific science focus. Explorer 6 in 1959 was the first scientific satellite under the project direction of NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Maryland.
The Interplanetary Monitoring Platform (IMP) was launched in 1963, and involved a network of eleven Explorer satellites designed to collect data on space radiation in support of the Apollo program. The IMP program was a major step forward in spacecraft electronics design, as it was the first space program to use integrated circuit (IC) chips and MOSFETs (MOS transistors). The IMP-A (Explorer 18) in 1963 was the first spacecraft to use IC chips, and the IMP-D (Explorer 33) in 1966 was the first to use MOSFETs.
Over the following two decades, NASA launched over 50 Explorer missions, some in conjunction with military programs, usually of an exploratory or survey nature or with specific objectives not requiring the capabilities of a major space observatory. Explorer satellites have made many important discoveries on: Earth's magnetosphere and the shape of its gravity field; the solar wind; properties of micrometeoroids raining down on the Earth; ultraviolet, cosmic and X-rays from the Solar System and beyond; ionospheric physics; solar plasma; solar energetic particles; and atmospheric physics. These missions have also investigated air density, radio astronomy, geodesy, and gamma-ray astronomy.
With drops in NASA's budget, Explorer missions became infrequent in the early 1980s.
In 1988, the Small Explorer (SMEX) class was established with a focus on frequent flight opportunities for highly focused and relatively inexpensive space science missions in the disciplines of astrophysics and space physics. The first three SMEX missions were chosen in April 1989 out of 51 candidates and launched in 1992, 1996, and 1998. The second set of two missions was announced in September 1994 and launched in 1998 and 1999.
In the mid-1990s, NASA initiated the Medium-class Explorer (MIDEX) program to enable more frequent flights. These missions are larger than SMEX missions and were to be launched aboard a new kind of medium-light class launch vehicle. This new launch vehicle was not developed; instead, these missions were flown on a modified Delta II rocket. The first announcement of opportunity for MIDEX was issued in March 1995, and the first launch under this new class was FUSE in 1999.
In May 1994, NASA started the Student Explorer Demonstration Initiative (STEDI) pilot program, to demonstrate that high-quality space science can be carried out with small, low-cost missions. Of the three selected missions, SNOE was launched in 1998 and TERRIERS in 1999, but the latter failed after launch. The STEDI program was terminated in 2001. Later, NASA established the University-Class Explorer (UNEX) program for much cheaper missions, which is regarded as a successor to STEDI.
The Explorer missions were at first managed by the Small Explorer Project Office at NASA's Goddard Space Flight Center (GSFC). In early 1999, that office was closed, and with the announcement of opportunity for the third set of SMEX missions, NASA converted the SMEX class so that each mission was managed by its principal investigator, with oversight by the GSFC Explorers Project. The Explorers Program Office at Goddard Space Flight Center manages the many operational scientific exploration missions of the program, which are characterized by relatively moderate costs and small to medium size, and which can be built, tested, and launched in a short time compared to larger observatories like NASA's Great Observatories.
Excluding launches, the MIDEX class has a mission cost cap of US$250 million as of 2018, with future MIDEX missions being capped at US$350 million. The cost cap for SMEX missions in 2017 was US$165 million. UNEX missions are capped at US$15 million. A sub-project called Missions of Opportunity (MO) has funded science instruments or hardware components carried on board non-NASA space missions, and has a total NASA cost cap of US$70 million.
The Small Explorers class was implemented in 1989 specifically to fund space exploration missions that cost no more than US$120 million. The missions are managed by the Explorers Project at the Goddard Space Flight Center (GSFC).
The first set of three SMEX missions were launched between 1992 and 1998. The second set of two missions were launched in 1998 and 1999. These early missions were managed by the Small Explorer Project Office at Goddard Space Flight Center. In early 1999, that office was closed and with the announcement of opportunity for the third set of SMEX missions NASA converted the program so that each mission was managed by its Principal Investigator, with oversight by the GSFC Explorers Project.
NASA funded a competitive study of five candidate heliophysics Small Explorers missions for flight in 2022. The proposals were Mechanisms of Energetic Mass Ejection – eXplorer (MEME-X), Focusing Optics X-ray Solar Imager (FOXSI), Multi-Slit Solar Explorer (MUSE), Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites (TRACERS), and Polarimeter to Unify the Corona and Heliosphere (PUNCH). In June 2019 NASA selected TRACERS and PUNCH for flight.
Missions of Opportunity (MO) are investigations characterized by being part of a non-NASA space mission of any size and having a total NASA cost of under $55 million. These missions are conducted on a no-exchange-of-funds basis with the organization sponsoring the mission. NASA solicits proposals for Missions of Opportunity on SMEX, MIDEX and UNEX investigations.
Three satellites were planned in this series: Beacon Explorer-A, Beacon Explorer-B, Beacon Explorer-C.
A series of three Geodetic Earth Orbiting Satellite (GEOS) spacecraft was put in orbit: GEOS 1, GEOS 2, GEOS 3.
Explorer name numbers can be found in the NSSDC master catalog, typically assigned to each spacecraft in a mission. These numbers were not officially assigned until after 1975.
Many missions are proposed but not selected. For example, in 2011, the Explorers Program received 22 proposals for full missions, 20 for Missions of Opportunity, and 8 for USPI. Sometimes missions are only partially developed but must be stopped for financial, technological, or bureaucratic reasons. Some missions failed upon reaching orbit, including WIRE and TERRIERS.
Examples of missions that were not developed or cancelled were:
Recent examples of conclusions of launched missions, cancelled due to budgetary constraints:
Number of launches per decade: | [
{
"paragraph_id": 0,
"text": "The Explorers program is a NASA exploration program that provides flight opportunities for physics, geophysics, heliophysics, and astrophysics investigations from space. Launched in 1958, Explorer 1 was the first spacecraft of the United States to achieve orbit. Over 90 space missions have been launched since. Starting with Explorer 6, it has been operated by NASA, with regular collaboration with a variety of other institutions, including many international partners.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Launchers for the Explorer program have included Juno I, Juno II, various Thor, Scout, Delta and Pegasus launch vehicles, and Falcon 9.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The program has three classes: Medium-Class Explorers (MIDEX), Small Explorers (SMEX), and University-Class Explorers (UNEX), with select Missions of Opportunity operated with other agencies.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Explorer program began as a U.S. Army proposal (Project Orbiter) to place a \"civilian\" artificial satellite into orbit during the International Geophysical Year (IGY). Although that proposal was rejected in favor of the U.S. Navy's Project Vanguard, which made the first sub-orbital flight Vanguard TV0 in December 1956, the Soviet Union's launch of Sputnik 1 on 4 October 1957 (and the resulting \"Sputnik crisis\") and the failure of the Vanguard 1 launch attempt resulted in the Army program being funded to match the Soviet space achievements. Explorer 1 was launched on the Juno I on 1 February 1958, becoming the first U.S. satellite, as well as discovering the Van Allen radiation belt.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Four follow-up satellites of the Explorer series were launched by the Juno I launch vehicle in 1958, of which Explorer 3 and Explorer 4 were successful, while Explorer 2 and Explorer 5 failed to reach orbit. The Juno I vehicle was replaced by the Juno II in 1959.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "With the establishment of NASA in 1958, the Explorer program was transferred to NASA from the U.S. Army. NASA continued to use the name for an ongoing series of relatively small space missions, typically an artificial satellite with a specific science focus. Explorer 6 in 1959 was the first scientific satellite under the project direction of NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Maryland.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Interplanetary Monitoring Platform (IMP) was launched in 1963, and involved a network of eleven Explorer satellites designed to collect data on space radiation in support of the Apollo program. The IMP program was a major step forward in spacecraft electronics design, as it was the first space program to use integrated circuit (IC) chips and MOSFETs (MOS transistors). The IMP-A (Explorer 18) in 1963 was the first spacecraft to use IC chips, and the IMP-D (Explorer 33) in 1966 was the first to use MOSFETs.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Over the following two decades, NASA has launched over 50 Explorer missions, some in conjunction to military programs, usually of an exploratory or survey nature or had specific objectives not requiring the capabilities of a major space observatory. Explorer satellites have made many important discoveries on: Earth's magnetosphere and the shape of its gravity field; the solar wind; properties of micrometeoroids raining down on the Earth; ultraviolet, cosmic and X-rays from the Solar System and beyond; ionospheric physics; Solar plasma; solar energetic particles; and atmospheric physics. These missions have also investigated air density, radio astronomy, geodesy, and gamma-ray astronomy.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "With drops in NASA's budget, Explorer missions became infrequent in the early 1980s.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1988, the Small Explorer (SMEX) class was established with a focus on frequent flight opportunities for highly focused and relatively inexpensive space science missions in the disciplines of astrophysics and space physics. The first three SMEX missions were chosen in April 1989 out of 51 candidates, and launched in 1992, 1996 and 1998 The second set of two missions were announced in September 1994 and launched in 1998 and 1999.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In the mid 1990s, NASA initiated the Medium-class Explorer (MIDEX) to enable more frequent flights. These are larger than SMEX missions and were to be launched aboard a new kind of medium-light class launch vehicle. This new launch vehicle was not developed and instead, these missions were flown on a modified Delta II rocket. The first announcement opportunity for MIDEX was issued in March 1995, and the first launch under this new class was FUSE in 1999.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In May 1994, NASA started the Student Explorer Demonstration Initiative (STEDI) pilot program, to demonstrate that high-quality space science can be carried out with small, low-cost missions. Of the three selected missions, SNOE was launched in 1998 and TERRIERS in 1999, but the latter failed after launch. The STEDI program was terminated in 2001. Later, NASA established the University-Class Explorer (UNEX) program for much cheaper missions, which is regarded as a successor to STEDI.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Explorer missions were at first managed by the Small Explorer Project Office at NASA's Goddard Space Flight Center (GSFC). In early 1999, that office was closed and with the announcement of opportunity for the third set of SMEX missions NASA converted the SMEX class so that each mission was managed by its principal investigator, with oversight by the GSFC Explorer Project. The Explorer program Office at Goddard Space Flight Center, provides management of the many operational scientific exploration missions that are characterized by relatively moderate costs and small to medium-sized missions that are capable of being built, tested, and launched in a short time interval compared to larger observatories like NASA's Great Observatories.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Excluding the launches, the MIDEX class has a current mission cap cost of US$250 million in 2018, with future MIDEX missions being capped at US$350 million. The cost cap for SMEX missions in 2017 was US$165 million. UNEX missions are capped at US$15 million. A sub-project called Missions of Opportunity (MO) has funded science instruments or hardware components of onboard non-NASA space missions, and have a total NASA cost cap of US$70 million.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Small Explorers class was implemented in 1989 specifically to fund space exploration missions that cost no more than US$120 million. The missions are managed by the Explorers Project at the Goddard Space Flight Center (GSFC).",
"title": "Classes"
},
{
"paragraph_id": 15,
"text": "The first set of three SMEX missions were launched between 1992 and 1998. The second set of two missions were launched in 1998 and 1999. These early missions were managed by the Small Explorer Project Office at Goddard Space Flight Center. In early 1999, that office was closed and with the announcement of opportunity for the third set of SMEX missions NASA converted the program so that each mission was managed by its Principal Investigator, with oversight by the GSFC Explorers Project.",
"title": "Classes"
},
{
"paragraph_id": 16,
"text": "NASA funded a competitive study of five candidate heliophysics Small Explorers missions for flight in 2022. The proposals were Mechanisms of Energetic Mass Ejection – eXplorer (MEME-X), Focusing Optics X-ray Solar Imager (FOXSI), Multi-Slit Solar Explorer (MUSE), Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites (TRACERS), and Polarimeter to Unify the Corona and Heliosphere (PUNCH). In June 2019 NASA selected TRACERS and PUNCH for flight.",
"title": "Classes"
},
{
"paragraph_id": 17,
"text": "Missions of Opportunity (MO) are investigations characterized by being part of a non-NASA space mission of any size and having a total NASA cost of under $55 million. These missions are conducted on a no-exchange-of-funds basis with the organization sponsoring the mission. NASA solicits proposals for Missions of Opportunity on SMEX, MIDEX and UNEX investigations.",
"title": "Classes"
},
{
"paragraph_id": 18,
"text": "Three satellites were planned in this series: Beacon Explorer-A, Beacon Explorer-B, Beacon Explorer-C.",
"title": "Classes"
},
{
"paragraph_id": 19,
"text": "A series of three Geodetic Earth Orbiting Satellite (GEOS) were put in orbit: GEOS 1, GEOS 2, GEOS 3.",
"title": "Classes"
},
{
"paragraph_id": 20,
"text": "Explorer name numbers can be found in the NSSDC master catalog, typically assigned to each spacecraft in a mission. These numbers were not officially assigned until after 1975.",
"title": "Launched spacecraft"
},
{
"paragraph_id": 21,
"text": "Many missions are proposed, but not selected. For example, in 2011, the Explorers Program received 22 full missions solicitations, 20 Missions of Opportunity, and 8 USPI. Sometimes mission are only partially developed but must be stopped for financial, technological, or bureaucratic reasons. Some missions failed upon reaching orbit including WIRE and TERRIERS.",
"title": "Cancelled missions"
},
{
"paragraph_id": 22,
"text": "Examples of missions that were not developed or cancelled were:",
"title": "Cancelled missions"
},
{
"paragraph_id": 23,
"text": "Recent examples of conclusions of launched missions, cancelled due to budgetary constraints:",
"title": "Cancelled missions"
},
{
"paragraph_id": 24,
"text": "Number of launches per decade:",
"title": "Launch statistics"
},
{
"paragraph_id": 25,
"text": "",
"title": "External links"
}
]
| The Explorers program is a NASA exploration program that provides flight opportunities for physics, geophysics, heliophysics, and astrophysics investigations from space. Launched in 1958, Explorer 1 was the first spacecraft of the United States to achieve orbit. Over 90 space missions have been launched since. Starting with Explorer 6, it has been operated by NASA, with regular collaboration with a variety of other institutions, including many international partners. Launchers for the Explorer program have included Juno I, Juno II, various Thor, Scout, Delta and Pegasus launch vehicles, and Falcon 9. The program has three classes: Medium-Class Explorers (MIDEX), Small Explorers (SMEX), and University-Class Explorers (UNEX), with select Missions of Opportunity operated with other agencies. | 2002-02-25T15:51:15Z | 2023-12-27T02:47:37Z | [
"Template:Cite book",
"Template:Commons category",
"Template:Jet Propulsion Laboratory",
"Template:Center",
"Template:Reflist",
"Template:Success",
"Template:Pending",
"Template:N/a",
"Template:Anchor",
"Template:Portal",
"Template:Cite web",
"Template:Short description",
"Template:Use dmy dates",
"Template:Explorers program",
"Template:NASA planetary exploration programs",
"Template:GSFC",
"Template:Citation-attribution",
"Template:Cite conference",
"Template:Bar graph",
"Template:Cite magazine",
"Template:NASA navbox",
"Template:Use American English",
"Template:US$",
"Template:Clear",
"Template:Nowrap",
"Template:Failure"
]
| https://en.wikipedia.org/wiki/Explorers_Program |
9,920 | Electronic oscillator | An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices.
Oscillators are often characterized by the frequency of their output signal:
There are two general types of electronic oscillators: the linear or harmonic oscillator, and the nonlinear or relaxation oscillator. The two types are fundamentally different in how oscillation is produced, as well as in the characteristic type of output signal that is generated.
The most-common linear oscillator in use is the crystal oscillator, in which the output frequency is controlled by a piezo-electric resonator consisting of a vibrating quartz crystal. Crystal oscillators are ubiquitous in modern electronics, being the source for the clock signal in computers and digital watches, as well as a source for the signals generated in radio transmitters and receivers. As a crystal oscillator’s “native” output waveform is sinusoidal, a signal-conditioning circuit may be used to convert the output to other waveform types, such as the square wave typically utilized in computer clock circuits.
Linear or harmonic oscillators generate a sinusoidal (or nearly-sinusoidal) signal. There are two types:
The most common form of linear oscillator is an electronic amplifier such as a transistor or operational amplifier connected in a feedback loop with its output fed back into its input through a frequency selective electronic filter to provide positive feedback. When the power supply to the amplifier is switched on initially, electronic noise in the circuit provides a non-zero signal to get oscillations started. The noise travels around the loop and is amplified and filtered until very quickly it converges on a sine wave at a single frequency.
Feedback oscillator circuits can be classified according to the type of frequency selective filter they use in the feedback loop:
In addition to the feedback oscillators described above, which use two-port amplifying active elements such as transistors and operational amplifiers, linear oscillators can also be built using one-port (two terminal) devices with negative resistance, such as magnetron tubes, tunnel diodes, IMPATT diodes and Gunn diodes. Negative-resistance oscillators are usually used at high frequencies in the microwave range and above, since at these frequencies feedback oscillators perform poorly due to excessive phase shift in the feedback path.
In negative-resistance oscillators, a resonant circuit, such as an LC circuit, crystal, or cavity resonator, is connected across a device with negative differential resistance, and a DC bias voltage is applied to supply energy. A resonant circuit by itself is "almost" an oscillator; it can store energy in the form of electronic oscillations if excited, but because it has electrical resistance and other losses the oscillations are damped and decay to zero. The negative resistance of the active device cancels the (positive) internal loss resistance in the resonator, in effect creating a resonator with no damping, which generates spontaneous continuous oscillations at its resonant frequency.
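As a rough numerical illustration of the cancellation argument above (a sketch, not from the original article): in a series RLC model the oscillation envelope varies as exp(−Rt/2L), so a negative resistance that offsets the loss resistance sets the net decay rate to zero. The component values below are arbitrary assumptions.

```python
import math

L = 1e-6        # inductance (H), arbitrary example value
C = 100e-12     # capacitance (F)
R_loss = 5.0    # positive loss resistance of the resonator (ohms)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # natural resonant frequency

for R_neg in (0.0, -2.5, -5.0, -6.0):       # negative resistance supplied by the active device
    R_total = R_loss + R_neg
    alpha = R_total / (2 * L)               # envelope behaves as exp(-alpha * t)
    if alpha > 0:
        behavior = "damped (oscillation dies out)"
    elif alpha == 0:
        behavior = "loss exactly cancelled (steady oscillation)"
    else:
        behavior = "growing (amplitude increases until limited)"
    print(f"R_neg = {R_neg:5.1f} ohm -> net R = {R_total:4.1f} ohm, {behavior}")

print(f"resonant frequency ~ {f0/1e6:.1f} MHz")
```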
The negative-resistance oscillator model is not limited to one-port devices like diodes; feedback oscillator circuits with two-port amplifying devices such as transistors and tubes also have negative resistance. At high frequencies, three-terminal devices such as transistors and FETs are also used in negative-resistance oscillators. At these frequencies they do not need a feedback loop, but with certain loads applied to one port they can become unstable at the other port and show negative resistance due to internal feedback. The negative-resistance port is connected to a tuned circuit or resonant cavity, causing it to oscillate. High-frequency oscillators in general are designed using negative-resistance techniques.
Some of the many harmonic oscillator circuits are listed below:
A nonlinear or relaxation oscillator produces a non-sinusoidal output, such as a square, sawtooth or triangle wave. It consists of an energy-storing element (a capacitor or, more rarely, an inductor) and a nonlinear switching device (a latch, Schmitt trigger, or negative-resistance element) connected in a feedback loop. The switching device periodically charges and discharges the energy stored in the storage element thus causing abrupt changes in the output waveform.
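The charge-discharge behaviour just described can be illustrated with a small time-stepped simulation (a sketch with invented component values, not taken from the article): a capacitor is alternately charged and discharged through a resistor under the control of a comparator with two thresholds, and the resulting period agrees with the closed-form RC expression.

```python
import math

R, C = 10e3, 100e-9          # example values: 10 kOhm, 100 nF
V_hi, V_lo = 2.0, 1.0        # comparator (Schmitt trigger) thresholds, volts
V_drive = 3.0                # supply driving the RC network when the output is high

dt = 1e-7
v_cap, driving_high = 0.0, True
last_switch, periods = 0.0, []

t = 0.0
while t < 20e-3 and len(periods) < 6:
    target = V_drive if driving_high else 0.0
    v_cap += (target - v_cap) / (R * C) * dt      # RC charging/discharging step
    if driving_high and v_cap >= V_hi:
        driving_high = False                      # switch: start discharging
    elif not driving_high and v_cap <= V_lo:
        driving_high = True                       # switch: start charging again
        periods.append(t - last_switch)           # one full cycle completed
        last_switch = t
    t += dt

# Closed-form period for comparison: charge time + discharge time
t_charge = R * C * math.log((V_drive - V_lo) / (V_drive - V_hi))
t_discharge = R * C * math.log(V_hi / V_lo)
print("simulated period ~", periods[-1], "s; formula gives", t_charge + t_discharge, "s")
```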
Square-wave relaxation oscillators are used to provide the clock signal for sequential logic circuits such as timers and counters, although crystal oscillators are often preferred for their greater stability. Triangle-wave or sawtooth oscillators are used in the timebase circuits that generate the horizontal deflection signals for cathode ray tubes in analogue oscilloscopes and television sets. They are also used in voltage-controlled oscillators (VCOs), inverters and switching power supplies, dual-slope analog to digital converters (ADCs), and in function generators to generate square and triangle waves for testing equipment. In general, relaxation oscillators are used at lower frequencies and have poorer frequency stability than linear oscillators.
Ring oscillators are built of a ring of active delay stages. Generally the ring has an odd number of inverting stages, so that there is no single stable state for the internal ring voltages. Instead, a single transition propagates endlessly around the ring.
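A quick back-of-the-envelope sketch (illustrative only): because a transition must travel twice around the ring to return the output to its starting level, a ring of N inverting stages with per-stage propagation delay t_d oscillates at roughly f = 1/(2·N·t_d). The delay value below is an arbitrary assumption.

```python
def ring_oscillator_frequency(num_stages, stage_delay_s):
    """Approximate frequency of a ring of inverting stages: f = 1 / (2 * N * t_d)."""
    assert num_stages % 2 == 1, "an odd number of inverting stages is needed"
    return 1.0 / (2 * num_stages * stage_delay_s)

# e.g. 5 inverters with an assumed 100 ps delay each -> 1 GHz
print(ring_oscillator_frequency(5, 100e-12))
```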
Some of the more common relaxation oscillator circuits are listed below:
An oscillator can be designed so that the oscillation frequency can be varied over some range by an input voltage or current. These voltage controlled oscillators are widely used in phase-locked loops, in which the oscillator's frequency can be locked to the frequency of another oscillator. These are ubiquitous in modern communications circuits, used in filters, modulators, demodulators, and forming the basis of frequency synthesizer circuits which are used to tune radios and televisions.
Radio frequency VCOs are usually made by adding a varactor diode to the tuned circuit or resonator in an oscillator circuit. Changing the DC voltage across the varactor changes its capacitance, which changes the resonant frequency of the tuned circuit. Voltage controlled relaxation oscillators can be constructed by charging and discharging the energy storage capacitor with a voltage controlled current source. Increasing the input voltage increases the rate of charging the capacitor, decreasing the time between switching events.
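The effect of the varactor can be illustrated with the standard resonance formula f = 1/(2π√(LC)) (a sketch with invented values; a real varactor's capacitance-versus-voltage curve depends on the particular device):

```python
import math

L = 100e-9   # fixed tank inductance: 100 nH (example value)

def lc_resonant_frequency(c_farads):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(L * c_farads))

# A hypothetical varactor whose capacitance falls as the reverse bias increases
for bias, c in [(1.0, 20e-12), (3.0, 12e-12), (6.0, 8e-12), (10.0, 5e-12)]:
    f = lc_resonant_frequency(c)
    print(f"V_tune = {bias:4.1f} V  ->  C = {c*1e12:4.1f} pF  ->  f = {f/1e6:6.1f} MHz")
```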
A feedback oscillator circuit consists of two parts connected in a feedback loop; an amplifier A and an electronic filter β(jω). The filter's purpose is to limit the frequencies that can pass through the loop so the circuit only oscillates at the desired frequency. Since the filter and wires in the circuit have resistance they consume energy and the amplitude of the signal drops as it passes through the filter. The amplifier is needed to increase the amplitude of the signal to compensate for the energy lost in the other parts of the circuit, so the loop will oscillate, as well as supply energy to the load attached to the output.
To determine the frequency or frequencies ω₀ = 2πf₀ at which a feedback oscillator circuit will oscillate, the feedback loop is thought of as broken at some point (see diagrams) to give an input and an output port. A sine wave v_i(t) = V_i e^(jωt) is applied to the input, and the amplitude and phase of the sine wave after going through the loop, v_o(t) = V_o e^(j(ωt + φ)), are calculated.
Since in the complete circuit v_o is connected to v_i, for oscillations to exist the signal must reproduce itself after one trip around the loop: v_o = v_i.
The ratio of output to input of the loop, v_o/v_i = Aβ(jω), is called the loop gain. So the condition for oscillation is that the loop gain must be one: Aβ(jω₀) = 1.
Since Aβ(jω) is a complex number with two parts, a magnitude and an angle, the above equation actually consists of two conditions: the magnitude of the loop gain must be unity, |Aβ(jω₀)| = 1 (1), and the phase shift around the loop must be zero or a multiple of 360°, ∠Aβ(jω₀) = 0°, 360°, 720°, … (2).
Equations (1) and (2) are called the Barkhausen stability criterion. It is a necessary but not a sufficient criterion for oscillation, so there are some circuits which satisfy these equations that will not oscillate. An equivalent condition often used instead of the Barkhausen condition is that the circuit's closed loop transfer function (the circuit's complex impedance at its output) have a pair of poles on the imaginary axis.
In general, the phase shift of the feedback network increases with increasing frequency so there are only a few discrete frequencies (often only one) which satisfy the second equation. If the amplifier gain A is high enough that the loop gain is unity (or greater, see Startup section) at one of these frequencies, the circuit will oscillate at that frequency. Many amplifiers such as common-emitter transistor circuits are "inverting", meaning that their output voltage decreases when their input increases. In these the amplifier provides 180° phase shift, so the circuit will oscillate at the frequency at which the feedback network provides the other 180° phase shift.
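As a numerical illustration of these two conditions (a sketch, not from the article): the loop-gain contribution of a feedback network can be evaluated over frequency to find where its phase shift reaches 180°, which is where an inverting amplifier would sustain oscillation, and how much gain the amplifier must supply there. The three-section RC high-pass network below, with each section assumed to be buffered so the sections do not load each other, is an invented example.

```python
import cmath, math

R, C = 10e3, 10e-9          # assumed values: 10 kOhm, 10 nF per section

def beta(freq_hz):
    """Transfer ratio of three buffered (non-loading) RC high-pass sections."""
    w = 2 * math.pi * freq_hz
    h = (1j * w * R * C) / (1 + 1j * w * R * C)   # one section
    return h ** 3

# Sweep frequency and find where the network's phase shift is closest to 180 degrees
best = min((abs(abs(math.degrees(cmath.phase(beta(f)))) - 180), f)
           for f in [10 * 1.001 ** n for n in range(12000)])
f_osc = best[1]
required_gain = 1 / abs(beta(f_osc))
print(f"phase = 180 deg near {f_osc:.0f} Hz; amplifier must supply |A| >= {required_gain:.1f}")
```

With an inverting amplifier supplying the other 180°, the sweep locates the single frequency satisfying the phase condition, and 1/|β| at that frequency is the minimum amplifier gain needed to make the loop gain unity.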
At frequencies well below the poles of the amplifying device, the amplifier will act as a pure gain A, but if the oscillation frequency ω₀ is near the amplifier's cutoff frequency ω_C, within 0.1ω_C, the active device can no longer be considered a 'pure gain', and it will contribute some phase shift to the loop.
An alternate mathematical stability test sometimes used instead of the Barkhausen criterion is the Nyquist stability criterion. This has a wider applicability than the Barkhausen, so it can identify some of the circuits which pass the Barkhausen criterion but do not oscillate.
Temperature changes, aging, and manufacturing tolerances will cause component values to "drift" away from their designed values. Changes in frequency-determining components such as the tank circuit in LC oscillators will cause the oscillation frequency to change, so for a constant frequency these components must have stable values. How stable the oscillator's frequency is to other changes in the circuit, such as changes in values of other components, gain of the amplifier, the load impedance, or the supply voltage, is mainly dependent on the Q factor ("quality factor") of the feedback filter. Since the amplitude of the output is constant due to the nonlinearity of the amplifier (see Startup section below), changes in component values cause changes in the phase shift φ = ∠Aβ(jω) of the feedback loop. Since oscillation can only occur at frequencies where the phase shift is a multiple of 360°, φ = 360n°, shifts in component values cause the oscillation frequency ω₀ to change to bring the loop phase back to 360n°. The amount of frequency change Δω caused by a given phase change Δφ depends on the slope of the loop phase curve at ω₀, which is determined by the Q of the feedback filter.
RC oscillators have the equivalent of a very low Q, so the phase changes very slowly with frequency, therefore a given phase change will cause a large change in the frequency. In contrast, LC oscillators have tank circuits with high Q (on the order of 10²). This means the phase shift of the feedback network increases rapidly with frequency near the resonant frequency of the tank circuit. So a large change in phase causes only a small change in frequency. Therefore the circuit's oscillation frequency is very close to the natural resonant frequency of the tuned circuit, and doesn't depend much on other components in the circuit. The quartz crystal resonators used in crystal oscillators have even higher Q (on the order of 10⁴ to 10⁶) and their frequency is very stable and independent of other circuit components.
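The dependence on Q can be made concrete with a small numerical experiment (a sketch; the component values are invented): compute the phase of a parallel RLC tank's impedance near resonance for two Q values and see how far the frequency must move to absorb the same one-degree phase disturbance.

```python
import cmath, math

def tank_phase_deg(f, L, C, R_parallel):
    """Phase of the impedance of a parallel RLC tank at frequency f."""
    w = 2 * math.pi * f
    y = 1 / R_parallel + 1j * w * C + 1 / (1j * w * L)   # admittance of the tank
    return math.degrees(cmath.phase(1 / y))

L, C = 1e-6, 1e-9
f0 = 1 / (2 * math.pi * math.sqrt(L * C))                # resonant frequency, phase = 0 here

for Q in (10, 100):
    R_parallel = Q * math.sqrt(L / C)                    # parallel resistance giving this Q
    # search upward from f0 for the frequency where the phase has moved by 1 degree
    f = f0
    while abs(tank_phase_deg(f, L, C, R_parallel)) < 1.0:
        f *= 1.000001
    shift_pct = abs(f - f0) / f0 * 100
    print(f"Q = {Q:3d}: a 1 degree phase shift corresponds to a {shift_pct:.3f}% frequency change")
```

The higher-Q tank absorbs the same phase disturbance with roughly a tenth of the frequency shift, which is the sense in which high Q gives good frequency stability.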
The frequency of RC and LC oscillators can be tuned over a wide range by using variable components in the filter. A microwave cavity can be tuned mechanically by moving one of the walls. In contrast, a quartz crystal is a mechanical resonator whose resonant frequency is mainly determined by its dimensions, so a crystal oscillator's frequency is only adjustable over a very narrow range, a tiny fraction of one percent. Its frequency can be changed slightly by using a trimmer capacitor in series or parallel with the crystal.
The Barkhausen criterion above, eqs. (1) and (2), merely gives the frequencies at which steady-state oscillation is possible, but says nothing about the amplitude of the oscillation, whether the amplitude is stable, or whether the circuit will start oscillating when the power is turned on. For a practical oscillator two additional requirements are necessary:
A typical rule of thumb is to make the small-signal loop gain at the oscillation frequency 2 or 3. When the power is turned on, oscillation is started by the power turn-on transient or random electronic noise present in the circuit. Noise guarantees that the circuit will not remain "balanced" precisely at its unstable DC equilibrium point (Q point) indefinitely. Due to the narrow passband of the filter, the response of the circuit to a noise pulse will be sinusoidal; it will excite a small sine wave of voltage in the loop. Since for small signals the loop gain is greater than one, the amplitude of the sine wave increases exponentially.
During startup, while the amplitude of the oscillation is small, the circuit is approximately linear, so the analysis used in the Barkhausen criterion is applicable. When the amplitude becomes large enough that the amplifier becomes nonlinear, technically the frequency domain analysis used in normal amplifier circuits is no longer applicable, so the "gain" of the circuit is undefined. However the filter attenuates the harmonic components produced by the nonlinearity of the amplifier, so the fundamental frequency component sin(ω₀t) mainly determines the loop gain (this is the "harmonic balance" analysis technique for nonlinear circuits).
The sine wave cannot grow indefinitely; in all real oscillators some nonlinear process in the circuit limits its amplitude, reducing the gain as the amplitude increases, resulting in stable operation at some constant amplitude. In most oscillators this nonlinearity is simply the saturation (limiting) of the amplifying device, the transistor, vacuum tube or op-amp. The maximum voltage swing of the amplifier's output is limited by the DC voltage provided by its power supply. Another possibility is that the output may be limited by the amplifier slew rate.
As the amplitude of the output nears the power supply voltage rails, the amplifier begins to saturate on the peaks (top and bottom) of the sine wave, flattening or "clipping" the peaks. Since the output of the amplifier can no longer increase with increasing input, further increases in amplitude cause the equivalent gain of the amplifier and thus the loop gain to decrease. The amplitude of the sine wave, and the resulting clipping, continues to grow until the loop gain is reduced to unity, |Aβ(jω₀)| = 1, satisfying the Barkhausen criterion, at which point the amplitude levels off and steady state operation is achieved, with the output a slightly distorted sine wave with peak amplitude determined by the supply voltage. This is a stable equilibrium; if the amplitude of the sine wave increases for some reason, increased clipping of the output causes the loop gain |Aβ(jω₀)| to drop below one temporarily, reducing the sine wave's amplitude back to its unity-gain value. Similarly if the amplitude of the wave decreases, the decreased clipping will cause the loop gain to increase above one, increasing the amplitude.
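The stable-equilibrium argument can be caricatured with a toy iteration (purely illustrative, not a circuit simulation): treat the loop as multiplying the oscillation amplitude once per pass by an effective gain that falls off as the amplifier saturates, here modelled with a stand-in tanh soft-clipping curve rather than any particular amplifier's transfer function. The amplitude grows from a tiny "noise" seed and settles where the effective loop gain returns to one.

```python
import math

A_small_signal = 3.0      # small-signal loop gain > 1 so oscillation starts (rule-of-thumb value)

def effective_loop_gain(amplitude):
    """Amplitude-dependent loop gain of a soft-clipping stage (tanh used as a stand-in)."""
    if amplitude < 1e-12:
        return A_small_signal
    return A_small_signal * math.tanh(amplitude) / amplitude

amplitude = 1e-9          # tiny starting amplitude representing circuit noise
for cycle in range(60):
    amplitude *= effective_loop_gain(amplitude)

print(f"steady-state amplitude ~ {amplitude:.3f}, loop gain there ~ {effective_loop_gain(amplitude):.3f}")
```

The iteration converges to the amplitude at which the effective loop gain equals one, mirroring the clipping-limited equilibrium described above.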
The amount of harmonic distortion in the output is dependent on how much excess loop gain the circuit has:
An exception to the above are high Q oscillator circuits such as crystal oscillators; the narrow bandwidth of the crystal removes the harmonics from the output, producing a 'pure' sinusoidal wave with almost no distortion even with large loop gains.
Since oscillators depend on nonlinearity for their operation, the usual linear frequency domain circuit analysis techniques used for amplifiers based on the Laplace transform, such as root locus and gain and phase plots (Bode plots), cannot capture their full behavior. To determine startup and transient behavior and calculate the detailed shape of the output waveform, electronic circuit simulation computer programs like SPICE are used. A typical design procedure for oscillator circuits is to use linear techniques such as the Barkhausen stability criterion or Nyquist stability criterion to design the circuit, then simulate the circuit on computer to make sure it starts up reliably and to determine the nonlinear aspects of operation such as harmonic distortion. Component values are tweaked until the simulation results are satisfactory. The distorted oscillations of real-world (nonlinear) oscillators are called limit cycles and are studied in nonlinear control theory.
In applications where a 'pure' very low distortion sine wave is needed, such as precision signal generators, a nonlinear component is often used in the feedback loop that provides a 'slow' gain reduction with amplitude. This stabilizes the loop gain at an amplitude below the saturation level of the amplifier, so it does not saturate and "clip" the sine wave. Resistor-diode networks and FETs are often used for the nonlinear element. An older design uses a thermistor or an ordinary incandescent light bulb; both provide a resistance that increases with temperature as the current through them increases.
As the amplitude of the signal current through them increases during oscillator startup, the increasing resistance of these devices reduces the loop gain. The essential characteristic of all these circuits is that the nonlinear gain-control circuit must have a long time constant, much longer than a single period of the oscillation. Therefore over a single cycle they act as virtually linear elements, and so introduce very little distortion. The operation of these circuits is somewhat analogous to an automatic gain control (AGC) circuit in a radio receiver. The Wien bridge oscillator is a widely used circuit in which this type of gain stabilization is used.
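For the Wien bridge just mentioned, the reason the slow gain-control element settles the amplifier gain near 3 can be checked numerically (a sketch; the R and C values are arbitrary): the Wien RC network's transfer ratio peaks at 1/3, with zero phase shift, at f = 1/(2πRC).

```python
import cmath, math

R, C = 10e3, 16e-9        # arbitrary example values

def wien_beta(f):
    """Transfer ratio of the Wien RC network: series R-C feeding a parallel R-C."""
    w = 2 * math.pi * f
    z_series = R + 1 / (1j * w * C)
    z_parallel = R / (1 + 1j * w * R * C)
    return z_parallel / (z_series + z_parallel)

f_peak = 1 / (2 * math.pi * R * C)
b = wien_beta(f_peak)
print(f"f = {f_peak:.1f} Hz: |beta| = {abs(b):.4f} (= 1/3), phase = {math.degrees(cmath.phase(b)):.2f} deg")
print(f"so the amplifier gain must settle at about {1/abs(b):.2f} for unity loop gain")
```

The gain-control element (lamp, thermistor, or FET) simply holds the amplifier gain at the value, here about 3, that keeps the loop gain at unity once the desired amplitude is reached.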
At high frequencies it becomes difficult to physically implement feedback oscillators because of shortcomings of the components. Since at high frequencies the tank circuit has very small capacitance and inductance, parasitic capacitance and parasitic inductance of component leads and PCB traces become significant. These may create unwanted feedback paths between the output and input of the active device, creating instability and oscillations at unwanted frequencies (parasitic oscillation). Parasitic feedback paths inside the active device itself, such as the interelectrode capacitance between output and input, make the device unstable. The input impedance of the active device falls with frequency, so it may load the feedback network. As a result, stable feedback oscillators are difficult to build for frequencies above 500 MHz, and negative resistance oscillators are usually used for frequencies above this.
The first practical oscillators were based on electric arcs, which were used for lighting in the 19th century. The current through an arc light is unstable due to its negative resistance, and often breaks into spontaneous oscillations, causing the arc to make hissing, humming or howling sounds which had been noticed by Humphry Davy in 1821, Benjamin Silliman in 1822, Auguste Arthur de la Rive in 1846, and David Edward Hughes in 1878. Ernst Lecher in 1888 showed that the current through an electric arc could be oscillatory.
An oscillator was built by Elihu Thomson in 1892 by placing an LC tuned circuit in parallel with an electric arc and included a magnetic blowout. Independently, in the same year, George Francis FitzGerald realized that if the damping resistance in a resonant circuit could be made zero or negative, the circuit would produce oscillations, and, unsuccessfully, tried to build a negative resistance oscillator with a dynamo, what would now be called a parametric oscillator. The arc oscillator was rediscovered and popularized by William Duddell in 1900. Duddell, a student at London Technical College, was investigating the hissing arc effect. He attached an LC circuit (tuned circuit) to the electrodes of an arc lamp, and the negative resistance of the arc excited oscillation in the tuned circuit. Some of the energy was radiated as sound waves by the arc, producing a musical tone. Duddell demonstrated his oscillator before the London Institute of Electrical Engineers by sequentially connecting different tuned circuits across the arc to play the national anthem "God Save the Queen". Duddell's "singing arc" did not generate frequencies above the audio range. In 1902 Danish physicists Valdemar Poulsen and P. O. Pederson were able to increase the frequency produced into the radio range by operating the arc in a hydrogen atmosphere with a magnetic field, inventing the Poulsen arc radio transmitter, the first continuous wave radio transmitter, which was used through the 1920s.
The vacuum-tube feedback oscillator was invented around 1912, when it was discovered that feedback ("regeneration") in the recently invented audion (triode) vacuum tube could produce oscillations. At least six researchers independently made this discovery, although not all of them can be said to have a role in the invention of the oscillator. In the summer of 1912, Edwin Armstrong observed oscillations in audion radio receiver circuits and went on to use positive feedback in his invention of the regenerative receiver. Austrian Alexander Meissner independently discovered positive feedback and invented oscillators in March 1913. Irving Langmuir at General Electric observed feedback in 1913. Fritz Lowenstein may have preceded the others with a crude oscillator in late 1911. In Britain, H. J. Round patented amplifying and oscillating circuits in 1913. In August 1912, Lee De Forest, the inventor of the audion, had also observed oscillations in his amplifiers, but he didn't understand the significance and tried to eliminate it until he read Armstrong's patents in 1914, which he promptly challenged. Armstrong and De Forest fought a protracted legal battle over the rights to the "regenerative" oscillator circuit which has been called "the most complicated patent litigation in the history of radio". De Forest ultimately won before the Supreme Court in 1934 on technical grounds, but most sources regard Armstrong's claim as the stronger one.
The first and most widely used relaxation oscillator circuit, the astable multivibrator, was invented in 1917 by French engineers Henri Abraham and Eugene Bloch. They called their cross-coupled, dual-vacuum-tube circuit a multivibrateur, because the square-wave signal it produced was rich in harmonics, compared to the sinusoidal signal of other vacuum-tube oscillators.
Vacuum-tube feedback oscillators became the basis of radio transmission by 1920. However, the triode vacuum tube oscillator performed poorly above 300 MHz because of interelectrode capacitance. To reach higher frequencies, new "transit time" (velocity modulation) vacuum tubes were developed, in which electrons traveled in "bunches" through the tube. The first of these was the Barkhausen–Kurz oscillator (1920), the first tube to produce power in the UHF range. The most important and widely used were the klystron (R. and S. Varian, 1937) and the cavity magnetron (J. Randall and H. Boot, 1940).
Mathematical conditions for feedback oscillations, now called the Barkhausen criterion, were derived by Heinrich Georg Barkhausen in 1921. The first analysis of a nonlinear electronic oscillator model, the Van der Pol oscillator, was done by Balthasar van der Pol in 1927. He showed that the stability of the oscillations (limit cycles) in actual oscillators was due to the nonlinearity of the amplifying device. He originated the term "relaxation oscillation" and was first to distinguish between linear and relaxation oscillators. Further advances in mathematical analysis of oscillation were made by Hendrik Wade Bode and Harry Nyquist in the 1930s. In 1969 Kaneyuki Kurokawa derived necessary and sufficient conditions for oscillation in negative-resistance circuits, which form the basis of modern microwave oscillator design. | [
{
"paragraph_id": 0,
"text": "An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Oscillators are often characterized by the frequency of their output signal:",
"title": ""
},
{
"paragraph_id": 2,
"text": "There are two general types of electronic oscillators: the linear or harmonic oscillator, and the nonlinear or relaxation oscillator. The two types are fundamentally different in how oscillation is produced, as well as in the characteristic type of output signal that is generated.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The most-common linear oscillator in use is the crystal oscillator, in which the output frequency is controlled by a piezo-electric resonator consisting of a vibrating quartz crystal. Crystal oscillators are ubiquitous in modern electronics, being the source for the clock signal in computers and digital watches, as well as a source for the signals generated in radio transmitters and receivers. As a crystal oscillator’s “native” output waveform is sinusoidal, a signal-conditioning circuit may be used to convert the output to other waveform types, such as the square wave typically utilized in computer clock circuits.",
"title": ""
},
{
"paragraph_id": 4,
"text": "",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 5,
"text": "Linear or harmonic oscillators generate a sinusoidal (or nearly-sinusoidal) signal. There are two types:",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 6,
"text": "The most common form of linear oscillator is an electronic amplifier such as a transistor or operational amplifier connected in a feedback loop with its output fed back into its input through a frequency selective electronic filter to provide positive feedback. When the power supply to the amplifier is switched on initially, electronic noise in the circuit provides a non-zero signal to get oscillations started. The noise travels around the loop and is amplified and filtered until very quickly it converges on a sine wave at a single frequency.",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 7,
"text": "Feedback oscillator circuits can be classified according to the type of frequency selective filter they use in the feedback loop:",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 8,
"text": "",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 9,
"text": "In addition to the feedback oscillators described above, which use two-port amplifying active elements such as transistors and operational amplifiers, linear oscillators can also be built using one-port (two terminal) devices with negative resistance, such as magnetron tubes, tunnel diodes, IMPATT diodes and Gunn diodes. Negative-resistance oscillators are usually used at high frequencies in the microwave range and above, since at these frequencies feedback oscillators perform poorly due to excessive phase shift in the feedback path.",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 10,
"text": "In negative-resistance oscillators, a resonant circuit, such as an LC circuit, crystal, or cavity resonator, is connected across a device with negative differential resistance, and a DC bias voltage is applied to supply energy. A resonant circuit by itself is \"almost\" an oscillator; it can store energy in the form of electronic oscillations if excited, but because it has electrical resistance and other losses the oscillations are damped and decay to zero. The negative resistance of the active device cancels the (positive) internal loss resistance in the resonator, in effect creating a resonator with no damping, which generates spontaneous continuous oscillations at its resonant frequency.",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 11,
"text": "The negative-resistance oscillator model is not limited to one-port devices like diodes; feedback oscillator circuits with two-port amplifying devices such as transistors and tubes also have negative resistance. At high frequencies, three terminal devices such as transistors and FETs are also used in negative resistance oscillators. At high frequencies these devices do not need a feedback loop, but with certain loads applied to one port can become unstable at the other port and show negative resistance due to internal feedback. The negative resistance port is connected to a tuned circuit or resonant cavity, causing them to oscillate. High-frequency oscillators in general are designed using negative-resistance techniques.",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 12,
"text": "Some of the many harmonic oscillator circuits are listed below:",
"title": "Harmonic oscillators"
},
{
"paragraph_id": 13,
"text": "A nonlinear or relaxation oscillator produces a non-sinusoidal output, such as a square, sawtooth or triangle wave. It consists of an energy-storing element (a capacitor or, more rarely, an inductor) and a nonlinear switching device (a latch, Schmitt trigger, or negative-resistance element) connected in a feedback loop. The switching device periodically charges and discharges the energy stored in the storage element thus causing abrupt changes in the output waveform.",
"title": "Relaxation oscillator"
},
{
"paragraph_id": 14,
"text": "Square-wave relaxation oscillators are used to provide the clock signal for sequential logic circuits such as timers and counters, although crystal oscillators are often preferred for their greater stability. Triangle-wave or sawtooth oscillators are used in the timebase circuits that generate the horizontal deflection signals for cathode ray tubes in analogue oscilloscopes and television sets. They are also used in voltage-controlled oscillators (VCOs), inverters and switching power supplies, dual-slope analog to digital converters (ADCs), and in function generators to generate square and triangle waves for testing equipment. In general, relaxation oscillators are used at lower frequencies and have poorer frequency stability than linear oscillators.",
"title": "Relaxation oscillator"
},
{
"paragraph_id": 15,
"text": "Ring oscillators are built of a ring of active delay stages. Generally the ring has an odd number of inverting stages, so that there is no single stable state for the internal ring voltages. Instead, a single transition propagates endlessly around the ring.",
"title": "Relaxation oscillator"
},
{
"paragraph_id": 16,
"text": "Some of the more common relaxation oscillator circuits are listed below:",
"title": "Relaxation oscillator"
},
{
"paragraph_id": 17,
"text": "An oscillator can be designed so that the oscillation frequency can be varied over some range by an input voltage or current. These voltage controlled oscillators are widely used in phase-locked loops, in which the oscillator's frequency can be locked to the frequency of another oscillator. These are ubiquitous in modern communications circuits, used in filters, modulators, demodulators, and forming the basis of frequency synthesizer circuits which are used to tune radios and televisions.",
"title": "Voltage-controlled oscillator (VCO)"
},
{
"paragraph_id": 18,
"text": "Radio frequency VCOs are usually made by adding a varactor diode to the tuned circuit or resonator in an oscillator circuit. Changing the DC voltage across the varactor changes its capacitance, which changes the resonant frequency of the tuned circuit. Voltage controlled relaxation oscillators can be constructed by charging and discharging the energy storage capacitor with a voltage controlled current source. Increasing the input voltage increases the rate of charging the capacitor, decreasing the time between switching events.",
"title": "Voltage-controlled oscillator (VCO)"
},
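A rough Python sketch of the varactor tuning mechanism described above; the inductor, the fixed tank capacitance, and the simple abrupt-junction varactor model C(V) = C0/(1 + V/Vj)^0.5 are illustrative assumptions, not values from the article.

```python
import math

# Hypothetical varactor-tuned LC resonator
L = 220e-9                          # tank inductance, H
C_fix = 4.7e-12                     # fixed tank capacitance, F
C0, Vj, gamma = 20e-12, 0.7, 0.5    # simple abrupt-junction varactor model

def f_osc(v_tune):
    """Oscillation frequency of the tank for a given DC tuning voltage."""
    c_var = C0 / (1 + v_tune / Vj) ** gamma   # varactor capacitance shrinks with bias
    return 1 / (2 * math.pi * math.sqrt(L * (C_fix + c_var)))

for v in (0.5, 2.0, 5.0, 10.0):
    print(f"Vtune = {v:4.1f} V  ->  f = {f_osc(v)/1e6:6.1f} MHz")
```

Raising the tuning voltage reduces the varactor capacitance, so the printed frequencies rise with the control voltage, which is the behaviour a phase-locked loop exploits.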
{
"paragraph_id": 19,
"text": "A feedback oscillator circuit consists of two parts connected in a feedback loop; an amplifier A {\\displaystyle A} and an electronic filter β ( j ω ) {\\displaystyle \\beta (j\\omega )} . The filter's purpose is to limit the frequencies that can pass through the loop so the circuit only oscillates at the desired frequency. Since the filter and wires in the circuit have resistance they consume energy and the amplitude of the signal drops as it passes through the filter. The amplifier is needed to increase the amplitude of the signal to compensate for the energy lost in the other parts of the circuit, so the loop will oscillate, as well as supply energy to the load attached to the output.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 20,
"text": "To determine the frequency(s) ω 0 = 2 π f 0 {\\displaystyle \\omega _{0}\\;=\\;2\\pi f_{0}} at which a feedback oscillator circuit will oscillate, the feedback loop is thought of as broken at some point (see diagrams) to give an input and output port. A sine wave is applied to the input v i ( t ) = V i e j ω t {\\displaystyle v_{i}(t)=V_{i}e^{j\\omega t}} and the amplitude and phase of the sine wave after going through the loop v o = V o e j ( ω t + ϕ ) {\\displaystyle v_{o}=V_{o}e^{j(\\omega t+\\phi )}} is calculated",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 21,
"text": "Since in the complete circuit v o {\\displaystyle v_{o}} is connected to v i {\\displaystyle v_{i}} , for oscillations to exist",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 22,
"text": "The ratio of output to input of the loop, v o v i = A β ( j ω ) {\\displaystyle {v_{o} \\over v_{i}}=A\\beta (j\\omega )} , is called the loop gain. So the condition for oscillation is that the loop gain must be one",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 23,
"text": "Since A β ( j ω ) {\\displaystyle A\\beta (j\\omega )} is a complex number with two parts, a magnitude and an angle, the above equation actually consists of two conditions:",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 24,
"text": "Equations (1) and (2) are called the Barkhausen stability criterion. It is a necessary but not a sufficient criterion for oscillation, so there are some circuits which satisfy these equations that will not oscillate. An equivalent condition often used instead of the Barkhausen condition is that the circuit's closed loop transfer function (the circuit's complex impedance at its output) have a pair of poles on the imaginary axis.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 25,
"text": "In general, the phase shift of the feedback network increases with increasing frequency so there are only a few discrete frequencies (often only one) which satisfy the second equation. If the amplifier gain A {\\displaystyle A} is high enough that the loop gain is unity (or greater, see Startup section) at one of these frequencies, the circuit will oscillate at that frequency. Many amplifiers such as common-emitter transistor circuits are \"inverting\", meaning that their output voltage decreases when their input increases. In these the amplifier provides 180° phase shift, so the circuit will oscillate at the frequency at which the feedback network provides the other 180° phase shift.",
"title": "Theory of feedback oscillators"
},
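The following Python sketch applies the two Barkhausen conditions numerically to a hypothetical three-section RC phase-shift feedback network driven by an inverting amplifier (component values are assumptions): it sweeps frequency, finds the point where the unloaded ladder contributes the remaining 180° of phase shift, and reports the amplifier gain needed to bring the loop gain to unity. The classic textbook result for this network is a gain of about 29 at f ≈ 1/(2π·√6·RC).

```python
import numpy as np

# Hypothetical 3-section RC phase-shift feedback network (series C, shunt R),
# the classic ladder used with an inverting amplifier.
R, C = 10e3, 10e-9    # assumed component values

def beta(f):
    """Unloaded voltage transfer of the ladder, computed with ABCD matrices."""
    w = 2 * np.pi * f
    series_C = np.array([[1, 1/(1j*w*C)], [0, 1]])   # series capacitor
    shunt_R  = np.array([[1, 0], [1/R, 1]])          # resistor to ground
    abcd = np.eye(2)
    for _ in range(3):
        abcd = abcd @ series_C @ shunt_R
    return 1 / abcd[0, 0]                            # Vout/Vin with no load

freqs = np.logspace(2, 5, 20000)                     # sweep 100 Hz .. 100 kHz
b = np.array([beta(f) for f in freqs])
idx = np.argmin(np.abs(np.angle(-b)))                # where the ladder gives 180 deg
print(f"oscillation frequency ~ {freqs[idx]:.0f} Hz")
print(f"required inverting-amplifier gain ~ {1/abs(b[idx]):.1f}  (classic value: 29)")
```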
{
"paragraph_id": 26,
"text": "At frequencies well below the poles of the amplifying device, the amplifier will act as a pure gain A {\\displaystyle A} , but if the oscillation frequency ω 0 {\\displaystyle \\omega _{0}} is near the amplifier's cutoff frequency ω C {\\displaystyle \\omega _{C}} , within 0.1 ω C {\\displaystyle 0.1\\omega _{C}} , the active device can no longer be considered a 'pure gain', and it will contribute some phase shift to the loop.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 27,
"text": "An alternate mathematical stability test sometimes used instead of the Barkhausen criterion is the Nyquist stability criterion. This has a wider applicability than the Barkhausen, so it can identify some of the circuits which pass the Barkhausen criterion but do not oscillate.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 28,
"text": "Temperature changes, aging, and manufacturing tolerances will cause component values to \"drift\" away from their designed values. Changes in frequency determining components such as the tank circuit in LC oscillators will cause the oscillation frequency to change, so for a constant frequency these components must have stable values. How stable the oscillator's frequency is to other changes in the circuit, such as changes in values of other components, gain of the amplifier, the load impedance, or the supply voltage, is mainly dependent on the Q factor (\"quality factor\") of the feedback filter. Since the amplitude of the output is constant due to the nonlinearity of the amplifier (see Startup section below), changes in component values cause changes in the phase shift ϕ = ∠ A β ( j ω ) {\\displaystyle \\phi \\;=\\;\\angle A\\beta (j\\omega )} of the feedback loop. Since oscillation can only occur at frequencies where the phase shift is a multiple of 360°, ϕ = 360 n ∘ {\\displaystyle \\phi \\;=\\;360n^{\\circ }} , shifts in component values cause the oscillation frequency ω 0 {\\displaystyle \\omega _{0}} to change to bring the loop phase back to 360n°. The amount of frequency change Δ ω {\\displaystyle \\Delta \\omega } caused by a given phase change Δ ϕ {\\displaystyle \\Delta \\phi } depends on the slope of the loop phase curve at ω 0 {\\displaystyle \\omega _{0}} , which is determined by the Q {\\displaystyle Q}",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 29,
"text": "RC oscillators have the equivalent of a very low Q {\\displaystyle Q} , so the phase changes very slowly with frequency, therefore a given phase change will cause a large change in the frequency. In contrast, LC oscillators have tank circuits with high Q {\\displaystyle Q} (~10). This means the phase shift of the feedback network increases rapidly with frequency near the resonant frequency of the tank circuit. So a large change in phase causes only a small change in frequency. Therefore the circuit's oscillation frequency is very close to the natural resonant frequency of the tuned circuit, and doesn't depend much on other components in the circuit. The quartz crystal resonators used in crystal oscillators have even higher Q {\\displaystyle Q} (10 to 10) and their frequency is very stable and independent of other circuit components.",
"title": "Theory of feedback oscillators"
},
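A small numeric illustration of the point above, using the standard near-resonance approximation Δf/f0 ≈ Δφ/(2Q): for the same stray phase disturbance elsewhere in the loop, a higher-Q resonator pulls the oscillation frequency far less. The Q values below are typical order-of-magnitude assumptions.

```python
import math

# Fractional frequency shift needed to absorb a 1-degree stray phase change,
# using the near-resonance approximation df/f0 ~ dphi / (2Q).
dphi = math.radians(1.0)
for name, Q in [("RC network", 1), ("LC tank circuit", 100), ("quartz crystal", 100_000)]:
    print(f"{name:16s} Q = {Q:>7}:  df/f0 = {dphi/(2*Q):.2e}")
```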
{
"paragraph_id": 30,
"text": "The frequency of RC and LC oscillators can be tuned over a wide range by using variable components in the filter. A microwave cavity can be tuned mechanically by moving one of the walls. In contrast, a quartz crystal is a mechanical resonator whose resonant frequency is mainly determined by its dimensions, so a crystal oscillator's frequency is only adjustable over a very narrow range, a tiny fraction of one percent. It's frequency can be changed slightly by using a trimmer capacitor in series or parallel with the crystal.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 31,
"text": "The Barkhausen criterion above, eqs. (1) and (2), merely gives the frequencies at which steady-state oscillation is possible, but says nothing about the amplitude of the oscillation, whether the amplitude is stable, or whether the circuit will start oscillating when the power is turned on. For a practical oscillator two additional requirements are necessary:",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 32,
"text": "A typical rule of thumb is to make the small signal loop gain at the oscillation frequency 2 or 3. When the power is turned on, oscillation is started by the power turn-on transient or random electronic noise present in the circuit. Noise guarantees that the circuit will not remain \"balanced\" precisely at its unstable DC equilibrium point (Q point) indefinitely. Due to the narrow passband of the filter, the response of the circuit to a noise pulse will be sinusoidal, it will excite a small sine wave of voltage in the loop. Since for small signals the loop gain is greater than one, the amplitude of the sine wave increases exponentially.",
"title": "Theory of feedback oscillators"
},
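A toy Python calculation of this startup growth, assuming the rule-of-thumb small-signal loop gain of 2 quoted above and an initial microvolt-level noise excitation (both values are assumptions for illustration):

```python
# Rough startup estimate: each trip around the loop multiplies the small-signal
# amplitude by the loop gain, so roughly a microvolt of noise grows to volt-level
# swings within a few dozen loop transits.
loop_gain = 2.0
amplitude = 1e-6          # volts of noise exciting the loop at power-on
trips = 0
while amplitude < 1.0:    # until the swing is large enough for limiting to begin
    amplitude *= loop_gain
    trips += 1
print(f"about {trips} trips around the loop to grow from 1 uV to 1 V")
```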
{
"paragraph_id": 33,
"text": "During startup, while the amplitude of the oscillation is small, the circuit is approximately linear, so the analysis used in the Barkhausen criterion is applicable. When the amplitude becomes large enough that the amplifier becomes nonlinear, technically the frequency domain analysis used in normal amplifier circuits is no longer applicable, so the \"gain\" of the circuit is undefined. However the filter attenuates the harmonic components produced by the nonlinearity of the amplifier, so the fundamental frequency component sin ω 0 t {\\displaystyle \\sin \\omega _{0}t} mainly determines the loop gain (this is the \"harmonic balance\" analysis technique for nonlinear circuits).",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 34,
"text": "The sine wave cannot grow indefinitely; in all real oscillators some nonlinear process in the circuit limits its amplitude, reducing the gain as the amplitude increases, resulting in stable operation at some constant amplitude. In most oscillators this nonlinearity is simply the saturation (limiting) of the amplifying device, the transistor, vacuum tube or op-amp. The maximum voltage swing of the amplifier's output is limited by the DC voltage provided by its power supply. Another possibility is that the output may be limited by the amplifier slew rate.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 35,
"text": "As the amplitude of the output nears the power supply voltage rails, the amplifier begins to saturate on the peaks (top and bottom) of the sine wave, flattening or \"clipping\" the peaks. Since the output of the amplifier can no longer increase with increasing input, further increases in amplitude cause the equivalent gain of the amplifier and thus the loop gain to decrease. The amplitude of the sine wave, and the resulting clipping, continues to grow until the loop gain is reduced to unity, | A β ( j ω 0 ) | = 1 {\\displaystyle |A\\beta (j\\omega _{0})|\\;=\\;1\\,} , satisfying the Barkhausen criterion, at which point the amplitude levels off and steady state operation is achieved, with the output a slightly distorted sine wave with peak amplitude determined by the supply voltage. This is a stable equilibrium; if the amplitude of the sine wave increases for some reason, increased clipping of the output causes the loop gain | A β ( j ω 0 ) | {\\displaystyle |A\\beta (j\\omega _{0})|} to drop below one temporarily, reducing the sine wave's amplitude back to its unity-gain value. Similarly if the amplitude of the wave decreases, the decreased clipping will cause the loop gain to increase above one, increasing the amplitude.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 36,
"text": "The amount of harmonic distortion in the output is dependent on how much excess loop gain the circuit has:",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 37,
"text": "An exception to the above are high Q oscillator circuits such as crystal oscillators; the narrow bandwidth of the crystal removes the harmonics from the output, producing a 'pure' sinusoidal wave with almost no distortion even with large loop gains.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 38,
"text": "Since oscillators depend on nonlinearity for their operation, the usual linear frequency domain circuit analysis techniques used for amplifiers based on the Laplace transform, such as root locus and gain and phase plots (Bode plots), cannot capture their full behavior. To determine startup and transient behavior and calculate the detailed shape of the output waveform, electronic circuit simulation computer programs like SPICE are used. A typical design procedure for oscillator circuits is to use linear techniques such as the Barkhausen stability criterion or Nyquist stability criterion to design the circuit, then simulate the circuit on computer to make sure it starts up reliably and to determine the nonlinear aspects of operation such as harmonic distortion. Component values are tweaked until the simulation results are satisfactory. The distorted oscillations of real-world (nonlinear) oscillators are called limit cycles and are studied in nonlinear control theory.",
"title": "Theory of feedback oscillators"
},
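Short of a full SPICE run, the startup and amplitude-limiting behaviour can be sketched in Python with the Van der Pol equation mentioned later in the History section, which models an oscillator whose gain exceeds the losses at small amplitude and falls off at large amplitude. The parameter values and the simple integration scheme below are illustrative assumptions.

```python
# Van der Pol model  x'' - mu*(1 - x^2)*x' + x = 0  of an amplitude-limited oscillator:
# net "negative damping" (gain > loss) while |x| < 1, damping for large |x|.
mu, dt, steps = 0.2, 1e-3, 200_000
x, v = 1e-3, 0.0                      # start from a tiny noise-like displacement

peak_early = peak_late = 0.0
for n in range(steps):
    a = mu * (1 - x * x) * v - x      # acceleration from the Van der Pol equation
    v += a * dt                       # semi-implicit Euler step
    x += v * dt
    if n < steps // 10:
        peak_early = max(peak_early, abs(x))
    elif n > 9 * steps // 10:
        peak_late = max(peak_late, abs(x))

print(f"peak amplitude early in the run:      {peak_early:.4f}")
print(f"steady-state (limit cycle) amplitude: {peak_late:.3f}   # ~2 for small mu")
```

The printed amplitudes show the two regimes discussed above: exponential growth from a noise-sized excitation, then settling onto a stable limit cycle.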
{
"paragraph_id": 39,
"text": "In applications where a 'pure' very low distortion sine wave is needed, such as precision signal generators, a nonlinear component is often used in the feedback loop that provides a 'slow' gain reduction with amplitude. This stabilizes the loop gain at an amplitude below the saturation level of the amplifier, so it does not saturate and \"clip\" the sine wave. Resistor-diode networks and FETs are often used for the nonlinear element. An older design uses a thermistor or an ordinary incandescent light bulb; both provide a resistance that increases with temperature as the current through them increases.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 40,
"text": "As the amplitude of the signal current through them increases during oscillator startup, the increasing resistance of these devices reduces the loop gain. The essential characteristic of all these circuits is that the nonlinear gain-control circuit must have a long time constant, much longer than a single period of the oscillation. Therefore over a single cycle they act as virtually linear elements, and so introduce very little distortion. The operation of these circuits is somewhat analogous to an automatic gain control (AGC) circuit in a radio receiver. The Wein bridge oscillator is a widely used circuit in which this type of gain stabilization is used.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 41,
"text": "At high frequencies it becomes difficult to physically implement feedback oscillators because of shortcomings of the components. Since at high frequencies the tank circuit has very small capacitance and inductance, parasitic capacitance and parasitic inductance of component leads and PCB traces become significant. These may create unwanted feedback paths between the output and input of the active device, creating instability and oscillations at unwanted frequencies (parasitic oscillation). Parasitic feedback paths inside the active device itself, such as the interelectrode capacitance between output and input, make the device unstable. The input impedance of the active device falls with frequency, so it may load the feedback network. As a result, stable feedback oscillators are difficult to build for frequencies above 500 MHz, and negative resistance oscillators are usually used for frequencies above this.",
"title": "Theory of feedback oscillators"
},
{
"paragraph_id": 42,
"text": "The first practical oscillators were based on electric arcs, which were used for lighting in the 19th century. The current through an arc light is unstable due to its negative resistance, and often breaks into spontaneous oscillations, causing the arc to make hissing, humming or howling sounds which had been noticed by Humphry Davy in 1821, Benjamin Silliman in 1822, Auguste Arthur de la Rive in 1846, and David Edward Hughes in 1878. Ernst Lecher in 1888 showed that the current through an electric arc could be oscillatory.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "An oscillator was built by Elihu Thomson in 1892 by placing an LC tuned circuit in parallel with an electric arc and included a magnetic blowout. Independently, in the same year, George Francis FitzGerald realized that if the damping resistance in a resonant circuit could be made zero or negative, the circuit would produce oscillations, and, unsuccessfully, tried to build a negative resistance oscillator with a dynamo, what would now be called a parametric oscillator. The arc oscillator was rediscovered and popularized by William Duddell in 1900. Duddell, a student at London Technical College, was investigating the hissing arc effect. He attached an LC circuit (tuned circuit) to the electrodes of an arc lamp, and the negative resistance of the arc excited oscillation in the tuned circuit. Some of the energy was radiated as sound waves by the arc, producing a musical tone. Duddell demonstrated his oscillator before the London Institute of Electrical Engineers by sequentially connecting different tuned circuits across the arc to play the national anthem \"God Save the Queen\". Duddell's \"singing arc\" did not generate frequencies above the audio range. In 1902 Danish physicists Valdemar Poulsen and P. O. Pederson were able to increase the frequency produced into the radio range by operating the arc in a hydrogen atmosphere with a magnetic field, inventing the Poulsen arc radio transmitter, the first continuous wave radio transmitter, which was used through the 1920s.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "The vacuum-tube feedback oscillator was invented around 1912, when it was discovered that feedback (\"regeneration\") in the recently invented audion (triode) vacuum tube could produce oscillations. At least six researchers independently made this discovery, although not all of them can be said to have a role in the invention of the oscillator. In the summer of 1912, Edwin Armstrong observed oscillations in audion radio receiver circuits and went on to use positive feedback in his invention of the regenerative receiver. Austrian Alexander Meissner independently discovered positive feedback and invented oscillators in March 1913. Irving Langmuir at General Electric observed feedback in 1913. Fritz Lowenstein may have preceded the others with a crude oscillator in late 1911. In Britain, H. J. Round patented amplifying and oscillating circuits in 1913. In August 1912, Lee De Forest, the inventor of the audion, had also observed oscillations in his amplifiers, but he didn't understand the significance and tried to eliminate it until he read Armstrong's patents in 1914, which he promptly challenged. Armstrong and De Forest fought a protracted legal battle over the rights to the \"regenerative\" oscillator circuit which has been called \"the most complicated patent litigation in the history of radio\". De Forest ultimately won before the Supreme Court in 1934 on technical grounds, but most sources regard Armstrong's claim as the stronger one.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "The first and most widely used relaxation oscillator circuit, the astable multivibrator, was invented in 1917 by French engineers Henri Abraham and Eugene Bloch. They called their cross-coupled, dual-vacuum-tube circuit a multivibrateur, because the square-wave signal it produced was rich in harmonics, compared to the sinusoidal signal of other vacuum-tube oscillators.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "Vacuum-tube feedback oscillators became the basis of radio transmission by 1920. However, the triode vacuum tube oscillator performed poorly above 300 MHz because of interelectrode capacitance. To reach higher frequencies, new \"transit time\" (velocity modulation) vacuum tubes were developed, in which electrons traveled in \"bunches\" through the tube. The first of these was the Barkhausen–Kurz oscillator (1920), the first tube to produce power in the UHF range. The most important and widely used were the klystron (R. and S. Varian, 1937) and the cavity magnetron (J. Randall and H. Boot, 1940).",
"title": "History"
},
{
"paragraph_id": 47,
"text": "Mathematical conditions for feedback oscillations, now called the Barkhausen criterion, were derived by Heinrich Georg Barkhausen in 1921. The first analysis of a nonlinear electronic oscillator model, the Van der Pol oscillator, was done by Balthasar van der Pol in 1927. He showed that the stability of the oscillations (limit cycles) in actual oscillators was due to the nonlinearity of the amplifying device. He originated the term \"relaxation oscillation\" and was first to distinguish between linear and relaxation oscillators. Further advances in mathematical analysis of oscillation were made by Hendrik Wade Bode and Harry Nyquist in the 1930s. In 1969 Kaneyuki Kurokawa derived necessary and sufficient conditions for oscillation in negative-resistance circuits, which form the basis of modern microwave oscillator design.",
"title": "History"
}
]
| An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices. Oscillators are often characterized by the frequency of their output signal: A low-frequency oscillator (LFO) is an oscillator that generates a frequency below approximately 20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
An audio oscillator produces frequencies in the audio range, 20 Hz to 20 kHz.
A radio frequency (RF) oscillator produces signals above the audio range, more generally in the range of 100 kHz to 100 GHz. There are two general types of electronic oscillators: the linear or harmonic oscillator, and the nonlinear or relaxation oscillator. The two types are fundamentally different in how oscillation is produced, as well as in the characteristic type of output signal that is generated. The most-common linear oscillator in use is the crystal oscillator, in which the output frequency is controlled by a piezo-electric resonator consisting of a vibrating quartz crystal. Crystal oscillators are ubiquitous in modern electronics, being the source for the clock signal in computers and digital watches, as well as a source for the signals generated in radio transmitters and receivers. As a crystal oscillator’s “native” output waveform is sinusoidal, a signal-conditioning circuit may be used to convert the output to other waveform types, such as the square wave typically utilized in computer clock circuits. | 2002-02-25T15:51:15Z | 2023-10-09T11:20:00Z | [
"Template:Electronic oscillators",
"Template:Short description",
"Template:Anchor",
"Template:Cite patent",
"Template:ISBN",
"Template:Citation needed",
"Template:Cite journal",
"Template:Commons category",
"Template:Rp",
"Template:Multiple image",
"Template:Cite book",
"Template:Citation",
"Template:Harvnb",
"Template:Authority control",
"Template:Breakafterimages",
"Template:Main",
"Template:Reflist",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Electronic_oscillator |
9,922 | Societas Europaea | A societas Europaea (Classical Latin: [sɔˈkɪ.ɛtaːs eu̯roːˈpae̯.a], Ecclesiastical Latin: [soˈtʃi.etas eu̯roˈpe.a]; "European society" or "company"; plural: societates Europaeae; abbr. SE) is a public company registered in accordance with the corporate law of the European Union (EU), introduced in 2004 with the Council Regulation on the Statute for a European Company. Such a company may more easily transfer to or merge with companies in other member states.
As of April 2018, more than 3,000 registrations have been reported, including the following nine components (18%) of the Euro Stoxx 50 stock market index of leading eurozone companies (excluding the SE designation): Airbus, Allianz, BASF, E.ON, Fresenius, LVMH Moët Hennessy Louis Vuitton (and its parent company Dior), SAP, Schneider Electric and Unibail-Rodamco-Westfield.
National law continues to supplement the basic rules in the Regulation on formation and mergers. The European Company Regulation is complemented by an Employee Involvement Directive which manages the rules for participation by employees on the company's board of directors. There is also a statute allowing European Cooperative Societies.
The statute provides four ways of forming a European limited company:
Formation by merger is available only to public limited companies from different member states. Formation of an SE holding company is available to public and private limited companies with their registered offices in different member states or having subsidiaries or branches in member states other than that of their registered office. Formation of a joint subsidiary is available under the same circumstances to any legal entities governed by public or private law.
The SE must have a minimum subscribed capital of €120,000 as per article 4(2) of the directive, subject to the provision that where a member state requires a larger capital for companies exercising certain types of activities, the same requirement will also apply to an SE with its registered office in that member state (article 4(3)).
The registered office of the SE designated in the statutes must be the place where it has its central administration, that is to say its true centre of operations. The SE may transfer its registered office within the European Economic Area without dissolving the company in one member state in order to form a new one in another member state; however, such a transfer is subject to the provisions of article 8, which require, inter alia, the drawing up of a transfer proposal, a report justifying the legal and economic aspects of the transfer and the issuing, by the competent authority in the member state in which the SE is registered, of a certificate attesting to the completion of the required acts and formalities.
The order of precedence of the laws applicable to the SE is clarified.
The registration and completion of the liquidation of an SE must be disclosed for information purposes in the Official Journal of the European Communities. Every SE must be registered in the state where it has its registered office, in a register designated by the law of that state.
The statutes of the SE must provide as governing bodies the annual general meeting of shareholders and either a management board and a supervisory board (two-tier system) or an administrative board (single-tier system). Under the two-tier system the SE is managed by a management board. The member or members of the management board have the power to represent the company in dealings with third parties and in legal proceedings. They are appointed and removed by the supervisory board. No person may be a member of both the management board and the supervisory board of the same company at the same time. But the supervisory board may appoint one of its members to exercise the functions of a member of the management board in the event of absence through holidays. During such a period the function of the person concerned as a member of the supervisory board shall be suspended. Under the single-tier system, the SE is managed by an administrative board. The member or members of the administrative board have the power to represent the company in dealings with third parties and in legal proceedings. Under the single-tier system the administrative board may delegate the power of management to one or more of its members.
The following operations require the authorization of the supervisory board or the deliberation of the administrative board:
The SE must draw up annual accounts comprising the balance sheet, the profit and loss account, and the notes to the accounts, and an annual report giving a fair view of the company's business and of its position; consolidated accounts may also be required.
In tax matters, the SE is treated the same as any other multinational, i.e., it is subject to the tax regime of the national legislation applicable to the company and its subsidiaries. SEs are subject to taxes and charges in all member states where their administrative centres are situated.
Winding-up, liquidation, insolvency, and suspension of payments are in large measure to be governed by national law. When an SE transfers its registered office outside the Community, or in any other manner no longer complies with requirements of article 7, the member state must take appropriate measures to ensure compliance or take necessary measures to ensure that the SE is liquidated.
Council Regulation (EC) No 2157/2001 of 8 October 2001 on the Statute for a European company (SE).
Council Directive 2001/86/EC of 8 October 2001 supplementing the Statute for a European company with regard to the involvement of employees.
See also: Europa's collection of press releases, regulations, directives and FAQs on the European Company Statute.
Following the withdrawal of the UK from the European Union, any SE registered in the United Kingdom converted to a United Kingdom Societas and UK Societas replaced SE in its name.
The regulation is complemented by the Council Directive supplementing the Statute for a European Company with regard to the involvement of employees (informally "Council Directive on Employee Participation"), adopted 8 October 2001. The directive establishes rules on worker involvement in the management of the SE.
EU member states differ in the degree of worker involvement in corporate management. In Germany, most large corporations are required to allow employees to elect a certain percentage of seats on the supervisory board. Other member states have no such requirement, and furthermore in these states such practices are largely unknown and considered a threat to the rights of management.
These differing traditions of worker involvement have held back the adoption of the statute for over a decade. States without worker involvement provisions were afraid that the SE might lead to having such provisions being imposed on their companies; and states with those provisions were afraid they might lead to those provisions being circumvented.
A compromise, contained in the directive, was worked out as follows: worker involvement provisions in the SE will be decided upon by negotiations between employees and management before the creation of the SE. If agreement cannot be reached, provisions contained in the directive will apply. The directive provides for worker involvement in the SE if a minimum percentage of employees from the entities coming together to form the SE enjoyed worker involvement provisions. The directive permits member states to not implement these default worker involvement provisions in their national law, but then an SE cannot be created in that member state if the provisions in the directive would apply and negotiations between workers and management are unsuccessful.
Definition of employee participation: it does not mean participation in day-to-day decisions, which are a matter for the management, but participation in the supervision and strategic development of the company.
Employment contracts and pensions are not covered by the directive. With regard to occupational pension schemes, the SE is covered by the provisions laid down in the proposal for a directive on institutions for occupational schemes, presented by the Commission in October 2000, in particular in connection with the possibility of introducing a single pension scheme for all their employees in the European Union.
Two approaches have been attempted to solve the problems cited above. One approach is to harmonize the company law of the member states. This approach has had some successes, but after thirty years only limited progress has been made. It is difficult to harmonize widely different regulatory systems, especially when they reflect different national attitudes to issues such as worker involvement in the management of the company.
The other approach is to construct a whole new system of EU company law, that co-exists with the individual company laws of the member states. Companies would have the choice of operating either under national regulations or under the EU-wide system. However, this approach has been only somewhat more effective than the harmonization approach: while states are not as concerned about having foreign traditions of corporate governance imposed on their companies, which the harmonization approach could well entail; they also wish to ensure that the EU-wide system would be palatable to the traditions of their national companies, so that they will not be put at a disadvantage compared to the other member states.
The European Company Statute represents a step in this direction, albeit a limited one. While it establishes some common EU rules on the SE, these rules are incomplete, and the holes in the rules are to be filled in using the law of the member state in which the SE is registered. This has been due to the difficulties of agreeing on common European rules on these issues.
As of 11 April 2018, 3,015 registrations have been made. In terms of registrations, the Czech Republic is vastly overrepresented, accounting for 79% of all Societates Europaeae as of December 2015. Nine of the 50 constituents of the Euro Stoxx 50 stock market index of leading eurozone companies were Societates Europaeae as of December 2015.
Annual registrations by member state are presented in the following chart:
Sectors in which societates with more than five employees have been registered (2014)
Registrations of new societates are to be published in the Official Journal of the European Union. There is no official union-wide register of societates, as they are registered in the nation in which their corporate seats are located. worker-participation.eu does however maintain a database of current and planned registrations. Examples of companies include: | [
{
"paragraph_id": 0,
"text": "A societas Europaea (Classical Latin: [sɔˈkɪ.ɛtaːs eu̯roːˈpae̯.a], Ecclesiastical Latin: [soˈtʃi.etas eu̯roˈpe.a]; \"European society\" or \"company\"; plural: societates Europaeae; abbr. SE) is a public company registered in accordance with the corporate law of the European Union (EU), introduced in 2004 with the Council Regulation on the Statute for a European Company. Such a company may more easily transfer to or merge with companies in other member states.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As of April 2018, more than 3,000 registrations have been reported, including the following nine components (18%) of the Euro Stoxx 50 stock market index of leading eurozone companies (excluding the SE designation): Airbus, Allianz, BASF, E.ON, Fresenius, LVMH Moët Hennessy Louis Vuitton (and its parent company Dior), SAP, Schneider Electric and Unibail-Rodamco-Westfield.",
"title": ""
},
{
"paragraph_id": 2,
"text": "National law continues to supplement the basic rules in the Regulation on formation and mergers. The European Company Regulation is complemented by an Employee Involvement Directive which manages the rules for participation by employees on the company's board of directors. There is also a statute allowing European Cooperative Societies.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The statute provides four ways of forming a European limited company:",
"title": "Main provisions"
},
{
"paragraph_id": 4,
"text": "Formation by merger is available only to public limited companies from different member states. Formation of an SE holding company is available to public and private limited companies with their registered offices in different member states or having subsidiaries or branches in member states other than that of their registered office. Formation of a joint subsidiary is available under the same circumstances to any legal entities governed by public or private law.",
"title": "Main provisions"
},
{
"paragraph_id": 5,
"text": "The SE must have a minimum subscribed capital of €120,000 as per article 4(2) of the directive, subject to the provision that where a member state requires a larger capital for companies exercising certain types of activities, the same requirement will also apply to an SE with its registered office in that member state (article 4(3)).",
"title": "Main provisions"
},
{
"paragraph_id": 6,
"text": "The registered office of the SE designated in the statutes must be the place where it has its central administration, that is to say its true centre of operations. The SE may transfer its registered office within the European Economic Area without dissolving the company in one member state in order to form a new one in another member state; however, such a transfer is subject to the provisions of 8 which require, inter alia, the drawing up of a transfer proposal, a report justifying the legal and economic aspects of the transfer and the issuing, by the competent authority in the member state in which the SE is registered, of a certificate attesting to the completion of the required acts and formalities.",
"title": "Main provisions"
},
{
"paragraph_id": 7,
"text": "The order of precedence of the laws applicable to the SE is clarified.",
"title": "Main provisions"
},
{
"paragraph_id": 8,
"text": "The registration and completion of the liquidation of an SE must be disclosed for information purposes in the Official Journal of the European Communities. Every SE must be registered in the state where it has its registered office, in a register designated by the law of that state.",
"title": "Main provisions"
},
{
"paragraph_id": 9,
"text": "The statutes of the SE must provide as governing bodies the annual general meeting of shareholders and either a management board and a supervisory board (two-tier system) or an administrative board (single-tier system). Under the two-tier system the SE is managed by a management board. The member or members of the management board have the power to represent the company in dealings with third parties and in legal proceedings. They are appointed and removed by the supervisory board. No person may be a member of both the management board and the supervisory board of the same company at the same time. But the supervisory board may appoint one of its members to exercise the functions of a member of the management board in the event of absence through holidays. During such a period the function of the person concerned as a member of the supervisory board shall be suspended. Under the single-tier system, the SE is managed by an administrative board. The member or members of the administrative board have the power to represent the company in dealings with third parties and in legal proceedings. Under the single-tier system the administrative board may delegate the power of management to one or more of its members.",
"title": "Main provisions"
},
{
"paragraph_id": 10,
"text": "The following operations require the authorization of the supervisory board or the deliberation of the administrative board:",
"title": "Main provisions"
},
{
"paragraph_id": 11,
"text": "The SE must draw up annual accounts comprising the balance sheet, the profit and loss account, and the notes to the accounts, and an annual report giving a fair view of the company's business and of its position; consolidated accounts may also be required.",
"title": "Main provisions"
},
{
"paragraph_id": 12,
"text": "In tax matters, the SE is treated the same as any other multinational, i.e., it is subject to the tax regime of the national legislation applicable to the company and its subsidiaries. SEs are subject to taxes and charges in all member states where their administrative centres are situated.",
"title": "Main provisions"
},
{
"paragraph_id": 13,
"text": "Winding-up, liquidation, insolvency, and suspension of payments are in large measure to be governed by national law. When an SE transfers its registered office outside the Community, or in any other manner no longer complies with requirements of article 7, the member state must take appropriate measures to ensure compliance or take necessary measures to ensure that the SE is liquidated.",
"title": "Main provisions"
},
{
"paragraph_id": 14,
"text": "Council Regulation (EC) No 2157/2001 of 8 October 2001 on the Statute for a European company (SE).",
"title": "Status of the legislation and implementation"
},
{
"paragraph_id": 15,
"text": "Council Directive 2001/86/EC of 8 October 2001 supplementing the Statute for a European company with regard to the involvement of employees.",
"title": "Status of the legislation and implementation"
},
{
"paragraph_id": 16,
"text": "See also: Europa's collection of press releases, regulations, directives and FAQs on the European Company Statute.",
"title": "Status of the legislation and implementation"
},
{
"paragraph_id": 17,
"text": "Following the withdrawal of the UK from the European Union, any SE registered in the United Kingdom converted to a United Kingdom Societas and UK Societas replaced SE in its name.",
"title": "Status of the legislation and implementation"
},
{
"paragraph_id": 18,
"text": "The regulation is complemented by the Council Directive supplementing the Statute for a European Company with regard to the involvement of employees (informally \"Council Directive on Employee Participation\"), adopted 8 October 2001. The directive establishes rules on worker involvement in the management of the SE.",
"title": "Employee participation"
},
{
"paragraph_id": 19,
"text": "EU member states differ in the degree of worker involvement in corporate management. In Germany, most large corporations are required to allow employees to elect a certain percentage of seats on the supervisory board. Other member states have no such requirement, and furthermore in these states such practices are largely unknown and considered a threat to the rights of management.",
"title": "Employee participation"
},
{
"paragraph_id": 20,
"text": "These differing traditions of worker involvement have held back the adoption of the statute for over a decade. States without worker involvement provisions were afraid that the SE might lead to having such provisions being imposed on their companies; and states with those provisions were afraid they might lead to those provisions being circumvented.",
"title": "Employee participation"
},
{
"paragraph_id": 21,
"text": "A compromise, contained in the directive, was worked out as follows: worker involvement provisions in the SE will be decided upon by negotiations between employees and management before the creation of the SE. If agreement cannot be reached, provisions contained in the directive will apply. The directive provides for worker involvement in the SE if a minimum percentage of employees from the entities coming together to form the SE enjoyed worker involvement provisions. The directive permits member states to not implement these default worker involvement provisions in their national law, but then an SE cannot be created in that member state if the provisions in the directive would apply and negotiations between workers and management are unsuccessful.",
"title": "Employee participation"
},
{
"paragraph_id": 22,
"text": "Definition of employee participation: it does not mean participation in day-to-day decisions, which are a matter for the management, but participation in the supervision and strategic development of the company.",
"title": "Employee participation"
},
{
"paragraph_id": 23,
"text": "Employment contracts and pensions are not covered by the directive. With regard to occupational pension schemes, the SE is covered by the provisions laid down in the proposal for a directive on institutions for occupational schemes, presented by the Commission in October 2000, in particular in connection with the possibility of introducing a single pension scheme for all their employees in the European Union.",
"title": "Employee participation"
},
{
"paragraph_id": 24,
"text": "Two approaches have been attempted to solve the problems cited above. One approach is to harmonize the company law of the member states. This approach has had some successes, but after thirty years only limited progress has been made. It is difficult to harmonize widely different regulatory systems, especially when they reflect different national attitudes to issues such as worker involvement in the management of the company.",
"title": "Development"
},
{
"paragraph_id": 25,
"text": "The other approach is to construct a whole new system of EU company law, that co-exists with the individual company laws of the member states. Companies would have the choice of operating either under national regulations or under the EU-wide system. However, this approach has been only somewhat more effective than the harmonization approach: while states are not as concerned about having foreign traditions of corporate governance imposed on their companies, which the harmonization approach could well entail; they also wish to ensure that the EU-wide system would be palatable to the traditions of their national companies, so that they will not be put at a disadvantage compared to the other member states.",
"title": "Development"
},
{
"paragraph_id": 26,
"text": "The European Company Statute represents a step in this direction, albeit a limited one. While it establishes some common EU rules on the SE, these rules are incomplete, and the holes in the rules are to be filled in using the law of the member state in which the SE is registered. This has been due to the difficulties of agreeing on common European rules on these issues.",
"title": "Development"
},
{
"paragraph_id": 27,
"text": "As of 11 April 2018, 3,015 registrations have been made. In terms of registrations, the Czech Republic is vastly overrepresented, accounting for 79% of all Societates Europaeae as of December 2015. 9 of the 50 constituents of the Euro Stoxx 50 stock market index of leading eurozone companies are as of December 2015 Societates Europaeae.",
"title": "Registrations"
},
{
"paragraph_id": 28,
"text": "Annual registrations by member state are presented in the following chart:",
"title": "Registrations"
},
{
"paragraph_id": 29,
"text": "Sectors in which societates with more than five employees have been registered (2014)",
"title": "Registrations"
},
{
"paragraph_id": 30,
"text": "Registrations of new societates are to be published in the Official Journal of the European Union. There is no official union-wide register of societates, as they are registered in the nation in which their corporate seats are located. worker-participation.eu does however maintain a database of current and planned registrations. Examples of companies include:",
"title": "Registrations"
}
]
| A societas Europaea is a public company registered in accordance with the corporate law of the European Union (EU), introduced in 2004 with the Council Regulation on the Statute for a European Company. Such a company may more easily transfer to or merge with companies in other member states. As of April 2018, more than 3,000 registrations have been reported, including the following nine components (18%) of the Euro Stoxx 50 stock market index of leading eurozone companies: Airbus, Allianz, BASF, E.ON, Fresenius, LVMH Moët Hennessy Louis Vuitton, SAP, Schneider Electric and Unibail-Rodamco-Westfield. National law continues to supplement the basic rules in the Regulation on formation and mergers. The European Company Regulation is complemented by an Employee Involvement Directive which manages the rules for participation by employees on the company's board of directors. There is also a statute allowing European Cooperative Societies. | 2001-10-12T10:46:59Z | 2023-09-23T08:11:33Z | [
"Template:Societas Europaeae registrations",
"Template:Lang",
"Template:Graph:Chart",
"Template:ESP",
"Template:Authority control",
"Template:Italic title",
"Template:IPA",
"Template:Main article",
"Template:Pie chart",
"Template:Legend",
"Template:Reflist",
"Template:Cite web",
"Template:Corporate law",
"Template:Cite book",
"Template:European Union topics",
"Template:Multiple image",
"Template:Use dmy dates",
"Template:Flag",
"Template:Efn",
"Template:Notelist",
"Template:Portal",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Societas_Europaea |
9,924 | Electronic mixer | An electronic mixer is a device that combines two or more electrical or electronic signals into one or two composite output signals. There are two basic circuits that both use the term mixer, but they are very different types of circuits: additive mixers and multiplicative mixers. Additive mixers are also known as analog adders to distinguish from the related digital adder circuits.
Simple additive mixers use Kirchhoff's circuit laws to add the currents of two or more signals together, and this terminology ("mixer") is only used in the realm of audio electronics where audio mixers are used to add together audio signals such as voice signals, music signals, and sound effects.
Multiplicative mixers multiply together two time-varying input signals instantaneously (instant-by-instant). If the two input signals are both sinusoids of specified frequencies f1 and f2, then the output of the mixer will contain two new sinusoids that have the sum f1 + f2 frequency and the difference frequency absolute value |f1 - f2|.
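A quick numerical check of this sum-and-difference behaviour in Python (the frequencies and sample rate are arbitrary choices for illustration): multiplying a 7 kHz cosine by a 5 kHz cosine and taking an FFT of the product should show energy only at 2 kHz and 12 kHz.

```python
import numpy as np

# Multiply a 7 kHz and a 5 kHz cosine and inspect the spectrum of the product:
# only the difference (2 kHz) and sum (12 kHz) frequencies should appear.
fs, n = 100_000, 100_000              # 1 second of samples at 100 kHz
t = np.arange(n) / fs
product = np.cos(2*np.pi*7_000*t) * np.cos(2*np.pi*5_000*t)

spectrum = np.abs(np.fft.rfft(product)) / n
freqs = np.fft.rfftfreq(n, d=1/fs)
print("output frequencies:", freqs[spectrum > 0.1], "Hz")   # -> [ 2000. 12000.]
```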
Any nonlinear electronic block driven by two signals with frequencies f1 and f2 would generate intermodulation (mixing) products. A multiplier (which is a nonlinear device) will ideally generate only the sum and difference frequencies, whereas an arbitrary nonlinear block will also generate signals at 2·f1-3·f2, etc. Therefore, normal nonlinear amplifiers or just single diodes have been used as mixers, instead of a more complex multiplier. A multiplier usually has the advantage of rejecting – at least partly – undesired higher-order intermodulation products, and of providing larger conversion gain.
Additive mixers add two or more signals, giving out a composite signal that contains the frequency components of each of the source signals. The simplest additive mixers are resistor networks, and thus purely passive, while more complex matrix mixers employ active components such as buffer amplifiers for impedance matching and better isolation.
An ideal multiplicative mixer produces an output signal equal to the product of the two input signals. In communications, a multiplicative mixer is often used together with an oscillator to modulate signal frequencies. A multiplicative mixer can be coupled with a filter to either up-convert or down-convert an input signal frequency, but it is more commonly used to down-convert to a lower frequency to allow for simpler filter designs, as done in superheterodyne receivers. In many typical circuits, the single output signal actually contains multiple waveforms, namely those at the sum and difference of the two input frequencies and harmonic waveforms. The output signal may be obtained by removing the other signal components with a filter.
The received signal can be represented as
and that of the local oscillator can be represented as
For simplicity, assume that the output I of the detector is proportional to the square of the amplitude:
The output has high frequency ( 2 ω s i g {\displaystyle 2\omega _{\mathrm {sig} }} , 2 ω L O {\displaystyle 2\omega _{\mathrm {LO} }} and ω s i g + ω L O {\displaystyle \omega _{\mathrm {sig} }+\omega _{\mathrm {LO} }} ) and constant components. In heterodyne detection, the high frequency components and usually the constant components are filtered out, leaving the intermediate (beat) frequency at ω s i g − ω L O {\displaystyle \omega _{\mathrm {sig} }-\omega _{\mathrm {LO} }} . The amplitude of this last component is proportional to the amplitude of the signal radiation. With appropriate signal analysis the phase of the signal can be recovered as well.
If ω L O {\displaystyle \omega _{\mathrm {LO} }} is equal to ω s i g {\displaystyle \omega _{\mathrm {sig} }} then the beat component is a recovered version of the original signal, with the amplitude equal to the product of E s i g {\displaystyle E_{\mathrm {sig} }} and E L O {\displaystyle E_{\mathrm {LO} }} ; that is, the received signal is amplified by mixing with the local oscillator. This is the basis for a Direct conversion receiver.
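The square-law detection algebra above can also be verified numerically; in this Python sketch (the amplitudes, frequencies, and sample rate are assumptions chosen for illustration) the spectral line at the beat frequency comes out with amplitude E_sig·E_LO, as derived.

```python
import numpy as np

# Square-law detector driven by the signal plus local oscillator, as in the text.
fs, n = 1_000_000, 1_000_000              # 1 s record -> FFT bins are 1 Hz apart
t = np.arange(n) / fs
E_sig, E_LO = 0.01, 1.0                   # assumed amplitudes
f_sig, f_LO = 100_300, 100_000            # beat (intermediate) frequency = 300 Hz

I = (E_sig*np.cos(2*np.pi*f_sig*t) + E_LO*np.cos(2*np.pi*f_LO*t))**2

spectrum = np.abs(np.fft.rfft(I)) * 2 / n # single-sided amplitude spectrum
print(f"amplitude of the 300 Hz beat line: {spectrum[300]:.4f}")
print(f"expected E_sig * E_LO:             {E_sig*E_LO:.4f}")
```

Note how the weak signal's contribution to the beat term is scaled up by the much larger local-oscillator amplitude, which is the amplification-by-mixing effect described above.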
Multiplicative mixers have been implemented in many ways. The most popular are Gilbert cell mixers, diode mixers, diode ring mixers (ring modulation) and switching mixers. Diode mixers take advantage of the non-linearity of diode devices to produce the desired multiplication in the squared term. They are very inefficient as most of the power output is in other unwanted terms which need filtering out. Inexpensive AM radios still use diode mixers.
Electronic mixers are usually made with transistors and/or diodes arranged in a balanced circuit or even a double-balanced circuit. They are readily manufactured as monolithic integrated circuits or hybrid integrated circuits. They are designed for a wide variety of frequency ranges, and they are mass-produced to tight tolerances by the hundreds of thousands, making them relatively cheap.
Double-balanced mixers are very widely used in microwave communications, satellite communications, ultrahigh frequency (UHF) communications transmitters, radio receivers, and radar systems.
Gilbert cell mixers are an arrangement of transistors that multiplies the two signals.
Switching mixers use arrays of field-effect transistors or vacuum tubes. These are used as electronic switches, to alternate the signal direction. They are controlled by the signal being mixed. They are especially popular with digitally controlled radios. Switching mixers pass more power and usually insert less distortion than Gilbert cell mixers. | [
{
"paragraph_id": 0,
"text": "An electronic mixer is a device that combines two or more electrical or electronic signals into one or two composite output signals. There are two basic circuits that both use the term mixer, but they are very different types of circuits: additive mixers and multiplicative mixers. Additive mixers are also known as analog adders to distinguish from the related digital adder circuits.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Simple additive mixers use Kirchhoff's circuit laws to add the currents of two or more signals together, and this terminology (\"mixer\") is only used in the realm of audio electronics where audio mixers are used to add together audio signals such as voice signals, music signals, and sound effects.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Multiplicative mixers multiply together two time-varying input signals instantaneously (instant-by-instant). If the two input signals are both sinusoids of specified frequencies f1 and f2, then the output of the mixer will contain two new sinusoids that have the sum f1 + f2 frequency and the difference frequency absolute value |f1 - f2|.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Any nonlinear electronic block driven by two signals with frequencies f1 and f2 would generate intermodulation (mixing) products. A multiplier (which is a nonlinear device) will generate ideally only the sum and difference frequencies, whereas an arbitrary nonlinear block will also generate signals at 2·f1-3·f2, etc. Therefore, normal nonlinear amplifiers or just single diodes have been used as mixers, instead of a more complex multiplier. A multiplier usually has the advantage of rejecting – at least partly – undesired higher-order intermodulations and larger conversion gain.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Additive mixers add two or more signals, giving out a composite signal that contains the frequency components of each of the source signals. The simplest additive mixers are resistor networks, and thus purely passive, while more complex matrix mixers employ active components such as buffer amplifiers for impedance matching and better isolation.",
"title": "Additive mixers"
},
{
"paragraph_id": 5,
"text": "An ideal multiplicative mixer produces an output signal equal to the product of the two input signals. In communications, a multiplicative mixer is often used together with an oscillator to modulate signal frequencies. A multiplicative mixer can be coupled with a filter to either up-convert or down-convert an input signal frequency, but they are more commonly used to down-convert to a lower frequency to allow for simpler filter designs, as done in superheterodyne receivers. In many typical circuits, the single output signal actually contains multiple waveforms, namely those at the sum and difference of the two input frequencies and harmonic waveforms. The output signal may be obtained by removing the other signal components with a filter.í",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 6,
"text": "The received signal can be represented as",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 7,
"text": "and that of the local oscillator can be represented as",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 8,
"text": "For simplicity, assume that the output I of the detector is proportional to the square of the amplitude:",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 9,
"text": "The output has high frequency ( 2 ω s i g {\\displaystyle 2\\omega _{\\mathrm {sig} }} , 2 ω L O {\\displaystyle 2\\omega _{\\mathrm {LO} }} and ω s i g + ω L O {\\displaystyle \\omega _{\\mathrm {sig} }+\\omega _{\\mathrm {LO} }} ) and constant components. In heterodyne detection, the high frequency components and usually the constant components are filtered out, leaving the intermediate (beat) frequency at ω s i g − ω L O {\\displaystyle \\omega _{\\mathrm {sig} }-\\omega _{\\mathrm {LO} }} . The amplitude of this last component is proportional to the amplitude of the signal radiation. With appropriate signal analysis the phase of the signal can be recovered as well.",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 10,
"text": "If ω L O {\\displaystyle \\omega _{\\mathrm {LO} }} is equal to ω s i g {\\displaystyle \\omega _{\\mathrm {sig} }} then the beat component is a recovered version of the original signal, with the amplitude equal to the product of E s i g {\\displaystyle E_{\\mathrm {sig} }} and E L O {\\displaystyle E_{\\mathrm {LO} }} ; that is, the received signal is amplified by mixing with the local oscillator. This is the basis for a Direct conversion receiver.",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 11,
"text": "Multiplicative mixers have been implemented in many ways. The most popular are Gilbert cell mixers, diode mixers, diode ring mixers (ring modulation) and switching mixers. Diode mixers take advantage of the non-linearity of diode devices to produce the desired multiplication in the squared term. They are very inefficient as most of the power output is in other unwanted terms which need filtering out. Inexpensive AM radios still use diode mixers.",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 12,
"text": "Electronic mixers are usually made with transistors and/or diodes arranged in a balanced circuit or even a double-balanced circuit. They are readily manufactured as monolithic integrated circuits or hybrid integrated circuits. They are designed for a wide variety of frequency ranges, and they are mass-produced to tight tolerances by the hundreds of thousands, making them relatively cheap.",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 13,
"text": "Double-balanced mixers are very widely used in microwave communications, satellite communications, ultrahigh frequency (UHF) communications transmitters, radio receivers, and radar systems.",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 14,
"text": "Gilbert cell mixers are an arrangement of transistors that multiplies the two signals.",
"title": "Multiplicative mixers"
},
{
"paragraph_id": 15,
"text": "Switching mixers use arrays of field-effect transistors or vacuum tubes. These are used as electronic switches, to alternate the signal direction. They are controlled by the signal being mixed. They are especially popular with digitally controlled radios. Switching mixers pass more power and usually insert less distortion than Gilbert cell mixers.",
"title": "Multiplicative mixers"
}
]
| An electronic mixer is a device that combines two or more electrical or electronic signals into one or two composite output signals. There are two basic circuits that both use the term mixer, but they are very different types of circuits: additive mixers and multiplicative mixers. Additive mixers are also known as analog adders to distinguish from the related digital adder circuits. Simple additive mixers use Kirchhoff's circuit laws to add the currents of two or more signals together, and this terminology ("mixer") is only used in the realm of audio electronics where audio mixers are used to add together audio signals such as voice signals, music signals, and sound effects. Multiplicative mixers multiply together two time-varying input signals instantaneously (instant-by-instant). If the two input signals are both sinusoids of specified frequencies f1 and f2, then the output of the mixer will contain two new sinusoids that have the sum f1 + f2 frequency and the difference frequency absolute value |f1 - f2|. Any nonlinear electronic block driven by two signals with frequencies f1 and f2 would generate intermodulation (mixing) products. A multiplier will generate ideally only the sum and difference frequencies, whereas an arbitrary nonlinear block will also generate signals at 2·f1-3·f2, etc. Therefore, normal nonlinear amplifiers or just single diodes have been used as mixers, instead of a more complex multiplier. A multiplier usually has the advantage of rejecting – at least partly – undesired higher-order intermodulations and larger conversion gain. | 2023-03-26T14:39:38Z | [
"Template:Main",
"Template:Further",
"Template:Clarify",
"Template:Authority control",
"Template:For",
"Template:Unreferenced",
"Template:See also"
]
| https://en.wikipedia.org/wiki/Electronic_mixer |
|
9,925 | Eubulides | Eubulides (Greek: Εὑβουλίδης; fl. 4th century BCE) of Miletus was a philosopher of the Megarian school who is famous for his paradoxes.
According to Diogenes Laërtius, Eubulides was a pupil of Euclid of Megara, the founder of the Megarian school. He was a contemporary of Aristotle, against whom he wrote with great bitterness. He taught logic to Demosthenes, and he is also said to have taught Apollonius Cronus, the teacher of Diodorus Cronus, and the historian Euphantus.
Eubulides is most famous for inventing the forms of seven famous paradoxes, some of which, however, are also ascribed to Diodorus Cronus: the Liar, the Masked Man, the Electra, the Overlooked Man, the Heap (sorites), the Bald Man, and the Horns.
The first paradox (the Liar) is probably the most famous, and is similar to the famous paradox of Epimenides the Cretan. The second, third and fourth paradoxes are variants of a single paradox and relate to the problem of what it means to "know" something and the identity of objects involved in an affirmation (compare the masked-man fallacy). The fifth and sixth paradoxes are likewise variants of a single paradox, usually thought to relate to the vagueness of language. The final paradox, the Horns, relates to presupposition.
These paradoxes were very well known in ancient times; some are alluded to by Eubulides' contemporary Aristotle and even partially by Plato. Chrysippus, the Stoic philosopher, wrote about the paradoxes developed by Eubulides and characterized the Horns paradox as an intractable problem (aporoi logoi). Aulus Gellius mentions that the discussion of such paradoxes was considered (for him) after-dinner entertainment at the Saturnalia, whereas Seneca considered them a waste of time: "Not to know them does no harm, and mastering them does no good."
9,926 | ETA (separatist group) | ETA, an acronym for Euskadi Ta Askatasuna ("Basque Homeland and Liberty" or "Basque Country and Freedom"), was an armed Basque nationalist and far-left separatist organization in the Basque Country between 1959 and 2018, with its goal being independence for the region. The group was founded in 1959 during the era of Francoist Spain, and later evolved from a pacifist group promoting traditional Basque culture to a violent paramilitary group. It engaged in a campaign of bombings, assassinations, and kidnappings throughout Spain and especially the Southern Basque Country against the regime, which was highly centralised and hostile to the expression of non-Castilian minority identities. ETA was the main group within the Basque National Liberation Movement and was the most important Basque participant in the Basque conflict.
ETA's motto was Bietan jarrai ("Keep up on both"), referring to the two figures in its symbol, a snake (representing politics) wrapped around an axe (representing armed struggle). Between 1968 and 2010, ETA killed 829 people (including 340 civilians) and injured more than 22,000. ETA was classified as a terrorist group by Spain, France, the United Kingdom, the United States, Canada, and the European Union. This convention was followed by a plurality of domestic and international media, which also referred to the group as terrorists. As of 2019, there were more than 260 imprisoned former members of the group in Spain, France, and other countries.
ETA declared ceasefires in 1989, 1996, 1998 and 2006. On 5 September 2010, ETA declared a new ceasefire that remained in force, and on 20 October 2011, ETA announced a "definitive cessation of its armed activity". On 24 November 2012, it was reported that the group was ready to negotiate a "definitive end" to its operations and disband completely. The group announced on 7 April 2017 that it had given up all its weapons and explosives. On 2 May 2018, ETA made public a letter dated 16 April 2018 according to which it had "completely dissolved all its structures and ended its political initiative".
ETA changed its internal structure on several occasions, commonly for security reasons. The group used to have a very hierarchical organization with a leading figure at the top, delegating to three substructures: the logistical, military and political sections. Reports from Spanish and French police point towards significant changes in ETA's structures in its later years. ETA divided the three substructures into a total of eleven. The change was a response to captures, and possible infiltration, by the different law enforcement agencies. ETA intended to disperse its members and reduce the effects of detentions.
The leading committee comprised 7 to 11 individuals, and ETA's internal documentation referred to it as Zuba, an abbreviation of Zuzendaritza Batzordea (directorial committee). There was another committee named Zuba-hits that functioned as an advisory committee. The eleven different substructures were: logistics, politics, international relations with fraternal organisations, military operations, reserves, prisoner support, expropriation, information, recruitment, negotiation, and treasury.
ETA's armed operations were organized in different taldes (groups or commandos), generally composed of three to five members, whose objective was to conduct attacks in a specific geographic zone. The taldes were coordinated by the cúpula militar ("military cupola"). To supply the taldes, support groups maintained safe houses and zulos (small rooms concealed in forests, garrets or underground, used to store arms, explosives or, sometimes, kidnapped people; the Basque word zulo literally means "hole"). The small cellars used to hide the people kidnapped are named by ETA and ETA's supporters "people's jails". The most common commandos were itinerant, not linked to any specific area, and thus were more difficult to capture.
Among its members, ETA distinguished between legales/legalak ("legal ones"), those members who did not have police records and lived apparently normal lives; liberados ("liberated members") known to the police that were on ETA's payroll and working full-time for ETA; and apoyos ("supporters") who just gave occasional help and logistics support to the group when required.
There were also imprisoned members of the group, serving time scattered across Spain and France, that sometimes still had significant influence inside the organisation; and finally the quemados ("burnt out"), members freed after having been imprisoned or those that were suspected by the group of being under police surveillance. In the past, there was also the figure of the deportees, expelled by the French government to remote countries where they lived freely. ETA's internal bulletin was named Zutabe ("Column"), replacing the earlier one (1962) Zutik ("Standing").
ETA also promoted the kale borroka ("street fight"), that is, violent acts against public transportation, political parties' offices or cultural buildings, destruction of private property of politicians, police, military, bank offices, journalists, council members, and anyone voicing criticism of ETA. Tactics included threats, graffiti of political mottoes, and rioting, usually using Molotov cocktails. These groups were mostly made up of young people, who were directed through youth organisations (such as Jarrai, Haika and Segi). Many members of ETA started their collaboration with the group as participants in the kale borroka.
The former political party Batasuna, disbanded in 2003, pursued the same political goals as ETA and did not condemn ETA's use of violence. Formerly known as Euskal Herritarrok and "Herri Batasuna", it was banned by the Spanish Supreme Court as an anti-democratic organisation following the Political Parties Law (Ley de Partidos Políticos). It generally received 10% to 20% of the vote in the Basque Autonomous Community.
Batasuna's political status was controversial. It was considered to be the political wing of ETA. Moreover, after the investigations on the nature of the relationship between Batasuna and ETA by Judge Baltasar Garzón, who suspended the activities of the political organisation and ordered police to shut down its headquarters, the Supreme Court of Spain finally declared Batasuna illegal on 18 March 2003. The court considered proven that Batasuna had links with ETA and that it constituted in fact part of ETA's structure. In 2003, the Constitutional Tribunal upheld the legality of the law.
However, the party itself denied being the political wing of ETA, although double membership – simultaneous or alternative – between Batasuna and ETA was often recorded, such as with the cases of prominent Batasuna leaders like Josu Urrutikoetxea, Arnaldo Otegi, Jon Salaberria and others.
The Spanish Cortes (the Spanish Parliament) began the process of declaring the party illegal in August 2002 by issuing a bill entitled the Ley de Partidos Políticos which bars political parties that use violence to achieve political goals, promote hatred against different groups or seek to destroy the democratic system. The bill passed the Cortes with a 304 to 16 vote. Many within the Basque nationalistic movement strongly disputed the Law, which they considered too draconian or even unconstitutional; alleging that any party could be made illegal almost by choice, simply for not clearly stating their opposition to an attack.
Defenders of the law argued that the Ley de Partidos did not necessarily require responses to individual acts of violence, but rather a declaration of principles explicitly rejecting violence as a means of achieving political goals. Defenders also argued that the banning of a political party is subject to judicial process, with all the guarantees of the rule of law. Batasuna had failed to produce such a statement. As of February 2008, other political parties linked to organizations such as Partido Comunista de España (reconstituted) had also been declared illegal, and Acción Nacionalista Vasca and the Communist Party of the Basque Lands (EHAK/PCTV, Euskal Herrialdeetako Alderdi Komunista/Partido Comunista de las Tierras Vascas) were declared illegal in September 2008.
A new party called Aukera Guztiak (All the Options) was formed expressly for the elections to the Basque Parliament of April 2005. Its supporters claimed no heritage from Batasuna, asserting that they aimed to allow Basque citizens to freely express their political ideas, even those of independence. On the matter of political violence, Aukera Guztiak stated their right not to condemn some kinds of violence more than others if they did not see fit (in this regard, the Basque National Liberation Movement (MLNV) regards present police actions as violence, torture and state terrorism). Nevertheless, most of their members and certainly most of their leadership were former Batasuna supporters or affiliates. The Spanish Supreme Court unanimously considered the party to be a successor to Batasuna and declared a ban on it.
After Aukera Guztiak had been banned, and less than two weeks before the election, another political group appeared born from an earlier schism from Herri Batasuna, the Communist Party of the Basque Lands (EHAK/PCTV, Euskal Herrialdeetako Alderdi Komunista/Partido Comunista de las Tierras Vascas), a formerly unknown political party which had no representation in the Autonomous Basque Parliament. EHAK announced that they would apply the votes they obtained to sustain the political programme of the now-banned Aukera Guztiak platform.
This move left no time for the Spanish courts to investigate EHAK in compliance with the Ley de Partidos before the elections were held. The bulk of Batasuna supporters voted in this election for PCTV. It obtained 9 seats of 75 (12.44% of votes) in the Basque Parliament. The election of EHAK representatives eventually allowed the programme of the now-illegal Batasuna to continue being represented without having condemned violence as required by the Ley de Partidos.
In February 2011, Sortu, a party described as "the new Batasuna", was launched. Unlike predecessor parties, Sortu explicitly rejects politically motivated violence, including that of ETA. However, on 23 March 2011, the Spanish Supreme Court banned Sortu from registering as a political party on the grounds that it was linked to ETA.
The Spanish transition to democracy from 1975 on and ETA's progressive radicalisation had resulted in a steady loss of support, which became especially apparent at the time of their 1997 kidnapping and countdown assassination of Miguel Ángel Blanco. Their loss of sympathisers had been reflected in an erosion of support for the political parties identified with them. In the 1998 Basque parliament elections Euskal Herritarrok, formerly Batasuna, polled 17.7% of the votes. However, by 2001 the party's support had fallen to 10.0%. There were also concerns that Spain's "judicial offensive" against alleged ETA supporters (two Basque political parties and one NGO were banned in September 2008) constituted a threat to human rights. There was strong evidence that the legal net had been cast so widely that numerous innocent people had been arrested. According to Amnesty International, torture was still "persistent", though not "systematic", and gains made against ETA could be undermined by judicial short-cuts and abuses of human rights.
The Euskobarometro, the survey carried out by the Universidad del País Vasco (University of the Basque Country), asking about the views of ETA within the Basque population, obtained these results in May 2009: 64% rejected ETA totally, 13% identified themselves as former ETA sympathisers who no longer support the group. Another 10% agreed with ETA's ends, but not their means. 3% said that their attitude towards ETA was mainly one of fear, 3% expressed indifference and 3% were undecided or did not answer. About 3% gave ETA "justified, with criticism" support (supporting the group but criticising some of their actions) and only 1% gave ETA total support. Even within Batasuna voters, at least 48% rejected ETA's violence.
A poll taken by the Basque Autonomous Government in December 2006 during ETA's "permanent" ceasefire showed that 88% of the Basques thought that all political parties needed to launch a dialogue, including a debate on the political framework for the Basque Country (86%). 69% support the idea of ratifying the results of this hypothetical multiparty dialogue through a referendum. This poll also reveals that the hope of a peaceful resolution to the issue of the constitutional status of the Basque region has fallen to 78% (from 90% in April).
These polls did not cover Navarre, where support for Basque nationalist electoral options is weaker (around 25% of the population); or the Northern Basque Country, where support is even weaker (around 15% of the population).
ETA grew out of a student group called Ekin, founded in the early 1950s, which published a magazine and undertook direct action. ETA was founded on 31 July 1959 as Euskadi Ta Askatasuna ("Basque Homeland and Liberty" or "Basque Country and Freedom") by students frustrated by the moderate stance of the Basque Nationalist Party. (Originally, the name for the organisation used the word Aberri instead of Euskadi, creating the acronym ATA. However, in some Basque dialects, ata means duck, so the name was changed.)
ETA held their first assembly in Bayonne, France, in 1962, during which a "declaration of principles" was formulated and following which a structure of activist cells was developed. Subsequently, Marxist and third-worldist perspectives developed within ETA, becoming the basis for a political programme set out in the 1963 book Vasconia by Federico Krutwig, an anarchist of German origin, which is considered to be the defining text of the movement. In contrast to previous Basque nationalist platforms, Krutwig's vision was anti-religious and based upon language and culture rather than race. ETA's third and fourth assemblies, held in 1964 and 1965, adopted an anti-capitalist and anti-imperialist position, seeing nationalism and the class struggle as intrinsically connected.
Some sources attributed the 1960 bombing of the Amara station in Donostia-San Sebastian (which killed a 22-month-old child) to ETA, but statistics published by the Spanish Ministry of the Interior have always showed that ETA's first victim was killed in 1968. The 1960 attack was claimed by the Portuguese and Galician left-wing group Directorio Revolucionario Ibérico de Liberación (DRIL) (together with four other very similar bombings committed that same day across Spain, all of them attributed to DRIL), and the attribution to ETA has been considered to be unfounded by researchers. Police documents dating from 1961, released in 2013, show that the DRIL was indeed the author of the bombing. A more recent study by the Memorial de Víctimas del Terrorismo based on the analysis of police diligences at the time reached the same conclusion, naming Guillermo Santoro, member of DRIL, as the author of the attack.
ETA's first killing occurred on 7 June 1968, when Guardia Civil member José Pardines Arcay was shot dead after he tried to halt ETA member Txabi Etxebarrieta during a routine road check. Etxebarrieta was chased down and killed as he tried to flee. This led to retaliation in the form of the first planned ETA assassination: that of Melitón Manzanas, chief of the secret police in San Sebastián and associated with a long record of tortures inflicted on detainees in his custody. In December 1970, several members of ETA were condemned to death in the Burgos trials (Proceso de Burgos), but international pressure resulted in their sentences being commuted (a process which, however, had by that time already been applied to some other members of ETA).
In early December 1970, ETA kidnapped the German consul in San Sebastian, Eugen Beilh, to exchange him for the Burgos defendants. He was released unharmed on 24 December.
Nationalists who refused to follow the tenets of Marxism–Leninism and who sought to create a united front appeared as ETA-V, but lacked the support to challenge ETA.
The most significant assassination performed by ETA during Franco's dictatorship was Operación Ogro, the December 1973 bomb assassination in Madrid of Admiral Luis Carrero Blanco, Franco's chosen successor and president of the government (a position roughly equivalent to being a prime minister). The assassination had been planned for months and was executed by placing a bomb in a tunnel dug below the street where Carrero Blanco's car passed every day. The bomb blew up beneath the politician's car and left a massive crater in the road.
For some in the Spanish opposition, Carrero Blanco's assassination, i.e., the elimination of Franco's chosen successor, was an instrumental step for the subsequent re-establishment of democracy. The government responded with new anti-terrorism laws which gave police greater powers and empowered military tribunals to pass death sentences against those found guilty. However, the last use of capital punishment in Spain, the execution of two ETA members in September 1975, eight weeks before Franco's death, sparked massive domestic and international protests against the Spanish government.
During the Spanish transition to democracy which began following Franco's death, ETA split into two separate groups: ETA political-military or ETA(pm), and ETA military or ETA(m).
Both ETA(m) and ETA(pm) refused offers of amnesty, and instead pursued and intensified their violent struggle. The years 1978–1980 were to prove ETA's most deadly, with 68, 76, and 98 fatalities, respectively.
During the Franco dictatorship, ETA was able to take advantage of tolerance by the French government, which allowed its members to move freely through French territory, believing that in this manner they were contributing to the end of Franco's regime. There is much controversy over the degree to which this policy of "sanctuary" continued even after the transition to democracy, but it is generally agreed that after 1983 the French authorities started to collaborate with the Spanish government against ETA.
In the 1980s, ETA(pm) accepted the Spanish government's offer of individual pardons to all ETA prisoners, even those who had committed violent crimes, who publicly abandoned the policy of violence. This caused a new division in ETA(pm) between the seventh and eighth assemblies. ETA VII accepted this partial amnesty granted by the now democratic Spanish government and integrated into the political party Euskadiko Ezkerra ("Left of the Basque Country").
ETA VIII, after a brief period of independent activity, eventually integrated into ETA(m). With no factions existing anymore, ETA(m) reclaimed the original name of Euskadi Ta Askatasuna.
During the 1980s a "dirty war" ensued using the Grupos Antiterroristas de Liberación (GAL, "Antiterrorist Liberation Groups"), a paramilitary group which billed themselves as counter-terrorist, active between 1983 and 1987. The GAL's stated mission was to avenge every ETA killing with another killing of ETA exiles in the French department of Pyrénées Atlantiques. GAL committed 27 assassinations (all but one in France), plus several kidnappings and torture, not only of ETA members but of civilians supposedly related to those, some of whom turned out to have nothing to do with ETA. GAL activities were a follow-up of similar dirty war actions by death squads, actively supported by members of Spanish security forces and secret services, using names such as Batallón Vasco Español active from 1975 to 1981. They were responsible for the killing of about 48 people.
One consequence of GAL's activities in France was the decision in 1984 by interior minister Pierre Joxe to permit the extradition of ETA suspects to Spain. Reaching this decision had taken 25 years and was critical in curbing ETA's capabilities by denial of previously safe territory in France.
The airing of the state-sponsored "dirty war" scheme and the imprisonment of officials responsible for GAL in the early 1990s led to a political scandal in Spain. The group's connections with the state were unveiled by the Spanish journal El Mundo, with an investigative series leading to the GAL plot being discovered and a national trial initiated. As a consequence, the group's attacks since the revelation have generally been dubbed state terrorism.
In 1997 the Spanish Audiencia Nacional court finished its trial, which resulted in convictions and imprisonment of several individuals related to the GAL, including civil servants and politicians up to the highest levels of the Spanish Socialist Workers' Party (PSOE) government, such as former Homeland Minister José Barrionuevo. Premier Felipe González was quoted as saying that the constitutional state has to defend itself "even in the sewers" (El Estado de derecho también se defiende en las cloacas), something which, for some, indicated at least his knowledge of the scheme. However, his involvement with the GAL could never be proven.
These events marked the end of the armed "counter-terrorist" period in Spain and no major cases of foul play on the part of the Spanish government after 1987 (when GAL ceased to operate) have been proven in courts.
According to the radical nationalist group, Euskal Memoria, between 1960 and 2010 there were 465 deaths in the Basque Country due to (primarily Spanish) state violence. This figure is considerably higher than those given elsewhere, which are usually between 250 and 300. Critics of ETA cite only 56 members of that organisation killed by state forces since 1975.
ETA members and supporters routinely claim torture at the hands of Spanish police forces. While these claims are hard to verify, some convictions were based on confessions made while prisoners were held incommunicado, without access to a lawyer of their choice, for a maximum of five days. These confessions were routinely repudiated by the defendants during trials as having been extracted under torture. There were some successful prosecutions of proven cases of torture during the "dirty war" period of the mid-1980s, although the penalties have been considered by Amnesty International to be unjustifiably light and lenient towards co-conspirators and enablers.
In this regard, Amnesty International showed concern for the continuous disregard of the recommendations issued by the agency to prevent the alleged abuses from possibly taking place. Also in this regard, ETA's manuals were found instructing its members and supporters to claim routinely that they had been tortured while detained. Unai Romano's case was very controversial: pictures of him with a symmetrically swollen face of uncertain aetiology were published after his incommunicado period leading to claims of police abuse and torture. Martxelo Otamendi, the ex-director of the Basque newspaper Euskaldunon Egunkaria, decided to bring charges in September 2008 against the Spanish Government in the European Court of Human Rights for "not inspecting properly" cases tainted by torture.
As a result of ETA's violence, threats and killings of journalists, Reporters Without Borders included Spain in all six editions of its annual watchlist on press freedom up to 2006. Thus, the NGO included ETA in its watchlist "Predators of Press Freedom".
ETA performed their first car bomb assassination in Madrid in September 1985, resulting in one death (American citizen Eugene Kent Brown, employee of Johnson & Johnson) and sixteen injuries; the Plaza República Dominicana bombing in July 1986 killed 12 members of the Guardia Civil and injured 50; on 19 June 1987, the Hipercor bombing was an attack in a shopping centre in Barcelona, killing 21 and injuring 45; in the last case, entire families were killed. The horror caused then was so striking that ETA felt compelled to issue a communiqué stating that they had given warning of the Hipercor bomb, but that the police had declined to evacuate the area. The police said that the warning came only a few minutes before the bomb exploded.
In 1986 Gesto por la Paz (known in English as Association for Peace in the Basque Country) was founded; they began to convene silent demonstrations in communities throughout the Basque Country the day after any violent killing, whether by ETA or by GAL. These were the first systematic demonstrations in the Basque Country against political violence. Also in 1986, in Ordizia, ETA gunned down María Dolores Katarain, known as "Yoyes", while she was walking with her infant son. Yoyes was a former member of ETA who had abandoned the armed struggle and rejoined civil society: they accused her of "desertion" because of her taking advantage of the Spanish reinsertion policy which granted amnesty to those prisoners who publicly renounced political violence (see below).
On 12 January 1988, all Basque political parties except ETA-affiliated Herri Batasuna signed the Ajuria-Enea pact with the intent of ending ETA's violence. Weeks later on 28 January, ETA announced a 60-day "ceasefire", later prolonged several times. Negotiations known as the Mesa de Argel ("Algiers Table") took place between the ETA representative Eugenio Etxebeste ("Antxon") and the then PSOE government of Spain, but no successful conclusion was reached, and ETA eventually resumed the use of violence.
During this period, the Spanish government had a policy referred to as "reinsertion", under which imprisoned ETA members whom the government believed had genuinely abandoned violence could be freed and allowed to rejoin society. Claiming a need to prevent ETA from coercively impeding this reinsertion, the PSOE government decided that imprisoned ETA members, who previously had all been imprisoned within the Basque Country, would instead be dispersed to prisons throughout Spain, some as far from their families as in the Salto del Negro prison in the Canary Islands. France has taken a similar approach.
In the event, the only clear effect of this policy was to incite social protest, especially from nationalists and the prisoners' families, who claimed it was cruel to separate the prisoners from their relatives. Much of the protest against this policy runs under the slogan "Euskal Presoak – Euskal Herrira" ("Basque prisoners to the Basque Country"; by "Basque prisoners" only ETA members are meant). In practice, almost every Spanish jail holds a group of ETA prisoners, as their number makes it difficult to disperse them.
Gestoras pro Amnistía/Amnistiaren Aldeko Batzordeak ("Pro-Amnesty Managing Assemblies", currently illegal), later Askatasuna ("Freedom") and Senideak ("The Family Members"), provided support for prisoners and families. The Basque Government and several nationalist town halls granted money on humanitarian grounds for relatives to visit prisoners. The long road trips have caused accidental deaths, which supporters of the prisoners' families have protested.
During the ETA ceasefire of the late 1990s, the PSOE government brought the prisoners on the islands and in Africa back to the mainland. Since the end of the ceasefire, ETA prisoners have not been sent back to overseas prisons. Some Basque authorities have established grants for the expenses of visiting families.
Another Spanish "counter-terrorist" law puts suspected terrorist cases under the central tribunal Audiencia Nacional in Madrid, due to the threats by the group over the Basque courts. Under Article 509 suspected terrorists are subject to being held incommunicado for up to thirteen days, during which they have no contact with the outside world other than through the court-appointed lawyer, including informing their family of their arrest, consultation with private lawyers or examination by a physician other than the coroners. In comparison, the habeas corpus term for other suspects is three days.
In 1992, ETA's three top leaders—"military" leader Francisco Mujika Garmendia ("Pakito"), political leader José Luis Alvarez Santacristina ("Txelis") and logistical leader José María Arregi Erostarbe ("Fiti"), often referred to collectively as the "cúpula" of ETA or as the Artapalo collective—were arrested in the northern Basque town of Bidart, which led to changes in ETA's leadership and direction.
After a two-month truce, ETA adopted even more radical positions. The principal consequence of the change appears to have been the creation of the "Y Groups", formed by young militants of ETA parallel groups (generally minors), dedicated to so-called "kale borroka"—street struggle—and whose activities included burning buses, street lamps, benches, ATMs, and garbage containers, and throwing Molotov cocktails. The appearance of these groups was attributed by many to the supposed weakness of ETA, which obliged them to resort to minors to maintain or augment their impact on society after arrests of leading militants, including the "cupola". ETA also began to menace leaders of other parties besides rival Basque nationalist parties.
In 1995, the armed group again launched a peace proposal. The so-called "Democratic Alternative" replaced the earlier KAS Alternative as a minimum proposal for the establishment of Euskal Herria. The Democratic Alternative offered the cessation of all armed ETA activity if the Spanish government would recognize the Basque people as having sovereignty over Basque territory and the right to self-determination, and would free all ETA members in prison. The Spanish government ultimately rejected this peace offer as it would go against the Spanish Constitution of 1978. Changing the constitution was not considered.
Also in 1995 there was a failed ETA car bombing attempt directed against José María Aznar, a conservative politician who was the leader of the then-opposition Partido Popular (PP) and was shortly afterwards elected to the presidency of the government; there was also an abortive attempt in Majorca on the life of King Juan Carlos I. Still, the act with the largest social impact came two years later. On 10 July 1997, PP council member Miguel Ángel Blanco was kidnapped in the Basque town of Ermua, and the separatist group threatened to assassinate him unless the Spanish government began, within two days of the kidnapping, to transfer all ETA inmates to prisons in the Basque Country.
The Spanish government did not meet this demand, and after three days, when the deadline had expired, Miguel Ángel Blanco was found shot dead. More than six million people took to the streets to demand his release, with massive demonstrations occurring as much in the Basque regions as elsewhere in Spain, chanting cries of "Assassins" and "Basques yes, ETA no". This response came to be known as the "Spirit of Ermua".
Later acts of violence included the 6 November 2001 car bomb in Madrid which injured 65 people, and attacks on football stadiums and tourist destinations throughout Spain.
The 11 September 2001 attacks in the US appeared to have dealt a hard blow to ETA, owing to the worldwide toughening of "anti-terrorist" measures (such as the freezing of bank accounts), the increase in international policy coordination, and the end of the toleration some countries had, up until then, extended to ETA. Additionally, in 2002 the Basque nationalist youth movement, Jarrai, was outlawed and the law of parties was changed outlawing Herri Batasuna, the "political arm" of ETA (although even before the change in law, Batasuna had been largely paralysed and under judicial investigation by judge Baltasar Garzón).
With ever-increasing frequency, attempted ETA actions were frustrated by Spanish security forces.
On 24 December 2003, in San Sebastián and in Hernani, National Police arrested two ETA members who had left dynamite in a railroad car prepared to explode in Chamartín Station in Madrid. On 1 March 2004, in a place between Alcalá de Henares and Madrid, a light truck with 536 kg of explosives was discovered by the Guardia Civil.
ETA was initially blamed for the 2004 Madrid bombings by the outgoing government and large sections of the press. However, the group denied responsibility and Islamic fundamentalists from Morocco were eventually convicted. The judicial investigation currently states that there is no relationship between ETA and the Madrid bombings.
In the context of negotiation with the Spanish government, ETA declared what it described as a "truce" several times since its creation.
On 22 March 2006, ETA sent a DVD message to the Basque Network Euskal Irrati-Telebista and the journals Gara and Berria with a communiqué from the group announcing what it called a "permanent ceasefire" that was broadcast over Spanish TV.
Talks with the group were then officially opened by Spanish Presidente del Gobierno José Luis Rodríguez Zapatero.
These took place throughout 2006 and were not free from incidents, such as an ETA cell stealing some 300 handguns, ammunition and spare parts in France in October 2006, or a series of warnings made by ETA, such as that of 23 September, when masked ETA militants declared that the group would "keep taking up arms" until achieving "independence and socialism in the Basque country". These warnings were regarded by some as a way to increase pressure on the talks, and by others as a tactic to reinforce ETA's position in the negotiations.
Finally, on 30 December 2006, ETA detonated a van bomb in a parking building at the Madrid Barajas international airport, after three confusing warning calls. The explosion caused the collapse of the building and killed two Ecuadorian immigrants who were napping inside their cars in the parking building. At 6:00 pm, José Luis Rodríguez Zapatero released a statement announcing that the "peace process" had been discontinued.
In January 2008, ETA stated that its call for independence was similar to the cases of Kosovo and Scotland. In the week of 8 September 2008, two Basque political parties were banned by a Spanish court for their secret links to ETA. In another case the same week, 21 people were convicted after their work on behalf of ETA prisoners was found to conceal secret links to the armed separatists themselves. ETA reacted to these actions by placing three major car bombs in less than 24 hours in northern Spain.
In April 2009 Jurdan Martitegi was arrested, making him the fourth consecutive ETA military chief to be captured within a single year, an unprecedented police record, further weakening the group. Violence surged in the middle of 2009, with several ETA attacks leaving three people dead and dozens injured around Spain. Amnesty International condemned these attacks as well as ETA's "grave human rights abuses".
The Basque newspaper Gara published an article that suggested that ETA member Jon Anza could have been killed and buried by Spanish police in April 2009. The central prosecutor in the French town of Bayonne, Anne Kayanakis, announced, as the official version, that the autopsy carried out on the body of Jon Anza – a suspected member of the armed Basque group ETA, missing since April 2009 – revealed no signs of having been beaten, wounded or shot, which should rule out any suspicions that he died from unnatural causes. Nevertheless, the same magistrate denied the family's request to have a doctor of their choice present during the autopsy. Jon Anza's family then asked for a second autopsy to be carried out.
In December 2009, Spain raised its terror alert after warning that ETA could be planning major attacks or high-profile kidnappings during Spain's European Union presidency. The next day, after being questioned by the opposition, Alfredo Pérez Rubalcaba said that the warning was part of a strategy.
On 5 September 2010, ETA declared a new ceasefire, its third, after two previous ceasefires had been ended by the group. A spokesperson speaking on a video announcing the ceasefire said the group wished to use "peaceful, democratic means" to achieve its aims, though it was not specified whether the group considered the ceasefire permanent. ETA claimed that it had decided to initiate a ceasefire several months before the announcement. In the same video, the spokesperson said that the group was "prepared today as yesterday to agree to the minimum democratic conditions necessary to put in motion a democratic process if the Spanish government is willing".
The announcement was met with a mixed reaction; Basque nationalist politicians responded positively and said that the Spanish and international governments should do the same, while the Basque government's interior counsellor, Rodolfo Ares, said that the announcement did not go far enough. He considered ETA's statement "absolutely insufficient" because it did not commit to a complete termination of what Ares considered "terrorist activity" by the group.
On 10 January 2011, ETA declared that its September 2010 ceasefire would be permanent and verifiable by international observers. Observers urged caution, pointing out that ETA had broken permanent ceasefires in the past, whereas Prime Minister José Luis Rodríguez Zapatero (who left office in December 2011) demanded that ETA declare that it had given up violence once and for all. After the declaration, the Spanish press started speculating about a possible Real IRA-type split within ETA, with hardliners forming a new, more violent offshoot led by "Dienteputo".
On 21 October 2011, ETA announced a cessation of armed activity via video clip sent to media outlets following the Donostia-San Sebastián International Peace Conference, which was attended by former UN Secretary-General Kofi Annan, former Taoiseach of Ireland Bertie Ahern, former prime minister of Norway Gro Harlem Brundtland (an international leader in sustainable development and public health), former Interior Minister of France Pierre Joxe, president of Sinn Féin Gerry Adams (a Teachta Dála in Dáil Éireann), and British diplomat Jonathan Powell, who served as the first Downing Street Chief of Staff.
They all signed a final declaration that was supported also by former UK Prime Minister Tony Blair, the former US president and 2002 Nobel Peace Prize winner Jimmy Carter, and the former US senator and former US Special Envoy for Middle East Peace George J. Mitchell. The meeting did not include Spanish or French government representatives. The day after the ceasefire, in a contribution piece to The New York Times, Tony Blair indicated that lessons in dealing with paramilitary separatist groups can be learned from how the Spanish administration handled ETA. Blair wrote, "governments must firmly defend themselves, their principles and their people against terrorists. This requires good police and intelligence work as well as political determination. [However], firm security pressure on terrorists must be coupled with offering them a way out when they realize that they cannot win by violence. Terrorist groups are rarely defeated by military means alone". Blair also suggested that Spain would need to discuss weapon decommissioning, peace strategies, reparations for victims, and security with ETA, as Britain discussed with the Provisional IRA.
ETA had declared ceasefires many times before, most significantly in 1999 and 2006, but the Spanish government and media outlets expressed particularly hopeful opinions regarding the permanence of this proclamation. Spanish premier José Luis Rodríguez Zapatero described the move as "a victory for democracy, law and reason". Additionally, the effort of security and intelligence forces in Spain and France are cited by politicians as the primary instruments responsible for the weakening of ETA. The optimism may come as a surprise considering ETA's failure to renounce the independence movement, which has been one of the Spanish government's requirements.
Less optimistically, Spanish Prime Minister Mariano Rajoy of the centre-right People's Party expressed the need to push for the full dissolution of ETA. The People's Party has emphasized the obligation of the state to refuse negotiations with separatist movements since former Prime Minister José María Aznar was in office. Aznar was responsible for banning media outlets seen as subversive to the state and Batasuna, the political party of ETA. Additionally, in preparation for his party's manifesto, on 30 October 2011, Rajoy declared that the People's Party would not negotiate with ETA under threats of violence nor announcements of the group's termination, but would instead focus party efforts on remembering and honouring victims of separatist violence.
This event may not alter the goals of the Basque separatist movement but will change the method of the fight for a more autonomous state. Negotiations with the newly elected administration may prove difficult with the return to the centre-right People's Party, which is replacing Socialist control, due to pressure from within the party to refuse all ETA negotiations.
In September 2016, French police stated that they did not believe ETA had made progress in giving up arms. In March 2017, well-known French-Basque activist Jean-Noël Etxeverry was quoted as having told Le Monde, "ETA has made us responsible for the disarmament of its arsenal, and by the afternoon of 8 April, ETA will be completely unarmed." On 7 April, the BBC reported that ETA would disarm "tomorrow", including a photo of a stamped ETA letter attesting to this. The French police found 3.5 tonnes of weapons on 8 April, the following day, at the caches handed over by ETA.
ETA, for its part, issued a statement endorsing the 2017 Catalan independence referendum.
In a letter to online newspaper El Diario, published on 2 May 2018, ETA formally announced that it had "completely dissolved all its structures and ended its political initiative" on 16 April 2018.
A leading left-wing Basque nationalist politician and former ETA member, Arnaldo Otegi, the general coordinator of the Basque coalition party EH Bildu, has said the violence ETA used in its quest for independence "should never have happened" and it ought to have laid down its arms far earlier than it did. A full quote: "Today we want to make specific mention of the victims of ETA's violence," said Otegi. "We want to express to them our sorrow and pain for the suffering they endured. We feel their pain, and that sincere feeling leads us to affirm that it should never have happened, that no one could be satisfied with what happened, and that it should not have lasted as long as it did. We should have managed to reach [the abandonment of the armed campaign] sooner."
ETA's targets expanded from military or police-related personnel and their families to a wider array, including politicians, council members, judges, journalists and others who voiced criticism of ETA.
ETA's tactics included bombings, shootings, kidnappings and extortion.
These bombs sometimes killed family members of ETA's target victim and bystanders. When the bombs were large car-bombs seeking to produce large damage and terror, they were generally announced by one or more telephone calls made to newspapers speaking in the name of ETA. Charities (usually Detente Y Ayuda—DYA) were also used to announce the threat if the bomb was in a populated area. The type of explosives used in these attacks was initially Goma-2 or self-produced ammonal. After several successful robberies in France, ETA began using Titadyne.
With its attacks against what it considered "enemies of the Basque people", ETA killed over 820 people from 1968 onwards, including more than 340 civilians. It maimed hundreds more and kidnapped dozens. ETA was also opposed to the Lemóniz Nuclear Power Plant.
Its ability to inflict violence had declined steadily since the group was at its strongest during the late 1970s and 1980 (when it killed 92 people in a single year). After lower peaks in fatalities in 1987 and 1991, 2000 was the last year in which ETA killed more than 20 people. After 2002, the yearly number of ETA's fatal casualties was reduced to single digits.
Similarly, over the 1990s and, especially, during the 2000s, fluid cooperation between the French and Spanish police, state-of-the-art tracking devices and techniques and, apparently, police infiltration allowed repeated blows to ETA's leadership and structure (between May 2008 and April 2009 no fewer than four consecutive "military chiefs" were arrested).
ETA operated mainly in Spain, particularly in the Basque Country, Navarre, and (to a lesser degree) Madrid, Barcelona, and the tourist areas of the Spanish Mediterranean coast. To date, about 65% of ETA's killings were committed in the Basque Country, followed by Madrid with roughly 15%. Navarre and Catalonia also registered significant numbers.
Actions in France usually consisted of assaults on arsenals or military industries to steal weapons or explosives; these were usually stored in large quantities in hide-outs located in the French Basque Country rather than Spain. The French judge Laurence Le Vert was threatened by ETA, and a plot allegedly aiming to assassinate her was uncovered. Only very rarely did ETA members engage in shootings with the French Gendarmerie, mainly when members of the group were confronted at checkpoints.
Despite this, on 1 December 2007 ETA killed two Spanish Civil Guards on counter-terrorist surveillance duties in Capbreton, Landes, France. This was its first killing after it ended its 2006 declaration of "permanent ceasefire" and the first killing committed by ETA in France of a Spanish police agent since 1976, when they kidnapped, tortured and assassinated two Spanish inspectors in Hendaye.
In 2007, police reports indicated that, after the serious blows suffered by ETA and its political counterparts during the 2000s, its budget had been reduced to about €2,000,000 annually.
Although ETA used robbery as a means of financing its activities in its early days, it was accused both of arms trafficking and of benefiting economically from its political counterpart Batasuna. Extortion was ETA's main source of funds.
ETA was considered to form part of what is informally known as the Basque National Liberation Movement, a movement born well after ETA's creation. This loose term refers to a range of ideologically similar political organizations that promote a type of leftist Basque nationalism often referred to by the Basque-language term Ezker Abertzalea (Nationalist Left). Other groups typically considered to belong to this independentist movement are the political party Batasuna, the nationalist youth organization Segi, the labour union Langile Abertzaleen Batzordeak (LAB), and Askatasuna, among others. There are often strong interconnections between these groups; double or even triple membership is not infrequent.
There are Basque nationalist parties with goals similar to those of ETA (namely, independence) but which reject its violent means: EAJ-PNV, Eusko Alkartasuna, Aralar and, in the French Basque country, Abertzaleen Batasuna. In addition, many left-wing parties, such as Ezker Batua, Batzarre and some sectors of the EAJ-PNV party, support self-determination but are not in favour of independence.
Historically, members of ETA took refuge in France, particularly in the French Basque Country. The leadership typically chose to live in France for security reasons, since police pressure there was much lighter than in Spain. Accordingly, ETA's tactical approach had been to downplay the issue of independence for the French Basque Country so as to obtain French acquiescence in its activities. The French government quietly tolerated the group, especially during Franco's regime, when ETA members could face the death penalty in Spain. In the 1980s, the advent of the GAL still hindered counter-terrorist cooperation between France and Spain, with the French government considering ETA a Spanish domestic problem. At the time, ETA members often travelled between the two countries, using the French sanctuary as a base of operations.
With the disbanding of the GAL, the French government changed its position and in the 1990s began a period of active cooperation with the Spanish government against ETA, including fast-track transfers of detainees to Spanish tribunals, a procedure regarded as fully compliant with European Union legislation on human rights and on the legal representation of detainees. Virtually all of the highest-ranking figures within ETA, including its successive "military", "political" and finance chiefs, have been captured on French territory, from where they had been directing their activities after crossing the border from Spain.
In response to the new situation, ETA carried out attacks against French policemen and made threats to some French judges and prosecutors. This implied a change from the group's previous low-profile in the French Basque Country, which successive ETA leaders had used to discreetly manage their activities in Spain.
ETA considered its prisoners political prisoners. Until 2003, ETA consequently forbade them to ask the prison authorities for progression to tercer grado (a form of open prison that allows single-day or weekend furloughs) or for parole. Before that date, those who did so were threatened and expelled from the group. Some were assassinated by ETA for leaving the group and going through reinsertion programs.
The Spanish Government passed the Ley de Partidos Políticos, a law barring political parties that support violence, fail to condemn terrorist actions or are involved with terrorist groups. The law resulted in the banning of Herri Batasuna and its successor parties unless they explicitly condemned terrorist actions and, at times, in the imprisonment or trial of some of their leaders indicted for cooperation with ETA.
Judge Baltasar Garzón initiated a judicial procedure (coded 18/98) aimed at the support structure of ETA. The procedure started in 1998 with the preventive closure of the newspaper Egin (and its associated radio station Egin Irratia), accused of being linked to ETA, and the temporary imprisonment of the editor of its "investigative unit", Pepe Rei, under similar accusations. In August 1999 Garzón authorized the reopening of the newspaper and the radio station, but they could not reopen due to economic difficulties.
Judicial procedure 18/98 had many ramifications, including the following:
In 2007, indicted members of the youth movements Haika, Segi and Jarrai were found guilty of a crime of connivance with terrorism.
In May 2008, leading ETA figures were arrested in Bordeaux, France. Francisco Javier López Peña, also known as 'Thierry', had been on the run for twenty years before his arrest. In total, six people were arrested, among them ETA members and supporters, including the ex-Mayor of Andoain, José Antonio Barandiarán, who is rumoured to have led police to 'Thierry'. The Spanish Interior Ministry said the significance of the arrests would become clear as the investigation progressed. Furthermore, the Interior Minister said that the arrested ETA members had ordered the latest attacks and that López Peña was "not just another arrest because he is, in all probability, the man who has most political and military weight in the terrorist group."
After López Peña's arrest, and with the Basque referendum put on hold, police activity increased. On 22 July 2008, Spanish police dismantled the most active cell of ETA by detaining nine suspected members of the group. Interior Minister Alfredo Pérez Rubalcaba said of the arrests: "We can't say this is the only ETA unit but it was the most active, most dynamic and of course the most wanted one." Four days later French police arrested two further suspects believed to be tied to the same cell: Asier Eceiza, considered a top aide to a senior ETA operative still sought by police, and Olga Comes, whom authorities had linked to the ETA suspects.
The European Union and the United States listed ETA as a terrorist group in their relevant watch lists. ETA has been a Proscribed Organisation in the United Kingdom under the Terrorism Act 2000 since 29 March 2001. The Canadian Parliament listed ETA as a terrorist group in 2003.
France and Spain often cooperated in the fight against ETA, after France's lack of cooperation during the Franco era. In late 2007, two Spanish Civil Guards were shot dead in France while on a joint operation with their French counterparts. Furthermore, in May 2008, the arrests of four people in Bordeaux led to a breakthrough against ETA, according to the Spanish Interior Ministry.
In 2008, as ETA activity increased, France stepped up its pressure on the group by arresting more ETA suspects, including Unai Fano, María Lizarraga, and Esteban Murillo Zubiri in Bidarrain. Murillo Zubiri had been wanted by the Spanish authorities since 2007, when a Europol arrest warrant was issued against him, and French judicial authorities had already ordered that he be held in prison on remand.
Spain has also sought cooperation from the United Kingdom in dealing with ETA-IRA ties. In 2008, this came to light after Iñaki de Juana Chaos, whose release from prison was cancelled on appeal, had moved to Belfast. He was thought to be staying at an IRA safe house while being sought by the Spanish authorities. Interpol notified the judge, Eloy Velasco, that he was in either the Republic of Ireland or Northern Ireland. | [
{
"paragraph_id": 0,
"text": "ETA, an acronym for Euskadi Ta Askatasuna (\"Basque Homeland and Liberty\" or \"Basque Country and Freedom\"), was an armed Basque nationalist and far-left separatist organization in the Basque Country between 1959 and 2018, with its goal being independence for the region. The group was founded in 1959 during the era of Francoist Spain, and later evolved from a pacifist group promoting traditional Basque culture to a violent paramilitary group. It engaged in a campaign of bombings, assassinations, and kidnappings throughout Spain and especially the Southern Basque Country against the regime, which was highly centralised and hostile to the expression of non-Castilian minority identities. ETA was the main group within the Basque National Liberation Movement and was the most important Basque participant in the Basque conflict.",
"title": ""
},
{
"paragraph_id": 1,
"text": "ETA's motto was Bietan jarrai (\"Keep up on both\"), referring to the two figures in its symbol, a snake (representing politics) wrapped around an axe (representing armed struggle). Between 1968 and 2010, ETA killed 829 people (including 340 civilians) and injured more than 22,000. ETA was classified as a terrorist group by Spain, France, the United Kingdom, the United States, Canada, and the European Union. This convention was followed by a plurality of domestic and international media, which also referred to the group as terrorists. As of 2019, there were more than 260 imprisoned former members of the group in Spain, France, and other countries.",
"title": ""
},
{
"paragraph_id": 2,
"text": "ETA declared ceasefires in 1989, 1996, 1998 and 2006. On 5 September 2010, ETA declared a new ceasefire that remained in force, and on 20 October 2011, ETA announced a \"definitive cessation of its armed activity\". On 24 November 2012, it was reported that the group was ready to negotiate a \"definitive end\" to its operations and disband completely. The group announced on 7 April 2017 that it had given up all its weapons and explosives. On 2 May 2018, ETA made public a letter dated 16 April 2018 according to which it had \"completely dissolved all its structures and ended its political initiative\".",
"title": ""
},
{
"paragraph_id": 3,
"text": "ETA changed its internal structure on several occasions, commonly for security reasons. The group used to have a very hierarchical organization with a leading figure at the top, delegating into three substructures: the logistical, military and political sections. Reports from Spanish and French police point towards significant changes in ETA's structures in its later years. ETA divided the three substructures into a total of eleven. The change was a response to captures, and possible infiltration, by the different law enforcement agencies. ETA intended to disperse its members and reduce the effects of detentions.",
"title": "Structure"
},
{
"paragraph_id": 4,
"text": "The leading committee comprised 7 to 11 individuals, and ETA's internal documentation referred to it as Zuba, an abbreviation of Zuzendaritza Batzordea (directorial committee). There was another committee named Zuba-hits that functioned as an advisory committee. The eleven different substructures were: logistics, politics, international relations with fraternal organisations, military operations, reserves, prisoner support, expropriation, information, recruitment, negotiation, and treasury.",
"title": "Structure"
},
{
"paragraph_id": 5,
"text": "ETA's armed operations were organized in different taldes (groups or commandos), generally composed of three to five members, whose objective was to conduct attacks in a specific geographic zone. The taldes were coordinated by the cúpula militar (\"military cupola\"). To supply the taldes, support groups maintained safe houses and zulos (small rooms concealed in forests, garrets or underground, used to store arms, explosives or, sometimes, kidnapped people; the Basque word zulo literally means \"hole\"). The small cellars used to hide the people kidnapped are named by ETA and ETA's supporters \"people's jails\". The most common commandos were itinerant, not linked to any specific area, and thus were more difficult to capture.",
"title": "Structure"
},
{
"paragraph_id": 6,
"text": "Among its members, ETA distinguished between legales/legalak (\"legal ones\"), those members who did not have police records and lived apparently normal lives; liberados (\"liberated members\") known to the police that were on ETA's payroll and working full-time for ETA; and apoyos (\"supporters\") who just gave occasional help and logistics support to the group when required.",
"title": "Structure"
},
{
"paragraph_id": 7,
"text": "There were also imprisoned members of the group, serving time scattered across Spain and France, that sometimes still had significant influence inside the organisation; and finally the quemados (\"burnt out\"), members freed after having been imprisoned or those that were suspected by the group of being under police surveillance. In the past, there was also the figure of the deportees, expelled by the French government to remote countries where they lived freely. ETA's internal bulletin was named Zutabe (\"Column\"), replacing the earlier one (1962) Zutik (\"Standing\").",
"title": "Structure"
},
{
"paragraph_id": 8,
"text": "ETA also promoted the kale borroka (\"street fight\"), that is, violent acts against public transportation, political parties' offices or cultural buildings, destruction of private property of politicians, police, military, bank offices, journalists, council members, and anyone voicing criticism of ETA. Tactics included threats, graffiti of political mottoes, and rioting, usually using Molotov cocktails. These groups were mostly made up of young people, who were directed through youth organisations (such as Jarrai, Haika and Segi). Many members of ETA started their collaboration with the group as participants in the kale borroka.",
"title": "Structure"
},
{
"paragraph_id": 9,
"text": "The former political party Batasuna, disbanded in 2003, pursued the same political goals as ETA and did not condemn ETA's use of violence. Formerly known as Euskal Herritarrok and \"Herri Batasuna\", it was banned by the Spanish Supreme Court as an anti-democratic organisation following the Political Parties Law (Ley de Partidos Políticos). It generally received 10% to 20% of the vote in the Basque Autonomous Community.",
"title": "Political support"
},
{
"paragraph_id": 10,
"text": "Batasuna's political status was controversial. It was considered to be the political wing of ETA. Moreover, after the investigations on the nature of the relationship between Batasuna and ETA by Judge Baltasar Garzón, who suspended the activities of the political organisation and ordered police to shut down its headquarters, the Supreme Court of Spain finally declared Batasuna illegal on 18 March 2003. The court considered proven that Batasuna had links with ETA and that it constituted in fact part of ETA's structure. In 2003, the Constitutional Tribunal upheld the legality of the law.",
"title": "Political support"
},
{
"paragraph_id": 11,
"text": "However, the party itself denied being the political wing of ETA, although double membership – simultaneous or alternative – between Batasuna and ETA was often recorded, such as with the cases of prominent Batasuna leaders like Josu Urrutikoetxea, Arnaldo Otegi, Jon Salaberria and others.",
"title": "Political support"
},
{
"paragraph_id": 12,
"text": "The Spanish Cortes (the Spanish Parliament) began the process of declaring the party illegal in August 2002 by issuing a bill entitled the Ley de Partidos Políticos which bars political parties that use violence to achieve political goals, promote hatred against different groups or seek to destroy the democratic system. The bill passed the Cortes with a 304 to 16 vote. Many within the Basque nationalistic movement strongly disputed the Law, which they considered too draconian or even unconstitutional; alleging that any party could be made illegal almost by choice, simply for not clearly stating their opposition to an attack.",
"title": "Political support"
},
{
"paragraph_id": 13,
"text": "Defenders of the law argued that the Ley de Partidos did not necessarily require responses to individual acts of violence, but rather a declaration of principles explicitly rejecting violence as a means of achieving political goals. Defenders also argued that the ban of a political party is subject to judicial process, with all the guarantees of the State of Law. Batasuna had failed to produce such a statement. As of February 2008 other political parties linked to organizations such as Partido Comunista de España (reconstituted) have also been declared illegal, and Acción Nacionalista Vasca and Communist Party of the Basque Lands (EHAK/PCTV, Euskal Herrialdeetako Alderdi Komunista/Partido Comunista de las Tierras Vascas) was declared illegal in September 2008.",
"title": "Political support"
},
{
"paragraph_id": 14,
"text": "A new party called Aukera Guztiak (All the Options) was formed expressly for the elections to the Basque Parliament of April 2005. Its supporters claimed no heritage from Batasuna, asserting that they aimed to allow Basque citizens to freely express their political ideas, even those of independence. On the matter of political violence, Aukera Guztiak stated their right not to condemn some kinds of violence more than others if they did not see fit (in this regard, the Basque National Liberation Movement (MLNV) regards present police actions as violence, torture and state terrorism). Nevertheless, most of their members and certainly most of their leadership were former Batasuna supporters or affiliates. The Spanish Supreme Court unanimously considered the party to be a successor to Batasuna and declared a ban on it.",
"title": "Political support"
},
{
"paragraph_id": 15,
"text": "After Aukera Guztiak had been banned, and less than two weeks before the election, another political group appeared born from an earlier schism from Herri Batasuna, the Communist Party of the Basque Lands (EHAK/PCTV, Euskal Herrialdeetako Alderdi Komunista/Partido Comunista de las Tierras Vascas), a formerly unknown political party which had no representation in the Autonomous Basque Parliament. EHAK announced that they would apply the votes they obtained to sustain the political programme of the now-banned Aukera Guztiak platform.",
"title": "Political support"
},
{
"paragraph_id": 16,
"text": "This move left no time for the Spanish courts to investigate EHAK in compliance with the Ley de Partidos before the elections were held. The bulk of Batasuna supporters voted in this election for PCTV. It obtained 9 seats of 75 (12.44% of votes) in the Basque Parliament. The election of EHAK representatives eventually allowed the programme of the now-illegal Batasuna to continue being represented without having condemned violence as required by the Ley de Partidos.",
"title": "Political support"
},
{
"paragraph_id": 17,
"text": "In February 2011, Sortu, a party described as \"the new Batasuna\", was launched. Unlike predecessor parties, Sortu explicitly rejects politically motivated violence, including that of ETA. However, on 23 March 2011, the Spanish Supreme Court banned Sortu from registering as a political party on the grounds that it was linked to ETA.",
"title": "Political support"
},
{
"paragraph_id": 18,
"text": "The Spanish transition to democracy from 1975 on and ETA's progressive radicalisation had resulted in a steady loss of support, which became especially apparent at the time of their 1997 kidnapping and countdown assassination of Miguel Ángel Blanco. Their loss of sympathisers had been reflected in an erosion of support for the political parties identified with them. In the 1998 Basque parliament elections Euskal Herritarrok, formerly Batasuna, polled 17.7% of the votes. However, by 2001 the party's support had fallen to 10.0%. There were also concerns that Spain's \"judicial offensive\" against alleged ETA supporters (two Basque political parties and one NGO were banned in September 2008) constituted a threat to human rights. Strong evidence was seen that a legal network had grown so wide as to lead to the arrest of numerous innocent people. According to Amnesty International, torture was still \"persistent\", though not \"systematic\". Inroads could be undermined by judicial short-cuts and abuses of human rights.",
"title": "Political support"
},
{
"paragraph_id": 19,
"text": "The Euskobarometro, the survey carried out by the Universidad del País Vasco (University of the Basque Country), asking about the views of ETA within the Basque population, obtained these results in May 2009: 64% rejected ETA totally, 13% identified themselves as former ETA sympathisers who no longer support the group. Another 10% agreed with ETA's ends, but not their means. 3% said that their attitude towards ETA was mainly one of fear, 3% expressed indifference and 3% were undecided or did not answer. About 3% gave ETA \"justified, with criticism\" support (supporting the group but criticising some of their actions) and only 1% gave ETA total support. Even within Batasuna voters, at least 48% rejected ETA's violence.",
"title": "Political support"
},
{
"paragraph_id": 20,
"text": "A poll taken by the Basque Autonomous Government in December 2006 during ETA's \"permanent\" ceasefire showed that 88% of the Basques thought that all political parties needed to launch a dialogue, including a debate on the political framework for the Basque Country (86%). 69% support the idea of ratifying the results of this hypothetical multiparty dialogue through a referendum. This poll also reveals that the hope of a peaceful resolution to the issue of the constitutional status of the Basque region has fallen to 78% (from 90% in April).",
"title": "Political support"
},
{
"paragraph_id": 21,
"text": "These polls did not cover Navarre, where support for Basque nationalist electoral options is weaker (around 25% of the population); or the Northern Basque Country, where support is even weaker (around 15% of the population).",
"title": "Political support"
},
{
"paragraph_id": 22,
"text": "ETA grew out of a student group called Ekin, founded in the early 1950s, which published a magazine and undertook direct action. ETA was founded on 31 July 1959 as Euskadi Ta Askatasuna (\"Basque Homeland and Liberty\" or \"Basque Country and Freedom\") by students frustrated by the moderate stance of the Basque Nationalist Party. (Originally, the name for the organisation used the word Aberri instead of Euskadi, creating the acronym ATA. However, in some Basque dialects, ata means duck, so the name was changed.)",
"title": "History"
},
{
"paragraph_id": 23,
"text": "ETA held their first assembly in Bayonne, France, in 1962, during which a \"declaration of principles\" was formulated and following which a structure of activist cells was developed. Subsequently, Marxist and third-worldist perspectives developed within ETA, becoming the basis for a political programme set out in Federico Krutwig's (an anarchist of German origin) 1963 book Vasconia, which is considered to be the defining text of the movement. In contrast to previous Basque nationalist platforms, Krutwig's vision was anti-religious and based upon language and culture rather than race. ETA's third and fourth assemblies, held in 1964 and 1965, adopted an anti-capitalist and anti-imperialist position, seeing nationalism and the class struggle as intrinsically connected.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Some sources attributed the 1960 bombing of the Amara station in Donostia-San Sebastian (which killed a 22-month-old child) to ETA, but statistics published by the Spanish Ministry of the Interior have always showed that ETA's first victim was killed in 1968. The 1960 attack was claimed by the Portuguese and Galician left-wing group Directorio Revolucionario Ibérico de Liberación (DRIL) (together with four other very similar bombings committed that same day across Spain, all of them attributed to DRIL), and the attribution to ETA has been considered to be unfounded by researchers. Police documents dating from 1961, released in 2013, show that the DRIL was indeed the author of the bombing. A more recent study by the Memorial de Víctimas del Terrorismo based on the analysis of police diligences at the time reached the same conclusion, naming Guillermo Santoro, member of DRIL, as the author of the attack.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "ETA's first killing occurred on 7 June 1968, when Guardia Civil member José Pardines Arcay was shot dead after he tried to halt ETA member Txabi Etxebarrieta during a routine road check. Etxebarrieta was chased down and killed as he tried to flee. This led to retaliation in the form of the first planned ETA assassination: that of Melitón Manzanas, chief of the secret police in San Sebastián and associated with a long record of tortures inflicted on detainees in his custody. In December 1970, several members of ETA were condemned to death in the Burgos trials (Proceso de Burgos), but international pressure resulted in their sentences being commuted (a process which, however, had by that time already been applied to some other members of ETA).",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In early December 1970, ETA kidnapped the German consul in San Sebastian, Eugen Beilh, to exchange him for the Burgos defendants. He was released unharmed on 24 December.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Nationalists who refused to follow the tenets of Marxism–Leninism and who sought to create a united front appeared as ETA-V, but lacked the support to challenge ETA.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The most significant assassination performed by ETA during Franco's dictatorship was Operación Ogro, the December 1973 bomb assassination in Madrid of Admiral Luis Carrero Blanco, Franco's chosen successor and president of the government (a position roughly equivalent to being a prime minister). The assassination had been planned for months and was executed by placing a bomb in a tunnel dug below the street where Carrero Blanco's car passed every day. The bomb blew up beneath the politician's car and left a massive crater in the road.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "For some in the Spanish opposition, Carrero Blanco's assassination, i.e., the elimination of Franco's chosen successor was an instrumental step for the subsequent re-establishment of democracy. The government responded with new anti-terrorism laws which gave police greater powers and empowered military tribunals to pass death sentences against those found guilty. However, the last use of capital punishment in Spain when two ETA members were executed in September 1975, eight weeks before Franco's death, sparked massive domestic and international protests against the Spanish government.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "During the Spanish transition to democracy which began following Franco's death, ETA split into two separate groups: ETA political-military or ETA(pm), and ETA military or ETA(m).",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Both ETA(m) and ETA(pm) refused offers of amnesty, and instead pursued and intensified their violent struggle. The years 1978–1980 were to prove ETA's most deadly, with 68, 76, and 98 fatalities, respectively.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "During the Franco dictatorship, ETA was able to take advantage of tolerance by the French government, which allowed its members to move freely through French territory, believing that in this manner they were contributing to the end of Franco's regime. There is much controversy over the degree to which this policy of \"sanctuary\" continued even after the transition to democracy, but it is generally agreed that after 1983 the French authorities started to collaborate with the Spanish government against ETA.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "In the 1980s, ETA(pm) accepted the Spanish government's offer of individual pardons to all ETA prisoners, even those who had committed violent crimes, who publicly abandoned the policy of violence. This caused a new division in ETA(pm) between the seventh and eighth assemblies. ETA VII accepted this partial amnesty granted by the now democratic Spanish government and integrated into the political party Euskadiko Ezkerra (\"Left of the Basque Country\").",
"title": "History"
},
{
"paragraph_id": 34,
"text": "ETA VIII, after a brief period of independent activity, eventually integrated into ETA(m). With no factions existing anymore, ETA(m) reclaimed the original name of Euskadi Ta Askatasuna.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "During the 1980s a \"dirty war\" ensued using the Grupos Antiterroristas de Liberación (GAL, \"Antiterrorist Liberation Groups\"), a paramilitary group which billed themselves as counter-terrorist, active between 1983 and 1987. The GAL's stated mission was to avenge every ETA killing with another killing of ETA exiles in the French department of Pyrénées Atlantiques. GAL committed 27 assassinations (all but one in France), plus several kidnappings and torture, not only of ETA members but of civilians supposedly related to those, some of whom turned out to have nothing to do with ETA. GAL activities were a follow-up of similar dirty war actions by death squads, actively supported by members of Spanish security forces and secret services, using names such as Batallón Vasco Español active from 1975 to 1981. They were responsible for the killing of about 48 people.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "One consequence of GAL's activities in France was the decision in 1984 by interior minister Pierre Joxe to permit the extradition of ETA suspects to Spain. Reaching this decision had taken 25 years and was critical in curbing ETA's capabilities by denial of previously safe territory in France.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The airing of the state-sponsored \"dirty war\" scheme and the imprisonment of officials responsible for GAL in the early 1990s led to a political scandal in Spain. The group's connections with the state were unveiled by the Spanish journal El Mundo, with an investigative series leading to the GAL plot being discovered and a national trial initiated. As a consequence, the group's attacks since the revelation have generally been dubbed state terrorism.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "In 1997 the Spanish Audiencia Nacional court finished its trial, which resulted in convictions and imprisonment of several individuals related to the GAL, including civil servants and politicians up to the highest levels of the Spanish Socialist Workers' Party (PSOE) government, such as former Homeland Minister José Barrionuevo. Premier Felipe González was quoted as saying that the constitutional state has to defend itself \"even in the sewers\" (El Estado de derecho también se defiende en las cloacas), something which, for some, indicated at least his knowledge of the scheme. However, his involvement with the GAL could never be proven.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "These events marked the end of the armed \"counter-terrorist\" period in Spain and no major cases of foul play on the part of the Spanish government after 1987 (when GAL ceased to operate) have been proven in courts.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "According to the radical nationalist group, Euskal Memoria, between 1960 and 2010 there were 465 deaths in the Basque Country due to (primarily Spanish) state violence. This figure is considerably higher than those given elsewhere, which are usually between 250 and 300. Critics of ETA cite only 56 members of that organisation killed by state forces since 1975.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "ETA members and supporters routinely claim torture at the hands of Spanish police forces. While these claims are hard to verify, some convictions were based on confessions while prisoners were held incommunicado and without access to a lawyer of their choice, for a maximum of five days. These confessions were routinely repudiated by the defendants during trials as having been extracted under torture. There were some successful prosecutions of proven tortures during the \"dirty war\" period of the mid-1980s, although the penalties have been considered by Amnesty International as unjustifiably light and lenient with co-conspirators and enablers.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "In this regard, Amnesty International showed concern for the continuous disregard of the recommendations issued by the agency to prevent the alleged abuses from possibly taking place. Also in this regard, ETA's manuals were found instructing its members and supporters to claim routinely that they had been tortured while detained. Unai Romano's case was very controversial: pictures of him with a symmetrically swollen face of uncertain aetiology were published after his incommunicado period leading to claims of police abuse and torture. Martxelo Otamendi, the ex-director of the Basque newspaper Euskaldunon Egunkaria, decided to bring charges in September 2008 against the Spanish Government in the European Court of Human Rights for \"not inspecting properly\" cases tainted by torture.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "As a result of ETA's violence, threats and killings of journalists, Reporters Without Borders included Spain in all six editions of its annual watchlist on press freedom up to 2006. Thus, the NGO included ETA in its watchlist \"Predators of Press Freedom\".",
"title": "History"
},
{
"paragraph_id": 44,
"text": "ETA performed their first car bomb assassination in Madrid in September 1985, resulting in one death (American citizen Eugene Kent Brown, employee of Johnson & Johnson) and sixteen injuries; the Plaza República Dominicana bombing in July 1986 killed 12 members of the Guardia Civil and injured 50; on 19 June 1987, the Hipercor bombing was an attack in a shopping centre in Barcelona, killing 21 and injuring 45; in the last case, entire families were killed. The horror caused then was so striking that ETA felt compelled to issue a communiqué stating that they had given warning of the Hipercor bomb, but that the police had declined to evacuate the area. The police said that the warning came only a few minutes before the bomb exploded.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "In 1986 Gesto por la Paz (known in English as Association for Peace in the Basque Country) was founded; they began to convene silent demonstrations in communities throughout the Basque Country the day after any violent killing, whether by ETA or by GAL. These were the first systematic demonstrations in the Basque Country against political violence. Also in 1986, in Ordizia, ETA gunned down María Dolores Katarain, known as \"Yoyes\", while she was walking with her infant son. Yoyes was a former member of ETA who had abandoned the armed struggle and rejoined civil society: they accused her of \"desertion\" because of her taking advantage of the Spanish reinsertion policy which granted amnesty to those prisoners who publicly renounced political violence (see below).",
"title": "History"
},
{
"paragraph_id": 46,
"text": "On 12 January 1988, all Basque political parties except ETA-affiliated Herri Batasuna signed the Ajuria-Enea pact with the intent of ending ETA's violence. Weeks later on 28 January, ETA announced a 60-day \"ceasefire\", later prolonged several times. Negotiations known as the Mesa de Argel (\"Algiers Table\") took place between the ETA representative Eugenio Etxebeste (\"Antxon\") and the then PSOE government of Spain, but no successful conclusion was reached, and ETA eventually resumed the use of violence.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "During this period, the Spanish government had a policy referred to as \"reinsertion\", under which imprisoned ETA members whom the government believed had genuinely abandoned violence could be freed and allowed to rejoin society. Claiming a need to prevent ETA from coercively impeding this reinsertion, the PSOE government decided that imprisoned ETA members, who previously had all been imprisoned within the Basque Country, would instead be dispersed to prisons throughout Spain, some as far from their families as in the Salto del Negro prison in the Canary Islands. France has taken a similar approach.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "In the event, the only clear effect of this policy was to incite social protest, especially from nationalists and families of the prisoners, claiming cruelty of separating family members from the insurgents. Much of the protest against this policy runs under the slogan \"Euskal Presoak – Euskal Herrira\" (\"Basque prisoners to the Basque Country\"; by \"Basque prisoners\" only ETA members are meant). It has to be noted that almost in any Spanish jail there is a group of ETA prisoners, as the number of ETA prisoners makes it difficult to disperse them.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Gestoras pro Amnistía/Amnistiaren Aldeko Batzordeak (\"Pro-Amnesty Managing Assemblies\", currently illegal), later Askatasuna (\"Freedom\") and Senideak (\"The Family Members\"), provided support for prisoners and families. The Basque Government and several Nationalist town halls granted money on humanitarian reasons for relatives to visit prisoners. The long road trips have caused accidental deaths that are protested against by Nationalist Prisoner's Family supporters.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "During the ETA ceasefire of the late 1990s, the PSOE government brought the prisoners on the islands and in Africa back to the mainland. Since the end of the ceasefire, ETA prisoners have not been sent back to overseas prisons. Some Basque authorities have established grants for the expenses of visiting families.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "Another Spanish \"counter-terrorist\" law puts suspected terrorist cases under the central tribunal Audiencia Nacional in Madrid, due to the threats by the group over the Basque courts. Under Article 509 suspected terrorists are subject to being held incommunicado for up to thirteen days, during which they have no contact with the outside world other than through the court-appointed lawyer, including informing their family of their arrest, consultation with private lawyers or examination by a physician other than the coroners. In comparison, the habeas corpus term for other suspects is three days.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "In 1992, ETA's three top leaders—\"military\" leader Francisco Mujika Garmendia (\"Pakito\"), political leader José Luis Alvarez Santacristina (\"Txelis\") and logistical leader José María Arregi Erostarbe (\"Fiti\"), often referred to collectively as the \"cúpula\" of ETA or as the Artapalo collective—were arrested in the northern Basque town of Bidart, which led to changes in ETA's leadership and direction.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "After a two-month truce, ETA adopted even more radical positions. The principal consequence of the change appears to have been the creation of the \"Y Groups\", formed by young militants of ETA parallel groups (generally minors), dedicated to so-called \"kale borroka\"—street struggle—and whose activities included burning buses, street lamps, benches, ATMs, and garbage containers, and throwing Molotov cocktails. The appearance of these groups was attributed by many to the supposed weakness of ETA, which obliged them to resort to minors to maintain or augment their impact on society after arrests of leading militants, including the \"cupola\". ETA also began to menace leaders of other parties besides rival Basque nationalist parties.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "In 1995, the armed group again launched a peace proposal. The so-called \"Democratic Alternative\" replaced the earlier KAS Alternative as a minimum proposal for the establishment of Euskal Herria. The Democratic Alternative offered the cessation of all armed ETA activity if the Spanish government would recognize the Basque people as having sovereignty over Basque territory, the right to self-determination, and that it freed all ETA members in prison. The Spanish government ultimately rejected this peace offer as it would go against the Spanish Constitution of 1978. Changing the constitution was not considered.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "Also in 1995 was a failed ETA car bombing attempt directed against José María Aznar, a conservative politician who was the leader of the then-opposition Partido Popular (PP) and was shortly after elected to the presidency of the government; there was also an abortive attempt in Majorca on the life of King Juan Carlos I. Still, the act with the largest social impact came the following year. On 10 July 1997, PP council member Miguel Ángel Blanco was kidnapped in the Basque town of Ermua, with the separatist group threatening to assassinate him unless the Spanish government met ETA's demand of starting to bring all ETA's inmates to prisons of the Basque Country within two days after the kidnapping.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "This demand was not met by the Spanish government and after three days Miguel Ángel Blanco was found shot dead when the deadline expired. More than six million people took out to the streets to demand his liberation, with massive demonstrations occurring as much in the Basque regions as elsewhere in Spain, chanting cries of \"Assassins\" and \"Basques yes, ETA no\". This response came to be known as the \"Spirit of Ermua\".",
"title": "History"
},
{
"paragraph_id": 57,
"text": "Later acts of violence included the 6 November 2001 car bomb in Madrid which injured 65 people, and attacks on football stadiums and tourist destinations throughout Spain.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "The 11 September 2001 attacks in the US appeared to have dealt a hard blow to ETA, owing to the worldwide toughening of \"anti-terrorist\" measures (such as the freezing of bank accounts), the increase in international policy coordination, and the end of the toleration some countries had, up until then, extended to ETA. Additionally, in 2002 the Basque nationalist youth movement, Jarrai, was outlawed and the law of parties was changed outlawing Herri Batasuna, the \"political arm\" of ETA (although even before the change in law, Batasuna had been largely paralysed and under judicial investigation by judge Baltasar Garzón).",
"title": "History"
},
{
"paragraph_id": 59,
"text": "With ever-increasing frequency, attempted ETA actions were frustrated by Spanish security forces.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "On 24 December 2003, in San Sebastián and in Hernani, National Police arrested two ETA members who had left dynamite in a railroad car prepared to explode in Chamartín Station in Madrid. On 1 March 2004, in a place between Alcalá de Henares and Madrid, a light truck with 536 kg of explosives was discovered by the Guardia Civil.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "ETA was initially blamed for the 2004 Madrid bombings by the outgoing government and large sections of the press. However, the group denied responsibility and Islamic fundamentalists from Morocco were eventually convicted. The judicial investigation currently states that there is no relationship between ETA and the Madrid bombings.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "In the context of negotiation with the Spanish government, ETA declared what it described as a \"truce\" several times since its creation.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "On 22 March 2006, ETA sent a DVD message to the Basque Network Euskal Irrati-Telebista and the journals Gara and Berria with a communiqué from the group announcing what it called a \"permanent ceasefire\" that was broadcast over Spanish TV.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Talks with the group were then officially opened by Spanish Presidente del Gobierno José Luis Rodríguez Zapatero.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "These took place all over 2006, not free from incidents such as an ETA cell stealing some 300 handguns, ammunition and spare parts in France in October 2006. or a series of warnings made by ETA such as the one of 23 September, when masked ETA militants declared that the group would \"keep taking up arms\" until achieving \"independence and socialism in the Basque country\", which were regarded by some as a way to increase pressure on the talks, by others as a tactic to reinforce ETA's position in the negotiations.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "Finally, on 30 December 2006 ETA detonated a van bomb after three confusing warning calls, in a parking building at the Madrid Barajas international airport. The explosion caused the collapse of the building and killed two Ecuadorian immigrants who were napping inside their cars in the parking building. At 6:00 pm, José Luis Rodríguez Zapatero released a statement stating that the \"peace process\" had been discontinued.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "In January 2008, ETA stated that its call for independence is similar to that of the Kosovo status and Scotland. In the week of 8 September 2008, two Basque political parties were banned by a Spanish court for their secretive links to ETA. In another case in the same week, 21 people were convicted whose work on behalf of ETA prisoners actually belied secretive links to the armed separatists themselves. ETA reacted to these actions by placing three major car bombs in less than 24 hours in northern Spain.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "In April 2009 Jurdan Martitegi was arrested, making him the fourth consecutive ETA military chief to be captured within a single year, an unprecedented police record, further weakening the group. Violence surged in the middle of 2009, with several ETA attacks leaving three people dead and dozens injured around Spain. Amnesty International condemned these attacks as well as ETA's \"grave human rights abuses\".",
"title": "History"
},
{
"paragraph_id": 69,
"text": "The Basque newspaper Gara published an article that suggested that ETA member Jon Anza could have been killed and buried by Spanish police in April 2009. The central prosecutor in the French town of Bayonne, Anne Kayanakis, announced, as the official version, that the autopsy carried out on the body of Jon Anza – a suspected member of the armed Basque group ETA, missing since April 2009 – revealed no signs of having been beaten, wounded or shot, which should rule out any suspicions that he died from unnatural causes. Nevertheless, that very magistrate denied the demand of the family asking for the presence of a family doctor during the autopsy. After this, Jon Anza's family members asked for a second autopsy to be carried out.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "In December 2009, Spain raised its terror alert after warning that ETA could be planning major attacks or high-profile kidnappings during Spain's European Union presidency. The next day, after being asked by the opposition, Alfredo Pérez Rubalcaba said that warning was part of a strategy.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "On 5 September 2010, ETA declared a new ceasefire, its third after two previous ceasefires were ended by the group. A spokesperson speaking on a video announcing the ceasefire said the group wished to use \"peaceful, democratic means\" to achieve its aims, though it was not specified whether the ceasefire was considered permanent by the group. ETA claimed that it had decided to initiate a ceasefire several months before the announcement. In the part of the video, the spokesperson said that the group was \"prepared today as yesterday to agree to the minimum democratic conditions necessary to put in motion a democratic process if the Spanish government is willing\".",
"title": "History"
},
{
"paragraph_id": 72,
"text": "The announcement was met with a mixed reaction; Basque nationalist politicians responded positively and said that the Spanish and international governments should do the same, while the Spanish interior counsellor of Basque, Rodolfo Ares, said that the committee did not go far enough. He said that he considered ETA's statement \"absolutely insufficient\" because it did not commit to a complete termination of what Ares considered \"terrorist activity\" by the group.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "On 10 January 2011, ETA declared that their September 2010 ceasefire would be permanent and verifiable by international observers. Observers urged caution, pointing out that ETA had broken permanent ceasefires in the past, whereas Prime Minister José Luis Rodríguez Zapatero (who left office in December 2011) demanded that ETA declare that it had given up violence once and for all. After the declaration, Spanish press started speculating of a possible Real IRA-type split within ETA, with hardliners forming a new more violent offshoot led by \"Dienteputo\".",
"title": "History"
},
{
"paragraph_id": 74,
"text": "On 21 October 2011, ETA announced a cessation of armed activity via video clip sent to media outlets following the Donostia-San Sebastián International Peace Conference, which was attended by former UN Secretary-General Kofi Annan, former Taoiseach of Ireland Bertie Ahern, former prime minister of Norway Gro Harlem Brundtland (an international leader in sustainable development and public health), former Interior Minister of France Pierre Joxe, president of Sinn Féin Gerry Adams (a Teachta Dála in Dáil Éireann), and British diplomat Jonathan Powell, who served as the first Downing Street Chief of Staff.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "They all signed a final declaration that was supported also by former UK Prime Minister Tony Blair, the former US president and 2002 Nobel Peace Prize winner Jimmy Carter, and the former US senator and former US Special Envoy for Middle East Peace George J. Mitchell. The meeting did not include Spanish or French government representatives. The day after the ceasefire, in a contribution piece to The New York Times, Tony Blair indicated that lessons in dealing with paramilitary separatist groups can be learned from how the Spanish administration handled ETA. Blair wrote, \"governments must firmly defend themselves, their principles and their people against terrorists. This requires good police and intelligence work as well as political determination. [However], firm security pressure on terrorists must be coupled with offering them a way out when they realize that they cannot win by violence. Terrorist groups are rarely defeated by military means alone\". Blair also suggested that Spain would need to discuss weapon decommissioning, peace strategies, reparations for victims, and security with ETA, as Britain discussed with the Provisional IRA.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "ETA had declared ceasefires many times before, most significantly in 1999 and 2006, but the Spanish government and media outlets expressed particularly hopeful opinions regarding the permanence of this proclamation. Spanish premier José Luis Rodríguez Zapatero described the move as \"a victory for democracy, law and reason\". Additionally, the effort of security and intelligence forces in Spain and France are cited by politicians as the primary instruments responsible for the weakening of ETA. The optimism may come as a surprise considering ETA's failure to renounce the independence movement, which has been one of the Spanish government's requirements.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "Less optimistically, Spanish Prime Minister Mariano Rajoy of the centre-right People's Party expressed the need to push for the full dissolution of ETA. The People's Party has emphasized the obligation of the state to refuse negotiations with separatist movements since former Prime Minister José María Aznar was in office. Aznar was responsible for banning media outlets seen as subversive to the state and Batasuna, the political party of ETA. Additionally, in preparation for his party's manifesto, on 30 October 2011, Rajoy declared that the People's Party would not negotiate with ETA under threats of violence nor announcements of the group's termination, but would instead focus party efforts on remembering and honouring victims of separatist violence.",
"title": "History"
},
{
"paragraph_id": 78,
"text": "This event may not alter the goals of the Basque separatist movement but will change the method of the fight for a more autonomous state. Negotiations with the newly elected administration may prove difficult with the return to the centre-right People's Party, which is replacing Socialist control, due to pressure from within the party to refuse all ETA negotiations.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "In September 2016, French police stated that they did not believe ETA had made progress in giving up arms. In March 2017, well-known French-Basque activist Jean-Noël Etxeverry [fr] was quoted as having told Le Monde, \"ETA has made us responsible for the disarmament of its arsenal, and by the afternoon of 8 April, ETA will be completely unarmed.\" On 7 April, the BBC reported that ETA would disarm \"tomorrow\", including a photo of a stamped ETA letter attesting to this. The French police found 3.5 tonnes of weapons on 8 April, the following day, at the caches handed over by ETA.",
"title": "History"
},
{
"paragraph_id": 80,
"text": "ETA, for its part, issued a statement endorsing the 2017 Catalan independence referendum.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "In a letter to online newspaper El Diario, published on 2 May 2018, ETA formally announced that it had \"completely dissolved all its structures and ended its political initiative\" on 16 April 2018.",
"title": "History"
},
{
"paragraph_id": 82,
"text": "A leading left-wing Basque nationalist politician and former ETA member, Arnaldo Otegi, the general coordinator of the Basque coalition party EH Bildu, has said the violence ETA used in its quest for independence \"should never have happened\" and it ought to have laid down its arms far earlier than it did. A full quote: 'Today we want to make specific mention of the victims of ETA's violence,\" said Otegi. \"We want to express to them our sorrow and pain for the suffering they endured. We feel their pain, and that sincere feeling leads us to affirm that it should never have happened, that no one could be satisfied with what happened, and that it should not have lasted as long as it did. We should have managed to reach [the abandonment of the armed campaign] sooner.'",
"title": "History"
},
{
"paragraph_id": 83,
"text": "ETA's targets expanded from military or police-related personnel and their families to a wider array, which included the following:",
"title": "Victims, tactics and attacks"
},
{
"paragraph_id": 84,
"text": "ETA's tactics included:",
"title": "Victims, tactics and attacks"
},
{
"paragraph_id": 85,
"text": "These bombs sometimes killed family members of ETA's target victim and bystanders. When the bombs were large car-bombs seeking to produce large damage and terror, they were generally announced by one or more telephone calls made to newspapers speaking in the name of ETA. Charities (usually Detente Y Ayuda—DYA) were also used to announce the threat if the bomb was in a populated area. The type of explosives used in these attacks was initially Goma-2 or self-produced ammonal. After several successful robberies in France, ETA began using Titadyne.",
"title": "Victims, tactics and attacks"
},
{
"paragraph_id": 86,
"text": "With its attacks against what they considered \"enemies of the Basque people\", ETA killed over 820 people since 1968, including more than 340 civilians. It maimed hundreds more and kidnapped dozens. ETA was opposed to Lemóniz Nuclear Power Plant.",
"title": "Activity"
},
{
"paragraph_id": 87,
"text": "Its ability to inflict violence had declined steadily since the group was at its strongest during the late 1970s and 1980 (when it killed 92 people in a single year). After decreasing peaks in the fatal casualties in 1987 and 1991, 2000 was the last year when ETA killed more than 20 in a single year. After 2002, the yearly number of ETA's fatal casualties was reduced to single digits.",
"title": "Activity"
},
{
"paragraph_id": 88,
"text": "Similarly, over the 1990s and, especially, during the 2000s, fluid cooperation between the French and Spanish police, state-of-the-art tracking devices and techniques and, apparently, police infiltration allowed increasingly repeating blows to ETA's leadership and structure (between May 2008 and April 2009 no less than four consecutive \"military chiefs\" were arrested).",
"title": "Activity"
},
{
"paragraph_id": 89,
"text": "ETA operated mainly in Spain, particularly in the Basque Country, Navarre, and (to a lesser degree) Madrid, Barcelona, and the tourist areas of the Spanish Mediterranean coast. To date, about 65% of ETA's killings were committed in the Basque Country, followed by Madrid with roughly 15%. Navarre and Catalonia also registered significant numbers.",
"title": "Activity"
},
{
"paragraph_id": 90,
"text": "Actions in France usually consisted of assaults on arsenals or military industries to steal weapons or explosives; these were usually stored in large quantities in hide-outs located in the French Basque Country rather than Spain. The French judge Laurence Le Vert was threatened by ETA and a plot arguably aiming to assassinate her was unveiled. Only very rarely have ETA members engaged in shootings with the French Gendarmerie. This often occurred mainly when members of the group were confronted at checkpoints.",
"title": "Activity"
},
{
"paragraph_id": 91,
"text": "Despite this, on 1 December 2007 ETA killed two Spanish Civil Guards on counter-terrorist surveillance duties in Capbreton, Landes, France. This was its first killing after it ended its 2006 declaration of \"permanent ceasefire\" and the first killing committed by ETA in France of a Spanish police agent since 1976, when they kidnapped, tortured and assassinated two Spanish inspectors in Hendaye.",
"title": "Activity"
},
{
"paragraph_id": 92,
"text": "In 2007, police reports pointed out that, after the serious blows suffered by ETA and its political counterparts during the 2000s, its budget would have been adjusted to €2,000,000 annually.",
"title": "Activity"
},
{
"paragraph_id": 93,
"text": "Although ETA used robbery as a means of financing its activities in its early days, it was accused both of arms trafficking and of benefiting economically from its political counterpart Batasuna. Extortion was ETA's main source of funds.",
"title": "Activity"
},
{
"paragraph_id": 94,
"text": "ETA was considered to form part of what is informally known as the Basque National Liberation Movement, a movement born much after ETA's creation. This loose term refers to a range of political organizations that are ideologically similar, comprising several distinct organizations that promote a type of leftist Basque nationalism that is often referred to by the Basque-language term Ezker Abertzalea (Nationalist Left). Other groups typically considered to belong to this independentist movement are the political party Batasuna, the nationalist youth organization Segi, the labour union Langile Abertzaleen Batzordeak (LAB), and Askatasuna among others. There are often strong interconnections between these groups, double or even triple membership are not infrequent.",
"title": "Basque nationalist context"
},
{
"paragraph_id": 95,
"text": "There are Basque nationalist parties with similar goals as those of ETA (namely, independence) but who reject their violent means. They are: EAJ-PNV, Eusko Alkartasuna, Aralar and, in the French Basque country, Abertzaleen Batasuna. Also, many left-wing parties, such as Ezker Batua, Batzarre and some sectors of the EAJ-PNV party, also support self-determination but are not in favour of independence.",
"title": "Basque nationalist context"
},
{
"paragraph_id": 96,
"text": "Historically, members of ETA took refuge in France, particularly the French Basque Country. The leadership typically chose to live in France for security reasons, where police pressure was much less than in Spain. Accordingly, ETA's tactical approach had been to downplay the issue of independence of the French Basque country so as to get French acquiescence for their activities. The French government quietly tolerated the group, especially during Franco's regime, when ETA members could face the death penalty in Spain. In the 1980s, the advent of the GAL still hindered counter-terrorist cooperation between France and Spain, with the French government considering ETA a Spanish domestic problem. At the time, ETA members often travelled between the two countries using the French sanctuary as a base of operations.",
"title": "French role"
},
{
"paragraph_id": 97,
"text": "With the disbanding of the GAL, the French government changed its position on the matter and in the 1990s initiated the ongoing period of active cooperation with the Spanish government against ETA, including fast-track transfers of detainees to Spanish tribunals that are regarded as fully compliant with European Union legislation on human rights and the legal representation of detainees. Virtually all of the highest ranks within ETA –including their successive \"military\", \"political\" or finances chiefs – have been captured in French territory, from where they had been plotting their activities after having crossed the border from Spain.",
"title": "French role"
},
{
"paragraph_id": 98,
"text": "In response to the new situation, ETA carried out attacks against French policemen and made threats to some French judges and prosecutors. This implied a change from the group's previous low-profile in the French Basque Country, which successive ETA leaders had used to discreetly manage their activities in Spain.",
"title": "French role"
},
{
"paragraph_id": 99,
"text": "ETA considered its prisoners political prisoners. Until 2003, ETA consequently forbade them to ask penal authorities for progression to tercer grado (a form of open prison that allows single-day or weekend furloughs) or parole. Before that date, those who did so were menaced and expelled from the group. Some were assassinated by ETA for leaving the group and going through reinsertion programs.",
"title": "Government response"
},
{
"paragraph_id": 100,
"text": "The Spanish Government passed the Ley de Partidos Políticos. This is a law barring political parties that support violence and do not condemn terrorist actions or are involved with terrorist groups. The law resulted in the banning of Herri Batasuna and its successor parties unless they explicitly condemned terrorist actions and, at times, imprisoning or trying some of its leaders who have been indicted for cooperation with ETA.",
"title": "Government response"
},
{
"paragraph_id": 101,
"text": "Judge Baltasar Garzón initiated a judicial procedure (coded as 18/98), aimed towards the support structure of ETA. This procedure started in 1998 with the preventive closure of the newspaper Egin (and its associated radio-station Egin Irratia), accused of being linked to ETA, and temporary imprisoning the editor of its \"investigative unit\", Pepe Rei, under similar accusations. In August 1999 Judge Baltasar Garzón authorized the reopening of the newspaper and the radio, but they could not reopen due to economic difficulties.",
"title": "Government response"
},
{
"paragraph_id": 102,
"text": "Judicial procedure 18/98 has many ramifications, including the following:",
"title": "Government response"
},
{
"paragraph_id": 103,
"text": "In 2007, indicted members of the youth movements Haika, Segi and Jarrai were found guilty of a crime of connivance with terrorism.",
"title": "Government response"
},
{
"paragraph_id": 104,
"text": "In May 2008, leading ETA figures were arrested in Bordeaux, France. Francisco Javier López Peña, also known as 'Thierry,' had been on the run for twenty years before his arrest. A final total of arrests brought in six people, including ETA members and supporters, including the ex-Mayor of Andoain, José Antonio Barandiarán, who is rumoured to have led police to 'Thierry'. The Spanish Interior Ministry claimed the relevance of the arrests would come in time with the investigation. Furthermore, the Interior Minister said that those members of ETA now arrested had ordered the latest attacks and that senior ETA member Francisco Javier López Peña was \"not just another arrest because he is, in all probability, the man who has most political and military weight in the terrorist group.\"",
"title": "Government response"
},
{
"paragraph_id": 105,
"text": "After Lopez Pena's arrest, along with the Basque referendum being put on hold, police work has been on the rise. On 22 July 2008, Spanish police dismantled the most active cell of ETA by detaining nine suspected members of the group. Interior Minister Alfredo Perez Rubalcaba said about the arrests: \"We can't say this is the only ETA unit but it was the most active, most dynamic and of course the most wanted one.\" Four days later French police also arrested two suspects believed to be tied to the same active cell. The two suspects were: Asier Eceiza, considered a top aide to a senior ETA operative still sought by police, and Olga Comes, whom authorities have linked to the ETA suspects.",
"title": "Government response"
},
{
"paragraph_id": 106,
"text": "The European Union and the United States listed ETA as a terrorist group in their relevant watch lists. ETA has been a Proscribed Organisation in the United Kingdom under the Terrorism Act 2000 since 29 March 2001. The Canadian Parliament listed ETA as a terrorist group in 2003.",
"title": "Government response"
},
{
"paragraph_id": 107,
"text": "France and Spain have often shown co-operation in the fight against ETA, after France's lack of co-operation during the Franco era. In late 2007, two Spanish guards were shot to death in France when on a joint operation with their French counterparts. Furthermore, in May 2008, the arrests of four people in Bordeaux led to a breakthrough against ETA, according to the Spanish Interior Ministry.",
"title": "Government response"
},
{
"paragraph_id": 108,
"text": "In 2008, as ETA activity increased, France increased its pressure on ETA by arresting more ETA suspects, including Unai Fano, María Lizarraga, and Esteban Murillo Zubiri in Bidarrain. He had been wanted by the Spanish authorities since 2007 when a Europol arrest warrant was issued against him. French judicial authorities had already ordered that he be held in prison on remand.",
"title": "Government response"
},
{
"paragraph_id": 109,
"text": "Spain has also sought cooperation from the United Kingdom in dealing with ETA-IRA ties. In 2008, this came to light after Iñaki de Juana Chaos, whose release from prison was cancelled on appeal, had moved to Belfast. He was thought to be staying at an IRA safe house while being sought by the Spanish authorities. Interpol notified the judge, Eloy Velasco, that he was in either the Republic of Ireland or Northern Ireland.",
"title": "Government response"
}
]
| ETA, an acronym for Euskadi Ta Askatasuna, was an armed Basque nationalist and far-left separatist organization in the Basque Country between 1959 and 2018, with its goal being independence for the region. The group was founded in 1959 during the era of Francoist Spain, and later evolved from a pacifist group promoting traditional Basque culture to a violent paramilitary group. It engaged in a campaign of bombings, assassinations, and kidnappings throughout Spain and especially the Southern Basque Country against the regime, which was highly centralised and hostile to the expression of non-Castilian minority identities. ETA was the main group within the Basque National Liberation Movement and was the most important Basque participant in the Basque conflict. ETA's motto was Bietan jarrai, referring to the two figures in its symbol, a snake wrapped around an axe. Between 1968 and 2010, ETA killed 829 people and injured more than 22,000. ETA was classified as a terrorist group by Spain, France, the United Kingdom, the United States, Canada, and the European Union. This convention was followed by a plurality of domestic and international media, which also referred to the group as terrorists. As of 2019, there were more than 260 imprisoned former members of the group in Spain, France, and other countries. ETA declared ceasefires in 1989, 1996, 1998 and 2006. On 5 September 2010, ETA declared a new ceasefire that remained in force, and on 20 October 2011, ETA announced a "definitive cessation of its armed activity". On 24 November 2012, it was reported that the group was ready to negotiate a "definitive end" to its operations and disband completely. The group announced on 7 April 2017 that it had given up all its weapons and explosives. On 2 May 2018, ETA made public a letter dated 16 April 2018 according to which it had "completely dissolved all its structures and ended its political initiative". | 2001-10-13T07:20:58Z | 2023-12-31T03:51:59Z | [
"Template:As of",
"Template:According to whom",
"Template:Ill",
"Template:Citation",
"Template:Pp-move-indef",
"Template:Use dmy dates",
"Template:Authority control",
"Template:Short description",
"Template:Notelist",
"Template:Webarchive",
"Template:Cite encyclopedia",
"Template:Cite AV media",
"Template:Small",
"Template:Refbegin",
"Template:Lang",
"Template:Cite web",
"Template:Main",
"Template:Dead link",
"Template:Infobox war faction",
"Template:Update inline",
"Template:Cite book",
"Template:Cite news",
"Template:Commons category-inline",
"Template:Basque Conflict",
"Template:See also",
"Template:Clarify",
"Template:In lang",
"Template:Cite act",
"Template:Cbignore",
"Template:IMDb title",
"Template:Reflist",
"Template:Cite magazine",
"Template:Refend",
"Template:Efn",
"Template:Citation needed",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/ETA_(separatist_group) |
9,927 | Endomembrane system | The endomembrane system is composed of the different membranes (endomembranes) that are suspended in the cytoplasm within a eukaryotic cell. These membranes divide the cell into functional and structural compartments, or organelles. In eukaryotes the organelles of the endomembrane system include: the nuclear membrane, the endoplasmic reticulum, the Golgi apparatus, lysosomes, vesicles, endosomes, and plasma (cell) membrane among others. The system is defined more accurately as the set of membranes that forms a single functional and developmental unit, either being connected directly, or exchanging material through vesicle transport. Importantly, the endomembrane system does not include the membranes of plastids or mitochondria, but might have evolved partially from the actions of the latter (see below).
The nuclear membrane contains a lipid bilayer that encompasses the contents of the nucleus. The endoplasmic reticulum (ER) is a synthesis and transport organelle that branches into the cytoplasm in plant and animal cells. The Golgi apparatus is a series of multiple compartments where molecules are packaged for delivery to other cell components or for secretion from the cell. Vacuoles, which are found in both plant and animal cells (though much bigger in plant cells), are responsible for maintaining the shape and structure of the cell as well as storing waste products. A vesicle is a relatively small, membrane-enclosed sac that stores or transports substances. The cell membrane is a protective barrier that regulates what enters and leaves the cell. There is also an organelle known as the Spitzenkörper that is only found in fungi, and is connected with hyphal tip growth.
In prokaryotes endomembranes are rare, although in many photosynthetic bacteria the plasma membrane is highly folded and most of the cell cytoplasm is filled with layers of light-gathering membrane. These light-gathering membranes may even form enclosed structures called chlorosomes in green sulfur bacteria. Another example is the complex "pepin" system of Thiomargarita species, especially T. magnifica.
The organelles of the endomembrane system are related through direct contact or by the transfer of membrane segments as vesicles. Despite these relationships, the various membranes are not identical in structure and function. The thickness, molecular composition, and metabolic behavior of a membrane are not fixed, they may be modified several times during the membrane's life. One unifying characteristic the membranes share is a lipid bilayer, with proteins attached to either side or traversing them.
Most lipids are synthesized in yeast either in the endoplasmic reticulum, lipid particles, or the mitochondrion, with little or no lipid synthesis occurring in the plasma membrane or nuclear membrane. Sphingolipid biosynthesis begins in the endoplasmic reticulum, but is completed in the Golgi apparatus. The situation is similar in mammals, with the exception of the first few steps in ether lipid biosynthesis, which occur in peroxisomes. The various membranes that enclose the other subcellular organelles must therefore be constructed by transfer of lipids from these sites of synthesis. However, although it is clear that lipid transport is a central process in organelle biogenesis, the mechanisms by which lipids are transported through cells remain poorly understood.
The first proposal that the membranes within cells form a single system that exchanges material between its components was by Morré and Mollenhauer in 1974. This proposal was made as a way of explaining how the various lipid membranes are assembled in the cell, with these membranes being assembled through lipid flow from the sites of lipid synthesis. The idea of lipid flow through a continuous system of membranes and vesicles was an alternative to the various membranes being independent entities that are formed from transport of free lipid components, such as fatty acids and sterols, through the cytosol. Importantly, the transport of lipids through the cytosol and lipid flow through a continuous endomembrane system are not mutually exclusive processes and both may occur in cells.
The nuclear envelope surrounds the nucleus, separating its contents from the cytoplasm. It has two membranes, each a lipid bilayer with associated proteins. The outer nuclear membrane is continuous with the rough endoplasmic reticulum membrane, and like that structure, features ribosomes attached to the surface. The outer membrane is also continuous with the inner nuclear membrane since the two layers are fused together at numerous tiny holes called nuclear pores that perforate the nuclear envelope. These pores are about 120 nm in diameter and regulate the passage of molecules between the nucleus and cytoplasm, permitting some to pass through the membrane, but not others. Since the nuclear pores are located in an area of high traffic, they play an important role in cell physiology. The space between the outer and inner membranes is called the perinuclear space and is joined with the lumen of the rough ER.
The nuclear envelope's structure is determined by a network of intermediate filaments (protein filaments). This network is organized into a mesh-like lining called the nuclear lamina, which binds to chromatin, integral membrane proteins, and other nuclear components along the inner surface of the nucleus. The nuclear lamina is thought to help materials inside the nucleus reach the nuclear pores and to assist in the disintegration of the nuclear envelope during mitosis and its reassembly at the end of the process.
The nuclear pores are highly efficient at selectively allowing the passage of materials to and from the nucleus, because the nuclear envelope has a considerable amount of traffic. RNA and ribosomal subunits must be continually transferred from the nucleus to the cytoplasm. Histones, gene regulatory proteins, DNA and RNA polymerases, and other substances essential for nuclear activities must be imported from the cytoplasm. The nuclear envelope of a typical mammalian cell contains 3000–4000 pore complexes. If the cell is synthesizing DNA each pore complex needs to transport about 100 histone molecules per minute. If the cell is growing rapidly, each complex also needs to transport about 6 newly assembled large and small ribosomal subunits per minute from the nucleus to the cytosol, where they are used to synthesize proteins.
The endoplasmic reticulum (ER) is a membranous synthesis and transport organelle that is an extension of the nuclear envelope. More than half the total membrane in eukaryotic cells is accounted for by the ER. The ER is made up of flattened sacs and branching tubules that are thought to interconnect, so that the ER membrane forms a continuous sheet enclosing a single internal space. This highly convoluted space is called the ER lumen and is also referred to as the ER cisternal space. The lumen takes up about ten percent of the entire cell volume. The endoplasmic reticulum membrane allows molecules to be selectively transferred between the lumen and the cytoplasm, and since it is connected to the nuclear envelope, it provides a channel between the nucleus and the cytoplasm.
The ER has a central role in producing, processing, and transporting biochemical compounds for use inside and outside of the cell. Its membrane is the site of production of all the transmembrane proteins and lipids for most of the cell's organelles, including the ER itself, the Golgi apparatus, lysosomes, endosomes, mitochondria, peroxisomes, secretory vesicles, and the plasma membrane. Furthermore, almost all of the proteins that will exit the cell, plus those destined for the lumen of the ER, Golgi apparatus, or lysosomes, are originally delivered to the ER lumen. Consequently, many of the proteins found in the cisternal space of the endoplasmic reticulum lumen are there only temporarily as they pass on their way to other locations. Other proteins, however, constantly remain in the lumen and are known as endoplasmic reticulum resident proteins. These special proteins contain a specialized retention signal made up of a specific sequence of amino acids that enables them to be retained by the organelle. An example of an important endoplasmic reticulum resident protein is the chaperone protein known as BiP which identifies other proteins that have been improperly built or processed and keeps them from being sent to their final destinations.
The ER is involved in cotranslational sorting of proteins. A polypeptide which contains an ER signal sequence is recognised by the signal recognition particle which halts the production of the protein. The SRP transports the nascent protein to the ER membrane where it is released through a membrane channel and translation resumes.
There are two distinct, though connected, regions of ER that differ in structure and function: smooth ER and rough ER. The rough endoplasmic reticulum is so named because the cytoplasmic surface is covered with ribosomes, giving it a bumpy appearance when viewed through an electron microscope. The smooth ER appears smooth since its cytoplasmic surface lacks ribosomes.
In the great majority of cells, smooth ER regions are scarce and are often partly smooth and partly rough. They are sometimes called transitional ER because they contain ER exit sites from which transport vesicles carrying newly synthesized proteins and lipids bud off for transport to the Golgi apparatus. In certain specialized cells, however, the smooth ER is abundant and has additional functions. The smooth ER of these specialized cells functions in diverse metabolic processes, including synthesis of lipids, metabolism of carbohydrates, and detoxification of drugs and poisons.
Enzymes of the smooth ER are vital to the synthesis of lipids, including oils, phospholipids, and steroids. Sex hormones of vertebrates and the steroid hormones secreted by the adrenal glands are among the steroids produced by the smooth ER in animal cells. The cells that synthesize these hormones are rich in smooth ER.
Liver cells are another example of specialized cells that contain an abundance of smooth ER. These cells provide an example of the role of smooth ER in carbohydrate metabolism. Liver cells store carbohydrates in the form of glycogen. The breakdown of glycogen eventually leads to the release of glucose from the liver cells, which is important in the regulation of sugar concentration in the blood. However, the primary product of glycogen breakdown is glucose-1-phosphate. This is converted to glucose-6-phosphate and then an enzyme of the liver cell's smooth ER removes the phosphate from the glucose, so that it can then leave the cell.
Enzymes of the smooth ER can also help detoxify drugs and poisons. Detoxification usually involves the addition of a hydroxyl group to a drug, making the drug more soluble and thus easier to purge from the body. One extensively studied detoxification reaction is carried out by the cytochrome P450 family of enzymes, which catalyze oxidation reactions on water-insoluble drugs or metabolites that would otherwise accumulate to toxic levels in cell membrane.
In muscle cells, a specialized smooth ER (sarcoplasmic reticulum) forms a membranous compartment (cisternal space) into which calcium ions are pumped. When a muscle cell becomes stimulated by a nerve impulse, calcium goes back across this membrane into the cytosol and generates the contraction of the muscle cell.
Many types of cells export proteins produced by ribosomes attached to the rough ER. The ribosomes assemble amino acids into protein units, which are carried into the rough ER for further adjustments. These proteins may be either transmembrane proteins, which become embedded in the membrane of the endoplasmic reticulum, or water-soluble proteins, which are able to pass through the membrane into the lumen. Those that reach the inside of the endoplasmic reticulum are folded into the correct three-dimensional conformation. Chemicals, such as carbohydrates or sugars, are added, then the endoplasmic reticulum either transports the completed proteins, called secretory proteins, to areas of the cell where they are needed, or they are sent to the Golgi apparatus for further processing and modification.
Once secretory proteins are formed, the ER membrane separates them from the proteins that will remain in the cytosol. Secretory proteins depart from the ER enfolded in the membranes of vesicles that bud like bubbles from the transitional ER. These vesicles in transit to another part of the cell are called transport vesicles. An alternative mechanism for transport of lipids and proteins out of the ER are through lipid transfer proteins at regions called membrane contact sites where the ER becomes closely and stably associated with the membranes of other organelles, such as the plasma membrane, Golgi or lysosomes.
In addition to making secretory proteins, the rough ER makes membrane that grows in place through the addition of proteins and phospholipids. As polypeptides intended to be membrane proteins grow from the ribosomes, they are inserted into the ER membrane itself and are kept there by their hydrophobic portions. The rough ER also produces its own membrane phospholipids; enzymes built into the ER membrane assemble phospholipids. The ER membrane expands and can be transferred by transport vesicles to other components of the endomembrane system.
The Golgi apparatus (also known as the Golgi body and the Golgi complex) is composed of separate sacs called cisternae. Its shape is similar to a stack of pancakes. The number of these stacks varies with the specific function of the cell. The Golgi apparatus is used by the cell for further protein modification. The section of the Golgi apparatus that receives the vesicles from the ER is known as the cis face, and is usually near the ER. The opposite end of the Golgi apparatus is called the trans face, this is where the modified compounds leave. The trans face is usually facing the plasma membrane, which is where most of the substances the Golgi apparatus modifies are sent.
Vesicles sent off by the ER containing proteins are further altered at the Golgi apparatus and then prepared for secretion from the cell or transport to other parts of the cell. Various things can happen to the proteins on their journey through the enzyme covered space of the Golgi apparatus. The modification and synthesis of the carbohydrate portions of glycoproteins is common in protein processing. The Golgi apparatus removes and substitutes sugar monomers, producing a large variety of oligosaccharides. In addition to modifying proteins, the Golgi also manufactures macromolecules itself. In plant cells, the Golgi produces pectins and other polysaccharides needed by the plant structure.
Once the modification process is completed, the Golgi apparatus sorts the products of its processing and sends them to various parts of the cell. Molecular identification labels or tags are added by the Golgi enzymes to help with this. After everything is organized, the Golgi apparatus sends off its products by budding vesicles from its trans face.
Vacuoles, like vesicles, are membrane-bound sacs within the cell. They are larger than vesicles and their specific function varies. The operations of vacuoles are different for plant and animal vacuoles.
In plant cells, vacuoles cover anywhere from 30% to 90% of the total cell volume. Most mature plant cells contain one large central vacuole encompassed by a membrane called the tonoplast. Vacuoles of plant cells act as storage compartments for the nutrients and waste of a cell. The solution that these molecules are stored in is called the cell sap. Pigments that color the cell are sometimes located in the cell sap. Vacuoles can also increase the size of the cell, which elongates as water is added, and they control the turgor pressure (the osmotic pressure that keeps the cell wall from caving in). Like the lysosomes of animal cells, vacuoles have an acidic pH and contain hydrolytic enzymes. The pH of vacuoles enables them to perform homeostatic functions in the cell. For example, when the pH in the cell's environment drops, the H+ ions surging into the cytosol can be transferred to a vacuole in order to keep the cytosol's pH constant.
In animals, vacuoles serve in exocytosis and endocytosis processes. Endocytosis refers to when substances are taken into the cell, whereas for exocytosis substances are moved from the cell into the extracellular space. Material to be taken-in is surrounded by the plasma membrane, and then transferred to a vacuole. There are two types of endocytosis, phagocytosis (cell eating) and pinocytosis (cell drinking). In phagocytosis, cells engulf large particles such as bacteria. Pinocytosis is the same process, except the substances being ingested are in the fluid form.
Vesicles are small membrane-enclosed transport units that can transfer molecules between different compartments. Most vesicles transfer the membranes assembled in the endoplasmic reticulum to the Golgi apparatus, and then from the Golgi apparatus to various locations.
There are various types of vesicles each with a different protein configuration. Most are formed from specific regions of membranes. When a vesicle buds off from a membrane it contains specific proteins on its cytosolic surface. Each membrane a vesicle travels to contains a marker on its cytosolic surface. This marker corresponds with the proteins on the vesicle traveling to the membrane. Once the vesicle finds the membrane, they fuse.
There are three well known types of vesicles. They are clathrin-coated, COPI-coated, and COPII-coated vesicles. Each performs different functions in the cell. For example, clathrin-coated vesicles transport substances between the Golgi apparatus and the plasma membrane. COPI- and COPII-coated vesicles are frequently used for transportation between the ER and the Golgi apparatus.
Lysosomes are organelles that contain hydrolytic enzymes that are used for intracellular digestion. The main functions of a lysosome are to process molecules taken in by the cell and to recycle worn out cell parts. The enzymes inside of lysosomes are acid hydrolases which require an acidic environment for optimal performance. Lysosomes provide such an environment by maintaining a pH of 5.0 inside of the organelle. If a lysosome were to rupture, the enzymes released would not be very active because of the cytosol's neutral pH. However, if numerous lysosomes leaked the cell could be destroyed from autodigestion.
Lysosomes carry out intracellular digestion, in a process called phagocytosis (from the Greek phagein, to eat and kytos, vessel, referring here to the cell), by fusing with a vacuole and releasing their enzymes into the vacuole. Through this process, sugars, amino acids, and other monomers pass into the cytosol and become nutrients for the cell. Lysosomes also use their hydrolytic enzymes to recycle the cell's obsolete organelles in a process called autophagy. The lysosome engulfs another organelle and uses its enzymes to take apart the ingested material. The resulting organic monomers are then returned to the cytosol for reuse. The last function of a lysosome is to digest the cell itself through autolysis.
The spitzenkörper is a component of the endomembrane system found only in fungi, and is associated with hyphal tip growth. It is a phase-dark body that is composed of an aggregation of membrane-bound vesicles containing cell wall components, serving as a point of assemblage and release of such components intermediate between the Golgi and the cell membrane. The spitzenkörper is motile and generates new hyphal tip growth as it moves forward.
The plasma membrane is a phospholipid bilayer membrane that separates the cell from its environment and regulates the transport of molecules and signals into and out of the cell. Embedded in the membrane are proteins that perform the functions of the plasma membrane. The plasma membrane is not a fixed or rigid structure, the molecules that compose the membrane are capable of lateral movement. This movement and the multiple components of the membrane are why it is referred to as a fluid mosaic. Smaller molecules such as carbon dioxide, water, and oxygen can pass through the plasma membrane freely by diffusion or osmosis. Larger molecules needed by the cell are assisted by proteins through active transport.
The plasma membrane of a cell has multiple functions. These include transporting nutrients into the cell, allowing waste to leave, preventing materials from entering the cell, averting needed materials from leaving the cell, maintaining the pH of the cytosol, and preserving the osmotic pressure of the cytosol. Transport proteins which allow some materials to pass through but not others are used for these functions. These proteins use ATP hydrolysis to pump materials against their concentration gradients.
In addition to these universal functions, the plasma membrane has a more specific role in multicellular organisms. Glycoproteins on the membrane assist the cell in recognizing other cells, in order to exchange metabolites and form tissues. Other proteins on the plasma membrane allow attachment to the cytoskeleton and extracellular matrix; a function that maintains cell shape and fixes the location of membrane proteins. Enzymes that catalyze reactions are also found on the plasma membrane. Receptor proteins on the membrane have a shape that matches with a chemical messenger, resulting in various cellular responses.
The origin of the endomembrane system is linked to the origin of eukaryotes themselves, and the origin of eukaryotes to the endosymbiotic origin of mitochondria. Many models have been put forward to explain the origin of the endomembrane system. The most recent concept suggests that the endomembrane system evolved from outer membrane vesicles secreted by the endosymbiotic mitochondrion, which became enclosed within infoldings of the host prokaryote (themselves a result of the ingestion of the endosymbiont). This OMV (outer membrane vesicle)-based model for the origin of the endomembrane system is currently the one that requires the fewest novel inventions at eukaryote origin and explains the many connections of mitochondria with other compartments of the cell. Currently, this "inside-out" hypothesis (which states that the alphaproteobacteria, the ancestral mitochondria, were engulfed by the blebs of an asgardarchaeon, and that the blebs later fused, leaving infoldings that would eventually become the endomembrane system) is favored over the outside-in one (which suggested that the endomembrane system arose from infoldings of the archaeal membrane). | [
{
"paragraph_id": 0,
"text": "The endomembrane system is composed of the different membranes (endomembranes) that are suspended in the cytoplasm within a eukaryotic cell. These membranes divide the cell into functional and structural compartments, or organelles. In eukaryotes the organelles of the endomembrane system include: the nuclear membrane, the endoplasmic reticulum, the Golgi apparatus, lysosomes, vesicles, endosomes, and plasma (cell) membrane among others. The system is defined more accurately as the set of membranes that forms a single functional and developmental unit, either being connected directly, or exchanging material through vesicle transport. Importantly, the endomembrane system does not include the membranes of plastids or mitochondria, but might have evolved partially from the actions of the latter (see below).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The nuclear membrane contains a lipid bilayer that encompasses the contents of the nucleus. The endoplasmic reticulum (ER) is a synthesis, and transport organelle that branches into the cytoplasm in plant and animal cells. The Golgi apparatus is a series of multiple compartments where molecules are packaged for delivery to other cell components or for secretion from the cell. Vacuoles, which are found in both plant and animal cells (though much bigger in plant cells), are responsible for maintaining the shape and structure of the cell as well as storing waste products. A vesicle is a relatively small, membrane-enclosed sac that stores or transports substances. The cell membrane is a protective barrier that regulates what enters and leaves the cell. There is also an organelle known as the Spitzenkörper that is only found in fungi, and is connected with hyphal tip growth.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In prokaryotes endomembranes are rare, although in many photosynthetic bacteria the plasma membrane is highly folded and most of the cell cytoplasm is filled with layers of light-gathering membrane. These light-gathering membranes may even form enclosed structures called chlorosomes in green sulfur bacteria. Another example is the complex \"pepin\" system of Thiomargarita species, especially T. magnifica.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The organelles of the endomembrane system are related through direct contact or by the transfer of membrane segments as vesicles. Despite these relationships, the various membranes are not identical in structure and function. The thickness, molecular composition, and metabolic behavior of a membrane are not fixed, they may be modified several times during the membrane's life. One unifying characteristic the membranes share is a lipid bilayer, with proteins attached to either side or traversing them.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Most lipids are synthesized in yeast either in the endoplasmic reticulum, lipid particles, or the mitochondrion, with little or no lipid synthesis occurring in the plasma membrane or nuclear membrane. Sphingolipid biosynthesis begins in the endoplasmic reticulum, but is completed in the Golgi apparatus. The situation is similar in mammals, with the exception of the first few steps in ether lipid biosynthesis, which occur in peroxisomes. The various membranes that enclose the other subcellular organelles must therefore be constructed by transfer of lipids from these sites of synthesis. However, although it is clear that lipid transport is a central process in organelle biogenesis, the mechanisms by which lipids are transported through cells remain poorly understood.",
"title": "History of the concept"
},
{
"paragraph_id": 5,
"text": "The first proposal that the membranes within cells form a single system that exchanges material between its components was by Morré and Mollenhauer in 1974. This proposal was made as a way of explaining how the various lipid membranes are assembled in the cell, with these membranes being assembled through lipid flow from the sites of lipid synthesis. The idea of lipid flow through a continuous system of membranes and vesicles was an alternative to the various membranes being independent entities that are formed from transport of free lipid components, such as fatty acids and sterols, through the cytosol. Importantly, the transport of lipids through the cytosol and lipid flow through a continuous endomembrane system are not mutually exclusive processes and both may occur in cells.",
"title": "History of the concept"
},
{
"paragraph_id": 6,
"text": "The nuclear envelope surrounds the nucleus, separating its contents from the cytoplasm. It has two membranes, each a lipid bilayer with associated proteins. The outer nuclear membrane is continuous with the rough endoplasmic reticulum membrane, and like that structure, features ribosomes attached to the surface. The outer membrane is also continuous with the inner nuclear membrane since the two layers are fused together at numerous tiny holes called nuclear pores that perforate the nuclear envelope. These pores are about 120 nm in diameter and regulate the passage of molecules between the nucleus and cytoplasm, permitting some to pass through the membrane, but not others. Since the nuclear pores are located in an area of high traffic, they play an important role in cell physiology. The space between the outer and inner membranes is called the perinuclear space and is joined with the lumen of the rough ER.",
"title": "Components of the system"
},
{
"paragraph_id": 7,
"text": "The nuclear envelope's structure is determined by a network of intermediate filaments (protein filaments). This network is organized into lining similar to mesh called the nuclear lamina, which binds to chromatin, integral membrane proteins, and other nuclear components along the inner surface of the nucleus. The nuclear lamina is thought to help materials inside the nucleus reach the nuclear pores and in the disintegration of the nuclear envelope during mitosis and its reassembly at the end of the process.",
"title": "Components of the system"
},
{
"paragraph_id": 8,
"text": "The nuclear pores are highly efficient at selectively allowing the passage of materials to and from the nucleus, because the nuclear envelope has a considerable amount of traffic. RNA and ribosomal subunits must be continually transferred from the nucleus to the cytoplasm. Histones, gene regulatory proteins, DNA and RNA polymerases, and other substances essential for nuclear activities must be imported from the cytoplasm. The nuclear envelope of a typical mammalian cell contains 3000–4000 pore complexes. If the cell is synthesizing DNA each pore complex needs to transport about 100 histone molecules per minute. If the cell is growing rapidly, each complex also needs to transport about 6 newly assembled large and small ribosomal subunits per minute from the nucleus to the cytosol, where they are used to synthesize proteins.",
"title": "Components of the system"
},
{
"paragraph_id": 9,
"text": "The endoplasmic reticulum (ER) is a membranous synthesis and transport organelle that is an extension of the nuclear envelope. More than half the total membrane in eukaryotic cells is accounted for by the ER. The ER is made up of flattened sacs and branching tubules that are thought to interconnect, so that the ER membrane forms a continuous sheet enclosing a single internal space. This highly convoluted space is called the ER lumen and is also referred to as the ER cisternal space. The lumen takes up about ten percent of the entire cell volume. The endoplasmic reticulum membrane allows molecules to be selectively transferred between the lumen and the cytoplasm, and since it is connected to the nuclear envelope, it provides a channel between the nucleus and the cytoplasm.",
"title": "Components of the system"
},
{
"paragraph_id": 10,
"text": "The ER has a central role in producing, processing, and transporting biochemical compounds for use inside and outside of the cell. Its membrane is the site of production of all the transmembrane proteins and lipids for most of the cell's organelles, including the ER itself, the Golgi apparatus, lysosomes, endosomes, mitochondria, peroxisomes, secretory vesicles, and the plasma membrane. Furthermore, almost all of the proteins that will exit the cell, plus those destined for the lumen of the ER, Golgi apparatus, or lysosomes, are originally delivered to the ER lumen. Consequently, many of the proteins found in the cisternal space of the endoplasmic reticulum lumen are there only temporarily as they pass on their way to other locations. Other proteins, however, constantly remain in the lumen and are known as endoplasmic reticulum resident proteins. These special proteins contain a specialized retention signal made up of a specific sequence of amino acids that enables them to be retained by the organelle. An example of an important endoplasmic reticulum resident protein is the chaperone protein known as BiP which identifies other proteins that have been improperly built or processed and keeps them from being sent to their final destinations.",
"title": "Components of the system"
},
{
"paragraph_id": 11,
"text": "The ER is involved in cotranslational sorting of proteins. A polypeptide which contains an ER signal sequence is recognised by the signal recognition particle which halts the production of the protein. The SRP transports the nascent protein to the ER membrane where it is released through a membrane channel and translation resumes.",
"title": "Components of the system"
},
{
"paragraph_id": 12,
"text": "There are two distinct, though connected, regions of ER that differ in structure and function: smooth ER and rough ER. The rough endoplasmic reticulum is so named because the cytoplasmic surface is covered with ribosomes, giving it a bumpy appearance when viewed through an electron microscope. The smooth ER appears smooth since its cytoplasmic surface lacks ribosomes.",
"title": "Components of the system"
},
{
"paragraph_id": 13,
"text": "In the great majority of cells, smooth ER regions are scarce and are often partly smooth and partly rough. They are sometimes called transitional ER because they contain ER exit sites from which transport vesicles carrying newly synthesized proteins and lipids bud off for transport to the Golgi apparatus. In certain specialized cells, however, the smooth ER is abundant and has additional functions. The smooth ER of these specialized cells functions in diverse metabolic processes, including synthesis of lipids, metabolism of carbohydrates, and detoxification of drugs and poisons.",
"title": "Components of the system"
},
{
"paragraph_id": 14,
"text": "Enzymes of the smooth ER are vital to the synthesis of lipids, including oils, phospholipids, and steroids. Sex hormones of vertebrates and the steroid hormones secreted by the adrenal glands are among the steroids produced by the smooth ER in animal cells. The cells that synthesize these hormones are rich in smooth ER.",
"title": "Components of the system"
},
{
"paragraph_id": 15,
"text": "Liver cells are another example of specialized cells that contain an abundance of smooth ER. These cells provide an example of the role of smooth ER in carbohydrate metabolism. Liver cells store carbohydrates in the form of glycogen. The breakdown of glycogen eventually leads to the release of glucose from the liver cells, which is important in the regulation of sugar concentration in the blood. However, the primary product of glycogen breakdown is glucose-1-phosphate. This is converted to glucose-6-phosphate and then an enzyme of the liver cell's smooth ER removes the phosphate from the glucose, so that it can then leave the cell.",
"title": "Components of the system"
},
{
"paragraph_id": 16,
"text": "Enzymes of the smooth ER can also help detoxify drugs and poisons. Detoxification usually involves the addition of a hydroxyl group to a drug, making the drug more soluble and thus easier to purge from the body. One extensively studied detoxification reaction is carried out by the cytochrome P450 family of enzymes, which catalyze oxidation reactions on water-insoluble drugs or metabolites that would otherwise accumulate to toxic levels in cell membrane.",
"title": "Components of the system"
},
{
"paragraph_id": 17,
"text": "In muscle cells, a specialized smooth ER (sarcoplasmic reticulum) forms a membranous compartment (cisternal space) into which calcium ions are pumped. When a muscle cell becomes stimulated by a nerve impulse, calcium goes back across this membrane into the cytosol and generates the contraction of the muscle cell.",
"title": "Components of the system"
},
{
"paragraph_id": 18,
"text": "Many types of cells export proteins produced by ribosomes attached to the rough ER. The ribosomes assemble amino acids into protein units, which are carried into the rough ER for further adjustments. These proteins may be either transmembrane proteins, which become embedded in the membrane of the endoplasmic reticulum, or water-soluble proteins, which are able to pass through the membrane into the lumen. Those that reach the inside of the endoplasmic reticulum are folded into the correct three-dimensional conformation. Chemicals, such as carbohydrates or sugars, are added, then the endoplasmic reticulum either transports the completed proteins, called secretory proteins, to areas of the cell where they are needed, or they are sent to the Golgi apparatus for further processing and modification.",
"title": "Components of the system"
},
{
"paragraph_id": 19,
"text": "Once secretory proteins are formed, the ER membrane separates them from the proteins that will remain in the cytosol. Secretory proteins depart from the ER enfolded in the membranes of vesicles that bud like bubbles from the transitional ER. These vesicles in transit to another part of the cell are called transport vesicles. An alternative mechanism for transport of lipids and proteins out of the ER are through lipid transfer proteins at regions called membrane contact sites where the ER becomes closely and stably associated with the membranes of other organelles, such as the plasma membrane, Golgi or lysosomes.",
"title": "Components of the system"
},
{
"paragraph_id": 20,
"text": "In addition to making secretory proteins, the rough ER makes membranes that grows in place from the addition of proteins and phospholipids. As polypeptides intended to be membrane proteins grow from the ribosomes, they are inserted into the ER membrane itself and are kept there by their hydrophobic portions. The rough ER also produces its own membrane phospholipids; enzymes built into the ER membrane assemble phospholipids. The ER membrane expands and can be transferred by transport vesicles to other components of the endomembrane system.",
"title": "Components of the system"
},
{
"paragraph_id": 21,
"text": "The Golgi apparatus (also known as the Golgi body and the Golgi complex) is composed of separate sacs called cisternae. Its shape is similar to a stack of pancakes. The number of these stacks varies with the specific function of the cell. The Golgi apparatus is used by the cell for further protein modification. The section of the Golgi apparatus that receives the vesicles from the ER is known as the cis face, and is usually near the ER. The opposite end of the Golgi apparatus is called the trans face, this is where the modified compounds leave. The trans face is usually facing the plasma membrane, which is where most of the substances the Golgi apparatus modifies are sent.",
"title": "Components of the system"
},
{
"paragraph_id": 22,
"text": "Vesicles sent off by the ER containing proteins are further altered at the Golgi apparatus and then prepared for secretion from the cell or transport to other parts of the cell. Various things can happen to the proteins on their journey through the enzyme covered space of the Golgi apparatus. The modification and synthesis of the carbohydrate portions of glycoproteins is common in protein processing. The Golgi apparatus removes and substitutes sugar monomers, producing a large variety of oligosaccharides. In addition to modifying proteins, the Golgi also manufactures macromolecules itself. In plant cells, the Golgi produces pectins and other polysaccharides needed by the plant structure.",
"title": "Components of the system"
},
{
"paragraph_id": 23,
"text": "Once the modification process is completed, the Golgi apparatus sorts the products of its processing and sends them to various parts of the cell. Molecular identification labels or tags are added by the Golgi enzymes to help with this. After everything is organized, the Golgi apparatus sends off its products by budding vesicles from its trans face.",
"title": "Components of the system"
},
{
"paragraph_id": 24,
"text": "Vacuoles, like vesicles, are membrane-bound sacs within the cell. They are larger than vesicles and their specific function varies. The operations of vacuoles are different for plant and animal vacuoles.",
"title": "Components of the system"
},
{
"paragraph_id": 25,
"text": "In plant cells, vacuoles cover anywhere from 30% to 90% of the total cell volume. Most mature plant cells contain one large central vacuole encompassed by a membrane called the tonoplast. Vacuoles of plant cells act as storage compartments for the nutrients and waste of a cell. The solution that these molecules are stored in is called the cell sap. Pigments that color the cell are sometime located in the cell sap. Vacuoles can also increase the size of the cell, which elongates as water is added, and they control the turgor pressure (the osmotic pressure that keeps the cell wall from caving in). Like lysosomes of animal cells, vacuoles have an acidic pH and contain hydrolytic enzymes. The pH of vacuoles enables them to perform homeostatic procedures in the cell. For example, when the pH in the cells environment drops, the H ions surging into the cytosol can be transferred to a vacuole in order to keep the cytosol's pH constant.",
"title": "Components of the system"
},
{
"paragraph_id": 26,
"text": "In animals, vacuoles serve in exocytosis and endocytosis processes. Endocytosis refers to when substances are taken into the cell, whereas for exocytosis substances are moved from the cell into the extracellular space. Material to be taken-in is surrounded by the plasma membrane, and then transferred to a vacuole. There are two types of endocytosis, phagocytosis (cell eating) and pinocytosis (cell drinking). In phagocytosis, cells engulf large particles such as bacteria. Pinocytosis is the same process, except the substances being ingested are in the fluid form.",
"title": "Components of the system"
},
{
"paragraph_id": 27,
"text": "Vesicles are small membrane-enclosed transport units that can transfer molecules between different compartments. Most vesicles transfer the membranes assembled in the endoplasmic reticulum to the Golgi apparatus, and then from the Golgi apparatus to various locations.",
"title": "Components of the system"
},
{
"paragraph_id": 28,
"text": "There are various types of vesicles each with a different protein configuration. Most are formed from specific regions of membranes. When a vesicle buds off from a membrane it contains specific proteins on its cytosolic surface. Each membrane a vesicle travels to contains a marker on its cytosolic surface. This marker corresponds with the proteins on the vesicle traveling to the membrane. Once the vesicle finds the membrane, they fuse.",
"title": "Components of the system"
},
{
"paragraph_id": 29,
"text": "There are three well known types of vesicles. They are clathrin-coated, COPI-coated, and COPII-coated vesicles. Each performs different functions in the cell. For example, clathrin-coated vesicles transport substances between the Golgi apparatus and the plasma membrane. COPI- and COPII-coated vesicles are frequently used for transportation between the ER and the Golgi apparatus.",
"title": "Components of the system"
},
{
"paragraph_id": 30,
"text": "Lysosomes are organelles that contain hydrolytic enzymes that are used for intracellular digestion. The main functions of a lysosome are to process molecules taken in by the cell and to recycle worn out cell parts. The enzymes inside of lysosomes are acid hydrolases which require an acidic environment for optimal performance. Lysosomes provide such an environment by maintaining a pH of 5.0 inside of the organelle. If a lysosome were to rupture, the enzymes released would not be very active because of the cytosol's neutral pH. However, if numerous lysosomes leaked the cell could be destroyed from autodigestion.",
"title": "Components of the system"
},
{
"paragraph_id": 31,
"text": "Lysosomes carry out intracellular digestion, in a process called phagocytosis (from the Greek phagein, to eat and kytos, vessel, referring here to the cell), by fusing with a vacuole and releasing their enzymes into the vacuole. Through this process, sugars, amino acids, and other monomers pass into the cytosol and become nutrients for the cell. Lysosomes also use their hydrolytic enzymes to recycle the cell's obsolete organelles in a process called autophagy. The lysosome engulfs another organelle and uses its enzymes to take apart the ingested material. The resulting organic monomers are then returned to the cytosol for reuse. The last function of a lysosome is to digest the cell itself through autolysis.",
"title": "Components of the system"
},
{
"paragraph_id": 32,
"text": "The spitzenkörper is a component of the endomembrane system found only in fungi, and is associated with hyphal tip growth. It is a phase-dark body that is composed of an aggregation of membrane-bound vesicles containing cell wall components, serving as a point of assemblage and release of such components intermediate between the Golgi and the cell membrane. The spitzenkörper is motile and generates new hyphal tip growth as it moves forward.",
"title": "Components of the system"
},
{
"paragraph_id": 33,
"text": "The plasma membrane is a phospholipid bilayer membrane that separates the cell from its environment and regulates the transport of molecules and signals into and out of the cell. Embedded in the membrane are proteins that perform the functions of the plasma membrane. The plasma membrane is not a fixed or rigid structure, the molecules that compose the membrane are capable of lateral movement. This movement and the multiple components of the membrane are why it is referred to as a fluid mosaic. Smaller molecules such as carbon dioxide, water, and oxygen can pass through the plasma membrane freely by diffusion or osmosis. Larger molecules needed by the cell are assisted by proteins through active transport.",
"title": "Components of the system"
},
{
"paragraph_id": 34,
"text": "The plasma membrane of a cell has multiple functions. These include transporting nutrients into the cell, allowing waste to leave, preventing materials from entering the cell, averting needed materials from leaving the cell, maintaining the pH of the cytosol, and preserving the osmotic pressure of the cytosol. Transport proteins which allow some materials to pass through but not others are used for these functions. These proteins use ATP hydrolysis to pump materials against their concentration gradients.",
"title": "Components of the system"
},
{
"paragraph_id": 35,
"text": "In addition to these universal functions, the plasma membrane has a more specific role in multicellular organisms. Glycoproteins on the membrane assist the cell in recognizing other cells, in order to exchange metabolites and form tissues. Other proteins on the plasma membrane allow attachment to the cytoskeleton and extracellular matrix; a function that maintains cell shape and fixes the location of membrane proteins. Enzymes that catalyze reactions are also found on the plasma membrane. Receptor proteins on the membrane have a shape that matches with a chemical messenger, resulting in various cellular responses.",
"title": "Components of the system"
},
{
"paragraph_id": 36,
"text": "The origin of the endomembrane system is linked to the origin of eukaryotes themselves and the origin of eukaryoties to the endosymbiotic origin of mitochondria. Many models have been put forward to explain the origin of the endomembrane system (reviewed in). The most recent concept suggests that the endomembrane system evolved from outer membrane vesicles the endosymbiotic mitochondrion secreted, and got enclosed within infoldings of the host prokaryote (in turn, a result of the ingestion of the endosymbiont). This OMV (outer membrane vesicles)-based model for the origin of the endomembrane system is currently the one that requires the fewest novel inventions at eukaryote origin and explains the many connections of mitochondria with other compartments of the cell. Currently, this \"inside-out\" hypothesis (which states that the alphaproteobacteria, the ancestral mitochondria, were engulfed by the blebs of an asgardarchaeon, and later the blebs fused leaving infoldings which would eventually become the endomembrane system) is favored more than the outside-in one (which suggested that the endomembrane system arose due to infoldings within the archaeal membrane).",
"title": "Evolution"
},
{
"paragraph_id": 37,
"text": "",
"title": "References"
}
]
| The endomembrane system is composed of the different membranes (endomembranes) that are suspended in the cytoplasm within a eukaryotic cell. These membranes divide the cell into functional and structural compartments, or organelles. In eukaryotes the organelles of the endomembrane system include: the nuclear membrane, the endoplasmic reticulum, the Golgi apparatus, lysosomes, vesicles, endosomes, and plasma (cell) membrane among others. The system is defined more accurately as the set of membranes that forms a single functional and developmental unit, either being connected directly, or exchanging material through vesicle transport. Importantly, the endomembrane system does not include the membranes of plastids or mitochondria, but might have evolved partially from the actions of the latter. The nuclear membrane contains a lipid bilayer that encompasses the contents of the nucleus. The endoplasmic reticulum (ER) is a synthesis, and transport organelle that branches into the cytoplasm in plant and animal cells. The Golgi apparatus is a series of multiple compartments where molecules are packaged for delivery to other cell components or for secretion from the cell. Vacuoles, which are found in both plant and animal cells, are responsible for maintaining the shape and structure of the cell as well as storing waste products. A vesicle is a relatively small, membrane-enclosed sac that stores or transports substances. The cell membrane is a protective barrier that regulates what enters and leaves the cell. There is also an organelle known as the Spitzenkörper that is only found in fungi, and is connected with hyphal tip growth. In prokaryotes endomembranes are rare, although in many photosynthetic bacteria the plasma membrane is highly folded and most of the cell cytoplasm is filled with layers of light-gathering membrane. These light-gathering membranes may even form enclosed structures called chlorosomes in green sulfur bacteria. Another example is the complex "pepin" system of Thiomargarita species, especially T. magnifica. The organelles of the endomembrane system are related through direct contact or by the transfer of membrane segments as vesicles. Despite these relationships, the various membranes are not identical in structure and function. The thickness, molecular composition, and metabolic behavior of a membrane are not fixed, they may be modified several times during the membrane's life. One unifying characteristic the membranes share is a lipid bilayer, with proteins attached to either side or traversing them. | 2001-10-13T13:28:01Z | 2023-11-26T02:44:56Z | [
"Template:Cite book",
"Template:Cite web",
"Template:Organelles",
"Template:Good article",
"Template:Nbsp",
"Template:Endomembrane system diagram",
"Template:Main",
"Template:Lang",
"Template:Reflist",
"Template:Cite journal",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Endomembrane_system |
9,928 | Ethnology | Ethnology (from the Greek: ἔθνος, ethnos meaning 'nation') is an academic field and discipline that compares and analyzes the characteristics of different peoples and the relationships between them (compare cultural, social, or sociocultural anthropology).
Compared to ethnography, the study of single groups through direct contact with the culture, ethnology takes the research that ethnographers have compiled and then compares and contrasts different cultures.
The term ethnologia (ethnology) is credited to Adam Franz Kollár (1718-1783) who used and defined it in his Historiae ivrisqve pvblici Regni Vngariae amoenitates published in Vienna in 1783 as: "the science of nations and peoples, or, that study of learned men in which they inquire into the origins, languages, customs, and institutions of various nations, and finally into the fatherland and ancient seats, in order to be able better to judge the nations and peoples in their own times."
Kollár's interest in linguistic and cultural diversity was aroused by the situation in his native multi-ethnic and multilingual Kingdom of Hungary and his roots among its Slovaks, and by the shifts that began to emerge after the gradual retreat of the Ottoman Empire in the more distant Balkans.
Among the goals of ethnology have been the reconstruction of human history, and the formulation of cultural invariants, such as the incest taboo and culture change, and the formulation of generalizations about "human nature", a concept which has been criticized since the 19th century by various philosophers (Hegel, Marx, structuralism, etc.). In some parts of the world, ethnology has developed along independent paths of investigation and pedagogical doctrine, with cultural anthropology becoming dominant especially in the United States, and social anthropology in Great Britain. The distinction between the three terms is increasingly blurry. Ethnology has been considered an academic field since the late 18th century, especially in Europe and is sometimes conceived of as any comparative study of human groups.
The 15th-century exploration of America by European explorers had an important role in formulating new notions of the Occident (the Western world), such as the notion of the "Other". This term was used in conjunction with "savages", who were seen either as brutal barbarians or, alternatively, as "noble savages". Thus, civilization was opposed in a dualist manner to barbarism, a classic opposition constitutive of the even more commonly shared ethnocentrism. The progress of ethnology, for example with Claude Lévi-Strauss's structural anthropology, led to the criticism of conceptions of linear progress, or of the pseudo-opposition between "societies with histories" and "societies without histories", judged too dependent on a limited view of history as constituted by accumulative growth.
Lévi-Strauss often referred to Montaigne's essay on cannibalism as an early example of ethnology. Lévi-Strauss aimed, through a structural method, at discovering universal invariants in human society, chief among which he believed to be the incest taboo. However, the claims of such cultural universalism have been criticized by various 19th- and 20th-century social thinkers, including Marx, Nietzsche, Foucault, Derrida, Althusser, and Deleuze.
The French school of ethnology was particularly significant for the development of the discipline, since the early 1950s. Important figures in this movement have included Lévi-Strauss, Paul Rivet, Marcel Griaule, Germaine Dieterlen, and Jean Rouch.
See: List of scholars of ethnology | [
{
"paragraph_id": 0,
"text": "Ethnology (from the Greek: ἔθνος, ethnos meaning 'nation') is an academic field and discipline that compares and analyzes the characteristics of different peoples and the relationships between them (compare cultural, social, or sociocultural anthropology).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Compared to ethnography, the study of single groups through direct contact with the culture, ethnology takes the research that ethnographers have compiled and then compares and contrasts different cultures.",
"title": "Scientific discipline"
},
{
"paragraph_id": 2,
"text": "The term ethnologia (ethnology) is credited to Adam Franz Kollár (1718-1783) who used and defined it in his Historiae ivrisqve pvblici Regni Vngariae amoenitates published in Vienna in 1783. as: \"the science of nations and peoples, or, that study of learned men in which they inquire into the origins, languages, customs, and institutions of various nations, and finally into the fatherland and ancient seats, in order to be able better to judge the nations and peoples in their own times.\"",
"title": "Scientific discipline"
},
{
"paragraph_id": 3,
"text": "Kollár's interest in linguistic and cultural diversity was aroused by the situation in his native multi-ethnic and multilingual Kingdom of Hungary and his roots among its Slovaks, and by the shifts that began to emerge after the gradual retreat of the Ottoman Empire in the more distant Balkans.",
"title": "Scientific discipline"
},
{
"paragraph_id": 4,
"text": "Among the goals of ethnology have been the reconstruction of human history, and the formulation of cultural invariants, such as the incest taboo and culture change, and the formulation of generalizations about \"human nature\", a concept which has been criticized since the 19th century by various philosophers (Hegel, Marx, structuralism, etc.). In some parts of the world, ethnology has developed along independent paths of investigation and pedagogical doctrine, with cultural anthropology becoming dominant especially in the United States, and social anthropology in Great Britain. The distinction between the three terms is increasingly blurry. Ethnology has been considered an academic field since the late 18th century, especially in Europe and is sometimes conceived of as any comparative study of human groups.",
"title": "Scientific discipline"
},
{
"paragraph_id": 5,
"text": "The 15th-century exploration of America by European explorers had an important role in formulating new notions of the Occident (the Western world), such as the notion of the \"Other\". This term was used in conjunction with \"savages\", which was either seen as a brutal barbarian, or alternatively, as the \"noble savage\". Thus, civilization was opposed in a dualist manner to barbary, a classic opposition constitutive of the even more commonly shared ethnocentrism. The progress of ethnology, for example with Claude Lévi-Strauss's structural anthropology, led to the criticism of conceptions of a linear progress, or the pseudo-opposition between \"societies with histories\" and \"societies without histories\", judged too dependent on a limited view of history as constituted by accumulative growth.",
"title": "Scientific discipline"
},
{
"paragraph_id": 6,
"text": "Lévi-Strauss often referred to Montaigne's essay on cannibalism as an early example of ethnology. Lévi-Strauss aimed, through a structural method, at discovering universal invariants in human society, chief among which he believed to be the incest taboo. However, the claims of such cultural universalism have been criticized by various 19th- and 20th-century social thinkers, including Marx, Nietzsche, Foucault, Derrida, Althusser, and Deleuze.",
"title": "Scientific discipline"
},
{
"paragraph_id": 7,
"text": "The French school of ethnology was particularly significant for the development of the discipline, since the early 1950s. Important figures in this movement have included Lévi-Strauss, Paul Rivet, Marcel Griaule, Germaine Dieterlen, and Jean Rouch.",
"title": "Scientific discipline"
},
{
"paragraph_id": 8,
"text": "See: List of scholars of ethnology",
"title": "Scholars"
}
]
| Ethnology is an academic field and discipline that compares and analyzes the characteristics of different peoples and the relationships between them. | 2001-10-13T18:18:27Z | 2023-10-11T12:06:54Z | [
"Template:Distinguish",
"Template:For",
"Template:Clear",
"Template:Commons category",
"Template:Div col end",
"Template:Short description",
"Template:Lang-grc-gre",
"Template:Wikisource-inline",
"Template:Ethnicity",
"Template:Anthropology",
"Template:Lang",
"Template:Further",
"Template:Div col",
"Template:Reflist",
"Template:Cite web",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Ethnology |
9,929 | Espagnole sauce | Espagnole sauce (French pronunciation: [ɛspaɲɔl] ) is a basic brown sauce, and is one of the mother sauces of classic French cooking. In the early 19th century the chef Antonin Carême included it in his list of the basic sauces of French cooking. In the early 20th century Auguste Escoffier named it as one of the five sauces at the core of France's cuisine.
"Espagnole" is the French for "Spanish", and several suggestions have been advanced to explain why a French sauce is nominally Spanish. In Louis Diat's account, Louis XIII's wife, Anne of Austria – who despite her name was Spanish – introduced cooks from Spain to the kitchens of the French court. Her cooks are said to have improved the French brown sauce by adding tomatoes.
A similar tale refers to the Spanish cooks employed by Louis XIV's wife, Maria Theresa of Spain. Another suggestion is that in the 17th century, Spanish bacon and ham were introduced as the meat for the stock on which the sauce is based, rather than the traditional beef.
The term "sauce espagnole" appears in Vincent La Chapelle's 1733 cookery book Le Cuisinier moderne, but no recipe is given. Antonin Carême printed a detailed recipe in his 1828 book Le Cuisinier parisien. By the middle of the 19th century the sauce was familiar in the English-speaking world: in her Modern Cookery of 1845 Eliza Acton gave two recipes for it, one with added wine and one without. The sauce was included in Auguste Escoffier's 1903 classification of the five mother sauces, on which much French cooking depends.
The recipe given by Carême runs to more than 400 words. He calls for ham, veal, and partridges in the cooking pan, gently braised in water for two hours, after which roux is mixed in and the pan is returned to the stove for a further two hours or more. It is garnished with "parsley, chives, bay leaves, thyme, sweet basil and cloves and parings of mushrooms". Carême is credited with codifying the key sauces – the mother sauces, or in his phrase, the grandes sauces – on which classic French haute cuisine is based. His recipes for velouté, béchamel, allemande, as well as espagnole became standard for French chefs of his day.
Nearly a century after Carême, Auguste Escoffier followed the former's classification of the key sauces, though adding mayonnaise and tomato sauces to the list and removing allemande. His recipe for espagnole, dating from 1903, is briefer than his predecessor's. It calls for brown stock (made from veal, beef and bacon), a brown roux, diced bacon fat, diced carrot, thyme, bay, parsley and butter, simmered for three hours.
Tomato purée is added to the other ingredients in some more recent recipes, including in the catering textbook Practical Cookery by Victor Ceserani and Ronald Kinton.
Sauce espagnole is the basis for many French sauces. They include: | [
{
"paragraph_id": 0,
"text": "Espagnole sauce (French pronunciation: [ɛspaɲɔl] ) is a basic brown sauce, and is one of the mother sauces of classic French cooking. In the early 19th century the chef Antonin Carême included it in his list of the basic sauces of French cooking. In the early 20th century Auguste Escoffier named it as one of the five sauces at the core of France's cuisine.",
"title": ""
},
{
"paragraph_id": 1,
"text": "\"Espagnole\" is the French for \"Spanish\", and several suggestions have been advanced to explain why a French sauce is nominally Spanish. In Louis Diat's account, Louis XIII's wife, Anne of Austria – who despite her name was Spanish – introduced cooks from Spain to the kitchens of the French court. Her cooks are said to have improved the French brown sauce by adding tomatoes.",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "A similar tale refers to the Spanish cooks employed by Louis XIV's wife, Maria Theresa of Spain. Another suggestion is that in the 17th century, Spanish bacon and ham were introduced as the meat for the stock on which the sauce is based, rather than the traditional beef.",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "The term \"sauce espagnole\" appears in Vincent La Chapelle's 1733 cookery book Le Cuisinier moderne, but no recipe is given. Antonin Carême printed a detailed recipe in his 1828 book Le Cuisinier parisien. By the middle of the 19th century the sauce was familiar in the English-speaking world: in her Modern Cookery of 1845 Eliza Acton gave two recipes for it, one with added wine and one without. The sauce was included in Auguste Escoffier's 1903 classification of the five mother sauces, on which much French cooking depends.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The recipe given by Carême runs to more than 400 words. He calls for ham, veal, and partridges in the cooking pan, gently braised in water for two hours, after which roux is mixed in and the pan is returned to the stove for a further two hours or more. It is garnished with \"parsley, chives, bay leaves, thyme, sweet basil and cloves and parings of mushrooms\". Carême is credited with codifying the key sauces – the mother sauces, or in his phrase, the grandes sauces – on which classic French haute cuisine is based. His recipes for velouté, béchamel, allemande, as well as espagnole became standard for French chefs of his day.",
"title": "Ingredients"
},
{
"paragraph_id": 5,
"text": "Nearly a century after Carême, Auguste Escoffier followed the former's classification of the key sauces, though adding mayonnaise and tomato sauces to the list and removing allemande. His recipe for espagnole, dating from 1903, is briefer than his predecessor's. It calls for brown stock (made from veal, beef and bacon), a brown roux, diced bacon fat, diced carrot, thyme, bay, parsley and butter, simmered for three hours.",
"title": "Ingredients"
},
{
"paragraph_id": 6,
"text": "Tomato purée is added to the other ingredients in some more recent recipes, including in the catering textbook Practical Cookery by Victor Ceserani and Ronald Kinton.",
"title": "Ingredients"
},
{
"paragraph_id": 7,
"text": "Sauce espagnole is the basis for many French sauces. They include:",
"title": "Derivatives"
}
]
| Espagnole sauce is a basic brown sauce, and is one of the mother sauces of classic French cooking. In the early 19th century the chef Antonin Carême included it in his list of the basic sauces of French cooking. In the early 20th century Auguste Escoffier named it as one of the five sauces at the core of France's cuisine. | 2001-10-13T21:41:33Z | 2023-12-16T12:59:47Z | [
"Template:Infobox food",
"Template:Reflist",
"Template:Cookbook",
"Template:Brown sauces",
"Template:Short description",
"Template:IPA-fr",
"Template:Cite book",
"Template:French mother sauces",
"Template:Portal bar"
]
| https://en.wikipedia.org/wiki/Espagnole_sauce |
9,931 | Amplifier | An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal (a time-varying voltage or current). It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude (magnitude of the voltage or current) of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one.
An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors.
The first practical device that could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s, when transistors replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications.
The development of audio communication technology in form of the telephone, first patented in 1876, created the need to increase the amplitude of electrical signals to extend the transmission of signals over increasingly long distances. In telegraphy, this problem had been solved with intermediate devices at stations that replenished the dissipated energy by operating a signal recorder and transmitter back-to-back, forming a relay, so that a local energy source at each intermediate station powered the next leg of transmission. For duplex transmission, i.e. sending and receiving in both directions, bi-directional relay repeaters were developed starting with the work of C. F. Varley for telegraphic transmission. Duplex transmission was essential for telephony and the problem was not satisfactorily solved until 1904, when H. E. Shreeve of the American Telephone and Telegraph Company improved existing attempts at constructing a telephone repeater consisting of back-to-back carbon-granule transmitter and electrodynamic receiver pairs. The Shreeve repeater was first tested on a line between Boston and Amesbury, MA, and more refined devices remained in service for some time. After the turn of the century it was found that negative resistance mercury lamps could amplify, and were also tried in repeaters, with little success.
The development of thermionic valves which began around 1902, provided an entirely electronic method of amplifying signals. The first practical version of such devices was the Audion triode, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Since the only previous device which was widely used to strengthen a signal was the relay used in telegraph systems, the amplifying vacuum tube was first called an electron relay. The terms amplifier and amplification, derived from the Latin amplificare, (to enlarge or expand), were first used for this new capability around 1915 when triodes became widespread.
The amplifying vacuum tube revolutionized electrical technology, creating the new field of electronics, the technology of active electrical devices. It made possible long-distance telephone lines, public address systems, radio broadcasting, talking motion pictures, practical audio recording, radar, television, and the first computers. For 50 years virtually all consumer electronic devices used vacuum tubes. Early tube amplifiers often had positive feedback (regeneration), which could increase gain but also make the amplifier unstable and prone to oscillation. Much of the mathematical theory of amplifiers was developed at Bell Telephone Laboratories during the 1920s to 1940s. Distortion levels in early amplifiers were high, usually around 5%, until 1934, when Harold Black developed negative feedback; this allowed the distortion levels to be greatly reduced, at the cost of lower gain. Other advances in the theory of amplification were made by Harry Nyquist and Hendrik Wade Bode.
The vacuum tube was virtually the only amplifying device, other than specialized power devices such as the magnetic amplifier and amplidyne, for 40 years. Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when power semiconductor devices became more economical, with higher operating speeds. The old Shreeve electroacoustic carbon repeaters were used in adjustable amplifiers in telephone subscriber sets for the hearing impaired until the transistor provided smaller and higher quality amplifiers in the 1950s.
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Brattain in 1947 at Bell Labs, where William Shockley later invented the bipolar junction transistor (BJT) in 1948. They were followed by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. Due to MOSFET scaling, the ability to scale down to increasingly small sizes, the MOSFET has since become the most widely used amplifier.
The replacement of bulky electron tubes with transistors during the 1960s and 1970s created a revolution in electronics, making possible a large class of portable electronic devices, such as the transistor radio developed in 1954. Today, use of vacuum tubes is limited for some high power applications, such as radio transmitters.
Beginning in the 1970s, more and more transistors were connected on a single chip thereby creating higher scales of integration (such as small-scale, medium-scale and large-scale integration) in integrated circuits. Many amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
Advances in digital electronics since the late 20th century provided new alternatives to the conventional linear-gain amplifiers by using digital switching to vary the pulse-shape of fixed amplitude signals, resulting in devices such as the Class-D amplifier.
In principle, an amplifier is an electrical two-port network that produces a signal at the output port that is a replica of the signal applied to the input port, but increased in magnitude.
The input port can be idealized as either being a voltage input, which takes no current, with the output proportional to the voltage across the port; or a current input, with no voltage across it, in which the output is proportional to the current through the port. The output port can be idealized as being either a dependent voltage source, with zero source resistance and its output voltage dependent on the input; or a dependent current source, with infinite source resistance and the output current dependent on the input. Combinations of these choices lead to four types of ideal amplifiers. In idealized form they are represented by each of the four types of dependent source used in linear analysis, as shown in the figure, namely:
Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source:
In real amplifiers the ideal impedances are not possible to achieve, but these ideal elements can be used to construct equivalent circuits of real amplifiers by adding impedances (resistance, capacitance and inductance) to the input and output. For any particular circuit, a small-signal analysis is often used to find the actual impedance. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.
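As a minimal illustration of this small-signal test (the test current and measured voltage below are assumed values, not figures for any particular amplifier), the impedance seen at a node follows directly from the test current and the resulting voltage:
# Minimal sketch of the small-signal impedance test described above.
# The test current and measured voltage are assumed, illustrative values.
Ix = 1e-6              # applied AC test current: 1 microampere
Vx = 47e-3             # resulting small-signal voltage across the test source (assumed)
R_node = Vx / Ix       # impedance seen at the node, R = Vx / Ix
print(f"Impedance at node: {R_node:.0f} ohms")   # prints 47000 ohms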
Amplifiers designed to attach to a transmission line at input and output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input or output impedance matched to the transmission line impedance, that is, match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally are amplifying power.
Amplifier properties are given by parameters that include:
Amplifiers are described according to the properties of their inputs, their outputs, and how they relate. All amplifiers have gain, a multiplication factor that relates the magnitude of some property of the output signal to a property of the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases the property of the output that varies is dependent on the same property of the input, making the gain unitless (though often expressed in decibels (dB)).
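As a short, hedged sketch of how such gain figures are expressed in decibels (the signal levels below are assumed purely for illustration):
import math

v_in, v_out = 0.01, 1.0        # assumed input and output voltages, in volts
p_in, p_out = 1e-6, 1e-3       # assumed input and output powers, in watts

voltage_gain_db = 20 * math.log10(v_out / v_in)   # voltage ratios use 20*log10
power_gain_db = 10 * math.log10(p_out / p_in)     # power ratios use 10*log10
print(voltage_gain_db)   # 40.0 dB for a 100x voltage ratio
print(power_gain_db)     # 30.0 dB for a 1000x power ratio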
Most amplifiers are designed to be linear. That is, they provide constant gain for any normal input level and output signal. If an amplifier's gain is not linear, the output signal can become distorted. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers.
Amplifiers are usually designed to function well in a specific application, for example: radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Every amplifier includes at least one active device, such as a vacuum tube or transistor.
Negative feedback is a technique used in most modern amplifiers to increase bandwidth, reduce distortion, and control gain. In a negative feedback amplifier, part of the output is fed back and added to the input in the opposite phase, subtracting from the input. The main effect is to reduce the overall gain of the system. However, any unwanted signals introduced by the amplifier, such as distortion, are also fed back. Since they are not part of the original input, they are added to the input in opposite phase, subtracting them from the input. In this way, negative feedback also reduces nonlinearity, distortion and other errors introduced by the amplifier. Large amounts of negative feedback can reduce errors to the point that the response of the amplifier itself becomes almost irrelevant as long as it has a large gain, and the output performance of the system (the "closed loop performance") is defined entirely by the components in the feedback loop. This technique is used particularly with operational amplifiers (op-amps).
Non-feedback amplifiers can achieve only about 1% distortion for audio-frequency signals. With negative feedback, distortion can typically be reduced to 0.001%. Noise, even crossover distortion, can be practically eliminated. Negative feedback also compensates for changing temperatures, and degrading or nonlinear components in the gain stage, but any change or nonlinearity in the components in the feedback loop will affect the output. Indeed, the ability of the feedback loop to define the output is used to make active filter circuits.
Another advantage of negative feedback is that it extends the bandwidth of the amplifier. The concept of feedback is used in operational amplifiers to precisely define gain, bandwidth, and other parameters entirely based on the components in the feedback loop.
Negative feedback can be applied at each stage of an amplifier to stabilize the operating point of active devices against minor changes in power-supply voltage or device characteristics.
Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can produce parasitic oscillation and turn an amplifier into an oscillator.
All amplifiers include some form of active device: this is the device that does the actual amplification. The active device can be a vacuum tube, a discrete solid-state component such as a single transistor, or part of an integrated circuit, as in an op-amp.
Transistor amplifiers (or solid state amplifiers) are the most common type of amplifier in use today. A transistor is used as the active element. The gain of the amplifier is determined by the properties of the transistor itself as well as the circuit it is contained within.
Common active devices in transistor amplifiers include bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs).
Applications are numerous, some common examples are audio amplifiers in a home stereo or public address system, RF high power generation for semiconductor equipment, to RF and microwave applications such as radio transmitters.
Transistor-based amplification can be realized using various configurations: for example a bipolar junction transistor can realize common base, common collector or common emitter amplification; a MOSFET can realize common gate, common source or common drain amplification. Each configuration has different characteristics.
Vacuum-tube amplifiers (also known as tube amplifiers or valve amplifiers) use a vacuum tube as the active device. While semiconductor amplifiers have largely displaced valve amplifiers for low-power applications, valve amplifiers can be much more cost effective in high power applications such as radar, countermeasures equipment, and communications equipment. Many microwave amplifiers are specially designed valve amplifiers, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices. Vacuum tubes remain in use in some high end audio equipment, as well as in musical instrument amplifiers, due to a preference for "tube sound".
Magnetic amplifiers are devices somewhat similar to a transformer where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding.
They have largely fallen out of use due to development in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry due to not being affected by radioactivity.
Negative resistances can be used as amplifiers, such as the tunnel diode amplifier.
A power amplifier is an amplifier designed primarily to increase the power available to a load. In practice, amplifier power gain depends on the source and load impedances, as well as the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of 20 dB and an available power gain of much more than 20 dB (power ratio of 100)—yet actually deliver a much lower power gain if, for example, the input is from a 600 Ω microphone and the output connects to a 47 kΩ input socket for a power amplifier. In general, the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifiers based on the biasing of the output transistors or tubes: see power amplifier classes below.
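The 600 Ω microphone and 47 kΩ load case above can be made concrete with a small worked sketch (the signal level is assumed, and referencing the input power to the 600 Ω source impedance is a simplifying assumption made purely for illustration):
import math

v_in = 0.01                     # assumed input signal, volts
voltage_gain = 10               # 20 dB of voltage gain
r_source, r_load = 600.0, 47e3  # source and load impedances, ohms

p_in = v_in**2 / r_source                    # power referenced to the 600-ohm source
p_out = (voltage_gain * v_in)**2 / r_load    # power delivered into the 47 kOhm load
power_gain_db = 10 * math.log10(p_out / p_in)
print(f"Delivered power gain: {power_gain_db:.1f} dB")   # about 1 dB, far below 20 dB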
Audio power amplifiers are typically used to drive loudspeakers. They will often have two output channels and deliver equal power to each. An RF power amplifier is found in radio transmitter final stages. A servo motor controller amplifies a control voltage to adjust the speed of a motor or the position of a motorized system.
An operational amplifier is an amplifier circuit which typically has very high open loop gain and differential inputs. Op amps have become very widely used as standardized "gain blocks" in circuits due to their versatility; their gain, bandwidth and other characteristics can be controlled by feedback through an external circuit. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits.
A fully differential amplifier is similar to the operational amplifier, but also has differential outputs. These are usually constructed using BJTs or FETs.
These use balanced transmission lines to separate individual single-stage amplifiers, the outputs of which are summed by the same transmission line. The transmission line is a balanced type, with the input applied at one end and on one side only of the balanced transmission line, and the output taken from the opposite end and the opposite side. The gain of each stage adds linearly to the output rather than multiplying one on the other as in a cascade configuration. This allows a higher bandwidth to be achieved than could otherwise be realised even with the same gain stage elements.
These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity. Class-D amplifiers are the main example of this type of amplification.
A negative resistance amplifier is a type of regenerative amplifier that can use the feedback between the transistor's source and gate to transform a capacitive impedance on the transistor's source into a negative resistance on its gate. Compared to other types of amplifiers, this "negative resistance amplifier" requires only a tiny amount of power to achieve very high gain while maintaining a good noise figure at the same time.
Video amplifiers are designed to process video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p, etc. The specification of the bandwidth itself depends on what kind of filter is used—and at which point (−1 dB or −3 dB for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.
Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.
Klystrons are specialized linear-beam vacuum-devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and despite having a narrower bandwidth than TWTAs, they have the advantage of coherently amplifying a reference signal so its output may be precisely controlled in amplitude, frequency and phase.
Solid-state devices such as silicon short-channel MOSFETs like double-diffused metal–oxide–semiconductor (DMOS) FETs, GaAs FETs, SiGe and GaAs heterojunction bipolar transistors/HBTs, HEMTs, IMPATT diodes, and others, are used especially at lower microwave frequencies and power levels on the order of watts, specifically in applications like portable RF terminals/cell phones and access points where size and efficiency are the drivers. New materials like gallium nitride (GaN) or GaN on silicon or on silicon carbide/SiC are emerging in HEMT transistors and applications where improved efficiency, wide bandwidth, and operation roughly from a few GHz to a few tens of GHz with output power from a few watts to a few hundred watts are needed.
Depending on the amplifier specifications and size requirements microwave amplifiers can be realised as monolithically integrated, integrated as modules or based on discrete parts or any combination of those.
The maser is a non-electronic microwave amplifier.
Instrument amplifiers are a range of audio power amplifiers used to increase the sound level of musical instruments, for example guitars, during performances. An amplifier's tone mainly comes from the order and amount in which it applies EQ and distortion.
One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for vacuum tubes, common cathode, common grid, and common plate.
The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted, relative to the input. The common collector arrangement applies the input voltage between base and collector, and takes the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to follow the input voltage. This arrangement is also used because the input presents a high impedance and does not load the signal source, though the voltage amplification is less than one. The common-collector circuit is, therefore, better known as an emitter follower, source follower, or cathode follower.
An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of load, and output impedance is independent of signal source impedance.
An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance. All amplifiers are bilateral to some degree; however they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example).
Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain but perhaps an offset) the input signal. A voltage follower is also a non-inverting type of amplifier, having unity gain.
This description can apply to a single stage of an amplifier, or to a complete amplifier system.
Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.
Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include:
Depending on the frequency range and other properties amplifiers are designed according to different principles.
Frequency ranges down to DC are used only when this property is needed. Amplifiers for direct current signals are vulnerable to minor variations in the properties of components with time. Special methods, such as chopper stabilized amplifiers are used to prevent objectionable drift in the amplifier's properties for DC. "DC-blocking" capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers.
Depending on the frequency range specified different design principles must be used. Up to the MHz range only "discrete" properties need be considered; e.g., a terminal has an input impedance.
As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm) design properties radically change. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity. Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead (stripline techniques).
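A rough sketch of this rule of thumb (using the free-space wavelength; real PCB traces propagate signals somewhat more slowly, so the critical length is shorter still):
c = 3e8    # approximate speed of light in free space, m/s

def critical_length(f_hz, fraction=0.01):
    """Return roughly 1% of the wavelength at frequency f_hz, in metres."""
    return fraction * c / f_hz

print(critical_length(100e6))   # 100 MHz -> 0.03 m, i.e. about 3 cm
print(critical_length(1e9))     # 1 GHz   -> 0.003 m, about 3 mm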
The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB").
Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs—and class D and E for switching designs. The power amplifier classes are based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency.
The practical amplifier circuit shown above could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.
The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8.
The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).
This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll-off the frequency response above the needed range to prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors – if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage.
A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.
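As one small example of such a calculation (the corner frequency and load resistance here are assumed, illustrative values, not the actual design values of the circuit described above), a DC-blocking capacitor such as C2 can be sized from the first-order RC high-pass relation:
import math

def coupling_capacitor(r_ohms, f_corner_hz):
    # first-order high-pass corner: f_c = 1 / (2 * pi * R * C)  =>  C = 1 / (2 * pi * R * f_c)
    return 1.0 / (2 * math.pi * r_ohms * f_corner_hz)

c2 = coupling_capacitor(r_ohms=8.0, f_corner_hz=20.0)     # assumed 8-ohm speaker load, 20 Hz corner
print(f"Output coupling capacitor: {c2 * 1e6:.0f} uF")     # roughly 1000 uF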
Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that the output it generates is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device. The power supply may influence the output, so must be considered in the design. The power output from an amplifier cannot exceed its input power.
The amplifier circuit has an "open loop" performance. This is described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal-to-noise ratio, etc.). Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and reduce distortion. Negative loop feedback has the intended effect of lowering the output impedance and thereby increasing electrical damping of loudspeaker motion at and near the resonance frequency of the speaker.
When assessing rated amplifier power output, it is useful to consider the applied load, the signal type (e.g., speech or music), required power output duration (i.e., short-time or continuous), and required dynamic range (e.g., recorded or live audio). In high-powered audio applications that require long cables to the load (e.g., cinemas and shopping centres) it may be more efficient to connect to the load at line output voltage, with matching transformers at source and loads. This avoids long runs of heavy speaker cables.
To prevent instability or overheating requires care to ensure solid state amplifiers are adequately loaded. Most have a rated minimum load impedance.
All amplifiers generate heat through electrical losses. The amplifier must dissipate this heat via convection or forced air cooling. Heat can damage or reduce electronic component service life. Designers and installers must also consider heating effects on adjacent equipment.
Different power supply types result in many different methods of bias. Bias is a technique by which active devices are set to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted polarity devices are called complementary pairs. Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual device symmetrical design may be used. Class-C amplifiers, by definition, use a single polarity supply.
Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amp to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses. | [
{
"paragraph_id": 0,
"text": "An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal (a time-varying voltage or current). It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude (magnitude of the voltage or current) of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one.",
"title": ""
},
{
"paragraph_id": 1,
"text": "An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first practical prominent device that could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s when transistors replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The development of audio communication technology in form of the telephone, first patented in 1876, created the need to increase the amplitude of electrical signals to extend the transmission of signals over increasingly long distances. In telegraphy, this problem had been solved with intermediate devices at stations that replenished the dissipated energy by operating a signal recorder and transmitter back-to-back, forming a relay, so that a local energy source at each intermediate station powered the next leg of transmission. For duplex transmission, i.e. sending and receiving in both directions, bi-directional relay repeaters were developed starting with the work of C. F. Varley for telegraphic transmission. Duplex transmission was essential for telephony and the problem was not satisfactorily solved until 1904, when H. E. Shreeve of the American Telephone and Telegraph Company improved existing attempts at constructing a telephone repeater consisting of back-to-back carbon-granule transmitter and electrodynamic receiver pairs. The Shreeve repeater was first tested on a line between Boston and Amesbury, MA, and more refined devices remained in service for some time. After the turn of the century it was found that negative resistance mercury lamps could amplify, and were also tried in repeaters, with little success.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The development of thermionic valves which began around 1902, provided an entirely electronic method of amplifying signals. The first practical version of such devices was the Audion triode, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Since the only previous device which was widely used to strengthen a signal was the relay used in telegraph systems, the amplifying vacuum tube was first called an electron relay. The terms amplifier and amplification, derived from the Latin amplificare, (to enlarge or expand), were first used for this new capability around 1915 when triodes became widespread.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The amplifying vacuum tube revolutionized electrical technology, creating the new field of electronics, the technology of active electrical devices. It made possible long-distance telephone lines, public address systems, radio broadcasting, talking motion pictures, practical audio recording, radar, television, and the first computers. For 50 years virtually all consumer electronic devices used vacuum tubes. Early tube amplifiers often had positive feedback (regeneration), which could increase gain but also make the amplifier unstable and prone to oscillation. Much of the mathematical theory of amplifiers was developed at Bell Telephone Laboratories during the 1920s to 1940s. Distortion levels in early amplifiers were high, usually around 5%, until 1934, when Harold Black developed negative feedback; this allowed the distortion levels to be greatly reduced, at the cost of lower gain. Other advances in the theory of amplification were made by Harry Nyquist and Hendrik Wade Bode.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The vacuum tube was virtually the only amplifying device, other than specialized power devices such as the magnetic amplifier and amplidyne, for 40 years. Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when power semiconductor devices became more economical, with higher operating speeds. The old Shreeve electroacoustic carbon repeaters were used in adjustable amplifiers in telephone subscriber sets for the hearing impaired until the transistor provided smaller and higher quality amplifiers in the 1950s.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The first working transistor was a point-contact transistor invented by John Bardeen and Walter Brattain in 1947 at Bell Labs, where William Shockley later invented the bipolar junction transistor (BJT) in 1948. They were followed by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. Due to MOSFET scaling, the ability to scale down to increasingly small sizes, the MOSFET has since become the most widely used amplifier.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The replacement of bulky electron tubes with transistors during the 1960s and 1970s created a revolution in electronics, making possible a large class of portable electronic devices, such as the transistor radio developed in 1954. Today, use of vacuum tubes is limited for some high power applications, such as radio transmitters.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Beginning in the 1970s, more and more transistors were connected on a single chip thereby creating higher scales of integration (such as small-scale, medium-scale and large-scale integration) in integrated circuits. Many amplifiers commercially available today are based on integrated circuits.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "For special purposes, other active elements have been used. For example, in the early days of the satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Advances in digital electronics since the late 20th century provided new alternatives to the conventional linear-gain amplifiers by using digital switching to vary the pulse-shape of fixed amplitude signals, resulting in devices such as the Class-D amplifier.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In principle, an amplifier is an electrical two-port network that produces a signal at the output port that is a replica of the signal applied to the input port, but increased in magnitude.",
"title": "Ideal"
},
{
"paragraph_id": 13,
"text": "The input port can be idealized as either being a voltage input, which takes no current, with the output proportional to the voltage across the port; or a current input, with no voltage across it, in which the output is proportional to the current through the port. The output port can be idealized as being either a dependent voltage source, with zero source resistance and its output voltage dependent on the input; or a dependent current source, with infinite source resistance and the output current dependent on the input. Combinations of these choices lead to four types of ideal amplifiers. In idealized form they are represented by each of the four types of dependent source used in linear analysis, as shown in the figure, namely:",
"title": "Ideal"
},
{
"paragraph_id": 14,
"text": "Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source:",
"title": "Ideal"
},
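The two paragraphs above end by referring to lists of the four ideal amplifier types and their ideal terminal resistances; those lists are not reproduced here, so the following is a hedged sketch based on the standard dependent-source definitions (voltage amplifier as a voltage-controlled voltage source, and so on), not on any figure from this record.

# Summary of the four ideal amplifier types (standard dependent-source models).
# float("inf") denotes an ideally infinite resistance, 0.0 an ideally zero resistance.
IDEAL_AMPLIFIERS = {
    # name: (controlling quantity, controlled quantity, gain unit, R_in, R_out)
    "voltage amplifier":          ("voltage", "voltage", "V/V (dimensionless)", float("inf"), 0.0),
    "current amplifier":          ("current", "current", "A/A (dimensionless)", 0.0, float("inf")),
    "transconductance amplifier": ("voltage", "current", "siemens (A/V)", float("inf"), float("inf")),
    "transresistance amplifier":  ("current", "voltage", "ohms (V/A)", 0.0, 0.0),
}

for name, (inp, outp, unit, r_in, r_out) in IDEAL_AMPLIFIERS.items():
    print(f"{name}: input={inp}, output={outp}, gain in {unit}, R_in={r_in}, R_out={r_out}")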
{
"paragraph_id": 15,
"text": "In real amplifiers the ideal impedances are not possible to achieve, but these ideal elements can be used to construct equivalent circuits of real amplifiers by adding impedances (resistance, capacitance and inductance) to the input and output. For any particular circuit, a small-signal analysis is often used to find the actual impedance. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.",
"title": "Ideal"
},
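The test-current procedure described above can be illustrated with a toy circuit that is not tied to any particular amplifier: a node reached through a resistor R1 from a (zeroed) source and loaded by R2 to ground. The resistor values below are hypothetical.

# Toy illustration of the small-signal test-current method: with all independent
# sources set to AC zero, both R1 and R2 connect the node to ground, so injecting
# a test current Ix and reading the node voltage Vx recovers R1 in parallel with R2.
R1, R2 = 600.0, 47_000.0   # hypothetical resistances, ohms
Ix = 1e-3                  # test current, amperes

Vx = Ix * (R1 * R2) / (R1 + R2)   # node voltage produced by the test current
Z_node = Vx / Ix                   # impedance seen at the node, ~592 ohms
print(f"impedance seen at the node: {Z_node:.1f} ohms")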
{
"paragraph_id": 16,
"text": "Amplifiers designed to attach to a transmission line at input and output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input or output impedance matched to the transmission line impedance, that is, match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally are amplifying power.",
"title": "Ideal"
},
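As a hedged illustration of what matching to a transmission line means in practice (the impedance values are hypothetical, not taken from this record), the fraction of incident power reflected at an amplifier port follows from the standard reflection coefficient.

# Reflection at an amplifier port terminating a transmission line.
Z0 = 50.0        # ohms, a common RF characteristic impedance
Z_port = 75.0    # ohms, hypothetical amplifier port impedance

gamma = (Z_port - Z0) / (Z_port + Z0)   # voltage reflection coefficient
reflected_power_fraction = gamma ** 2   # fraction of incident power reflected
print(f"|Gamma| = {abs(gamma):.2f}, reflected power = {reflected_power_fraction:.1%}")
# A matched port (Z_port == Z0) gives Gamma = 0: all incident power enters the amplifier.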
{
"paragraph_id": 17,
"text": "Amplifier properties are given by parameters that include:",
"title": "Properties"
},
{
"paragraph_id": 18,
"text": "Amplifiers are described according to the properties of their inputs, their outputs, and how they relate. All amplifiers have gain, a multiplication factor that relates the magnitude of some property of the output signal to a property of the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases the property of the output that varies is dependent on the same property of the input, making the gain unitless (though often expressed in decibels (dB)).",
"title": "Properties"
},
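A short sketch of the decibel conventions mentioned above: voltage (or current) ratios use 20·log10, power ratios use 10·log10. The numbers are arbitrary examples.

import math

def voltage_gain_db(v_out, v_in):
    # Voltage (or current) gain expressed in decibels.
    return 20 * math.log10(v_out / v_in)

def power_gain_db(p_out, p_in):
    # Power gain expressed in decibels.
    return 10 * math.log10(p_out / p_in)

print(voltage_gain_db(10.0, 1.0))   # a 10x voltage gain -> 20 dB
print(power_gain_db(100.0, 1.0))    # a 100x power gain  -> 20 dB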
{
"paragraph_id": 19,
"text": "Most amplifiers are designed to be linear. That is, they provide constant gain for any normal input level and output signal. If an amplifier's gain is not linear, the output signal can become distorted. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers.",
"title": "Properties"
},
{
"paragraph_id": 20,
"text": "Amplifiers are usually designed to function well in a specific application, for example: radio and television transmitters and receivers, high-fidelity (\"hi-fi\") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Every amplifier includes at least one active device, such as a vacuum tube or transistor.",
"title": "Properties"
},
{
"paragraph_id": 21,
"text": "Negative feedback is a technique used in most modern amplifiers to increase bandwidth, reduce distortion, and control gain. In a negative feedback amplifier part of the output is fed back and added to the input in the opposite phase, subtracting from the input. The main effect is to reduce the overall gain of the system. However, any unwanted signals introduced by the amplifier, such as distortion are also fed back. Since they are not part of the original input, they are added to the input in opposite phase, subtracting them from the input. In this way, negative feedback also reduces nonlinearity, distortion and other errors introduced by the amplifier. Large amounts of negative feedback can reduce errors to the point that the response of the amplifier itself becomes almost irrelevant as long as it has a large gain, and the output performance of the system (the \"closed loop performance\") is defined entirely by the components in the feedback loop. This technique is used particularly with operational amplifiers (op-amps).",
"title": "Negative feedback"
},
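The behaviour described above can be sketched with the standard feedback relation A_closed = A / (1 + A·β): when the loop gain A·β is large, the closed-loop gain is set almost entirely by the feedback fraction β rather than by the amplifier itself. The values below are arbitrary.

def closed_loop_gain(open_loop_gain, feedback_fraction):
    # Standard negative-feedback relation: A_cl = A / (1 + A * beta).
    return open_loop_gain / (1 + open_loop_gain * feedback_fraction)

beta = 0.01   # feedback network returns 1% of the output to the input
for A in (1e3, 1e5, 1e7):                      # widely different open-loop gains
    print(A, round(closed_loop_gain(A, beta), 3))
# All three results are close to 1/beta = 100: the closed-loop behaviour is
# dominated by the feedback components, not by the raw amplifier gain.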
{
"paragraph_id": 22,
"text": "Non-feedback amplifiers can achieve only about 1% distortion for audio-frequency signals. With negative feedback, distortion can typically be reduced to 0.001%. Noise, even crossover distortion, can be practically eliminated. Negative feedback also compensates for changing temperatures, and degrading or nonlinear components in the gain stage, but any change or nonlinearity in the components in the feedback loop will affect the output. Indeed, the ability of the feedback loop to define the output is used to make active filter circuits.",
"title": "Negative feedback"
},
{
"paragraph_id": 23,
"text": "Another advantage of negative feedback is that it extends the bandwidth of the amplifier. The concept of feedback is used in operational amplifiers to precisely define gain, bandwidth, and other parameters entirely based on the components in the feedback loop.",
"title": "Negative feedback"
},
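For an amplifier with a single dominant pole, negative feedback trades gain for bandwidth while keeping the gain-bandwidth product roughly constant; a hedged numeric sketch with hypothetical values:

# Single-dominant-pole approximation: closing the loop reduces gain by (1 + A*beta)
# and widens the -3 dB bandwidth by the same factor, so gain x bandwidth stays constant.
A = 1e5          # open-loop gain (hypothetical)
f_ol = 10.0      # open-loop -3 dB bandwidth in Hz (hypothetical)
beta = 0.001     # feedback fraction

A_cl = A / (1 + A * beta)        # ~990
f_cl = f_ol * (1 + A * beta)     # ~1.01 kHz
print(A * f_ol, A_cl * f_cl)     # both ~1e6: the gain-bandwidth product is preserved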
{
"paragraph_id": 24,
"text": "Negative feedback can be applied at each stage of an amplifier to stabilize the operating point of active devices against minor changes in power-supply voltage or device characteristics.",
"title": "Negative feedback"
},
{
"paragraph_id": 25,
"text": "Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can produce parasitic oscillation and turn an amplifier into an oscillator.",
"title": "Negative feedback"
},
{
"paragraph_id": 26,
"text": "All amplifiers include some form of active device: this is the device that does the actual amplification. The active device can be a vacuum tube, discrete solid state component, such as a single transistor, or part of an integrated circuit, as in an op-amp).",
"title": "Categories"
},
{
"paragraph_id": 27,
"text": "Transistor amplifiers (or solid state amplifiers) are the most common type of amplifier in use today. A transistor is used as the active element. The gain of the amplifier is determined by the properties of the transistor itself as well as the circuit it is contained within.",
"title": "Categories"
},
{
"paragraph_id": 28,
"text": "Common active devices in transistor amplifiers include bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs).",
"title": "Categories"
},
{
"paragraph_id": 29,
"text": "Applications are numerous, some common examples are audio amplifiers in a home stereo or public address system, RF high power generation for semiconductor equipment, to RF and microwave applications such as radio transmitters.",
"title": "Categories"
},
{
"paragraph_id": 30,
"text": "Transistor-based amplification can be realized using various configurations: for example a bipolar junction transistor can realize common base, common collector or common emitter amplification; a MOSFET can realize common gate, common source or common drain amplification. Each configuration has different characteristics.",
"title": "Categories"
},
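As one hedged example of how the configuration and the surrounding circuit set the gain (the operating point and load resistor below are hypothetical, and the model is the simplest small-signal approximation), a common-emitter stage without emitter degeneration has a voltage gain of roughly -gm·RC, with gm = IC/VT.

# Rough small-signal gain of a common-emitter BJT stage (hypothetical operating point).
I_C = 1e-3       # collector bias current, 1 mA
V_T = 0.026      # thermal voltage at room temperature, ~26 mV
R_C = 4_700.0    # collector load resistor, ohms

g_m = I_C / V_T            # transconductance, ~38.5 mA/V
A_v = -g_m * R_C           # inverting voltage gain, roughly -181
print(f"g_m = {g_m * 1e3:.1f} mA/V, A_v ~= {A_v:.0f}")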
{
"paragraph_id": 31,
"text": "Vacuum-tube amplifiers (also known as tube amplifiers or valve amplifiers) use a vacuum tube as the active device. While semiconductor amplifiers have largely displaced valve amplifiers for low-power applications, valve amplifiers can be much more cost effective in high power applications such as radar, countermeasures equipment, and communications equipment. Many microwave amplifiers are specially designed valve amplifiers, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices. Vacuum tubes remain in use in some high end audio equipment, as well as in musical instrument amplifiers, due to a preference for \"tube sound\".",
"title": "Categories"
},
{
"paragraph_id": 32,
"text": "Magnetic amplifiers are devices somewhat similar to a transformer where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding.",
"title": "Categories"
},
{
"paragraph_id": 33,
"text": "They have largely fallen out of use due to development in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry due to not being affected by radioactivity.",
"title": "Categories"
},
{
"paragraph_id": 34,
"text": "Negative resistances can be used as amplifiers, such as the tunnel diode amplifier.",
"title": "Categories"
},
{
"paragraph_id": 35,
"text": "A power amplifier is an amplifier designed primarily to increase the power available to a load. In practice, amplifier power gain depends on the source and load impedances, as well as the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of 20 dB and an available power gain of much more than 20 dB (power ratio of 100)—yet actually deliver a much lower power gain if, for example, the input is from a 600 Ω microphone and the output connects to a 47 kΩ input socket for a power amplifier. In general, the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifiers based on the biasing of the output transistors or tubes: see power amplifier classes below.",
"title": "Categories"
},
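The 600 Ω / 47 kΩ example above can be worked through numerically. This sketch assumes the simplest interpretation (the amplifier input is matched to the 600 Ω microphone so that input power is Vin²/600 Ω, and the output drives only the 47 kΩ socket); the exact figures depend on impedances not stated in the text.

import math

voltage_gain = 10.0      # 20 dB of voltage gain
R_source = 600.0         # ohms, microphone / matched amplifier input (assumed)
R_load = 47_000.0        # ohms, power-amplifier input socket

# Delivered power ratio = (Vout/Vin)^2 * (R_source / R_load)
power_ratio = voltage_gain**2 * (R_source / R_load)
print(f"delivered power gain: {power_ratio:.2f}x = "
      f"{10 * math.log10(power_ratio):.1f} dB")   # ~1.28x, about 1.1 dB despite 20 dB voltage gain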
{
"paragraph_id": 36,
"text": "Audio power amplifiers are typically used to drive loudspeakers. They will often have two output channels and deliver equal power to each. An RF power amplifier is found in radio transmitter final stages. A Servo motor controller: amplifies a control voltage to adjust the speed of a motor, or the position of a motorized system.",
"title": "Categories"
},
{
"paragraph_id": 37,
"text": "An operational amplifier is an amplifier circuit which typically has very high open loop gain and differential inputs. Op amps have become very widely used as standardized \"gain blocks\" in circuits due to their versatility; their gain, bandwidth and other characteristics can be controlled by feedback through an external circuit. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits.",
"title": "Categories"
},
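A hedged sketch of how external feedback components set an op-amp "gain block": the resistor values are hypothetical, and the formulas are the textbook ideal-op-amp non-inverting and inverting cases rather than any specific device's behaviour.

# Ideal op-amp closed-loop gains set purely by external resistors (hypothetical values).
R_f = 99_000.0   # feedback resistor, ohms
R_g = 1_000.0    # resistor from the inverting input to ground (or to the source), ohms

non_inverting_gain = 1 + R_f / R_g   # = 100
inverting_gain = -R_f / R_g          # = -99
print(non_inverting_gain, inverting_gain)
# The op-amp's own open-loop gain (typically > 1e5) drops out of these expressions,
# which is why op-amps work as standardized gain blocks.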
{
"paragraph_id": 38,
"text": "A fully differential amplifier is similar to the operational amplifier, but also has differential outputs. These are usually constructed using BJTs or FETs.",
"title": "Categories"
},
{
"paragraph_id": 39,
"text": "These use balanced transmission lines to separate individual single stage amplifiers, the outputs of which are summed by the same transmission line. The transmission line is a balanced type with the input at one end and on one side only of the balanced transmission line and the output at the opposite end is also the opposite side of the balanced transmission line. The gain of each stage adds linearly to the output rather than multiplies one on the other as in a cascade configuration. This allows a higher bandwidth to be achieved than could otherwise be realised even with the same gain stage elements.",
"title": "Categories"
},
{
"paragraph_id": 40,
"text": "These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity. Class-D amplifiers are the main example of this type of amplification.",
"title": "Categories"
},
{
"paragraph_id": 41,
"text": "Negative Resistance Amplifier is a type of Regenerative Amplifier that can use the feedback between the transistor's source and gate to transform a capacitive impedance on the transistor's source to a negative resistance on its gate. Compared to other types of amplifiers, this \"negative resistance amplifier\" will require only a tiny amount of power to achieve very high gain, maintaining a good noise figure at the same time.",
"title": "Categories"
},
{
"paragraph_id": 42,
"text": "Video amplifiers are designed to process video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p etc.. The specification of the bandwidth itself depends on what kind of filter is used—and at which point (−1 dB or −3 dB for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.",
"title": "Applications"
},
{
"paragraph_id": 43,
"text": "Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.",
"title": "Applications"
},
{
"paragraph_id": 44,
"text": "Klystrons are specialized linear-beam vacuum-devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and despite having a narrower bandwidth than TWTAs, they have the advantage of coherently amplifying a reference signal so its output may be precisely controlled in amplitude, frequency and phase.",
"title": "Applications"
},
{
"paragraph_id": 45,
"text": "Solid-state devices such as silicon short channel MOSFETs like double-diffused metal–oxide–semiconductor (DMOS) FETs, GaAs FETs, SiGe and GaAs heterojunction bipolar transistors/HBTs, HEMTs, IMPATT diodes, and others, are used especially at lower microwave frequencies and power levels on the order of watts specifically in applications like portable RF terminals/cell phones and access points where size and efficiency are the drivers. New materials like gallium nitride (GaN) or GaN on silicon or on silicon carbide/SiC are emerging in HEMT transistors and applications where improved efficiency, wide bandwidth, operation roughly from few to few tens of GHz with output power of few Watts to few hundred of Watts are needed.",
"title": "Applications"
},
{
"paragraph_id": 46,
"text": "Depending on the amplifier specifications and size requirements microwave amplifiers can be realised as monolithically integrated, integrated as modules or based on discrete parts or any combination of those.",
"title": "Applications"
},
{
"paragraph_id": 47,
"text": "The maser is a non-electronic microwave amplifier.",
"title": "Applications"
},
{
"paragraph_id": 48,
"text": "Instrument amplifiers are a range of audio power amplifiers used to increase the sound level of musical instruments, for example guitars, during performances. Amplifiers' tone mainly come from the order and amount in which it applies EQ and distortion",
"title": "Applications"
},
{
"paragraph_id": 49,
"text": "One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for vacuum tubes, common cathode, common grid, and common plate.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 50,
"text": "The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted, relative to the input. The common collector arrangement applies the input voltage between base and collector, and to take the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to follow the input voltage. This arrangement is also used as the input presents a high impedance and does not load the signal source, though the voltage amplification is less than one. The common-collector circuit is, therefore, better known as an emitter follower, source follower, or cathode follower.",
"title": "Classification of amplifier stages and systems"
},
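The follower's slightly-below-unity gain mentioned above can be estimated with the usual small-signal approximation; the bias current and emitter resistor below are hypothetical.

# Approximate voltage gain of an emitter follower (common collector), hypothetical values.
I_E = 1e-3               # emitter bias current, 1 mA
V_T = 0.026              # thermal voltage, ~26 mV
r_e = V_T / I_E          # intrinsic emitter resistance, ~26 ohms
R_E = 1_000.0            # external emitter (load) resistor, ohms

A_v = R_E / (R_E + r_e)  # non-inverting, slightly below unity
print(f"follower gain ~= {A_v:.3f}")   # ~0.975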
{
"paragraph_id": 51,
"text": "An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of load, and output impedance is independent of signal source impedance.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 52,
"text": "An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance. All amplifiers are bilateral to some degree; however they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example).",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 53,
"text": "Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain but perhaps an offset) the input signal. Voltage follower is also non-inverting type of amplifier having unity gain.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 54,
"text": "This description can apply to a single stage of an amplifier, or to a complete amplifier system.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 55,
"text": "Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 56,
"text": "Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include:",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 57,
"text": "Depending on the frequency range and other properties amplifiers are designed according to different principles.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 58,
"text": "Frequency ranges down to DC are used only when this property is needed. Amplifiers for direct current signals are vulnerable to minor variations in the properties of components with time. Special methods, such as chopper stabilized amplifiers are used to prevent objectionable drift in the amplifier's properties for DC. \"DC-blocking\" capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 59,
"text": "Depending on the frequency range specified different design principles must be used. Up to the MHz range only \"discrete\" properties need be considered; e.g., a terminal has an input impedance.",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 60,
"text": "As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm) design properties radically change. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity. Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead (stripline techniques).",
"title": "Classification of amplifier stages and systems"
},
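The 100 MHz figure quoted above follows directly from λ = c/f; a quick check (the 1% figure is the rule of thumb from the text, not a hard limit, and the calculation uses the free-space wavelength):

# Free-space wavelength and the "1% of wavelength" rule of thumb from the text.
c = 3.0e8    # speed of light, m/s (approximate)
f = 100e6    # highest specified frequency, Hz

wavelength = c / f                 # 3.0 m at 100 MHz
critical_length = 0.01 * wavelength
print(f"wavelength = {wavelength:.2f} m, critical connection length ~ {critical_length * 100:.0f} cm")
# On a PCB the effective wavelength is somewhat shorter because of the dielectric,
# so the critical length there is shorter still.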
{
"paragraph_id": 61,
"text": "The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. \"20 Hz to 20 kHz plus or minus 1 dB\").",
"title": "Classification of amplifier stages and systems"
},
{
"paragraph_id": 62,
"text": "Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs—and class D and E for switching designs. The power amplifier classes are based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency.",
"title": "Power amplifier classes"
},
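The conduction-angle distinctions described above can be tabulated; the ideal maximum efficiencies noted are the standard textbook figures for sinusoidal drive, not measurements of any particular amplifier.

# Conduction angle per output-stage class (textbook values for sinusoidal drive).
AMPLIFIER_CLASSES = {
    "A":  "360 degrees (device always conducting); ideal max efficiency ~25-50%",
    "AB": "between 180 and 360 degrees",
    "B":  "180 degrees (each device conducts for half the cycle); ideal max ~78.5%",
    "C":  "less than 180 degrees; efficient but heavily distorting, used with tuned loads",
    "D":  "switching operation rather than a conduction angle in the class A-C sense",
}
for cls, description in AMPLIFIER_CLASSES.items():
    print(f"Class {cls}: {description}")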
{
"paragraph_id": 63,
"text": "The practical amplifier circuit shown above could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.",
"title": "Example amplifier circuit"
},
{
"paragraph_id": 64,
"text": "The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8.",
"title": "Example amplifier circuit"
},
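A hedged sketch of the two roles described above for C1, R1 and R2 (all component values here are hypothetical, not taken from the figure, and the input resistance seen after C1 is an assumption): the divider sets the DC base voltage, and C1 together with that input resistance forms a high-pass filter whose corner must sit below the lowest frequency of interest.

import math

# Hypothetical values for the input coupling/bias network of a stage like Q1.
V_cc = 12.0                    # supply voltage, volts
R1, R2 = 47_000.0, 10_000.0    # bias divider resistors, ohms
C1 = 10e-6                     # input coupling capacitor, farads
R_in = 8_000.0                 # assumed small-signal input resistance seen after C1, ohms

V_base = V_cc * R2 / (R1 + R2)             # DC bias voltage at the base, ~2.1 V
f_corner = 1 / (2 * math.pi * R_in * C1)   # high-pass corner of the coupling network, ~2 Hz
print(f"bias ~ {V_base:.2f} V, low-frequency corner ~ {f_corner:.1f} Hz")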
{
"paragraph_id": 65,
"text": "The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).",
"title": "Example amplifier circuit"
},
{
"paragraph_id": 66,
"text": "This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll-off the frequency response above the needed range to prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors – if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage.",
"title": "Example amplifier circuit"
},
{
"paragraph_id": 67,
"text": "A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.",
"title": "Example amplifier circuit"
},
{
"paragraph_id": 68,
"text": "Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that the output it generates is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device. The power supply may influence the output, so must be considered in the design. The power output from an amplifier cannot exceed its input power.",
"title": "Notes on implementation"
},
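A minimal sketch of the clipping behaviour described above: an idealized amplifier whose output is limited by its supply rails. The gain and rail voltage are arbitrary.

def amplify(v_in, gain=10.0, v_rail=12.0):
    # Idealized amplifier: linear until the output would exceed the supply rails,
    # at which point the waveform is clipped (saturation).
    v_out = gain * v_in
    return max(-v_rail, min(v_rail, v_out))

for v in (0.1, 0.5, 1.0, 2.0):
    print(v, amplify(v))
# An input of 2.0 V would need 20 V out, so the output clips at the 12 V rail.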
{
"paragraph_id": 69,
"text": "The amplifier circuit has an \"open loop\" performance. This is described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal-to-noise ratio, etc.). Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and reduce distortion. Negative loop feedback has the intended effect of lowering the output impedance and thereby increasing electrical damping of loudspeaker motion at and near the resonance frequency of the speaker.",
"title": "Notes on implementation"
},
{
"paragraph_id": 70,
"text": "When assessing rated amplifier power output, it is useful to consider the applied load, the signal type (e.g., speech or music), required power output duration (i.e., short-time or continuous), and required dynamic range (e.g., recorded or live audio). In high-powered audio applications that require long cables to the load (e.g., cinemas and shopping centres) it may be more efficient to connect to the load at line output voltage, with matching transformers at source and loads. This avoids long runs of heavy speaker cables.",
"title": "Notes on implementation"
},
{
"paragraph_id": 71,
"text": "To prevent instability or overheating requires care to ensure solid state amplifiers are adequately loaded. Most have a rated minimum load impedance.",
"title": "Notes on implementation"
},
{
"paragraph_id": 72,
"text": "All amplifiers generate heat through electrical losses. The amplifier must dissipate this heat via convection or forced air cooling. Heat can damage or reduce electronic component service life. Designers and installers must also consider heating effects on adjacent equipment.",
"title": "Notes on implementation"
},
{
"paragraph_id": 73,
"text": "Different power supply types result in many different methods of bias. Bias is a technique by which active devices are set to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted polarity devices are called complementary pairs. Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual device symmetrical design may be used. Class-C amplifiers, by definition, use a single polarity supply.",
"title": "Notes on implementation"
},
{
"paragraph_id": 74,
"text": "Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amp to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses.",
"title": "Notes on implementation"
}
]
| An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal. It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one. An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors. | 2002-02-25T15:51:15Z | 2023-12-28T13:32:17Z | [
"Template:See also",
"Template:Music technology",
"Template:Nowrap",
"Template:Portal",
"Template:Reflist",
"Template:Commons category",
"Template:Main",
"Template:Div col",
"Template:Cite journal",
"Template:Snd",
"Template:Short description",
"Template:About",
"Template:Further",
"Template:Spaced ndash",
"Template:Div col end",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Cite web",
"Template:Electronic components",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Amplifier |
9,932 | Escort carrier | The escort carrier or escort aircraft carrier (U.S. hull classification symbol CVE), also called a "jeep carrier" or "baby flattop" in the United States Navy (USN) or "Woolworth Carrier" by the Royal Navy, was a small and slow type of aircraft carrier used by the Royal Navy, the Royal Canadian Navy, the United States Navy, the Imperial Japanese Navy and Imperial Japanese Army Air Force in World War II. They were typically half the length and a third the displacement of larger fleet carriers, slower, more-lightly armed and armored, and carried fewer planes. Escort carriers were most often built upon a commercial ship hull, so they were cheaper and could be built quickly. This was their principal advantage as they could be completed in greater numbers as a stop-gap when fleet carriers were scarce. However, the lack of protection made escort carriers particularly vulnerable, and several were sunk with great loss of life. The light carrier (U.S. hull classification symbol CVL) was a similar concept to the escort carrier in most respects, but was fast enough to operate alongside fleet carriers.
Escort carriers were too slow to keep up with the main forces consisting of fleet carriers, battleships, and cruisers. Instead, they were used to escort merchant ship convoys, defending them from enemy threats such as submarines and planes. In the invasions of mainland Europe and Pacific islands, escort carriers provided air support to ground forces during amphibious operations. Escort carriers also served as backup aircraft transports for fleet carriers, and ferried aircraft of all military services to points of delivery.
In the Battle of the Atlantic, escort carriers were used to protect convoys against U-boats. Initially escort carriers accompanied the merchant ships and helped to fend off attacks from aircraft and submarines. As numbers increased later in the war, escort carriers also formed part of hunter-killer groups that sought out submarines instead of being attached to a particular convoy.
In the Pacific theater, CVEs provided air support of ground troops in the Battle of Leyte Gulf. They lacked the speed and weapons to counter enemy fleets, relying on the protection of a Fast Carrier Task Force. However, at the Battle off Samar, one U.S. task force of escort carriers and destroyers managed to successfully defend itself against a much larger Japanese force of battleships and cruisers. The Japanese met a furious defense of carrier aircraft, screening destroyers, and destroyer escorts.
Of the 151 aircraft carriers built in the U.S. during World War II, 122 were escort carriers, though no examples survive. The Casablanca class was the most numerous class of aircraft carrier, with 50 launched. Second was the Bogue class, with 45 launched.
In the early 1920s, the Washington Naval Treaty imposed limits on the maximum size and total tonnage of aircraft carriers for the five main naval powers. Later treaties largely kept these provisions. As a result, construction between the World Wars had been insufficient to meet operational needs for aircraft carriers as World War II expanded from Europe. Too few fleet carriers were available to simultaneously transport aircraft to distant bases, support amphibious invasions, offer carrier landing training for replacement pilots, conduct anti-submarine patrols, and provide defensive air cover for deployed battleships and cruisers. The foregoing mission requirements limited use of fleet carriers' unique offensive strike capability demonstrated at the Battle of Taranto and the Attack on Pearl Harbor. Conversion of existing ships (and hulls under construction for other purposes) provided additional aircraft carriers until new construction became available.
Conversions of cruisers and passenger liners with speed similar to fleet carriers were identified by the U.S. as "light aircraft carriers" (hull classification symbol CVL) able to operate at battle fleet speeds. Slower conversions were classified as "escort carriers" and were considered naval auxiliaries suitable for pilot training and transport of aircraft to distant bases.
The Royal Navy had recognized a need for carriers to defend its trade routes in the 1930s. While designs had been prepared for "trade protection carriers" and five suitable liners identified for conversion, nothing further was done mostly because there were insufficient aircraft for even the fleet carriers under construction at the time. However, by 1940 the need had become urgent and HMS Audacity was converted from the captured German merchant ship MV Hannover and commissioned in July 1941. For defense from German aircraft, convoys were supplied first with fighter catapult ships and CAM ships that could carry a single (disposable) fighter. In the interim, before escort carriers could be supplied, they also brought in merchant aircraft carriers that could operate four aircraft.
In 1940, U.S. Admiral William Halsey recommended construction of naval auxiliaries for pilot training. In early 1941 the British asked the U.S. to build on their behalf six carriers of an improved Audacity design, but the U.S. had already begun their own escort carrier. On 1 February 1941, the United States Chief of Naval Operations gave priority to construction of naval auxiliaries for aircraft transport. U.S. ships built to meet these needs were initially referred to as auxiliary aircraft escort vessels (AVG) in February 1942 and then auxiliary aircraft carrier (ACV) on 5 August 1942. The first U.S. example of the type was USS Long Island. Operation Torch and North Atlantic anti-submarine warfare proved these ships capable aircraft carriers for ship formations moving at the speed of trade or amphibious invasion convoys. U.S. classification revision to escort aircraft carrier (CVE) on 15 July 1943 reflected upgraded status from auxiliary to combatant. They were informally known as "Jeep carriers" or "baby flattops". It was quickly found that the escort carriers had better performance than light carriers, which tended to pitch badly in moderate to high seas. The Commencement Bay class was designed to incorporate the best features of American CVLs on a more stable hull with a less expensive propulsion system.
Among their crews, CVE was sarcastically said to stand for "Combustible, Vulnerable, and Expendable", and the CVEs were called "Kaiser coffins" in honor of Casablanca-class manufacturer Henry J. Kaiser. Magazine protection was minimal in comparison to fleet aircraft carriers. HMS Avenger was sunk within minutes by a single torpedo, and HMS Dasher exploded from undetermined causes with very heavy loss of life. Three escort carriers—USS St. Lo, Ommaney Bay and Bismarck Sea—were destroyed by kamikazes, the largest ships to meet such a fate.
Allied escort carriers were typically around 500 ft (150 m) long, not much more than half the length of the almost 900 ft (270 m) fleet carriers of the same era, but were less than 1⁄3 of the weight. A typical escort carrier displaced about 8,000 long tons (8,100 t), as compared to almost 30,000 long tons (30,000 t) for a full-size fleet carrier. The aircraft hangar typically ran only 1⁄3 of the way under the flight deck and housed a combination of 24–30 fighters and bombers organized into one single "composite squadron". By comparison, a late Essex-class fleet carrier of the period could carry 103 aircraft organized into separate fighter, bomber and torpedo-bomber squadrons.
The island (superstructure) on these ships was small and cramped, and located well forward of the funnels (unlike on a normal-sized carrier, where the funnels were integrated into the island). Although the first escort carriers had only one aircraft elevator, having two elevators (one fore and one aft), along with the single aircraft catapult, quickly became standard. The carriers employed the same system of arresting cables and tail hooks as on the big carriers, and procedures for launch and recovery were the same as well.
The crew size was less than 1⁄3 of that of a large carrier, but this was still a bigger complement than most naval vessels. U.S. escort carriers were large enough to have facilities such as a permanent canteen or snack bar, called a gedunk bar, in addition to the mess. The bar was open for longer hours than the mess and sold several flavors of ice cream, along with cigarettes and other consumables. There were also several vending machines available on board.
In all, 130 Allied escort carriers were launched or converted during the war. Of these, six were British conversions of merchant ships: HMS Audacity, Nairana, Campania, Activity, Pretoria Castle and Vindex. The remaining escort carriers were U.S.-built. Like the British, the first U.S. escort carriers were converted merchant vessels (or in the Sangamon class, converted military oilers). The Bogue-class carriers were based on the hull of the Type C3 cargo ship. The last 69 escort carriers of the Casablanca and Commencement Bay classes were purpose-designed and purpose-built carriers drawing on the experience gained with the previous classes.
Originally developed at the behest of the United Kingdom to operate as part of a North Atlantic convoy escort, rather than as part of a naval strike force, many of the escort carriers produced were assigned to the Royal Navy for the duration of the war under the Lend-Lease act. They supplemented and then replaced the converted merchant aircraft carriers that were put into service by the British and Dutch as an emergency measure until dedicated escort carriers became available. As convoy escorts, they were used by the Royal Navy to provide air scouting, to ward off enemy long-range scouting aircraft and, increasingly, to spot and hunt submarines. Often additional escort carriers joined convoys, not as fighting ships but as transporters, ferrying aircraft from the U.S. to Britain; twice as many aircraft could be carried by storing aircraft on the flight deck as well as in the hangar.
The ships sent to the Royal Navy were slightly modified, partly to suit the traditions of that service. Among other things the ice-cream making machines were removed, since they were considered unnecessary luxuries on ships which provided a grog ration. The heavy duty washing machines of the laundry room were removed, since "all a British sailor needs to keep clean is a bucket and a bar of soap" (quoted from Warrilow).
Other modifications were due to the need for a completely enclosed hangar when operating in the North Atlantic and in support of the Arctic convoys.
Of the U.S.-built escort carriers, Nabob and Puncher sailed on launch from Tacoma to the port of Vancouver, where they were lightly refitted to Canadian standard and then crewed by Royal Canadian Navy personnel. Both ships served in the North Atlantic while nominally under the British fleet and carrying aircraft of the Fleet Air Arm.
The attack on Pearl Harbor created an urgent need for aircraft carriers, so some T3 tankers were converted to escort carriers; USS Suwannee is an example of a T3 tanker hull, AO-33, rebuilt as an escort carrier. The T3's size and speed made it a useful basis for an escort carrier. There were two classes of T3-hull carriers: the Sangamon class and the Commencement Bay class.
The U.S. discovered their own uses for escort carriers. In the North Atlantic, they supplemented the escorting destroyers by providing air support for anti-submarine warfare. One of these escort carriers, USS Guadalcanal, was instrumental in the capture of U-505 off North Africa in 1944.
In the Pacific theater, escort carriers lacked the speed to sail with fast carrier attack groups, so were often tasked to escort the landing ships and troop carriers during the island-hopping campaign. In this role they provided air cover for the troopships and flew the first wave of attacks on beach fortifications in amphibious landing operations. On occasion, they even escorted the large carriers, serving as emergency airstrips and providing fighter cover for their larger sisters while these were busy readying or refueling their own planes. They also transported aircraft and spare parts from the U.S. to remote island airstrips.
A battle in which escort carriers played a major role was the Battle off Samar in the Philippines on 25 October 1944. The Japanese lured Admiral William Halsey, Jr. into chasing a decoy fleet with his powerful 3rd Fleet. This left about 450 aircraft from 16 small and slow escort carriers in three task units ("Taffies"), armed primarily to bomb ground forces, and their protective screen of destroyers and slower destroyer escorts to protect undefended troop and supply ships in Leyte Gulf. No Japanese threat was believed to be in the area, but a force of four battleships, including the formidable Yamato, eight cruisers, and 11 destroyers, appeared, sailing towards Leyte Gulf. Only the Taffies were in the way of the Japanese attack.
The slow carriers could not outrun 30-knot (35 mph; 56 km/h) cruisers. They launched their aircraft and maneuvered to avoid shellfire, helped by smoke screens, for over an hour. "Taffy 3" bore the brunt of the fight. The Taffy ships took dozens of hits, mostly from armor-piercing rounds that passed right through their thin, unarmored hulls without exploding. USS Gambier Bay, sunk in this action, was the only U.S. carrier lost to enemy surface gunfire in the war; the Japanese concentration of fire on this one carrier assisted the escape of the others. The carriers' only substantial armament—aside from their aircraft—was a single 5-inch (127 mm) dual-purpose gun mounted on the stern, but the pursuing Japanese cruisers closed to within range of these guns. One of the guns damaged the burning Japanese heavy cruiser Chōkai, and a subsequent bomb dropped by an aircraft hit the cruiser's forward machinery room, leaving her dead in the water. A kamikaze attack sank USS St Lo; kamikaze aircraft attacking other ships were shot down. Ultimately the superior Japanese surface force withdrew, believing they were confronted by a stronger force than was the case. Most of the damage to the Japanese fleet was inflicted by torpedoes fired by destroyers, and bombs from the carriers' aircraft.
The U.S. Navy lost a similar number of ships and more men than in the battles of the Coral Sea and Midway combined (though major fleet carriers were lost in the other battles).
Many escort carriers were Lend-Leased to the United Kingdom; this list specifies the breakdown in service to each navy.
In addition, six escort carriers were converted from other types by the British during the war.
The table below lists escort carriers and similar ships performing the same missions. The first four were built as early fleet aircraft carriers. Merchant aircraft carriers (MAC) carried trade cargo in addition to operating aircraft. Aircraft transports carried larger numbers of planes by eliminating accommodation for operating personnel and storage of fuel and ammunition.
The years following World War II brought many revolutionary new technologies to naval aviation, most notably the helicopter and the jet fighter, and with this a complete rethinking of its strategies and ships' tasks. Although several of the latest Commencement Bay-class CVE were deployed as floating airfields during the Korean War, the main reasons for the development of the escort carrier had disappeared or could be dealt with better by newer weapons. The emergence of the helicopter meant that helicopter-deck equipped frigates could now take over the CVE's role in a convoy while also performing their usual role as submarine hunters. Ship-mounted guided missile launchers took over much of the aircraft protection role, and in-flight refueling eliminated the need for floating stopover points for transport or patrol aircraft. Consequently, after the Commencement Bay class, no new escort carriers were designed, and with every downsizing of the navy, the CVEs were the first to be mothballed.
Several escort carriers were pressed back into service during the first years of the Vietnam War because of their ability to carry large numbers of aircraft. Redesignated AKV (air transport auxiliary), they were manned by a civilian crew and used to ferry whole aircraft and spare parts from the U.S. to Army, Air Force and Marine bases in South Vietnam. However, CVEs were useful in this role only for a limited period. Once all major aircraft were equipped with refueling probes, it became much easier to fly the aircraft directly to its base instead of shipping it.
The last chapter in the history of escort carriers consisted of two conversions: as an experiment, USS Thetis Bay was converted from an aircraft carrier into a pure helicopter carrier (CVHA-1) and used by the Marine Corps to carry assault helicopters for the first wave of amphibious warfare operations. Later, Thetis Bay became a full amphibious assault ship (LPH-6). Although in service only from 1955 (the year of her conversion) to 1964, the experience gained in her training exercises greatly influenced the design of today's amphibious assault ships.
In the second conversion, in 1961, USS Gilbert Islands had all her aircraft handling equipment removed and four tall radio antennas installed on her long, flat deck. In lieu of aircraft, the hangar deck now had 24 military radio transmitter trucks bolted to its floor. Rechristened USS Annapolis, the ship was used as a communication relay ship and served dutifully through the Vietnam War as a floating radio station, relaying transmissions between the forces on the ground and the command centers back home. Like Thetis Bay, the experience gained before Annapolis was stricken in 1976 helped develop today's purpose-built amphibious command ships of the Blue Ridge class.
Unlike almost all other major classes of ships and patrol boats from World War II, most of which can be found in a museum or port, no escort carrier or American light carrier has survived; all were destroyed during the war or broken up in the following decades. The Dictionary of American Naval Fighting Ships records that the last former escort carrier remaining in naval service—USS Annapolis—was sold for scrapping 19 December 1979. The last American light carrier (the escort carrier's faster sister type) was USS Cabot, which was broken up in 2002 after a decade-long attempt to preserve the vessel.
Later in the Cold War the U.S.-designed Sea Control Ship was intended to serve a similar role; while none were actually built, the Spanish aircraft carrier Principe de Asturias and the Thai HTMS Chakri Naruebet are based on the concept.
For complete lists see:
Media related to Escort carriers at Wikimedia Commons | [
{
"paragraph_id": 0,
"text": "The escort carrier or escort aircraft carrier (U.S. hull classification symbol CVE), also called a \"jeep carrier\" or \"baby flattop\" in the United States Navy (USN) or \"Woolworth Carrier\" by the Royal Navy, was a small and slow type of aircraft carrier used by the Royal Navy, the Royal Canadian Navy, the United States Navy, the Imperial Japanese Navy and Imperial Japanese Army Air Force in World War II. They were typically half the length and a third the displacement of larger fleet carriers, slower, more-lightly armed and armored, and carried fewer planes. Escort carriers were most often built upon a commercial ship hull, so they were cheaper and could be built quickly. This was their principal advantage as they could be completed in greater numbers as a stop-gap when fleet carriers were scarce. However, the lack of protection made escort carriers particularly vulnerable, and several were sunk with great loss of life. The light carrier (U.S. hull classification symbol CVL) was a similar concept to the escort carrier in most respects, but was fast enough to operate alongside fleet carriers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Escort carriers were too slow to keep up with the main forces consisting of fleet carriers, battleships, and cruisers. Instead, they were used to escort merchant ship convoys, defending them from enemy threats such as submarines and planes. In the invasions of mainland Europe and Pacific islands, escort carriers provided air support to ground forces during amphibious operations. Escort carriers also served as backup aircraft transports for fleet carriers, and ferried aircraft of all military services to points of delivery.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the Battle of the Atlantic, escort carriers were used to protect convoys against U-boats. Initially escort carriers accompanied the merchant ships and helped to fend off attacks from aircraft and submarines. As numbers increased later in the war, escort carriers also formed part of hunter-killer groups that sought out submarines instead of being attached to a particular convoy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the Pacific theater, CVEs provided air support of ground troops in the Battle of Leyte Gulf. They lacked the speed and weapons to counter enemy fleets, relying on the protection of a Fast Carrier Task Force. However, at the Battle off Samar, one U.S. task force of escort carriers and destroyers managed to successfully defend itself against a much larger Japanese force of battleships and cruisers. The Japanese met a furious defense of carrier aircraft, screening destroyers, and destroyer escorts.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Of the 151 aircraft carriers built in the U.S. during World War II, 122 were escort carriers, though no examples survive. The Casablanca class was the most numerous class of aircraft carrier, with 50 launched. Second was the Bogue class, with 45 launched.",
"title": ""
},
{
"paragraph_id": 5,
"text": "In the early 1920s, the Washington Naval Treaty imposed limits on the maximum size and total tonnage of aircraft carriers for the five main naval powers. Later treaties largely kept these provisions. As a result, construction between the World Wars had been insufficient to meet operational needs for aircraft carriers as World War II expanded from Europe. Too few fleet carriers were available to simultaneously transport aircraft to distant bases, support amphibious invasions, offer carrier landing training for replacement pilots, conduct anti-submarine patrols, and provide defensive air cover for deployed battleships and cruisers. The foregoing mission requirements limited use of fleet carriers' unique offensive strike capability demonstrated at the Battle of Taranto and the Attack on Pearl Harbor. Conversion of existing ships (and hulls under construction for other purposes) provided additional aircraft carriers until new construction became available.",
"title": "Development"
},
{
"paragraph_id": 6,
"text": "Conversions of cruisers and passenger liners with speed similar to fleet carriers were identified by the U.S. as \"light aircraft carriers\" (hull classification symbol CVL) able to operate at battle fleet speeds. Slower conversions were classified as \"escort carriers\" and were considered naval auxiliaries suitable for pilot training and transport of aircraft to distant bases.",
"title": "Development"
},
{
"paragraph_id": 7,
"text": "The Royal Navy had recognized a need for carriers to defend its trade routes in the 1930s. While designs had been prepared for \"trade protection carriers\" and five suitable liners identified for conversion, nothing further was done mostly because there were insufficient aircraft for even the fleet carriers under construction at the time. However, by 1940 the need had become urgent and HMS Audacity was converted from the captured German merchant ship MV Hannover and commissioned in July 1941. For defense from German aircraft, convoys were supplied first with fighter catapult ships and CAM ships that could carry a single (disposable) fighter. In the interim, before escort carriers could be supplied, they also brought in merchant aircraft carriers that could operate four aircraft.",
"title": "Development"
},
{
"paragraph_id": 8,
"text": "In 1940, U.S. Admiral William Halsey recommended construction of naval auxiliaries for pilot training. In early 1941 the British asked the U.S. to build on their behalf six carriers of an improved Audacity design, but the U.S. had already begun their own escort carrier. On 1 February 1941, the United States Chief of Naval Operations gave priority to construction of naval auxiliaries for aircraft transport. U.S. ships built to meet these needs were initially referred to as auxiliary aircraft escort vessels (AVG) in February 1942 and then auxiliary aircraft carrier (ACV) on 5 August 1942. The first U.S. example of the type was USS Long Island. Operation Torch and North Atlantic anti-submarine warfare proved these ships capable aircraft carriers for ship formations moving at the speed of trade or amphibious invasion convoys. U.S. classification revision to escort aircraft carrier (CVE) on 15 July 1943 reflected upgraded status from auxiliary to combatant. They were informally known as \"Jeep carriers\" or \"baby flattops\". It was quickly found that the escort carriers had better performance than light carriers, which tended to pitch badly in moderate to high seas. The Commencement Bay class was designed to incorporate the best features of American CVLs on a more stable hull with a less expensive propulsion system.",
"title": "Development"
},
{
"paragraph_id": 9,
"text": "Among their crews, CVE was sarcastically said to stand for \"Combustible, Vulnerable, and Expendable\", and the CVEs were called “Kaiser coffins\" in honor of Casablanca-class manufacturer Henry J. Kaiser. Magazine protection was minimal in comparison to fleet aircraft carriers. HMS Avenger was sunk within minutes by a single torpedo, and HMS Dasher exploded from undetermined causes with very heavy loss of life. Three escort carriers—USS St. Lo, Ommaney Bay and Bismarck Sea—were destroyed by kamikazes, the largest ships to meet such a fate.",
"title": "Development"
},
{
"paragraph_id": 10,
"text": "Allied escort carriers were typically around 500 ft (150 m) long, not much more than half the length of the almost 900 ft (270 m) fleet carriers of the same era, but were less than 1⁄3 of the weight. A typical escort carrier displaced about 8,000 long tons (8,100 t), as compared to almost 30,000 long tons (30,000 t) for a full-size fleet carrier. The aircraft hangar typically ran only 1⁄3 of the way under the flight deck and housed a combination of 24–30 fighters and bombers organized into one single \"composite squadron\". By comparison, a late Essex-class fleet carrier of the period could carry 103 aircraft organized into separate fighter, bomber and torpedo-bomber squadrons.",
"title": "Development"
},
{
"paragraph_id": 11,
"text": "The island (superstructure) on these ships was small and cramped, and located well forward of the funnels (unlike on a normal-sized carrier, where the funnels were integrated into the island). Although the first escort carriers had only one aircraft elevator, having two elevators (one fore and one aft), along with the single aircraft catapult, quickly became standard. The carriers employed the same system of arresting cables and tail hooks as on the big carriers, and procedures for launch and recovery were the same as well.",
"title": "Development"
},
{
"paragraph_id": 12,
"text": "The crew size was less than 1⁄3 of that of a large carrier, but this was still a bigger complement than most naval vessels. U.S. escort carriers were large enough to have facilities such as a permanent canteen or snack bar, called a gedunk bar, in addition to the mess. The bar was open for longer hours than the mess and sold several flavors of ice cream, along with cigarettes and other consumables. There were also several vending machines available on board.",
"title": "Development"
},
{
"paragraph_id": 13,
"text": "In all, 130 Allied escort carriers were launched or converted during the war. Of these, six were British conversions of merchant ships: HMS Audacity, Nairana, Campania, Activity, Pretoria Castle and Vindex. The remaining escort carriers were U.S.-built. Like the British, the first U.S. escort carriers were converted merchant vessels (or in the Sangamon class, converted military oilers). The Bogue-class carriers were based on the hull of the Type C3 cargo ship. The last 69 escort carriers of the Casablanca and Commencement Bay classes were purpose-designed and purpose-built carriers drawing on the experience gained with the previous classes.",
"title": "Development"
},
{
"paragraph_id": 14,
"text": "Originally developed at the behest of the United Kingdom to operate as part of a North Atlantic convoy escort, rather than as part of a naval strike force, many of the escort carriers produced were assigned to the Royal Navy for the duration of the war under the Lend-Lease act. They supplemented and then replaced the converted merchant aircraft carriers that were put into service by the British and Dutch as an emergency measure until dedicated escort carriers became available. As convoy escorts, they were used by the Royal Navy to provide air scouting, to ward off enemy long-range scouting aircraft and, increasingly, to spot and hunt submarines. Often additional escort carriers joined convoys, not as fighting ships but as transporters, ferrying aircraft from the U.S. to Britain; twice as many aircraft could be carried by storing aircraft on the flight deck as well as in the hangar.",
"title": "Royal Navy"
},
{
"paragraph_id": 15,
"text": "The ships sent to the Royal Navy were slightly modified, partly to suit the traditions of that service. Among other things the ice-cream making machines were removed, since they were considered unnecessary luxuries on ships which provided a grog ration. The heavy duty washing machines of the laundry room were removed, since \"all a British sailor needs to keep clean is a bucket and a bar of soap\" (quoted from Warrilow).",
"title": "Royal Navy"
},
{
"paragraph_id": 16,
"text": "Other modifications were due to the need for a completely enclosed hangar when operating in the North Atlantic and in support of the Arctic convoys.",
"title": "Royal Navy"
},
{
"paragraph_id": 17,
"text": "Of the U.S.-built escort carriers, Nabob and Puncher sailed on launch from Tacoma to the port of Vancouver, where they were lightly refitted to Canadian standard and then crewed by Royal Canadian Navy personnel. Both ships served in the North Atlantic while nominally under the British fleet and carrying aircraft of the Fleet Air Arm.",
"title": "Royal Canadian Navy"
},
{
"paragraph_id": 18,
"text": "The attack on Pearl Harbor brought up an urgent need for aircraft carriers, so some T3 tankers were converted to escort carriers; USS Suwannee is an example of how a T3 tanker hull, AO-33, was rebuilt to be an escort carrier. The T3 tanker size and speed made the T3 a useful escort carrier. There were two classes of T3 hull carriers: Sangamon class and Commencement Bay class.",
"title": "U.S. Navy service"
},
{
"paragraph_id": 19,
"text": "The U.S. discovered their own uses for escort carriers. In the North Atlantic, they supplemented the escorting destroyers by providing air support for anti-submarine warfare. One of these escort carriers, USS Guadalcanal, was instrumental in the capture of U-505 off North Africa in 1944.",
"title": "U.S. Navy service"
},
{
"paragraph_id": 20,
"text": "In the Pacific theater, escort carriers lacked the speed to sail with fast carrier attack groups, so were often tasked to escort the landing ships and troop carriers during the island-hopping campaign. In this role they provided air cover for the troopships and flew the first wave of attacks on beach fortifications in amphibious landing operations. On occasion, they even escorted the large carriers, serving as emergency airstrips and providing fighter cover for their larger sisters while these were busy readying or refueling their own planes. They also transported aircraft and spare parts from the U.S. to remote island airstrips.",
"title": "U.S. Navy service"
},
{
"paragraph_id": 21,
"text": "A battle in which escort carriers played a major role was the Battle off Samar in the Philippines on 25 October 1944. The Japanese lured Admiral William Halsey, Jr. into chasing a decoy fleet with his powerful 3rd Fleet. This left about 450 aircraft from 16 small and slow escort carriers in three task units (\"Taffies\"), armed primarily to bomb ground forces, and their protective screen of destroyers and slower destroyer escorts to protect undefended troop and supply ships in Leyte Gulf. No Japanese threat was believed to be in the area, but a force of four battleships, including the formidable Yamato, eight cruisers, and 11 destroyers, appeared, sailing towards Leyte Gulf. Only the Taffies were in the way of the Japanese attack.",
"title": "U.S. Navy service"
},
{
"paragraph_id": 22,
"text": "The slow carriers could not outrun 30-knot (35 mph; 56 km/h) cruisers. They launched their aircraft and maneuvered to avoid shellfire, helped by smoke screens, for over an hour. \"Taffy 3\" bore the brunt of the fight. The Taffy ships took dozens of hits, mostly from armor-piercing rounds that passed right through their thin, unarmored hulls without exploding. USS Gambier Bay, sunk in this action, was the only U.S. carrier lost to enemy surface gunfire in the war; the Japanese concentration of fire on this one carrier assisted the escape of the others. The carriers' only substantial armament—aside from their aircraft—was a single 5-inch (127 mm) dual-purpose gun mounted on the stern, but the pursuing Japanese cruisers closed to within range of these guns. One of the guns damaged the burning Japanese heavy cruiser Chōkai, and a subsequent bomb dropped by an aircraft hit the cruiser's forward machinery room, leaving her dead in the water. A kamikaze attack sank USS St Lo; kamikaze aircraft attacking other ships were shot down. Ultimately the superior Japanese surface force withdrew, believing they were confronted by a stronger force than was the case. Most of the damage to the Japanese fleet was inflicted by torpedoes fired by destroyers, and bombs from the carriers' aircraft.",
"title": "U.S. Navy service"
},
{
"paragraph_id": 23,
"text": "The U.S. Navy lost a similar number of ships and more men than in the battles of the Coral Sea and Midway combined (though major fleet carriers were lost in the other battles).",
"title": "U.S. Navy service"
},
{
"paragraph_id": 24,
"text": "Many escort carriers were Lend-Leased to the United Kingdom, this list specifies the breakdown in service to each navy.",
"title": "The ships"
},
{
"paragraph_id": 25,
"text": "In addition, six escort carriers were converted from other types by the British during the war.",
"title": "The ships"
},
{
"paragraph_id": 26,
"text": "The table below lists escort carriers and similar ships performing the same missions. The first four were built as early fleet aircraft carriers. Merchant aircraft carriers (MAC) carried trade cargo in addition to operating aircraft. Aircraft transports carried larger numbers of planes by eliminating accommodation for operating personnel and storage of fuel and ammunition.",
"title": "The ships"
},
{
"paragraph_id": 27,
"text": "The years following World War II brought many revolutionary new technologies to naval aviation, most notably the helicopter and the jet fighter, and with this a complete rethinking of its strategies and ships' tasks. Although several of the latest Commencement Bay-class CVE were deployed as floating airfields during the Korean War, the main reasons for the development of the escort carrier had disappeared or could be dealt with better by newer weapons. The emergence of the helicopter meant that helicopter-deck equipped frigates could now take over the CVE's role in a convoy while also performing their usual role as submarine hunters. Ship-mounted guided missile launchers took over much of the aircraft protection role, and in-flight refueling eliminated the need for floating stopover points for transport or patrol aircraft. Consequently, after the Commencement Bay class, no new escort carriers were designed, and with every downsizing of the navy, the CVEs were the first to be mothballed.",
"title": "Post-World War II"
},
{
"paragraph_id": 28,
"text": "Several escort carriers were pressed back into service during the first years of the Vietnam War because of their ability to carry large numbers of aircraft. Redesignated AKV (air transport auxiliary), they were manned by a civilian crew and used to ferry whole aircraft and spare parts from the U.S. to Army, Air Force and Marine bases in South Vietnam. However, CVEs were useful in this role only for a limited period. Once all major aircraft were equipped with refueling probes, it became much easier to fly the aircraft directly to its base instead of shipping it.",
"title": "Post-World War II"
},
{
"paragraph_id": 29,
"text": "The last chapter in the history of escort carriers consisted of two conversions: as an experiment, USS Thetis Bay was converted from an aircraft carrier into a pure helicopter carrier (CVHA-1) and used by the Marine Corps to carry assault helicopters for the first wave of amphibious warfare operations. Later, Thetis Bay became a full amphibious assault ship (LHP-6). Although in service only from 1955 (the year of her conversion) to 1964, the experience gained in her training exercises greatly influenced the design of today's amphibious assault ships.",
"title": "Post-World War II"
},
{
"paragraph_id": 30,
"text": "In the second conversion, in 1961, USS Gilbert Islands had all her aircraft handling equipment removed and four tall radio antennas installed on her long, flat deck. In lieu of aircraft, the hangar deck now had 24 military radio transmitter trucks bolted to its floor. Rechristened USS Annapolis, the ship was used as a communication relay ship and served dutifully through the Vietnam War as a floating radio station, relaying transmissions between the forces on the ground and the command centers back home. Like Thetis Bay, the experience gained before Annapolis was stricken in 1976 helped develop today's purpose-built amphibious command ships of the Blue Ridge class.",
"title": "Post-World War II"
},
{
"paragraph_id": 31,
"text": "Unlike almost all other major classes of ships and patrol boats from World War II, most of which can be found in a museum or port, no escort carrier or American light carrier has survived; all were destroyed during the war or broken up in the following decades. The Dictionary of American Naval Fighting Ships records that the last former escort carrier remaining in naval service—USS Annapolis—was sold for scrapping 19 December 1979. The last American light carrier (the escort carrier's faster sister type) was USS Cabot, which was broken up in 2002 after a decade-long attempt to preserve the vessel.",
"title": "Post-World War II"
},
{
"paragraph_id": 32,
"text": "Later in the Cold War the U.S.-designed Sea Control Ship was intended to serve a similar role; while none were actually built, the Spanish aircraft carrier Principe de Asturias and the Thai HTMS Chakri Naruebet are based on the concept.",
"title": "Post-World War II"
},
{
"paragraph_id": 33,
"text": "For complete lists see:",
"title": "See also"
},
{
"paragraph_id": 34,
"text": "Media related to Escort carriers at Wikimedia Commons",
"title": "External links"
}
]
| The escort carrier or escort aircraft carrier, also called a "jeep carrier" or "baby flattop" in the United States Navy (USN) or "Woolworth Carrier" by the Royal Navy, was a small and slow type of aircraft carrier used by the Royal Navy, the Royal Canadian Navy, the United States Navy, the Imperial Japanese Navy and Imperial Japanese Army Air Force in World War II. They were typically half the length and a third the displacement of larger fleet carriers, slower, more-lightly armed and armored, and carried fewer planes. Escort carriers were most often built upon a commercial ship hull, so they were cheaper and could be built quickly. This was their principal advantage as they could be completed in greater numbers as a stop-gap when fleet carriers were scarce. However, the lack of protection made escort carriers particularly vulnerable, and several were sunk with great loss of life. The light carrier was a similar concept to the escort carrier in most respects, but was fast enough to operate alongside fleet carriers. Escort carriers were too slow to keep up with the main forces consisting of fleet carriers, battleships, and cruisers. Instead, they were used to escort merchant ship convoys, defending them from enemy threats such as submarines and planes. In the invasions of mainland Europe and Pacific islands, escort carriers provided air support to ground forces during amphibious operations. Escort carriers also served as backup aircraft transports for fleet carriers, and ferried aircraft of all military services to points of delivery. In the Battle of the Atlantic, escort carriers were used to protect convoys against U-boats. Initially escort carriers accompanied the merchant ships and helped to fend off attacks from aircraft and submarines. As numbers increased later in the war, escort carriers also formed part of hunter-killer groups that sought out submarines instead of being attached to a particular convoy. In the Pacific theater, CVEs provided air support of ground troops in the Battle of Leyte Gulf. They lacked the speed and weapons to counter enemy fleets, relying on the protection of a Fast Carrier Task Force. However, at the Battle off Samar, one U.S. task force of escort carriers and destroyers managed to successfully defend itself against a much larger Japanese force of battleships and cruisers. The Japanese met a furious defense of carrier aircraft, screening destroyers, and destroyer escorts. Of the 151 aircraft carriers built in the U.S. during World War II, 122 were escort carriers, though no examples survive. The Casablanca class was the most numerous class of aircraft carrier, with 50 launched. Second was the Bogue class, with 45 launched. | 2001-10-14T07:17:02Z | 2023-11-30T11:49:57Z | [
"Template:HMS",
"Template:USS",
"Template:MV",
"Template:Reflist",
"Template:Short description",
"Template:Frac",
"Template:Main",
"Template:Refbegin",
"Template:Cite report",
"Template:Warship types of the 19th & 20th centuries",
"Template:More citations needed",
"Template:Convert",
"Template:Sfn",
"Template:Cite magazine",
"Template:Use dmy dates",
"Template:Ship",
"Template:Cvt",
"Template:Cite journal",
"Template:Cite web",
"Template:HTMS",
"Template:Commons category-inline",
"Template:See also",
"Template:Cite book",
"Template:Refend",
"Template:GS",
"Template:Unreferenced section",
"Template:Harvnb",
"Template:Sclass"
]
| https://en.wikipedia.org/wiki/Escort_carrier |
9,933 | Extreme sport | Action sports, adventure sports or extreme sports are activities perceived as involving a high degree of risk. These activities often involve speed, height, a high level of physical exertion and highly specialized gear. Extreme tourism overlaps with extreme sport. The two share the same main attraction, "adrenaline rush" caused by an element of risk, and differ mostly in the degree of engagement and professionalism.
The definition of extreme sports is not exact and the origin of the terms is unclear, but it gained popularity in the 1990s when it was picked up by marketing companies to promote the X Games and when the Extreme Sports Channel and Extreme International launched. More recently, the commonly used definition from research is "a competitive (comparison or self-evaluative) activity within which the participant is subjected to natural or unusual physical and mental challenges such as speed, height, depth or natural forces and where fast and accurate cognitive perceptual processing may be required for a successful outcome" by Dr. Rhonda Cohen (2012).
While the use of the term "extreme sport" has spread everywhere to describe a multitude of different activities, exactly which sports are considered 'extreme' is debatable. There are, however, several characteristics common to most extreme sports. While they are not the exclusive domain of youth, extreme sports tend to have a younger-than-average target demographic. Extreme sports are also rarely sanctioned by schools for their physical education curriculum. Extreme sports tend to be more solitary than many of the popular traditional sports (rafting and paintballing are notable exceptions, as they are done in teams).
Activities categorized by media as extreme sports differ from traditional sports due to the higher number of inherently uncontrollable variables. These environmental variables are frequently weather and terrain related, including wind, snow, water and mountains. Because these natural phenomena cannot be controlled, they inevitably affect the outcome of the given activity or event.
In a traditional sporting event, athletes compete against each other under controlled circumstances. While it is possible to create a controlled sporting event such as X Games, there are environmental variables that cannot be held constant for all athletes. Examples include changing snow conditions for snowboarders, rock and ice quality for climbers, and wave height and shape for surfers.
Whilst traditional sporting judgment criteria may be adopted when assessing performance (distance, time, score, etc.), extreme sports performers are often evaluated on more subjective and aesthetic criteria. This results in a tendency to reject unified judging methods, with different sports employing their own ideals and indeed having the ability to evolve their assessment standards with new trends or developments in the sports.
While the exact definition of extreme sport, and what it includes, is debatable, some have attempted to classify extreme sports.
One argument is that to qualify as an "extreme sport", both terms of the expression need to be fulfilled:
Under this definition, being a passenger on a canyon jet boat ride does not fulfill the requirements, as the skill required pertains to the pilot, not the passengers. "Thrill seeking" might be a more suitable qualification than "extreme sport" or "action sport" in these cases.
The origin of the divergence of the term "extreme sports" from "sports" may date to the 1950s, with the appearance of a phrase usually, but wrongly, attributed to Ernest Hemingway. The phrase is:
There are only three sports: bullfighting, motor racing, and mountaineering; all the rest are merely games.
The implication of the phrase was that the word "sport" defined an activity in which one might be killed, other activities being termed "games." The phrase may have been invented by either writer Barnaby Conrad or automotive author Ken Purdy.
The Dangerous Sports Club of Oxford University, England was founded by David Kirke, Chris Baker, Ed Hulton and Alan Weston. They first came to wide public attention by inventing modern day bungee jumping, by making the first modern jumps on 1 April 1979, from the Clifton Suspension Bridge, Bristol, England. They followed the Clifton Bridge effort with a jump from the Golden Gate Bridge in San Francisco, California (including the first female bungee jump by Jane Wilmot), and with a televised leap from the Royal Gorge Suspension Bridge in Colorado, sponsored by and televised on the popular American television program That's Incredible! Bungee jumping was treated as a novelty for a few years, then became a craze for young people, and is now an established industry for thrill seekers.
The Club also pioneered a surrealist form of skiing, holding three events at St. Moritz, Switzerland, in which competitors were required to devise a sculpture mounted on skis and ride it down a mountain. The event reached its limits when the Club arrived in St. Moritz with a London double-decker bus, wanting to send it down the ski slopes, and the Swiss resort managers refused.
Other Club activities included expedition hang gliding from active volcanoes; the launching of giant (20 m) plastic spheres with pilots suspended in the centre (zorbing); microlight flying; and BASE jumping (in the early days of this sport).
In recent decades the term extreme sport was further promoted after the launch of the Extreme Sports Channel and Extremesportscompany.com, and then the creation of the X Games, a multi-sport event developed by ESPN. The first X Games (known as the 1995 Extreme Games) were held in Newport and Providence, Rhode Island, and at Mount Snow, Vermont, in the United States.
Certain extreme sports clearly trace back to other extreme sports, or combinations thereof. For example, windsurfing was conceived as a result of efforts to equip a surfboard with a sailing boat's propulsion system (mast and sail). Kitesurfing on the other hand was conceived by combining the propulsion system of kite buggying (a parafoil) with the bi-directional boards used for wakeboarding. Wakeboarding is in turn derived from snowboarding and waterskiing.
Some contend that the distinction between an extreme sport and a conventional one has as much to do with marketing as with the level of danger involved or the adrenaline generated. For example, rugby union is both dangerous and adrenaline-inducing but is not considered an extreme sport due to its traditional image, and because it does not involve high speed or an intention to perform stunts (the aesthetic criteria mentioned above) and also it does not have changing environmental variables for the athletes.
A feature of such activities in the view of some is their alleged capacity to induce an adrenaline rush in participants. However, the medical view is that the rush or high associated with the activity is not due to adrenaline being released as a response to fear, but due to increased levels of dopamine, endorphins and serotonin because of the high level of physical exertion. Furthermore, recent studies suggest that the link between adrenaline and 'true' extreme sports is tentative. Brymer and Gray's study defined 'true' extreme sports as a leisure or recreation activity where the most likely outcome of a mismanaged accident or mistake was death. This definition was designed to separate the marketing hype from the activity.
Eric Brymer also found that the potential of various extraordinary human experiences, many of which parallel those found in activities such as meditation, was an important part of the extreme sport experience. Those experiences put the participants outside their comfort zone and are often done in conjunction with adventure travel.
Some of the sports have existed for decades and their proponents span generations, some going on to become well known personalities. Rock climbing and ice climbing have spawned publicly recognizable names such as Edmund Hillary, Chris Bonington, Wolfgang Güllich and more recently Joe Simpson. Another example is surfing, invented centuries ago by the inhabitants of Polynesia, which later became the national sport of Hawaii.
Disabled people participate in extreme sports. Nonprofit organizations such as Adaptive Action Sports seek to increase awareness of the participation in action sports by members of the disabled community, as well as increase access to the adaptive technologies that make participation possible and to competitions such as The X Games.
Extreme sports may be perceived as extremely dangerous, conducive to fatalities, near-fatalities and other serious injuries. The perceived risk in an extreme sport has been considered a somewhat necessary part of its appeal, which is partially a result of pressure for athletes to make more money and provide maximum entertainment.
Extreme sports are a sub-category of sports described as any kind of sport "of a character or kind farthest removed from the ordinary or average". These kinds of sports often carry the potential risk of serious and permanent physical injury and even death. However, these sports also have the potential to produce substantial benefits for mental and physical health and provide an opportunity for individuals to engage fully with life.
Extreme sports trigger the release of the hormone adrenaline, which can facilitate the performance of stunts. It is believed that involving mental health patients in extreme sports improves their perspective on, and recognition of, aspects of life.
In outdoor adventure sports, participants experience the intense thrill usually associated with extreme sports. Even though some extreme sports present a higher level of risk, people still choose to embark on the experience for the sake of the adrenaline. According to Sigmund Freud, we have an instinctual 'death wish', a subconscious inbuilt desire to destroy ourselves, which suggests that in the search for the thrill, danger is considered pleasurable.
{
"paragraph_id": 0,
"text": "Action sports, adventure sports or extreme sports are activities perceived as involving a high degree of risk. These activities often involve speed, height, a high level of physical exertion and highly specialized gear. Extreme tourism overlaps with extreme sport. The two share the same main attraction, \"adrenaline rush\" caused by an element of risk, and differ mostly in the degree of engagement and professionalism.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The definition of extreme sports is not exact and the origin of the terms is unclear, but it gained popularity in the 1990s when it was picked up by marketing companies to promote the X Games and when the Extreme Sports Channel and Extreme International launched. More recently, the commonly used definition from research is \"a competitive (comparison or self-evaluative) activity within which the participant is subjected to natural or unusual physical and mental challenges such as speed, height, depth or natural forces and where fast and accurate cognitive perceptual processing may be required for a successful outcome\" by Dr. Rhonda Cohen (2012).",
"title": "Definition"
},
{
"paragraph_id": 2,
"text": "While the use of the term \"extreme sport\" has spread everywhere to describe a multitude of different activities, exactly which sports are considered 'extreme' is debatable. There are, however, several characteristics common to most extreme sports. While they are not the exclusive domain of youth, extreme sports tend to have a younger-than-average target demographic. Extreme sports are also rarely sanctioned by schools for their physical education curriculum. Extreme sports tend to be more solitary than many of the popular traditional sports (rafting and paintballing are notable exceptions, as they are done in teams).",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "Activities categorized by media as extreme sports differ from traditional sports due to the higher number of inherently uncontrollable variables. These environmental variables are frequently weather and terrain related, including wind, snow, water and mountains. Because these natural phenomena cannot be controlled, they inevitably affect the outcome of the given activity or event.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "In a traditional sporting event, athletes compete against each other under controlled circumstances. While it is possible to create a controlled sporting event such as X Games, there are environmental variables that cannot be held constant for all athletes. Examples include changing snow conditions for snowboarders, rock and ice quality for climbers, and wave height and shape for surfers.",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "Whilst traditional sporting judgment criteria may be adopted when assessing performance (distance, time, score, etc.), extreme sports performers are often evaluated on more subjective and aesthetic criteria. This results in a tendency to reject unified judging methods, with different sports employing their own ideals and indeed having the ability to evolve their assessment standards with new trends or developments in the sports.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "While the exact definition and what is included as extreme sport is debatable, some attempted to make classification for extreme sports.",
"title": "Classification"
},
{
"paragraph_id": 7,
"text": "One argument is that to qualify as an \"extreme sport\" both expression terms need to be fulfilled;",
"title": "Classification"
},
{
"paragraph_id": 8,
"text": "Along this definition, being a passenger in a canyon jet boat ride will not fulfill the requirements as the skill required pertains to the pilot, not the passengers. \"Thrill seeking\" might be a more suitable qualification than \"extreme sport\" or \"action sport\" in these cases.",
"title": "Classification"
},
{
"paragraph_id": 9,
"text": "The origin of the divergence of the term \"extreme sports\" from \"sports\" may date to the 1950s in the appearance of a phrase usually, but wrongly, attributed to Ernest Hemingway. The phrase is;",
"title": "History"
},
{
"paragraph_id": 10,
"text": "There are only three sports: bullfighting, motor racing, and mountaineering; all the rest are merely games.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The implication of the phrase was that the word \"sport\" defined an activity in which one might be killed, other activities being termed \"games.\" The phrase may have been invented by either writer Barnaby Conrad or automotive author Ken Purdy.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Dangerous Sports Club of Oxford University, England was founded by David Kirke, Chris Baker, Ed Hulton and Alan Weston. They first came to wide public attention by inventing modern day bungee jumping, by making the first modern jumps on 1 April 1979, from the Clifton Suspension Bridge, Bristol, England. They followed the Clifton Bridge effort with a jump from the Golden Gate Bridge in San Francisco, California (including the first female bungee jump by Jane Wilmot), and with a televised leap from the Royal Gorge Suspension Bridge in Colorado, sponsored by and televised on the popular American television program That's Incredible! Bungee jumping was treated as a novelty for a few years, then became a craze for young people, and is now an established industry for thrill seekers.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The Club also pioneered a surrealist form of skiing, holding three events at St. Moritz, Switzerland, in which competitors were required to devise a sculpture mounted on skis and ride it down a mountain. The event reached its limits when the Club arrived in St. Moritz with a London double-decker bus, wanting to send it down the ski slopes, and the Swiss resort managers refused.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Other Club activities included expedition hang gliding from active volcanoes; the launching of giant (20 m) plastic spheres with pilots suspended in the centre (zorbing); microlight flying; and BASE jumping (in the early days of this sport).",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In recent decades the term extreme sport was further promoted after the Extreme Sports Channel, Extremesportscompany.com launched and then the X Games, a multi-sport event was created and developed by ESPN. The first X Games (known as 1995 Extreme Games) were held in Newport, Providence, Mount Snow, and Vermont in the United States.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Certain extreme sports clearly trace back to other extreme sports, or combinations thereof. For example, windsurfing was conceived as a result of efforts to equip a surfboard with a sailing boat's propulsion system (mast and sail). Kitesurfing on the other hand was conceived by combining the propulsion system of kite buggying (a parafoil) with the bi-directional boards used for wakeboarding. Wakeboarding is in turn derived from snowboarding and waterskiing.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Some contend that the distinction between an extreme sport and a conventional one has as much to do with marketing as with the level of danger involved or the adrenaline generated. For example, rugby union is both dangerous and adrenaline-inducing but is not considered an extreme sport due to its traditional image, and because it does not involve high speed or an intention to perform stunts (the aesthetic criteria mentioned above) and also it does not have changing environmental variables for the athletes.",
"title": "Marketing"
},
{
"paragraph_id": 18,
"text": "A feature of such activities in the view of some is their alleged capacity to induce an adrenaline rush in participants. However, the medical view is that the rush or high associated with the activity is not due to adrenaline being released as a response to fear, but due to increased levels of dopamine, endorphins and serotonin because of the high level of physical exertion. Furthermore, recent studies suggest that the link to adrenaline and 'true' extreme sports is tentative. Brymer and Gray's study defined 'true' extreme sports as a leisure or recreation activity where the most likely outcome of a mismanaged accident or mistake was death. This definition was designed to separate the marketing hype from the activity.",
"title": "Motivation"
},
{
"paragraph_id": 19,
"text": "Eric Brymer also found that the potential of various extraordinary human experiences, many of which parallel those found in activities such as meditation, was an important part of the extreme sport experience. Those experiences put the participants outside their comfort zone and are often done in conjunction with adventure travel.",
"title": "Motivation"
},
{
"paragraph_id": 20,
"text": "Some of the sports have existed for decades and their proponents span generations, some going on to become well known personalities. Rock climbing and ice climbing have spawned publicly recognizable names such as Edmund Hillary, Chris Bonington, Wolfgang Güllich and more recently Joe Simpson. Another example is surfing, invented centuries ago by the inhabitants of Polynesia, it will become national sport of Hawaii.",
"title": "Motivation"
},
{
"paragraph_id": 21,
"text": "Disabled people participate in extreme sports. Nonprofit organizations such as Adaptive Action Sports seek to increase awareness of the participation in action sports by members of the disabled community, as well as increase access to the adaptive technologies that make participation possible and to competitions such as The X Games.",
"title": "Motivation"
},
{
"paragraph_id": 22,
"text": "Extreme sports may be perceived as extremely dangerous, conducive to fatalities, near-fatalities and other serious injuries. The perceived risk in an extreme sport has been considered a somewhat necessary part of its appeal, which is partially a result of pressure for athletes to make more money and provide maximum entertainment.",
"title": "Mortality, health, and thrill"
},
{
"paragraph_id": 23,
"text": "Extreme sports is a sub-category of sports that are described as any kind of sport \"of a character or kind farthest removed from the ordinary or average\". These kinds of sports often carry out the potential risk of serious and permanent physical injury and even death. However, these sports also have the potential to produce drastic benefits on mental and physical health and provide opportunity for individuals to engage fully with life.",
"title": "Mortality, health, and thrill"
},
{
"paragraph_id": 24,
"text": "Extreme sports trigger the release of the hormone adrenaline, which can facilitate performance of stunts. It is believed that the implementation of extreme sports on mental health patients improves their perspective and recognition of aspects of life.",
"title": "Mortality, health, and thrill"
},
{
"paragraph_id": 25,
"text": "In outdoor adventure sports, participants get to experience the emotion of intense thrill, usually associated with the extreme sports. Even though some extreme sports present a higher level of risk, people still choose to embark in the experience of extreme sports for the sake of the adrenaline. According to Sigmund Freud, we have an instinctual 'death wish', which is a subconscious inbuilt desire to destroy ourselves, proving that in the seek for the thrill, danger is considered pleasurable.",
"title": "Mortality, health, and thrill"
}
]
| Action sports, adventure sports or extreme sports are activities perceived as involving a high degree of risk. These activities often involve speed, height, a high level of physical exertion and highly specialized gear. Extreme tourism overlaps with extreme sport. The two share the same main attraction, "adrenaline rush" caused by an element of risk, and differ mostly in the degree of engagement and professionalism. | 2001-10-15T11:17:50Z | 2023-12-27T10:54:28Z | [
"Template:Promotional language",
"Template:Reflist",
"Template:About",
"Template:Original research",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Div col",
"Template:Cite journal",
"Template:Cite news",
"Template:Dead link",
"Template:Commons category",
"Template:Wiktionary-inline",
"Template:Authority control",
"Template:Div col end",
"Template:Webarchive",
"Template:Citation needed",
"Template:See also",
"Template:Cite book",
"Template:Extreme sports",
"Template:Short description",
"Template:Redirect"
]
| https://en.wikipedia.org/wiki/Extreme_sport |
9,935 | Eadgyth | Edith of England, also spelt Eadgyth or Ædgyth (Old English: Ēadgȳð, German: Edgitha; 910–946), a member of the House of Wessex, was a German queen from 936, by her marriage to King Otto I.
Edith was born to the reigning English king Edward the Elder by his second wife, Ælfflæd, and hence was a granddaughter of King Alfred the Great. She had an older sister, Eadgifu. She apparently spent her early years near Winchester in Wessex, moving about frequently with the court, and may have spent her later youth, with her mother, living for a time at a monastery.
At the request of the East Frankish king Henry the Fowler, who wished to stake a claim to equality and to seal the alliance between the two Saxon kingdoms, her half-brother King Æthelstan sent his sisters Edith and Edgiva to Germany. Henry's eldest son and heir to the throne Otto was instructed to choose whichever one pleased him best. Otto chose Edith, according to Hrotsvitha a woman "of pure noble countenance, graceful character and truly royal appearance", and married her in 930. In 929 Otto had granted the city of Magdeburg to Edith as dower. She had a particular love for the town and often lived there.
In 936 Henry the Fowler died and his eldest son Otto, Edith's husband, was crowned king at Aachen Cathedral. A surviving report of the ceremony by the medieval chronicler Widukind of Corvey makes no mention of his wife having been crowned at this point, but according to Bishop Thietmar of Merseburg's chronicle, Eadgyth was nevertheless anointed as queen, albeit in a separate ceremony.
As queen consort, Edith undertook the usual state duties of a "First Lady": when she turns up in the records it is generally in connection with gifts to the state's favoured monasteries or memorials to holy women and saints. In this respect she seems to have been more diligent than her now widowed and subsequently sainted mother-in-law, Queen Matilda, whose own charitable activities only achieve a single recorded mention from the period of Eadgyth's time as queen. There was probably rivalry between the Benedictine Monastery of St Maurice founded at Magdeburg by Otto and Eadgyth in 937, a year after coming to the throne, and Matilda's foundation Quedlinburg Abbey, intended by her as a memorial to her husband, the late King Henry. Edith accompanied her husband on his travels, though not during battles. While Otto fought against the rebellious dukes Eberhard of Franconia and Gilbert of Lorraine in 939, she spent the hostilities at Lorsch Abbey. In 941 she effected a reconciliation between her husband and his mother.
Like her brother, Æthelstan, Edith was devoted to the cult of their ancestor Saint Oswald of Northumbria and was instrumental in introducing this cult into Germany after her marriage to the emperor. Her lasting influence may have caused certain monasteries and churches in the Duchy of Saxony to be dedicated to this saint.
Eadgyth's death in 946, at around the age of thirty-six, was unexpected. Otto apparently mourned the loss of a beloved spouse. He married Adelaide of Italy in 951.
Edith and Otto's children were:
both buried in St. Alban's Abbey, Mainz.
Edith was initially buried in the St Maurice monastery; since the 16th century her tomb has been located in Magdeburg Cathedral. The tomb was long regarded as a cenotaph, but a lead coffin inside a stone sarcophagus with her name on it was found and opened in 2008 by archaeologists during work on the building. An inscription recorded that it was the body of Eadgyth, reburied in 1510. The fragmented and incomplete bones were examined in 2009, then brought to Bristol, England, for tests in 2010.
The investigations at Bristol, applying isotope tests on tooth enamel, checked whether she was born and brought up in Wessex and Mercia, as written history indicated. Testing on the bones revealed that they are the remains of Eadgyth, based on a study of the enamel of the teeth in her upper jaw. Testing of the enamel revealed that the individual entombed at Magdeburg had spent time as a youth in the chalky uplands of Wessex. The bones are the oldest found of a member of English royalty.
Following the tests the bones were re-interred in a new titanium coffin in her tomb at Magdeburg Cathedral on 22 October 2010. | [
{
"paragraph_id": 0,
"text": "Edith of England, also spelt Eadgyth or Ædgyth (Old English: Ēadgȳð, German: Edgitha; 910–946), a member of the House of Wessex, was a German queen from 936, by her marriage to King Otto I.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Edith was born to the reigning English king Edward the Elder by his second wife, Ælfflæd, and hence was a granddaughter of King Alfred the Great. She had an older sister, Eadgifu. She apparently spent her early years near Winchester in Wessex, moving about frequently with the court, and may have spent her later youth, with her mother, living for a time at a monastery.",
"title": "Life"
},
{
"paragraph_id": 2,
"text": "At the request of the East Frankish king Henry the Fowler, who wished to stake a claim to equality and to seal the alliance between the two Saxon kingdoms, her half-brother King Æthelstan sent his sisters Edith and Edgiva to Germany. Henry's eldest son and heir to the throne Otto was instructed to choose whichever one pleased him best. Otto chose Edith, according to Hrotsvitha a woman \"of pure noble countenance, graceful character and truly royal appearance\", and married her in 930. In 929 King Otto I had granted the city of Magdeburg to his Edith as dower. She had a particular love for the town and often lived there.",
"title": "Life"
},
{
"paragraph_id": 3,
"text": "In 936 Henry the Fowler died and his eldest son Otto, Edith's husband, was crowned king at Aachen Cathedral. A surviving report of the ceremony by the medieval chronicler Widukind of Corvey makes no mention of his wife having been crowned at this point, but according to Bishop Thietmar of Merseburg's chronicle, Eadgyth was nevertheless anointed as queen, albeit in a separate ceremony.",
"title": "Life"
},
{
"paragraph_id": 4,
"text": "As queen consort, Edith undertook the usual state duties of a \"First Lady\": when she turns up in the records it is generally in connection with gifts to the state's favoured monasteries or memorials to holy women and saints. In this respect she seems to have been more diligent than her now widowed and subsequently sainted mother-in-law, Queen Matilda, whose own charitable activities only achieve a single recorded mention from the period of Eadgyth's time as queen. There was probably rivalry between the Benedictine Monastery of St Maurice founded at Magdeburg by Otto and Eadgyth in 937, a year after coming to the throne, and Matilda's foundation Quedlinburg Abbey, intended by her as a memorial to her husband, the late King Henry. Edith accompanied her husband on his travels, though not during battles. While Otto fought against the rebellious dukes Eberhard of Franconia and Gilbert of Lorraine in 939, she spent the hostilities at Lorsch Abbey. In 941 she effected a reconciliation between her husband and his mother.",
"title": "Life"
},
{
"paragraph_id": 5,
"text": "Like her brother, Æthelstan, Edith was devoted to the cult of their ancestor Saint Oswald of Northumbria and was instrumental in introducing this cult into Germany after her marriage to the emperor. Her lasting influence may have caused certain monasteries and churches in the Duchy of Saxony to be dedicated to this saint.",
"title": "Life"
},
{
"paragraph_id": 6,
"text": "Eadgyth's death in 946 at around the age of thirty-six, was unexpected. Otto apparently mourned the loss of a beloved spouse. He married Adelaide of Italy in 951.",
"title": "Life"
},
{
"paragraph_id": 7,
"text": "Edith and Otto's children were:",
"title": "Children"
},
{
"paragraph_id": 8,
"text": "both buried in St. Alban's Abbey, Mainz.",
"title": "Children"
},
{
"paragraph_id": 9,
"text": "Initially buried in the St Maurice monastery, Edith's tomb since the 16th century has been located in Magdeburg Cathedral. Long regarded as a cenotaph, a lead coffin inside a stone sarcophagus with her name on it was found and opened in 2008 by archaeologists during work on the building. An inscription recorded that it was the body of Eadgyth, reburied in 1510. The fragmented and incomplete bones were examined in 2009, then brought to Bristol, England, for tests in 2010.",
"title": "Tomb"
},
{
"paragraph_id": 10,
"text": "The investigations at Bristol, applying isotope tests on tooth enamel, checked whether she was born and brought up in Wessex and Mercia, as written history indicated. Testing on the bones revealed that they are the remains of Eadgyth, from study made of the enamel of the teeth in her upper jaw. Testing of the enamel revealed that the individual entombed at Magdeburg had spent time as a youth in the chalky uplands of Wessex. The bones are the oldest found of a member of English royalty.",
"title": "Tomb"
},
{
"paragraph_id": 11,
"text": "Following the tests the bones were re-interred in a new titanium coffin in her tomb at Magdeburg Cathedral on 22 October 2010.",
"title": "Tomb"
}
]
| Edith of England, also spelt Eadgyth or Ædgyth, a member of the House of Wessex, was a German queen from 936, by her marriage to King Otto I. | 2001-11-15T17:59:09Z | 2023-10-15T14:39:45Z | [
"Template:Short description",
"Template:Other people",
"Template:Use dmy dates",
"Template:S-ttl",
"Template:S-end",
"Template:Infobox royalty",
"Template:Lang-ang",
"Template:Lang-de",
"Template:S-start",
"Template:S-roy",
"Template:Royal consorts of Germany",
"Template:ISBN",
"Template:Cite web",
"Template:PASE",
"Template:S-hou",
"Template:S-bef",
"Template:S-aft",
"Template:Authority control",
"Template:Reflist",
"Template:PD-notice",
"Template:Cite news"
]
| https://en.wikipedia.org/wiki/Eadgyth |
9,937 | Kingdom of Essex | The Kingdom of the East Saxons (Old English: Ēastseaxna rīce; Latin: Regnum Orientalium Saxonum), referred to as the Kingdom of Essex /ˈɛsɪks/, was one of the seven traditional kingdoms of the Anglo-Saxon Heptarchy. It was founded in the 6th century and covered the territory later occupied by the counties of Essex, Middlesex, much of Hertfordshire and (for a short while) west Kent. The last king of Essex was Sigered of Essex, who in 825 ceded the kingdom to Ecgberht, King of Wessex.
The Kingdom of Essex was bounded to the north by the River Stour and the Kingdom of East Anglia, to the south by the River Thames and Kent, to the east lay the North Sea and to the west Mercia. The territory included the remains of two provincial Roman capitals, Colchester and London.
The kingdom included the Middle Saxon Province, which covered the area of the later county of Middlesex and most, if not all, of Hertfordshire. Although the province is only ever recorded as a part of the East Saxon kingdom, charter evidence shows that it was not part of their core territory. In the core area they granted charters freely, but further west they did so while also making reference to their Mercian overlords. At times, Essex was ruled jointly by co-kings, and it is thought that the Middle Saxon Province is likely to have been the domain of one of these co-kings. The links between Middlesex, parts of Hertfordshire and Essex were long reflected in the Diocese of London, re-established in 604 as the East Saxon see, whose boundaries continued to be based on the Kingdom of Essex until the nineteenth century.
The East Saxons also had intermittent control of Surrey. For a brief period in the 8th century, the Kingdom of Essex controlled west Kent.
The modern English county of Essex maintains the historic northern and the southern borders, but only covers the territory east of the River Lea, the other parts being lost to neighbouring Mercia during the 8th century.
In the Tribal Hidage it is listed as containing 7,000 hides.
Although the kingdom of Essex was one of the kingdoms of the Heptarchy, its history is not well documented. It produced relatively few Anglo-Saxon charters and no version of the Anglo-Saxon Chronicle; in fact the only mention in the chronicle concerns Bishop Mellitus. As a result, the kingdom is regarded as comparatively obscure. For most of the kingdom's existence, the Essex king was subservient to an overlord – variously the kings of Kent, East Anglia or Mercia.
Saxon occupation of land that was to form the kingdom had begun by the early 5th century at Mucking and other locations. A large proportion of these original settlers came from Old Saxony. According to British legend (see Historia Brittonum) the territory known later as Essex was ceded by the Celtic Britons to the Saxons following the infamous Treason of the Long Knives, which occurred c. 460 during the reign of High King Vortigern. Della Hooke relates the territory ruled by the kings of Essex to the pre-Roman territory of the Trinovantes. Studies suggest a pattern of typically peaceful co-existence, with the structure of the Romano-British landscape being maintained, and with the Saxon settlers believed to have been in the minority.
The kingdom of Essex grew by the absorption of smaller subkingdoms or Saxon tribal groups. There are a number of suggestions for the location of these subkingdoms including:
Essex emerged as a single kingdom during the 6th century. The dates, names and achievements of the Essex kings, like those of most early rulers in the Heptarchy, remain conjectural. The historical identification of the kings of Essex, including the evidence and a reconstructed genealogy, is discussed extensively by Yorke. The dynasty claimed descent from Woden via Seaxnēat. A genealogy of the Essex royal house was prepared in Wessex in the 9th century. Unfortunately, the surviving copy is somewhat mutilated. At times during the history of the kingdom, several sub-kings within Essex appear to have been able to rule simultaneously. They may have exercised authority over different parts of the kingdom. The first recorded king, according to the East Saxon King List, was Æscwine of Essex, for whom a date of 527 is given for the start of his reign, although there are some difficulties with this date, and Sledd of Essex is listed as the founder of the Essex royal house by other sources. The kings of Essex are notable for their S-nomenclature: nearly all of their names begin with the letter S.
The Essex kings issued coins that echoed those issued by Cunobeline, simultaneously asserting a link to the first-century rulers while emphasising independence from Mercia.
Christianity is thought to have flourished among the Trinovantes in the 4th century (late Roman period); indications include the remains of a probable church at Colchester, dating from some time after 320 AD, shortly after Constantine the Great granted freedom of worship to Christians in 313 AD. Other archaeological evidence includes a chi rho symbol etched on a tile at a site in Wickford, and a gold ring inscribed with a chi rho monogram found at Brentwood. It is not clear to what extent, if any, Christianity persisted by the time of the pagan East Saxon kings in the sixth century.
The earliest English record of the kingdom dates to Bede's Historia ecclesiastica gentis Anglorum, which noted the arrival of Bishop (later Saint) Mellitus in London in 604. Æthelberht (King of Kent and overlord of southern England according to Bede) was in a position to exercise some authority in Essex shortly after 604, when his intervention helped in the conversion of King Saebert of Essex (son of Sledd), his nephew, to Christianity. It was Æthelberht, and not Sæberht, who built and endowed St. Paul's in London, where St. Paul's Cathedral now stands. Bede describes Æthelberht as Sæberht's overlord. After the death of Saebert in AD 616, Mellitus was driven out and the kingdom reverted to paganism. This may have been the result of opposition to Kentish influence in Essex affairs rather than being specifically anti-Christian.
The kingdom reconverted to Christianity under Sigeberht II the Good following a mission by St Cedd who established monasteries at Tilaburg (probably East Tilbury, but possibly West Tilbury) and Ithancester (almost certainly Bradwell-on-Sea). A royal tomb at Prittlewell was discovered and excavated in 2003. Finds included gold foil crosses, suggesting the occupant was Christian. If the occupant was a king, it was probably either Saebert or Sigeberht (murdered AD 653). It is, however, also possible that the occupant was not royal, but simply a wealthy and powerful individual whose identity has gone unrecorded.
Essex reverted to paganism again in 660 with the accession of the pagan king Swithelm of Essex. He converted in 662, but died in 664. He was succeeded by his two sons, Sigehere and Sæbbi. A plague the same year caused Sigehere and his people to recant their Christianity, and Essex reverted to paganism a third time. This rebellion was suppressed by Wulfhere of Mercia, who established himself as overlord. Bede describes Sigehere and Sæbbi as "rulers […] under Wulfhere, king of the Mercians". Wulfhere sent Jaruman, the bishop of Lichfield, to reconvert the East Saxons.
Wine (in 666) and Erkenwald (in 675) were appointed bishops of London with spiritual authority over the East Saxon Kingdom. A small stone chest bearing the name of Sæbbi of Essex (r. 664–683) was visible in Old St Paul's Cathedral until the Great Fire of London of 1666 when the cathedral and the tombs within it were lost. The inscription on the chest was recorded by Paul Hentzner and translated by Robert Naunton as reading: "Here lies Seba, King of the East Saxons, who was converted to the faith by St. Erkenwald, Bishop of London, A.D. 677."
Although London (and the rest of Middlesex) was lost by the East Saxons in the 8th century, the bishops of London continued to exert spiritual authority over Essex as a kingdom, shire and county until 1845.
Despite the comparative obscurity of the kingdom, there were strong connections between Essex and the Kentish kingdom across the river Thames, which led to the marriage of King Sledd to Ricula, sister of the Kentish king Aethelbert. For a brief period in the 8th century the kingdom included west Kent. During this period, Essex kings were issuing their own sceattas (coins), perhaps as an assertion of their own independence. However, by the mid-8th century, much of the kingdom, including London, had fallen to Mercia, and the rump of Essex, roughly the modern county, was now subordinate to Mercia. After the defeat of the Mercian king Beornwulf around AD 825, Sigered, the last king of Essex, ceded the kingdom, which then became a possession of the Wessex king Egbert.
The Mercians continued to control parts of Essex and may have supported a pretender to the Essex throne since a Sigeric rex Orientalem Saxonum witnessed a Mercian charter after AD 825. During the ninth century, Essex was part of a sub-kingdom that included Sussex, Surrey and Kent. Sometime between 878 and 886, the territory was formally ceded by Wessex to the Danelaw kingdom of East Anglia, under the Treaty of Alfred and Guthrum. After the reconquest by Edward the Elder the king's representative in Essex was styled an ealdorman and Essex came to be regarded as a shire.
The following list of kings may omit whole generations. | [
{
"paragraph_id": 0,
"text": "The Kingdom of the East Saxons (Old English: Ēastseaxna rīce; Latin: Regnum Orientalium Saxonum), referred to as the Kingdom of Essex /ˈɛsɪks/, was one of the seven traditional kingdoms of the Anglo-Saxon Heptarchy. It was founded in the 6th century and covered the territory later occupied by the counties of Essex, Middlesex, much of Hertfordshire and (for a short while) west Kent. The last king of Essex was Sigered of Essex, who in 825 ceded the kingdom to Ecgberht, King of Wessex.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Kingdom of Essex was bounded to the north by the River Stour and the Kingdom of East Anglia, to the south by the River Thames and Kent, to the east lay the North Sea and to the west Mercia. The territory included the remains of two provincial Roman capitals, Colchester and London.",
"title": "Extent"
},
{
"paragraph_id": 2,
"text": "The kingdom included the Middle Saxon Province, which included the area of the later county of Middlesex, and most if not all of Hertfordshire Although the province is only ever recorded as a part of the East Saxon kingdom, charter evidence shows that it was not part of their core territory. In the core area they granted charters freely, but further west they did so while also making reference to their Mercian overlords. At times, Essex was ruled jointly by co-Kings, and it thought that the Middle Saxon Province is likely to have been the domain of one of these co-kings. The links to Essex between Middlesex and parts of Hertfordshire were long reflected in the Diocese of London, re-established in 604 as the East Saxon see, and its boundaries continued to be based on the Kingdom of Essex until the nineteenth century.",
"title": "Extent"
},
{
"paragraph_id": 3,
"text": "The East Saxons also had intermittent control of Surrey. For a brief period in the 8th century, the Kingdom of Essex controlled west Kent.",
"title": "Extent"
},
{
"paragraph_id": 4,
"text": "The modern English county of Essex maintains the historic northern and the southern borders, but only covers the territory east of the River Lea, the other parts being lost to neighbouring Mercia during the 8th century.",
"title": "Extent"
},
{
"paragraph_id": 5,
"text": "In the Tribal Hidage it is listed as containing 7,000 hides.",
"title": "Extent"
},
{
"paragraph_id": 6,
"text": "Although the kingdom of Essex was one of the kingdoms of the Heptarchy, its history is not well documented. It produced relatively few Anglo-Saxon charters and no version of the Anglo-Saxon Chronicle; in fact the only mention in the chronicle concerns Bishop Mellitus. As a result, the kingdom is regarded as comparatively obscure. For most of the kingdom's existence, the Essex king was subservient to an overlord – variously the kings of Kent, East Anglia or Mercia.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Saxon occupation of land that was to form the kingdom had begun by the early 5th century at Mucking and other locations. A large proportion of these original settlers came from Old Saxony. According to British legend (see Historia Brittonum) the territory known later as Essex was ceded by the Celtic Britons to the Saxons following the infamous Treason of the Long Knives, which occurred c. 460 during the reign of High King Vortigern. Della Hooke relates the territory ruled by the kings of Essex to the pre-Roman territory of the Trinovantes. Studies suggest a pattern of typically peaceful co-existence, with the structure of the Romano-British landscape being maintained, and with the Saxon settlers believed to have been in the minority.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The kingdom of Essex grew by the absorption of smaller subkingdoms or Saxon tribal groups. There are a number of suggestions for the location of these subkingdoms including:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Essex emerged as a single kingdom during the 6th century. The dates, names and achievements of the Essex kings, like those of most early rulers in the Heptarchy, remain conjectural. The historical identification of the kings of Essex, including the evidence and a reconstructed genealogy are discussed extensively by Yorke. The dynasty claimed descent from Woden via Seaxnēat. A genealogy of the Essex royal house was prepared in Wessex in the 9th century. Unfortunately, the surviving copy is somewhat mutilated. At times during the history of the kingdom several sub-kings within Essex appear to have been able to rule simultaneously. They may have exercised authority over different parts of the kingdom. The first recorded king, according to the East Saxon King List, was Æscwine of Essex, to which a date of 527 is given for the start of his reign, although there are some difficulties with the date of his reign, and Sledd of Essex is listed as the founder of the Essex royal house by other sources. The kings of Essex are notable for their S-nomenclature, nearly all their names begin with the letter S.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The Essex kings issued coins that echoed those issued by Cunobeline simultaneously asserting a link to the first century rulers while emphasising independence from Mercia.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Christianity is thought to have flourished among the Trinovantes in the 4th century (late Roman period); indications include the remains of a probable church at Colchester, dating from some time after 320 AD, shortly after the Constantine the Great granted freedom of worship to Christians in 313 AD. Other archaeological evidence includes a chi rho symbol etched on a tile at a site in Wickford, and a gold ring inscribed with a chi rho monogram found at Brentwood. It is not clear to what extent, if any, Christianity persisted by the time of the pagan East Saxon kings in the sixth century.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The earliest English record of the kingdom dates to Bede's Historia ecclesiastica gentis Anglorum, which noted the arrival of Bishop (later Saint) Mellitus in London in 604. Æthelberht (King of Kent and overlord of southern England according to Bede) was in a position to exercise some authority in Essex shortly after 604, when his intervention helped in the conversion of King Saebert of Essex (son of Sledd), his nephew, to Christianity. It was Æthelberht, and not Sæberht, who built and endowed St. Paul's in London, where St. Paul's Cathedral now stands. Bede describes Æthelberht as Sæberht's overlord. After the death of Saebert in AD 616, Mellitus was driven out and the kingdom reverted to paganism. This may have been the result of opposition to Kentish influence in Essex affairs rather than being specifically anti-Christian.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The kingdom reconverted to Christianity under Sigeberht II the Good following a mission by St Cedd who established monasteries at Tilaburg (probably East Tilbury, but possibly West Tilbury) and Ithancester (almost certainly Bradwell-on-Sea). A royal tomb at Prittlewell was discovered and excavated in 2003. Finds included gold foil crosses, suggesting the occupant was Christian. If the occupant was a king, it was probably either Saebert or Sigeberht (murdered AD 653). It is, however, also possible that the occupant was not royal, but simply a wealthy and powerful individual whose identity has gone unrecorded.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Essex reverted to Paganism again in 660 with the ascension of the Pagan King Swithelm of Essex. He converted in 662, but died in 664. He was succeeded by his two sons, Sigehere and Sæbbi. A plague the same year caused Sigehere and his people to recant their Christianity and Essex reverted to Paganism a third time. This rebellion was suppressed by Wulfhere of Mercia who established himself as overlord. Bede describes Sigehere and Sæbbi as \"rulers […] under Wulfhere, king of the Mercians\". Wulfhere sent Jaruman, the bishop of Lichfield, to reconvert the East Saxons.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Wine (in 666) and Erkenwald (in 675) were appointed bishops of London with spiritual authority over the East Saxon Kingdom. A small stone chest bearing the name of Sæbbi of Essex (r. 664–683) was visible in Old St Paul's Cathedral until the Great Fire of London of 1666 when the cathedral and the tombs within it were lost. The inscription on the chest was recorded by Paul Hentzner and translated by Robert Naunton as reading: \"Here lies Seba, King of the East Saxons, who was converted to the faith by St. Erkenwald, Bishop of London, A.D. 677.\"",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Although London (and the rest of Middlesex) was lost by the East Saxons in the 8th century, the bishops of London continued to exert spiritual authority over Essex as a kingdom, shire and county until 1845.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Despite the comparative obscurity of the kingdom, there were strong connections between Essex and the Kentish kingdom across the river Thames which led to the marriage of King Sledd to Ricula, sister of the king, Aethelbert of Kent. For a brief period in the 8th century the kingdom included west Kent. During this period, Essex kings were issuing their own sceattas (coins), perhaps as an assertion of their own independence. However, by the mid-8th century, much of the kingdom, including London, had fallen to Mercia and the rump of Essex, roughly the modern county, was now subordinate to the same. After the defeat of the Mercian king Beornwulf around AD 825, Sigered, the last king of Essex, ceded the kingdom which then became a possession of the Wessex king Egbert.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The Mercians continued to control parts of Essex and may have supported a pretender to the Essex throne since a Sigeric rex Orientalem Saxonum witnessed a Mercian charter after AD 825. During the ninth century, Essex was part of a sub-kingdom that included Sussex, Surrey and Kent. Sometime between 878 and 886, the territory was formally ceded by Wessex to the Danelaw kingdom of East Anglia, under the Treaty of Alfred and Guthrum. After the reconquest by Edward the Elder the king's representative in Essex was styled an ealdorman and Essex came to be regarded as a shire.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The following list of kings may omit whole generations.",
"title": "List of kings"
}
]
| The Kingdom of the East Saxons, referred to as the Kingdom of Essex, was one of the seven traditional kingdoms of the Anglo-Saxon Heptarchy. It was founded in the 6th century and covered the territory later occupied by the counties of Essex, Middlesex, much of Hertfordshire and west Kent. The last king of Essex was Sigered of Essex, who in 825 ceded the kingdom to Ecgberht, King of Wessex. | 2002-02-25T15:51:15Z | 2023-10-24T21:29:30Z | [
"Template:Lang",
"Template:C.",
"Template:Cite book",
"Template:Essex Monarchs",
"Template:Use dmy dates",
"Template:Lang-ang",
"Template:Lang-la",
"Template:Reign",
"Template:See also",
"Template:Reflist",
"Template:ISBN",
"Template:PASE",
"Template:Infobox country",
"Template:IPAc-en",
"Template:Efn",
"Template:Royal houses of Britain and Ireland",
"Template:Notelist",
"Template:Cite web",
"Template:Short description",
"Template:Heptarchy",
"Template:Portal bar"
]
| https://en.wikipedia.org/wiki/Kingdom_of_Essex |
9,939 | Eve (disambiguation) | Eve is the first woman created by God according to the creation narrative of Abrahamic religions.
Eve or EVE may also refer to: | [
{
"paragraph_id": 0,
"text": "Eve is the first woman created by God according to the creation narrative of Abrahamic religions.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Eve or EVE may also refer to:",
"title": ""
}
]
| Eve is the first woman created by God according to the creation narrative of Abrahamic religions. Eve or EVE may also refer to: | 2002-02-25T15:43:11Z | 2023-12-15T17:03:05Z | [
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right",
"Template:Main",
"Template:Solename"
]
| https://en.wikipedia.org/wiki/Eve_(disambiguation) |
9,941 | Æthelberht of Kent | Æthelberht (/ˈæθəlbərt/; also Æthelbert, Aethelberht, Aethelbert or Ethelbert; Old English: Æðelberht [ˈæðelberˠxt]; c. 550 – 24 February 616) was King of Kent from about 589 until his death. The eighth-century monk Bede, in his Ecclesiastical History of the English People, lists him as the third king to hold imperium over other Anglo-Saxon kingdoms. In the late ninth century Anglo-Saxon Chronicle, he is referred to as a bretwalda, or "Britain-ruler". He was the first English king to convert to Christianity.
Æthelberht was the son of Eormenric, succeeding him as king, according to the Chronicle. He married Bertha, the Christian daughter of Charibert I, king of the Franks, thus building an alliance with the most powerful state in contemporary Western Europe; the marriage probably took place before he came to the throne. Bertha's influence may have led to Pope Gregory I's decision to send Augustine as a missionary from Rome. Augustine landed on the Isle of Thanet in east Kent in 597. Shortly thereafter, Æthelberht converted to Christianity, churches were established, and wider-scale conversion to Christianity began in the kingdom. He provided the new church with land in Canterbury, thus helping to establish one of the foundation stones of English Christianity.
Æthelberht's law for Kent, the earliest written code in any Germanic language, instituted a complex system of fines; the law code is preserved in the Textus Roffensis. Kent was rich, with strong trade ties to the Continent, and Æthelberht may have instituted royal control over trade. Coinage probably began circulating in Kent during his reign for the first time since the Anglo-Saxon settlement. He later came to be regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February.
In the fifth century, raids on Britain by continental peoples had developed into full-scale migrations. The newcomers are known to have included Angles, Saxons, Jutes and Frisians, and there is evidence of other groups as well. These groups captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mount Badon (Mons Badonicus) halted the Anglo-Saxon advance for fifty years. From about 550, however, the British began to lose ground once more, and within twenty-five years it appears that control of almost all of southern England was in the hands of the invaders.
Anglo-Saxons probably conquered Kent before Mons Badonicus. There is both documentary and archaeological evidence that Kent was primarily colonised by Jutes, from the southern part of the Jutland peninsula. According to legend, the brothers Hengist and Horsa landed in 449 as mercenaries for a British king, Vortigern. After a rebellion over pay and Horsa's death in battle, Hengist established the Kingdom of Kent. Some historians now think the underlying story of a rebelling mercenary force may be accurate; most now date the founding of the kingdom of Kent to the middle of the fifth-century, which is consistent with the legend. This early date, only a few decades after the departure of the Romans, also suggests that more of Roman civilization may have survived into Anglo-Saxon rule in Kent than in other areas.
Overlordship was a central feature of Anglo-Saxon politics which began before Æthelberht's time; kings were described as overlords as late as the ninth century. The Anglo-Saxon invasion may have involved military coordination of different groups within the invaders, with a leader who had authority over many different groups; Ælle of Sussex may have been such a leader. Once the new states began to form, conflicts among them began. Tribute from dependents could lead to wealth. A weaker state also might ask or pay for the protection of a stronger neighbour against a warlike third state.
Sources for this period in Kentish history include the Ecclesiastical History of the English People, written in 731 by Bede, a Northumbrian monk. Bede was interested primarily in England's Christianization. Since Æthelberht was the first Anglo-Saxon king to convert to Christianity, Bede provides more substantial information about him than about any earlier king. One of Bede's correspondents was Albinus, abbot of the monastery of St. Peter and St. Paul (subsequently renamed St. Augustine's) in Canterbury. The Anglo-Saxon Chronicle, a collection of annals assembled c. 890 in the kingdom of Wessex, mentions several events in Kent during Æthelberht's reign. Further mention of events in Kent occurs in the late sixth century history of the Franks by Gregory of Tours. This is the earliest surviving source to mention any Anglo-Saxon kingdom. Some of Pope Gregory the Great's letters concern the mission of St. Augustine to Kent in 597; these letters also mention the state of Kent and its relationships with neighbours. Other sources include regnal lists of the kings of Kent and early charters (land grants by kings to their followers or to the church). Although no originals survive from Æthelberht's reign, later copies exist. A law code from Æthelberht's reign also survives.
According to Bede, Æthelberht was descended directly from Hengist. Bede gives the line of descent as follows: "Ethelbert was son of Irminric, son of Octa, and after his grandfather Oeric, surnamed Oisc, the kings of the Kentish folk are commonly known as Oiscings. The father of Oeric was Hengist." An alternative form of this genealogy, found in the Historia Brittonum among other places, reverses the position of Octa and Oisc in the lineage. The first of these names that can be placed historically with reasonable confidence is Æthelberht's father, whose name now usually is spelled Eormenric. The only direct written reference to Eormenric is in Kentish genealogies, but Gregory of Tours does mention that Æthelberht's father was the king of Kent, though Gregory gives no date. Eormenric's name provides a hint of connections to the kingdom of the Franks, across the English channel; the element "Eormen" was rare in names of the Anglo-Saxon aristocracy, but much more common among Frankish nobles. One other member of Æthelberht's family is known: his sister, Ricole, who is recorded by both Bede and the Anglo-Saxon Chronicle as the mother of Sæberht, king of the East Saxons (i.e., Essex).
The dates of Æthelberht's birth and accession to the throne of Kent are both matters of debate. Bede, the earliest source to give dates, is thought to have drawn his information from correspondence with Albinus. Bede states that when Æthelberht died in 616 he had reigned for fifty-six years, placing his accession in 560. Bede also says that Æthelberht died twenty-one years after his baptism. Augustine's mission from Rome is known to have arrived in 597, and according to Bede, it was this mission that converted Æthelberht. Hence Bede's dates are inconsistent. The Anglo-Saxon Chronicle, an important source for early dates, is inconsistent with Bede and also has inconsistencies among different manuscript versions. Putting together the different dates in the Chronicle for birth, death and length of reign, it appears that Æthelberht's reign was thought to have been either 560–616 or 565–618 but that the surviving sources have confused the two traditions.
It is possible that Æthelberht was converted to Christianity before Augustine's arrival. Æthelberht's wife was a Christian and brought a Frankish bishop with her, to attend her at court, so Æthelberht would have had knowledge of Christianity before the mission reached Kent. It also is possible that Bede had the date of Æthelberht's death wrong; if, in fact, Æthelberht died in 618, this would be consistent with his baptism in 597, which is in accord with the tradition that Augustine converted the king within a year of his arrival.
Gregory of Tours, in his Historia Francorum, writes that Bertha, daughter of Charibert I, king of the Franks, married the son of the king of Kent. Bede says that Æthelberht received Bertha "from her parents". If Bede is interpreted literally, the marriage would have had to take place before 567, when Charibert died. The traditions for Æthelberht's reign, then, would imply that Æthelberht married Bertha before either 560 or 565.
The extreme length of Æthelberht's reign also has been regarded with skepticism by historians; it has been suggested that he died in the fifty-sixth year of his life, rather than the fifty-sixth year of his reign. This would place the year of his birth approximately at 560, and he would not then have been able to marry until the mid 570s. According to Gregory of Tours, Charibert was king when he married Ingoberg, Bertha's mother, which places that marriage no earlier than 561. It therefore is unlikely that Bertha was married much before about 580. These later dates for Bertha and Æthelberht also solve another possible problem: Æthelberht's daughter, Æthelburh, seems likely to have been Bertha's child, but the earlier dates would have Bertha aged sixty or so at Æthelburh's likely birthdate.
Gregory, however, also says that he thinks that Ingoberg was seventy years old in 589; and this would make her about forty when she married Charibert. This is possible, but seems unlikely, especially as Charibert seems to have had a preference for younger women, again according to Gregory's account. This would imply an earlier birth date for Bertha. On the other hand, Gregory refers to Æthelberht at the time of his marriage to Bertha simply as "a man of Kent", and in the 589 passage concerning Ingoberg's death, which was written in about 590 or 591, he refers to Æthelberht as "the son of the king of Kent". If this does not simply reflect Gregory's ignorance of Kentish affairs, which seems unlikely given the close ties between Kent and the Franks, then some assert that Æthelberht's reign cannot have begun before 589.
While all of the contradictions above cannot be reconciled, the most probable dates that may be drawn from available data place Æthelberht's birth at approximately 560 and, perhaps, his marriage to Bertha at 580. His reign is most likely to have begun in 589 or 590.
The later history of Kent shows clear evidence of a system of joint kingship, with the kingdom being divided into east Kent and west Kent, although it appears that there generally was a dominant king. This evidence is less clear for the earlier period, but there are early charters, known to be forged, which nevertheless imply that Æthelberht ruled as joint king with his son, Eadbald. It may be that Æthelberht was king of east Kent and Eadbald became king of west Kent; the east Kent king seems generally to have been the dominant ruler later in Kentish history. Whether or not Eadbald became a joint king with Æthelberht, there is no question that Æthelberht had authority throughout the kingdom.
The division into two kingdoms is most likely to date back to the sixth century; east Kent may have conquered west Kent and preserved the institutions of kingship as a subkingdom. This was a common pattern in Anglo-Saxon England, as the more powerful kingdoms absorbed their weaker neighbours. An unusual feature of the Kentish system was that only sons of kings appeared to be legitimate claimants to the throne, although this did not eliminate all strife over the succession.
The main towns of the two kingdoms were Rochester, for west Kent, and Canterbury, for east Kent. Bede does not state that Æthelberht had a palace in Canterbury, but he does refer to Canterbury as Æthelberht's "metropolis", and it is clear that it is Æthelberht's seat.
There are many indications of close relations between Kent and the Franks. Æthelberht's marriage to Bertha certainly connected the two courts, although not as equals: the Franks would have thought of Æthelberht as an under-king. There is no record that Æthelberht ever accepted a continental king as his overlord and, as a result, historians are divided on the true nature of the relationship. Evidence for an explicit Frankish overlordship of Kent comes from a letter written by Pope Gregory the Great to Theuderic, king of Burgundy, and Theudebert, king of Austrasia. The letter concerned Augustine's mission to Kent in 597, and in it Gregory says that he believes "that you wish your subjects in every respect to be converted to that faith in which you, their kings and lords, stand". It may be that this is a papal compliment, rather than a description of the relationship between the kingdoms. It also has been suggested that Liudhard, Bertha's chaplain, was intended as a representative of the Frankish church in Kent, which also could be interpreted as evidence of overlordship.
A possible reason for the willingness of the Franks to connect themselves with the Kentish court is the fact that a Frankish king, Chilperic I, is recorded as having conquered a people known as the Euthiones during the mid-sixth century. If, as seems likely from the name, these people were the continental remnants of the Jutish invaders of Kent, then it may be that the marriage was intended as a unifying political move, reconnecting different branches of the same people. Another perspective on the marriage may be gained by considering that it is likely that Æthelberht was not yet king at the time he and Bertha were wed: it may be that Frankish support for him, acquired via the marriage, was instrumental in gaining the throne for him.
Regardless of the political relationship between Æthelberht and the Franks, there is abundant evidence of strong connections across the English Channel. There was a luxury trade between Kent and the Franks, and burial artefacts found include clothing, drink, and weapons that reflect Frankish cultural influence. The Kentish burials have a greater range of imported goods than those of the neighbouring Anglo-Saxon regions, which is not surprising given Kent's easier access to trade across the English Channel. In addition, the grave goods are both richer and more numerous in Kentish graves, implying that material wealth was derived from that trade. Frankish influences also may be detected in the social and agrarian organization of Kent. Other cultural influences may be seen in the burials as well, so it is not necessary to presume that there was direct settlement by the Franks in Kent.
In his Ecclesiastical History, Bede includes his list of seven kings who held imperium over the other kingdoms south of the Humber. The usual translation for imperium is "overlordship". Bede names Æthelberht as the third on the list, after Ælle of Sussex and Ceawlin of Wessex. The anonymous annalist who composed one of the versions of the Anglo-Saxon Chronicle repeated Bede's list of seven kings in a famous entry under the year 827, with one additional king, Egbert of Wessex. The Chronicle also records that these kings held the title bretwalda, or "Britain-ruler". The exact meaning of bretwalda has been the subject of much debate; it has been described as a term "of encomiastic poetry", but there also is evidence that it implied a definite role of military leadership.
The prior bretwalda, Ceawlin, is recorded by the Anglo-Saxon Chronicle as having fought Æthelberht in 568 at a place called "Wibbandun" ("Wibba's Mount"), whose location has not been identified. The entry states that Æthelberht lost the battle and was driven back to Kent. Comparison of the entries concerning the West Saxons in this section of the Chronicle with the West Saxon Genealogical Regnal List shows that their dating is unreliable: Ceawlin's reign is more likely to have been approximately 581–588, rather than 560–592 as claimed in the Chronicle.
At some point Ceawlin lost his overlordship, perhaps after a battle at Fethan leag, thought to have been in Oxfordshire, which the Chronicle dates to 584, some eight years before he was deposed in 592 (again using the Chronicle's unreliable dating). Æthelberht certainly was a dominant ruler by 601, when Gregory the Great wrote to him: Gregory urges Æthelberht to spread Christianity among those kings and peoples subject to him, implying some level of overlordship. If the battle of Wibbandun was fought c. 590, as has been suggested, then Æthelberht must have gained his position as overlord at some time in the 590s. This dating for Wibbandun is slightly inconsistent with the proposed dates of 581–588 for Ceawlin's reign, but those dates are not thought to be precise, merely the most plausible given the available data.
In addition to the evidence of the Chronicle that Æthelberht was accorded the title of bretwalda, there is evidence of his domination in several of the southern kingdoms of the Heptarchy. In Essex, Æthelberht appears to have been in a position to exercise authority shortly after 604, when his intervention helped in the conversion of King Sæberht of Essex, his nephew, to Christianity. It was Æthelberht, and not Sæberht, who built and endowed St. Paul's in London, where St. Paul's Cathedral now stands. Further evidence is provided by Bede, who explicitly describes Æthelberht as Sæberht's overlord.
Bede describes Æthelberht's relationship with Rædwald, king of East Anglia, in a passage that is not completely clear in meaning. It seems to imply that Rædwald retained ducatus, or military command of his people, even while Æthelberht held imperium. This implies that being a bretwalda usually included holding the military command of other kingdoms and also that it was more than that, since Æthelberht is bretwalda despite Rædwald's control of his own troops. Rædwald was converted to Christianity while in Kent but did not abandon his pagan beliefs; this, together with the fact that he retained military independence, implies that Æthelberht's overlordship of East Anglia was much weaker than his influence with the East Saxons. An alternative interpretation, however, is that the passage in Bede should be translated as "Rædwald, king of the East Angles, who while Æthelberht lived, even conceded to him the military leadership of his people"; if this is Bede's intent, then East Anglia firmly was under Æthelberht's overlordship.
There is no evidence that Æthelberht's influence in other kingdoms was enough for him to convert any other kings to Christianity, although this is partly due to the lack of sources—nothing is known of Sussex's history, for example, for almost all of the seventh and eighth centuries. Æthelberht was able to arrange a meeting in 602 in the Severn valley, on the northwestern borders of Wessex, however, and this may be an indication of the extent of his influence in the west. No evidence survives showing Kentish domination of Mercia, but it is known that Mercia was independent of Northumbria, so it is quite plausible that it was under Kentish overlordship.
The native Britons had converted to Christianity under Roman rule. The Anglo-Saxon invasions separated the British church from European Christianity for centuries, so the church in Rome had no presence or authority in Britain, and in fact, Rome knew so little about the British church that it was unaware of any schism in customs. However, Æthelberht would have known something about the Roman church from his Frankish wife, Bertha, who had brought a bishop, Liudhard, with her across the Channel, and for whom Æthelberht built a chapel, St Martin's.
In 596, Pope Gregory the Great sent Augustine, prior of the monastery of St. Andrew in Rome, to England as a missionary, and in 597, a group of nearly forty monks, led by Augustine, landed on the Isle of Thanet in Kent. According to Bede, Æthelberht was sufficiently distrustful of the newcomers to insist on meeting them under the open sky, to prevent them from performing sorcery. The monks impressed Æthelberht, but he was not converted immediately. He agreed to allow the mission to settle in Canterbury and permitted them to preach.
It is not known when Æthelberht became a Christian. It is possible, despite Bede's account, that he already was a Christian before Augustine's mission arrived. It is likely that Liudhard and Bertha pressed Æthelberht to consider becoming a Christian before the arrival of the mission, and it is also likely that a condition of Æthelberht's marriage to Bertha was that Æthelberht would consider conversion. Conversion via the influence of the Frankish court would have been seen as an explicit recognition of Frankish overlordship, however, so it is possible that Æthelberht's delay of his conversion until it could be accomplished via Roman influence might have been an assertion of independence from Frankish control. It also has been argued that Augustine's hesitation—he turned back to Rome, asking to be released from the mission—is an indication that Æthelberht was a pagan at the time Augustine was sent.
At the latest, Æthelberht must have converted before 601, since that year Gregory wrote to him as a Christian king. An old tradition records that Æthelberht converted on 1 June, in the summer of the year that Augustine arrived. Through Æthelberht's influence Sæberht, king of Essex, also was converted, but there were limits to the effectiveness of the mission. The entire Kentish court did not convert: Eadbald, Æthelberht's son and heir, was a pagan at his accession. Rædwald, king of East Anglia, was only partly converted (apparently while at Æthelberht's court) and retained a pagan shrine next to the new Christian altar. Augustine also was unsuccessful in gaining the allegiance of the British clergy.
Some time after the arrival of Augustine's mission, perhaps in 602 or 603, Æthelberht issued a set of laws, in ninety sections. These laws are by far the earliest surviving code composed in any of the Germanic countries, and they were almost certainly among the first documents written down in Anglo-Saxon, as literacy would have arrived in England with Augustine's mission. The only surviving early manuscript, the Textus Roffensis, dates from the twelfth century, and it now resides in the Medway Studies Centre in Strood, Kent. Æthelberht's code makes reference to the church in the very first item, which enumerates the compensation required for the property of a bishop, a deacon, a priest, and so on; but overall, the laws seem remarkably uninfluenced by Christian principles. Bede asserted that they were composed "after the Roman manner", but there is little discernible Roman influence either. In subject matter, the laws have been compared to the Lex Salica of the Franks, but it is not thought that Æthelberht based his new code on any specific previous model.
The laws are concerned with setting and enforcing the penalties for transgressions at all levels of society; the severity of the fine depended on the social rank of the victim. The king had a financial interest in enforcement, for part of the fines would come to him in many cases, but the king also was responsible for law and order, and avoiding blood feuds by enforcing the rules on compensation for injury was part of the way the king maintained control. Æthelberht's laws are mentioned by Alfred the Great, who compiled his own laws, making use of the prior codes created by Æthelberht, as well as those of Offa of Mercia and Ine of Wessex.
One of Æthelberht's laws seems to preserve a trace of a very old custom: the third item in the code states that "If the king is drinking at a man's home, and anyone commits any evil deed there, he is to pay twofold compensation." This probably refers to the ancient custom of a king traveling the country, being hosted, and being provided for by his subjects wherever he went. The king's servants retained these rights for centuries after Æthelberht's time.
Items 77–81 in the code have been interpreted as a description of a woman's financial rights after a divorce or legal separation. These clauses define how much of the household goods a woman could keep in different circumstances, depending on whether she keeps custody of the children, for example. It has recently been suggested, however, that it would be more correct to interpret these clauses as referring to women who are widowed, rather than divorced.
There is little documentary evidence about the nature of trade in Æthelberht's Kent. It is known that the kings of Kent had established royal control of trade by the late seventh century, but it is not known how early this control began. There is archaeological evidence suggesting that the royal influence predates any of the written sources. It has been suggested that one of Æthelberht's achievements was to take control of trade away from the aristocracy and to make it a royal monopoly. The continental trade provided Kent access to luxury goods which gave it an advantage in trading with the other Anglo-Saxon nations, and the revenue from trade was important in itself.
Kentish manufacture before 600 included glass beakers and jewelry. Kentish jewellers were highly skilled, and before the end of the sixth century they gained access to gold. Goods from Kent are found in cemeteries across the channel and as far away as at the mouth of the Loire. It is not known what Kent traded for all of this wealth, although it seems likely that there was a flourishing slave trade. It may well be that this wealth was the foundation of Æthelberht's strength, although his overlordship and the associated right to demand tribute would have brought wealth in its turn.
It may have been during Æthelberht's reign that the first coins were minted in England since the departure of the Romans: none bear his name, but it is thought likely that the first coins predate the end of the sixth century. These early coins were gold, and probably were the shillings (scillingas in Old English) that are mentioned in Æthelberht's laws. The coins are also known to numismatists as thrymsas.
Æthelberht died on 24 February 616 and was succeeded by his son, Eadbald, who was not a Christian—Bede says he had been converted but went back to his pagan faith, although he ultimately did become a Christian king. Eadbald outraged the church by marrying his stepmother, which was contrary to Church law, and by refusing to accept baptism. Sæberht of the East Saxons also died at approximately this time, and he was succeeded by his three sons, none of whom were Christian. A subsequent revolt against Christianity and the expulsion of the missionaries from Kent may have been a reaction to Kentish overlordship after Æthelberht's death as much as a pagan opposition to Christianity.
In addition to Eadbald, it is possible that Æthelberht had another son, Æthelwald. The evidence for this is a papal letter to Justus, archbishop of Canterbury from 619 to 625, that refers to a king named Aduluald, who is apparently different from Audubald, which refers to Eadbald. There is no agreement among modern scholars on how to interpret this: "Aduluald" might be intended as a representation of "Æthelwald", and hence an indication of another king, perhaps a sub-king of west Kent; or it may be merely a scribal error which should be read as referring to Eadbald.
Æthelberht was later regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. In the 2004 edition of the Roman Martyrology, he is listed under his date of death, 24 February, with the citation: 'King of Kent, converted by St Augustine, bishop, the first leader of the English people to do so'. The Roman Catholic Archdiocese of Southwark, which contains Kent, commemorates him on 25 February.
He is also venerated in the Eastern Orthodox Church as Saint Ethelbert, king of Kent, his day commemorated on 25 February. | [
{
"paragraph_id": 0,
"text": "Æthelberht (/ˈæθəlbərt/; also Æthelbert, Aethelberht, Aethelbert or Ethelbert; Old English: Æðelberht [ˈæðelberˠxt]; c. 550 – 24 February 616) was King of Kent from about 589 until his death. The eighth-century monk Bede, in his Ecclesiastical History of the English People, lists him as the third king to hold imperium over other Anglo-Saxon kingdoms. In the late ninth century Anglo-Saxon Chronicle, he is referred to as a bretwalda, or \"Britain-ruler\". He was the first English king to convert to Christianity.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Æthelberht was the son of Eormenric, succeeding him as king, according to the Chronicle. He married Bertha, the Christian daughter of Charibert I, king of the Franks, thus building an alliance with the most powerful state in contemporary Western Europe; the marriage probably took place before he came to the throne. Bertha's influence may have led to Pope Gregory I's decision to send Augustine as a missionary from Rome. Augustine landed on the Isle of Thanet in east Kent in 597. Shortly thereafter, Æthelberht converted to Christianity, churches were established, and wider-scale conversion to Christianity began in the kingdom. He provided the new church with land in Canterbury, thus helping to establish one of the foundation stones of English Christianity.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Æthelberht's law for Kent, the earliest written code in any Germanic language, instituted a complex system of fines; the law code is preserved in the Textus Roffensis. Kent was rich, with strong trade ties to the Continent, and Æthelberht may have instituted royal control over trade. Coinage probably began circulating in Kent during his reign for the first time since the Anglo-Saxon settlement. He later came to be regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the fifth century, raids on Britain by continental peoples had developed into full-scale migrations. The newcomers are known to have included Angles, Saxons, Jutes and Frisians, and there is evidence of other groups as well. These groups captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mount Badon (Mons Badonicus) halted the Anglo-Saxon advance for fifty years. From about 550, however, the British began to lose ground once more, and within twenty-five years it appears that control of almost all of southern England was in the hands of the invaders.",
"title": "Historical context"
},
{
"paragraph_id": 4,
"text": "Anglo-Saxons probably conquered Kent before Mons Badonicus. There is both documentary and archaeological evidence that Kent was primarily colonised by Jutes, from the southern part of the Jutland peninsula. According to legend, the brothers Hengist and Horsa landed in 449 as mercenaries for a British king, Vortigern. After a rebellion over pay and Horsa's death in battle, Hengist established the Kingdom of Kent. Some historians now think the underlying story of a rebelling mercenary force may be accurate; most now date the founding of the kingdom of Kent to the middle of the fifth-century, which is consistent with the legend. This early date, only a few decades after the departure of the Romans, also suggests that more of Roman civilization may have survived into Anglo-Saxon rule in Kent than in other areas.",
"title": "Historical context"
},
{
"paragraph_id": 5,
"text": "Overlordship was a central feature of Anglo-Saxon politics which began before Æthelberht's time; kings were described as overlords as late as the ninth century. The Anglo-Saxon invasion may have involved military coordination of different groups within the invaders, with a leader who had authority over many different groups; Ælle of Sussex may have been such a leader. Once the new states began to form, conflicts among them began. Tribute from dependents could lead to wealth. A weaker state also might ask or pay for the protection of a stronger neighbour against a warlike third state.",
"title": "Historical context"
},
{
"paragraph_id": 6,
"text": "Sources for this period in Kentish history include the Ecclesiastical History of the English People, written in 731 by Bede, a Northumbrian monk. Bede was interested primarily in England's Christianization. Since Æthelberht was the first Anglo-Saxon king to convert to Christianity, Bede provides more substantial information about him than about any earlier king. One of Bede's correspondents was Albinus, abbot of the monastery of St. Peter and St. Paul (subsequently renamed St. Augustine's) in Canterbury. The Anglo-Saxon Chronicle, a collection of annals assembled c. 890 in the kingdom of Wessex, mentions several events in Kent during Æthelberht's reign. Further mention of events in Kent occurs in the late sixth century history of the Franks by Gregory of Tours. This is the earliest surviving source to mention any Anglo-Saxon kingdom. Some of Pope Gregory the Great's letters concern the mission of St. Augustine to Kent in 597; these letters also mention the state of Kent and its relationships with neighbours. Other sources include regnal lists of the kings of Kent and early charters (land grants by kings to their followers or to the church). Although no originals survive from Æthelberht's reign, later copies exist. A law code from Æthelberht's reign also survives.",
"title": "Historical context"
},
{
"paragraph_id": 7,
"text": "According to Bede, Æthelberht was descended directly from Hengist. Bede gives the line of descent as follows: \"Ethelbert was son of Irminric, son of Octa, and after his grandfather Oeric, surnamed Oisc, the kings of the Kentish folk are commonly known as Oiscings. The father of Oeric was Hengist.\" An alternative form of this genealogy, found in the Historia Brittonum among other places, reverses the position of Octa and Oisc in the lineage. The first of these names that can be placed historically with reasonable confidence is Æthelberht's father, whose name now usually is spelled Eormenric. The only direct written reference to Eormenric is in Kentish genealogies, but Gregory of Tours does mention that Æthelberht's father was the king of Kent, though Gregory gives no date. Eormenric's name provides a hint of connections to the kingdom of the Franks, across the English channel; the element \"Eormen\" was rare in names of the Anglo-Saxon aristocracy, but much more common among Frankish nobles. One other member of Æthelberht's family is known: his sister, Ricole, who is recorded by both Bede and the Anglo-Saxon Chronicle as the mother of Sæberht, king of the East Saxons (i.e., Essex).",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 8,
"text": "The dates of Æthelberht's birth and accession to the throne of Kent are both matters of debate. Bede, the earliest source to give dates, is thought to have drawn his information from correspondence with Albinus. Bede states that when Æthelberht died in 616 he had reigned for fifty-six years, placing his accession in 560. Bede also says that Æthelberht died twenty-one years after his baptism. Augustine's mission from Rome is known to have arrived in 597, and according to Bede, it was this mission that converted Æthelberht. Hence Bede's dates are inconsistent. The Anglo-Saxon Chronicle, an important source for early dates, is inconsistent with Bede and also has inconsistencies among different manuscript versions. Putting together the different dates in the Chronicle for birth, death and length of reign, it appears that Æthelberht's reign was thought to have been either 560–616 or 565–618 but that the surviving sources have confused the two traditions.",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 9,
"text": "It is possible that Æthelberht was converted to Christianity before Augustine's arrival. Æthelberht's wife was a Christian and brought a Frankish bishop with her, to attend her at court, so Æthelberht would have had knowledge of Christianity before the mission reached Kent. It also is possible that Bede had the date of Æthelberht's death wrong; if, in fact, Æthelberht died in 618, this would be consistent with his baptism in 597, which is in accord with the tradition that Augustine converted the king within a year of his arrival.",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 10,
"text": "Gregory of Tours, in his Historia Francorum, writes that Bertha, daughter of Charibert I, king of the Franks, married the son of the king of Kent. Bede says that Æthelberht received Bertha \"from her parents\". If Bede is interpreted literally, the marriage would have had to take place before 567, when Charibert died. The traditions for Æthelberht's reign, then, would imply that Æthelberht married Bertha before either 560 or 565.",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 11,
"text": "The extreme length of Æthelberht's reign also has been regarded with skepticism by historians; it has been suggested that he died in the fifty-sixth year of his life, rather than the fifty-sixth year of his reign. This would place the year of his birth approximately at 560, and he would not then have been able to marry until the mid 570s. According to Gregory of Tours, Charibert was king when he married Ingoberg, Bertha's mother, which places that marriage no earlier than 561. It therefore is unlikely that Bertha was married much before about 580. These later dates for Bertha and Æthelberht also solve another possible problem: Æthelberht's daughter, Æthelburh, seems likely to have been Bertha's child, but the earlier dates would have Bertha aged sixty or so at Æthelburh's likely birthdate using the early dates.",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 12,
"text": "Gregory, however, also says that he thinks that Ingoberg was seventy years old in 589; and this would make her about forty when she married Charibert. This is possible, but seems unlikely, especially as Charibert seems to have had a preference for younger women, again according to Gregory's account. This would imply an earlier birth date for Bertha. On the other hand, Gregory refers to Æthelberht at the time of his marriage to Bertha simply as \"a man of Kent\", and in the 589 passage concerning Ingoberg's death, which was written in about 590 or 591, he refers to Æthelberht as \"the son of the king of Kent\". If this does not simply reflect Gregory's ignorance of Kentish affairs, which seems unlikely given the close ties between Kent and the Franks, then some assert that Æthelberht's reign cannot have begun before 589.",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 13,
"text": "While all of the contradictions above cannot be reconciled, the most probable dates that may be drawn from available data place Æthelberht's birth at approximately 560 and, perhaps, his marriage to Bertha at 580. His reign is most likely to have begun in 589 or 590.",
"title": "Ancestry, accession and chronology"
},
{
"paragraph_id": 14,
"text": "The later history of Kent shows clear evidence of a system of joint kingship, with the kingdom being divided into east Kent and west Kent, although it appears that there generally was a dominant king. This evidence is less clear for the earlier period, but there are early charters, known to be forged, which nevertheless imply that Æthelberht ruled as joint king with his son, Eadbald. It may be that Æthelberht was king of east Kent and Eadbald became king of west Kent; the east Kent king seems generally to have been the dominant ruler later in Kentish history. Whether or not Eadbald became a joint king with Æthelberht, there is no question that Æthelberht had authority throughout the kingdom.",
"title": "Kingship of Kent"
},
{
"paragraph_id": 15,
"text": "The division into two kingdoms is most likely to date back to the sixth century; east Kent may have conquered west Kent and preserved the institutions of kingship as a subkingdom. This was a common pattern in Anglo-Saxon England, as the more powerful kingdoms absorbed their weaker neighbours. An unusual feature of the Kentish system was that only sons of kings appeared to be legitimate claimants to the throne, although this did not eliminate all strife over the succession.",
"title": "Kingship of Kent"
},
{
"paragraph_id": 16,
"text": "The main towns of the two kingdoms were Rochester, for west Kent, and Canterbury, for east Kent. Bede does not state that Æthelberht had a palace in Canterbury, but he does refer to Canterbury as Æthelberht's \"metropolis\", and it is clear that it is Æthelberht's seat.",
"title": "Kingship of Kent"
},
{
"paragraph_id": 17,
"text": "There are many indications of close relations between Kent and the Franks. Æthelberht's marriage to Bertha certainly connected the two courts, although not as equals: the Franks would have thought of Æthelberht as an under-king. There is no record that Æthelberht ever accepted a continental king as his overlord and, as a result, historians are divided on the true nature of the relationship. Evidence for an explicit Frankish overlordship of Kent comes from a letter written by Pope Gregory the Great to Theuderic, king of Burgundy, and Theudebert, king of Austrasia. The letter concerned Augustine's mission to Kent in 597, and in it Gregory says that he believes \"that you wish your subjects in every respect to be converted to that faith in which you, their kings and lords, stand\". It may be that this is a papal compliment, rather than a description of the relationship between the kingdoms. It also has been suggested that Liudhard, Bertha's chaplain, was intended as a representative of the Frankish church in Kent, which also could be interpreted as evidence of overlordship.",
"title": "Relations with the Franks"
},
{
"paragraph_id": 18,
"text": "A possible reason for the willingness of the Franks to connect themselves with the Kentish court is the fact that a Frankish king, Chilperic I, is recorded as having conquered a people known as the Euthiones during the mid-sixth century. If, as seems likely from the name, these people were the continental remnants of the Jutish invaders of Kent, then it may be that the marriage was intended as a unifying political move, reconnecting different branches of the same people. Another perspective on the marriage may be gained by considering that it is likely that Æthelberht was not yet king at the time he and Bertha were wed: it may be that Frankish support for him, acquired via the marriage, was instrumental in gaining the throne for him.",
"title": "Relations with the Franks"
},
{
"paragraph_id": 19,
"text": "Regardless of the political relationship between Æthelberht and the Franks, there is abundant evidence of strong connections across the English Channel. There was a luxury trade between Kent and the Franks, and burial artefacts found include clothing, drink, and weapons that reflect Frankish cultural influence. The Kentish burials have a greater range of imported goods than those of the neighbouring Anglo-Saxon regions, which is not surprising given Kent's easier access to trade across the English Channel. In addition, the grave goods are both richer and more numerous in Kentish graves, implying that material wealth was derived from that trade. Frankish influences also may be detected in the social and agrarian organization of Kent. Other cultural influences may be seen in the burials as well, so it is not necessary to presume that there was direct settlement by the Franks in Kent.",
"title": "Relations with the Franks"
},
{
"paragraph_id": 20,
"text": "In his Ecclesiastical History, Bede includes his list of seven kings who held imperium over the other kingdoms south of the Humber. The usual translation for imperium is \"overlordship\". Bede names Æthelberht as the third on the list, after Ælle of Sussex and Ceawlin of Wessex. The anonymous annalist who composed one of the versions of the Anglo-Saxon Chronicle repeated Bede's list of seven kings in a famous entry under the year 827, with one additional king, Egbert of Wessex. The Chronicle also records that these kings held the title bretwalda, or \"Britain-ruler\". The exact meaning of bretwalda has been the subject of much debate; it has been described as a term \"of encomiastic poetry\", but there also is evidence that it implied a definite role of military leadership.",
"title": "Rise to dominance"
},
{
"paragraph_id": 21,
"text": "The prior bretwalda, Ceawlin, is recorded by the Anglo-Saxon Chronicle as having fought Æthelberht in 568 at a place called \"Wibbandun\" (\"Wibba's Mount\") whose location has not been identified. The entry states that Æthelberht lost the battle and was driven back to Kent. Comparison of the entries concerning the West Saxons in this section of the Chronicle with the West Saxon Genealogical Regnal List shows that their dating is unreliable: Ceawlin's reign is more likely to have been approximately 581–588, rather 560–592 as claimed in the Chronicle.",
"title": "Rise to dominance"
},
{
"paragraph_id": 22,
"text": "At some point Ceawlin lost his overlordship, perhaps after a battle at Fethan leag, thought to have been in Oxfordshire, which the Chronicle dates to 584, some eight years before he was deposed in 592 (again using the Chronicle's unreliable dating). Æthelberht certainly was a dominant ruler by 601, when Gregory the Great wrote to him: Gregory urges Æthelberht to spread Christianity among those kings and peoples subject to him, implying some level of overlordship. If the battle of Wibbandun was fought c. 590, as has been suggested, then Æthelberht must have gained his position as overlord at some time in the 590s. This dating for Wibbandun is slightly inconsistent with the proposed dates of 581–588 for Ceawlin's reign, but those dates are not thought to be precise, merely the most plausible given the available data.",
"title": "Rise to dominance"
},
{
"paragraph_id": 23,
"text": "In addition to the evidence of the Chronicle that Æthelberht was accorded the title of bretwalda, there is evidence of his domination in several of the southern kingdoms of the Heptarchy. In Essex, Æthelberht appears to have been in a position to exercise authority shortly after 604, when his intervention helped in the conversion of King Sæberht of Essex, his nephew, to Christianity. It was Æthelberht, and not Sæberht, who built and endowed St. Pauls in London, where St Paul's Cathedral now stands. Further evidence is provided by Bede, who explicitly describes Æthelberht as Sæberht's overlord.",
"title": "Rise to dominance"
},
{
"paragraph_id": 24,
"text": "Bede describes Æthelberht's relationship with Rædwald, king of East Anglia, in a passage that is not completely clear in meaning. It seems to imply that Rædwald retained ducatus, or military command of his people, even while Æthelberht held imperium. This implies that being a bretwalda usually included holding the military command of other kingdoms and also that it was more than that, since Æthelberht is bretwalda despite Rædwald's control of his own troops. Rædwald was converted to Christianity while in Kent but did not abandon his pagan beliefs; this, together with the fact that he retained military independence, implies that Æthelberht's overlordship of East Anglia was much weaker than his influence with the East Saxons. An alternative interpretation, however, is that the passage in Bede should be translated as \"Rædwald, king of the East Angles, who while Æthelberht lived, even conceded to him the military leadership of his people\"; if this is Bede's intent, then East Anglia firmly was under Æthelberht's overlordship.",
"title": "Rise to dominance"
},
{
"paragraph_id": 25,
"text": "There is no evidence that Æthelberht's influence in other kingdoms was enough for him to convert any other kings to Christianity, although this is partly due to the lack of sources—nothing is known of Sussex's history, for example, for almost all of the seventh and eighth centuries. Æthelberht was able to arrange a meeting in 602 in the Severn valley, on the northwestern borders of Wessex, however, and this may be an indication of the extent of his influence in the west. No evidence survives showing Kentish domination of Mercia, but it is known that Mercia was independent of Northumbria, so it is quite plausible that it was under Kentish overlordship.",
"title": "Rise to dominance"
},
{
"paragraph_id": 26,
"text": "The native Britons had converted to Christianity under Roman rule. The Anglo-Saxon invasions separated the British church from European Christianity for centuries, so the church in Rome had no presence or authority in Britain, and in fact, Rome knew so little about the British church that it was unaware of any schism in customs. However, Æthelberht would have known something about the Roman church from his Frankish wife, Bertha, who had brought a bishop, Liudhard, with her across the Channel, and for whom Æthelberht built a chapel, St Martin's.",
"title": "Augustine's mission and early Christianisation"
},
{
"paragraph_id": 27,
"text": "In 596, Pope Gregory the Great sent Augustine, prior of the monastery of St. Andrew in Rome, to England as a missionary, and in 597, a group of nearly forty monks, led by Augustine, landed on the Isle of Thanet in Kent. According to Bede, Æthelberht was sufficiently distrustful of the newcomers to insist on meeting them under the open sky, to prevent them from performing sorcery. The monks impressed Æthelberht, but he was not converted immediately. He agreed to allow the mission to settle in Canterbury and permitted them to preach.",
"title": "Augustine's mission and early Christianisation"
},
{
"paragraph_id": 28,
"text": "It is not known when Æthelberht became a Christian. It is possible, despite Bede's account, that he already was a Christian before Augustine's mission arrived. It is likely that Liudhard and Bertha pressed Æthelberht to consider becoming a Christian before the arrival of the mission, and it is also likely that a condition of Æthelberht's marriage to Bertha was that Æthelberht would consider conversion. Conversion via the influence of the Frankish court would have been seen as an explicit recognition of Frankish overlordship, however, so it is possible that Æthelberht's delay of his conversion until it could be accomplished via Roman influence might have been an assertion of independence from Frankish control. It also has been argued that Augustine's hesitation—he turned back to Rome, asking to be released from the mission—is an indication that Æthelberht was a pagan at the time Augustine was sent.",
"title": "Augustine's mission and early Christianisation"
},
{
"paragraph_id": 29,
"text": "At the latest, Æthelberht must have converted before 601, since that year Gregory wrote to him as a Christian king. An old tradition records that Æthelberht converted on 1 June, in the summer of the year that Augustine arrived. Through Æthelberht's influence Sæberht, king of Essex, also was converted, but there were limits to the effectiveness of the mission. The entire Kentish court did not convert: Eadbald, Æthelberht's son and heir, was a pagan at his accession. Rædwald, king of East Anglia, was only partly converted (apparently while at Æthelberht's court) and retained a pagan shrine next to the new Christian altar. Augustine also was unsuccessful in gaining the allegiance of the British clergy.",
"title": "Augustine's mission and early Christianisation"
},
{
"paragraph_id": 30,
"text": "Some time after the arrival of Augustine's mission, perhaps in 602 or 603, Æthelberht issued a set of laws, in ninety sections. These laws are by far the earliest surviving code composed in any of the Germanic countries, and they were almost certainly among the first documents written down in Anglo-Saxon, as literacy would have arrived in England with Augustine's mission. The only surviving early manuscript, the Textus Roffensis, dates from the twelfth century, and it now resides in the Medway Studies Centre in Strood, Kent. Æthelberht's code makes reference to the church in the very first item, which enumerates the compensation required for the property of a bishop, a deacon, a priest, and so on; but overall, the laws seem remarkably uninfluenced by Christian principles. Bede asserted that they were composed \"after the Roman manner\", but there is little discernible Roman influence either. In subject matter, the laws have been compared to the Lex Salica of the Franks, but it is not thought that Æthelberht based his new code on any specific previous model.",
"title": "Law code"
},
{
"paragraph_id": 31,
"text": "The laws are concerned with setting and enforcing the penalties for transgressions at all levels of society; the severity of the fine depended on the social rank of the victim. The king had a financial interest in enforcement, for part of the fines would come to him in many cases, but the king also was responsible for law and order, and avoiding blood feuds by enforcing the rules on compensation for injury was part of the way the king maintained control. Æthelberht's laws are mentioned by Alfred the Great, who compiled his own laws, making use of the prior codes created by Æthelberht, as well as those of Offa of Mercia and Ine of Wessex.",
"title": "Law code"
},
{
"paragraph_id": 32,
"text": "One of Æthelberht's laws seems to preserve a trace of a very old custom: the third item in the code states that \"If the king is drinking at a man's home, and anyone commits any evil deed there, he is to pay twofold compensation.\" This probably refers to the ancient custom of a king traveling the country, being hosted, and being provided for by his subjects wherever he went. The king's servants retained these rights for centuries after Æthelberht's time.",
"title": "Law code"
},
{
"paragraph_id": 33,
"text": "Items 77–81 in the code have been interpreted as a description of a woman's financial rights after a divorce or legal separation. These clauses define how much of the household goods a woman could keep in different circumstances, depending on whether she keeps custody of the children, for example. It has recently been suggested, however, that it would be more correct to interpret these clauses as referring to women who are widowed, rather than divorced.",
"title": "Law code"
},
{
"paragraph_id": 34,
"text": "There is little documentary evidence about the nature of trade in Æthelberht's Kent. It is known that the kings of Kent had established royal control of trade by the late seventh century, but it is not known how early this control began. There is archaeological evidence suggesting that the royal influence predates any of the written sources. It has been suggested that one of Æthelberht's achievements was to take control of trade away from the aristocracy and to make it a royal monopoly. The continental trade provided Kent access to luxury goods which gave it an advantage in trading with the other Anglo-Saxon nations, and the revenue from trade was important in itself.",
"title": "Trade and coinage"
},
{
"paragraph_id": 35,
"text": "Kentish manufacture before 600 included glass beakers and jewelry. Kentish jewellers were highly skilled, and before the end of the sixth century they gained access to gold. Goods from Kent are found in cemeteries across the channel and as far away as at the mouth of the Loire. It is not known what Kent traded for all of this wealth, although it seems likely that there was a flourishing slave trade. It may well be that this wealth was the foundation of Æthelberht's strength, although his overlordship and the associated right to demand tribute would have brought wealth in its turn.",
"title": "Trade and coinage"
},
{
"paragraph_id": 36,
"text": "It may have been during Æthelberht's reign that the first coins were minted in England since the departure of the Romans: none bear his name, but it is thought likely that the first coins predate the end of the sixth century. These early coins were gold, and probably were the shillings (scillingas in Old English) that are mentioned in Æthelberht's laws. The coins are also known to numismatists as thrymsas.",
"title": "Trade and coinage"
},
{
"paragraph_id": 37,
"text": "Æthelberht died on 24 February 616 and was succeeded by his son, Eadbald, who was not a Christian—Bede says he had been converted but went back to his pagan faith, although he ultimately did become a Christian king. Eadbald outraged the church by marrying his stepmother, which was contrary to Church law, and by refusing to accept baptism. Sæberht of the East Saxons also died at approximately this time, and he was succeeded by his three sons, none of whom were Christian. A subsequent revolt against Christianity and the expulsion of the missionaries from Kent may have been a reaction to Kentish overlordship after Æthelberht's death as much as a pagan opposition to Christianity.",
"title": "Death and succession"
},
{
"paragraph_id": 38,
"text": "In addition to Eadbald, it is possible that Æthelberht had another son, Æthelwald. The evidence for this is a papal letter to Justus, archbishop of Canterbury from 619 to 625, that refers to a king named Aduluald, who is apparently different from Audubald, which refers to Eadbald. There is no agreement among modern scholars on how to interpret this: \"Aduluald\" might be intended as a representation of \"Æthelwald\", and hence an indication of another king, perhaps a sub-king of west Kent; or it may be merely a scribal error which should be read as referring to Eadbald.",
"title": "Death and succession"
},
{
"paragraph_id": 39,
"text": "Æthelberht was later regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. In the 2004 edition of the Roman Martyrology, he is listed under his date of death, 24 February, with the citation: 'King of Kent, converted by St Augustine, bishop, the first leader of the English people to do so'. The Roman Catholic Archdiocese of Southwark, which contains Kent, commemorates him on 25 February.",
"title": "Liturgical celebration"
},
{
"paragraph_id": 40,
"text": "He is also venerated in the Eastern Orthodox Church as Saint Ethelbert, king of Kent, his day commemorated on 25 February.",
"title": "Liturgical celebration"
},
{
"paragraph_id": 41,
"text": "",
"title": "External links"
}
]
| Æthelberht was King of Kent from about 589 until his death. The eighth-century monk Bede, in his Ecclesiastical History of the English People, lists him as the third king to hold imperium over other Anglo-Saxon kingdoms. In the late ninth century Anglo-Saxon Chronicle, he is referred to as a bretwalda, or "Britain-ruler". He was the first English king to convert to Christianity. Æthelberht was the son of Eormenric, succeeding him as king, according to the Chronicle. He married Bertha, the Christian daughter of Charibert I, king of the Franks, thus building an alliance with the most powerful state in contemporary Western Europe; the marriage probably took place before he came to the throne. Bertha's influence may have led to Pope Gregory I's decision to send Augustine as a missionary from Rome. Augustine landed on the Isle of Thanet in east Kent in 597. Shortly thereafter, Æthelberht converted to Christianity, churches were established, and wider-scale conversion to Christianity began in the kingdom. He provided the new church with land in Canterbury, thus helping to establish one of the foundation stones of English Christianity. Æthelberht's law for Kent, the earliest written code in any Germanic language, instituted a complex system of fines; the law code is preserved in the Textus Roffensis. Kent was rich, with strong trade ties to the Continent, and Æthelberht may have instituted royal control over trade. Coinage probably began circulating in Kent during his reign for the first time since the Anglo-Saxon settlement. He later came to be regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. | 2001-10-16T21:15:08Z | 2023-12-05T11:42:07Z | [
"Template:Bretwalda",
"Template:Short description",
"Template:Use dmy dates",
"Template:Lang",
"Template:Cite journal",
"Template:PASE",
"Template:Commons category-inline",
"Template:Authority control",
"Template:Main",
"Template:Portal",
"Template:Reflist",
"Template:Cite web",
"Template:Wikisource author",
"Template:Use British English",
"Template:IPAc-en",
"Template:Lang-ang",
"Template:Circa",
"Template:Kentish Monarchs",
"Template:Anglo-Saxon saints",
"Template:Featured article",
"Template:Bots",
"Template:Infobox royalty",
"Template:IPA-ang",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/%C3%86thelberht_of_Kent |
9,942 | Erwin Schrödinger | Erwin Rudolf Josef Alexander Schrödinger (UK: /ˈʃrɜːdɪŋər/, US: /ˈʃroʊ-/; German: [ˈɛɐ̯vɪn ˈʃʁøːdɪŋɐ]; 12 August 1887 – 4 January 1961), sometimes written as Schroedinger or Schrodinger, was a Nobel Prize–winning Austrian and naturalized Irish physicist who developed fundamental results in quantum theory. In particular, he is recognized for postulating the Schrödinger equation, an equation that provides a way to calculate the wave function of a system and how it changes dynamically in time. He coined the term "quantum entanglement", and was the earliest to discuss it, doing so in 1932.
In addition, he wrote many works on various aspects of physics: statistical mechanics and thermodynamics, physics of dielectrics, colour theory, electrodynamics, general relativity, and cosmology, and he made several attempts to construct a unified field theory. In his book What Is Life? Schrödinger addressed the problems of genetics, looking at the phenomenon of life from the point of view of physics. He also paid great attention to the philosophical aspects of science, ancient, and oriental philosophical concepts, ethics, and religion. He also wrote on philosophy and theoretical biology. In popular culture, he is best known for his "Schrödinger's cat" thought experiment.
Spending most of his life as an academic with positions at various universities, Schrödinger, along with Paul Dirac, won the Nobel Prize in Physics in 1933 for his work on quantum mechanics, the same year he left Germany due to his opposition to Nazism. In his personal life, he lived with both his wife and his mistress, which may have caused problems that led him to leave his position at Oxford. Subsequently, he held a position in Graz, Austria, until the Nazi takeover in 1938, when he fled, finally finding a long-term arrangement in Dublin, where he remained until his retirement in 1955. He died in Vienna of tuberculosis when he was 73.
Schrödinger was born in Erdberg [de], Vienna, Austria, on 12 August 1887, to Rudolf Schrödinger [de] (cerecloth producer, botanist) and Georgine Emilia Brenda Schrödinger (née Bauer) (daughter of Alexander Bauer [de], professor of chemistry, TU Wien). He was their only child.
His mother was of half Austrian and half English descent; his father was Catholic and his mother was Lutheran. He himself was an atheist. However, he had strong interests in Eastern religions and pantheism, and he used religious symbolism in his works. He also believed his scientific work was an approach to Divinity, albeit in an intellectual sense.
He was also able to learn English outside school, as his maternal grandmother was British. Between 1906 and 1910 (the year he earned his doctorate) Schrödinger studied at the University of Vienna under the physicists Franz S. Exner (1849–1926) and Friedrich Hasenöhrl (1874–1915). He received his doctorate at Vienna under Hasenöhrl. He also conducted experimental work with Karl Wilhelm Friedrich "Fritz" Kohlrausch. In 1911, Schrödinger became an assistant to Exner.
In 1914 Schrödinger achieved habilitation (venia legendi). Between 1914 and 1918 he participated in war work as a commissioned officer in the Austrian fortress artillery (Gorizia, Duino, Sistiana, Prosecco, Vienna). In 1920 he became the assistant to Max Wien, in Jena, and in September 1920 he attained the position of ao. Prof. (ausserordentlicher Professor), roughly equivalent to Reader (UK) or associate professor (US), in Stuttgart. In 1921, he became o. Prof. (ordentlicher Professor, i.e. full professor), in Breslau (now Wrocław, Poland).
In 1921, he moved to the University of Zürich. In 1927, he succeeded Max Planck at the Friedrich Wilhelm University in Berlin. In 1933, Schrödinger decided to leave Germany because he strongly disapproved of the Nazis' antisemitism. He became a Fellow of Magdalen College at the University of Oxford. Soon after he arrived, he received the Nobel Prize together with Paul Dirac. His position at Oxford did not work out well; his unconventional domestic arrangements, sharing living quarters with two women, were not met with acceptance. In 1934, Schrödinger lectured at Princeton University; he was offered a permanent position there, but did not accept it. Again, his wish to set up house with his wife and his mistress may have created a problem. He had the prospect of a position at the University of Edinburgh but visa delays occurred, and in the end he took up a position at the University of Graz in Austria in 1936. He had also accepted the offer of chair position at Department of Physics, Allahabad University in India.
In the midst of these tenure issues in 1935, after extensive correspondence with Albert Einstein, he proposed what is now called the Schrödinger's cat thought experiment.
In 1938, after the Anschluss, Schrödinger had problems in Graz because of his flight from Germany in 1933 and his known opposition to Nazism. He issued a statement recanting this opposition (he later regretted doing so and explained the reason to Einstein). However, this did not fully appease the new dispensation and the University of Graz dismissed him from his post for political unreliability. He suffered harassment and was instructed not to leave the country. He and his wife, however, fled to Italy. From there, he went to visiting positions in Oxford and Ghent University.
In the same year he received a personal invitation from Ireland's Taoiseach, Éamon de Valera – a mathematician himself – to reside in Ireland and agreed to help establish an Institute for Advanced Studies in Dublin. He moved to Kincora Road, Clontarf, Dublin, and lived modestly. A plaque has been erected at his Clontarf residence and at the address of his workplace in Merrion Square. Schrödinger believed that as an Austrian he had a unique relationship to Ireland. In October 1940, a writer from the Irish Press interviewed Schrödinger who spoke of Celtic heritage of Austrians, saying: "I believe there is a deeper connection between us Austrians and the Celts. Names of places in the Austrian Alps are said to be of Celtic origin." He became the Director of the School for Theoretical Physics in 1940 and remained there for 17 years. He became a naturalized Irish citizen in 1948, but also retained his Austrian citizenship. He wrote around 50 further publications on various topics, including his explorations of unified field theory.
In 1944, he wrote What Is Life?, which contains a discussion of negentropy and the concept of a complex molecule with the genetic code for living organisms. According to James D. Watson's memoir, DNA, the Secret of Life, Schrödinger's book gave Watson the inspiration to research the gene, which led to the discovery of the DNA double helix structure in 1953. Similarly, Francis Crick, in his autobiographical book What Mad Pursuit, described how he was influenced by Schrödinger's speculations about how genetic information might be stored in molecules.
Schrödinger stayed in Dublin until retiring in 1955.
A manuscript "Fragment from an unpublished dialogue of Galileo" from this time recently resurfaced at The King's Hospital boarding school, Dublin after it was written for the School's 1955 edition of their Blue Coat to celebrate his leaving of Dublin to take up his appointment as Chair of Physics at the University of Vienna.
In 1956, he returned to Vienna (chair ad personam). At an important lecture during the World Energy Conference he refused to speak on nuclear energy because of his scepticism about it and gave a philosophical lecture instead. During this period, Schrödinger turned from mainstream quantum mechanics' definition of wave–particle duality and promoted the wave idea alone, causing much controversy.
Schrödinger suffered from tuberculosis and several times in the 1920s stayed at a sanatorium in Arosa in Switzerland. It was there that he formulated his wave equation. On 4 January 1961, Schrödinger died of tuberculosis, aged 73, in Vienna. He left Anny a widow, and was buried in Alpbach, Austria, in a Catholic cemetery. Although he was not Catholic, the priest in charge of the cemetery permitted the burial after learning Schrödinger was a member of the Pontifical Academy of Sciences.
On April 6, 1920, Schrödinger married Annemarie (Anny) Bertel.
When he migrated to Ireland in 1938, he obtained visas for himself, his wife and also another woman, Hilde March. March was the wife of an Austrian colleague and Schrödinger had fathered a daughter with her in 1934. Schrödinger wrote to the Taoiseach, Éamon de Valera personally, so as to obtain a visa for March. In October 1939 the ménage à trois duly took up residence in Dublin. His wife, Anny (born 3 December 1896), died on 3 October 1965.
One of Schrödinger's grandchildren, Terry Rudolph, has followed in his footsteps as a quantum physicist, and teaches at Imperial College London.
Schrödinger kept a record of his sexual liaisons including children he sexually abused in a diary he called Ephemeridae, in which he stated a "predilection for teenage girls on the grounds that their innocence was the ideal match for his natural genius".
At the age of 39, Schrödinger tutored 14-year-old "Ithi" Junger. As John Gribbin recounted in his 2012 biography of Schrödinger, "As well as the maths, the lessons included 'a fair amount of petting and cuddling' and Schrödinger soon convinced himself that he was in love with Ithi". Schrödinger assured Junger she would not become pregnant, and raped her at 17. She later became pregnant and had an abortion that left her sterile. Schrödinger left her soon after and moved on to other targets. Kate Nolan, a pseudonym used by surviving family to protect the victim, was also impregnated by Schrödinger amid claims of a lack of consent.
Carlo Rovelli notes in his book Helgoland that Schrödinger "always kept a number of relationships going at once – and made no secret of his fascination with preadolescent girls". In Ireland, Rovelli writes, he had one child each from two students identified in a Der Standard article as being a 26-year-old and a married political activist of unknown age. While carrying out research into a family tree, Bernard Biggar uncovered reports of Schrödinger grooming his cousin, Barbara MacEntee, when she was 12 years old. Apparently, her uncle, the mathematician and priest Pádraig de Brún, advised Schrödinger to no longer pursue her, and Schrödinger later wrote in his journal that she was one of his "unrequited loves". MacEntee died in 1995, with the accounts emerging posthumously.
Walter Moore's biography of the scientist outlined that Schrödinger's attitude towards women was "essentially that of a male supremacist", an assessment corroborated by Helge Kragh in his review of Moore's biography, "The conquest of women, especially very young women, was the salt of life for this sincere romantic and male chauvinist". Walter Moore used Schrödinger's relationships with girls to characterise what Moore called Schrödinger's "Lolita Complex". Schrödinger's grandson and his mother were unhappy with the accusation made by Moore, and once the biography was published, their family broke off contact with him.
In a 2021 Irish Times article, Schrödinger's pattern of serial abuse was identified by the paper as a "behaviour [that] fitted the profile of a paedophile in the widely understood sense of that term." The physics department of Trinity College Dublin announced in January 2022 that they would recommend a lecture theatre that had been named for Schrödinger since the 1990s be renamed in light of his history of sexual abuse, while a picture of the scientist would be removed, and the renaming of an eponymous lecture series would be considered. The College's webpage "The History of the School of Physics" currently has a photo labeled, "View of the front desk and blackboard at the Physics Lecture Theatre".
Early in his life, Schrödinger experimented in the fields of electrical engineering, atmospheric electricity, and atmospheric radioactivity, but he usually worked with his former teacher Franz Exner. He also studied vibrational theory, the theory of Brownian motion, and mathematical statistics. In 1912, at the request of the editors of the Handbook of Electricity and Magnetism, Schrödinger wrote an article titled Dielectrism. That same year, Schrödinger gave a theoretical estimate of the probable height distribution of radioactive substances, which is required to explain the observed radioactivity of the atmosphere, and in August 1913 executed several experiments in Zeehame that confirmed his theoretical estimate and those of Victor Franz Hess. For this work, Schrödinger was awarded the 1920 Haitinger Prize (Haitinger-Preis) of the Austrian Academy of Sciences. Other experimental studies conducted by the young researcher in 1914 were checking formulas for capillary pressure in gas bubbles and the study of the properties of soft beta radiation produced by gamma rays striking a metal surface. The last work he performed together with his friend Fritz Kohlrausch. In 1919, Schrödinger performed his last physical experiment on coherent light and subsequently focused on theoretical studies.
In the first years of his career, Schrödinger became acquainted with the ideas of the old quantum theory, developed in the works of Max Planck, Albert Einstein, Niels Bohr, Arnold Sommerfeld, and others. This knowledge helped him work on some problems in theoretical physics, but the Austrian scientist at the time was not yet ready to part with the traditional methods of classical physics.
Schrödinger's first publications about atomic theory and the theory of spectra began to emerge only from the beginning of the 1920s, after his personal acquaintance with Sommerfeld and Wolfgang Pauli and his move to Germany. In January 1921, Schrödinger finished his first article on this subject, about the framework of the Bohr-Sommerfeld effect of the interaction of electrons on some features of the spectra of the alkali metals. Of particular interest to him was the introduction of relativistic considerations in quantum theory. In autumn 1922, he analyzed the electron orbits in an atom from a geometric point of view, using methods developed by the mathematician Hermann Weyl (1885–1955). This work, in which it was shown that quantum orbits are associated with certain geometric properties, was an important step in predicting some of the features of wave mechanics. Earlier in the same year, he created the Schrödinger equation of the relativistic Doppler effect for spectral lines, based on the hypothesis of light quanta and considerations of energy and momentum. He liked the idea of his teacher Exner on the statistical nature of the conservation laws, so he enthusiastically embraced the articles of Bohr, Kramers, and Slater, which suggested the possibility of violation of these laws in individual atomic processes (for example, in the process of emission of radiation). Although the experiments of Hans Geiger and Walther Bothe soon cast doubt on this, the idea of energy as a statistical concept was a lifelong attraction for Schrödinger, and he discussed it in some reports and publications.
In January 1926, Schrödinger published in Annalen der Physik the paper "Quantisierung als Eigenwertproblem" (Quantization as an Eigenvalue Problem) on wave mechanics and presented what is now known as the Schrödinger equation. In this paper, he gave a "derivation" of the wave equation for time-independent systems and showed that it gave the correct energy eigenvalues for a hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century and created a revolution in most areas of quantum mechanics and indeed of all physics and chemistry. A second paper was submitted just four weeks later that solved the quantum harmonic oscillator, rigid rotor, and diatomic molecule problems and gave a new derivation of the Schrödinger equation. A third paper, published in May, showed the equivalence of his approach to that of Heisenberg and gave the treatment of the Stark effect. A fourth paper in this series showed how to treat problems in which the system changes with time, as in scattering problems. In this paper, he introduced a complex solution to the wave equation in order to prevent the occurrence of fourth- and sixth-order differential equations. Schrödinger ultimately reduced the order of the equation to one. (This was arguably the moment when quantum mechanics switched from real to complex numbers.) These papers were his central achievement and were at once recognized as having great significance by the physics community.
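In modern textbook notation, rather than Schrödinger's original 1926 formulation, the time-independent equation of that first paper can be written, for a particle of mass m with wave function ψ moving in a potential V, as
$$-\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}).$$
For a hydrogen-like atom with nuclear charge Ze, where V(r) = -\frac{Ze^2}{4\pi\varepsilon_0 r}, normalisable solutions exist only at the discrete energies
$$E_n = -\frac{m e^4 Z^2}{2\,(4\pi\varepsilon_0)^2\hbar^2 n^2} \approx -13.6\,\frac{Z^2}{n^2}\ \text{eV}, \qquad n = 1, 2, 3, \dots$$
(neglecting the reduced-mass correction), the same levels given by the earlier Bohr model; this is the sense in which the equation "gave the correct energy eigenvalues for a hydrogen-like atom".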
Schrödinger was not entirely comfortable with the implications of quantum theory referring to his theory as "wave mechanics". He wrote about the probability interpretation of quantum mechanics, saying, "I don't like it, and I'm sorry I ever had anything to do with it." (Just in order to ridicule the Copenhagen interpretation of quantum mechanics, he contrived the famous thought experiment called Schrödinger's cat paradox and was said to have angrily complained to his students that "now the damned Göttingen physicists use my beautiful wave mechanics for calculating their shitty matrix elements.")
Following his work on quantum mechanics, Schrödinger devoted considerable effort to working on a unified field theory that would unite gravity, electromagnetism, and nuclear forces within the basic framework of general relativity, doing the work with an extended correspondence with Albert Einstein. In 1947, he announced a result, "Affine Field Theory", in a talk at the Royal Irish Academy, but the announcement was criticized by Einstein as "preliminary" and failed to lead to the desired unified theory. Following the failure of his attempt at unification, Schrödinger gave up his work on unification and turned to other topics.
Schrödinger had a strong interest in psychology, in particular color perception and colorimetry (German: Farbenmetrik). He spent quite a few years of his life working on these questions and published a series of papers in this area:
His work on the psychology of color perception follows in the footsteps of Newton, Maxwell, and von Helmholtz in the same area. Some of these papers have been translated into English and can be found in: Sources of Colour Science, Ed. David L. MacAdam, MIT Press (1970) and in Erwin Schrödinger’s Color Theory, Translated with Modern Commentary, Ed. Keith K. Niall, Springer (2017). ISBN 978-3-319-64619-0 doi:10.1007/978-3-319-64621-3.
Schrödinger had a deep interest in philosophy, and was influenced by the works of Arthur Schopenhauer and Baruch Spinoza. In his 1956 lecture "Mind and Matter", he said that "The world extended in space and time is but our representation." This is a repetition of the first words of Schopenhauer's main work. Schopenhauer's works also introduced him to Indian philosophy, more specifically to the Upanishads and Advaita Vedanta’s interpretation. He once took on a particular line of thought: "If the world is indeed created by our act of observation, there should be billions of such worlds, one for each of us. How come your world and my world are the same? If something happens in my world, does it happen in your world, too? What causes all these worlds to synchronize with each other?".
"There is obviously only one alternative, namely the unification of minds or consciousnesses. Their multiplicity is only apparent, in truth there is only one mind. This is the doctrine of the Upanishads."
Schrödinger discussed topics such as consciousness, the mind–body problem, sense perception, free will, and objective reality in his lectures and writings.
Schrödinger’s attitude with respect to the relations between Eastern and Western thought was one of prudence, expressing appreciation for Eastern philosophy while also admitting that some of the ideas did not fit with empirical approaches to natural philosophy. Some commentators have suggested that Schrödinger was so deeply immersed in a non-dualist Vedântic-like view that it may have served as a broad framework or subliminal inspiration for much of his work including that in theoretical physics. Schrödinger expressed sympathy for the idea of tat tvam asi, stating "you can throw yourself flat on the ground, stretched out upon Mother Earth, with the certain conviction that you are one with her and she with you."
Schrödinger said that "Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else."
The philosophical issues raised by Schrödinger's cat are still debated today and remain his most enduring legacy in popular science, while Schrödinger's equation is his most enduring legacy at a more technical level. Schrödinger is one of several individuals who have been called "the father of quantum mechanics". The large crater Schrödinger, on the far side of the Moon, is named after him. The Erwin Schrödinger International Institute for Mathematical Physics was founded in Vienna in 1992.
Schrödinger's portrait was the main feature of the design of the 1983–97 Austrian 1000-schilling banknote, the second-highest denomination.
A building is named after him at the University of Limerick, in Limerick, Ireland, as is the 'Erwin Schrödinger Zentrum' at Adlershof in Berlin and the Route Schrödinger at CERN, Prévessin, France.
Schrödinger also has a lecture hall in Trinity College Dublin dedicated to him. In January 2022, the head of the school of physics stated there would be a recommendation to drop Schrödinger's name from the lecture theatre due to Schrödinger's "history of sexually abusing women and children".
Schrödinger's 126th birthday anniversary in 2013 was celebrated with a Google Doodle.
Schrödinger's cat is named in his honour; see also: List of things named after Erwin Schrödinger. | [
{
"paragraph_id": 0,
"text": "Erwin Rudolf Josef Alexander Schrödinger (UK: /ˈʃrɜːdɪŋər/, US: /ˈʃroʊ-/; German: [ˈɛɐ̯vɪn ˈʃʁøːdɪŋɐ]; 12 August 1887 – 4 January 1961), sometimes written as Schroedinger or Schrodinger, was a Nobel Prize–winning Austrian and naturalized Irish physicist who developed fundamental results in quantum theory. In particular, he is recognized for postulating the Schrödinger equation, an equation that provides a way to calculate the wave function of a system and how it changes dynamically in time. He coined the term \"quantum entanglement\", and was the earliest to discuss it, doing so in 1932.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In addition, he wrote many works on various aspects of physics: statistical mechanics and thermodynamics, physics of dielectrics, colour theory, electrodynamics, general relativity, and cosmology, and he made several attempts to construct a unified field theory. In his book What Is Life? Schrödinger addressed the problems of genetics, looking at the phenomenon of life from the point of view of physics. He also paid great attention to the philosophical aspects of science, ancient, and oriental philosophical concepts, ethics, and religion. He also wrote on philosophy and theoretical biology. In popular culture, he is best known for his \"Schrödinger's cat\" thought experiment.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Spending most of his life as an academic with positions at various universities, Schrödinger, along with Paul Dirac, won the Nobel Prize in Physics in 1933 for his work on quantum mechanics, the same year he left Germany due to his opposition to Nazism. In his personal life, he lived with both his wife and his mistress which may have led to problems causing him to leave his position at Oxford. Subsequently, until 1938, he had a position in Graz, Austria, until the Nazi takeover when he fled, finally finding a long-term arrangement in Dublin where he remained until retirement in 1955. He died in Vienna of tuberculosis when he was 73.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Schrödinger was born in Erdberg [de], Vienna, Austria, on 12 August 1887, to Rudolf Schrödinger [de] (cerecloth producer, botanist) and Georgine Emilia Brenda Schrödinger (née Bauer) (daughter of Alexander Bauer [de], professor of chemistry, TU Wien). He was their only child.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "His mother was of half Austrian and half English descent; his father was Catholic and his mother was Lutheran. He himself was an atheist. However, he had strong interests in Eastern religions and pantheism, and he used religious symbolism in his works. He also believed his scientific work was an approach to Divinity, albeit in an intellectual sense.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "He was also able to learn English outside school, as his maternal grandmother was British. Between 1906 and 1910 (the year he earned his doctorate) Schrödinger studied at the University of Vienna under the physicists Franz S. Exner (1849–1926) and Friedrich Hasenöhrl (1874–1915). He received his doctorate at Vienna under Hasenöhrl. He also conducted experimental work with Karl Wilhelm Friedrich \"Fritz\" Kohlrausch. In 1911, Schrödinger became an assistant to Exner.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "In 1914 Schrödinger achieved habilitation (venia legendi). Between 1914 and 1918 he participated in war work as a commissioned officer in the Austrian fortress artillery (Gorizia, Duino, Sistiana, Prosecco, Vienna). In 1920 he became the assistant to Max Wien, in Jena, and in September 1920 he attained the position of ao. Prof. (ausserordentlicher Professor), roughly equivalent to Reader (UK) or associate professor (US), in Stuttgart. In 1921, he became o. Prof. (ordentlicher Professor, i.e. full professor), in Breslau (now Wrocław, Poland).",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "In 1921, he moved to the University of Zürich. In 1927, he succeeded Max Planck at the Friedrich Wilhelm University in Berlin. In 1933, Schrödinger decided to leave Germany because he strongly disapproved of the Nazis' antisemitism. He became a Fellow of Magdalen College at the University of Oxford. Soon after he arrived, he received the Nobel Prize together with Paul Dirac. His position at Oxford did not work out well; his unconventional domestic arrangements, sharing living quarters with two women, were not met with acceptance. In 1934, Schrödinger lectured at Princeton University; he was offered a permanent position there, but did not accept it. Again, his wish to set up house with his wife and his mistress may have created a problem. He had the prospect of a position at the University of Edinburgh but visa delays occurred, and in the end he took up a position at the University of Graz in Austria in 1936. He had also accepted the offer of chair position at Department of Physics, Allahabad University in India.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "In the midst of these tenure issues in 1935, after extensive correspondence with Albert Einstein, he proposed what is now called the Schrödinger's cat thought experiment.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "In 1938, after the Anschluss, Schrödinger had problems in Graz because of his flight from Germany in 1933 and his known opposition to Nazism. He issued a statement recanting this opposition (he later regretted doing so and explained the reason to Einstein). However, this did not fully appease the new dispensation and the University of Graz dismissed him from his post for political unreliability. He suffered harassment and was instructed not to leave the country. He and his wife, however, fled to Italy. From there, he went to visiting positions in Oxford and Ghent University.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "In the same year he received a personal invitation from Ireland's Taoiseach, Éamon de Valera – a mathematician himself – to reside in Ireland and agreed to help establish an Institute for Advanced Studies in Dublin. He moved to Kincora Road, Clontarf, Dublin, and lived modestly. A plaque has been erected at his Clontarf residence and at the address of his workplace in Merrion Square. Schrödinger believed that as an Austrian he had a unique relationship to Ireland. In October 1940, a writer from the Irish Press interviewed Schrödinger who spoke of Celtic heritage of Austrians, saying: \"I believe there is a deeper connection between us Austrians and the Celts. Names of places in the Austrian Alps are said to be of Celtic origin.\" He became the Director of the School for Theoretical Physics in 1940 and remained there for 17 years. He became a naturalized Irish citizen in 1948, but also retained his Austrian citizenship. He wrote around 50 further publications on various topics, including his explorations of unified field theory.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "In 1944, he wrote What Is Life?, which contains a discussion of negentropy and the concept of a complex molecule with the genetic code for living organisms. According to James D. Watson's memoir, DNA, the Secret of Life, Schrödinger's book gave Watson the inspiration to research the gene, which led to the discovery of the DNA double helix structure in 1953. Similarly, Francis Crick, in his autobiographical book What Mad Pursuit, described how he was influenced by Schrödinger's speculations about how genetic information might be stored in molecules.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "Schrödinger stayed in Dublin until retiring in 1955.",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "A manuscript \"Fragment from an unpublished dialogue of Galileo\" from this time recently resurfaced at The King's Hospital boarding school, Dublin after it was written for the School's 1955 edition of their Blue Coat to celebrate his leaving of Dublin to take up his appointment as Chair of Physics at the University of Vienna.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "In 1956, he returned to Vienna (chair ad personam). At an important lecture during the World Energy Conference he refused to speak on nuclear energy because of his scepticism about it and gave a philosophical lecture instead. During this period, Schrödinger turned from mainstream quantum mechanics' definition of wave–particle duality and promoted the wave idea alone, causing much controversy.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "Schrödinger suffered from tuberculosis and several times in the 1920s stayed at a sanatorium in Arosa in Switzerland. It was there that he formulated his wave equation. On 4 January 1961, Schrödinger died of tuberculosis, aged 73, in Vienna. He left Anny a widow, and was buried in Alpbach, Austria, in a Catholic cemetery. Although he was not Catholic, the priest in charge of the cemetery permitted the burial after learning Schrödinger was a member of the Pontifical Academy of Sciences.",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "On April 6, 1920, Schrödinger married Annemarie (Anny) Bertel.",
"title": "Personal life"
},
{
"paragraph_id": 17,
"text": "When he migrated to Ireland in 1938, he obtained visas for himself, his wife and also another woman, Hilde March. March was the wife of an Austrian colleague and Schrödinger had fathered a daughter with her in 1934. Schrödinger wrote to the Taoiseach, Éamon de Valera personally, so as to obtain a visa for March. In October 1939 the ménage à trois duly took up residence in Dublin. His wife, Anny (born 3 December 1896), died on 3 October 1965.",
"title": "Personal life"
},
{
"paragraph_id": 18,
"text": "One of Schrödinger's grandchildren, Terry Rudolph, has followed in his footsteps as a quantum physicist, and teaches at Imperial College London.",
"title": "Personal life"
},
{
"paragraph_id": 19,
"text": "Schrödinger kept a record of his sexual liaisons including children he sexually abused in a diary he called Ephemeridae, in which he stated a \"predilection for teenage girls on the grounds that their innocence was the ideal match for his natural genius\".",
"title": "Personal life"
},
{
"paragraph_id": 20,
"text": "At the age of 39, Schrödinger tutored 14-year-old \"Ithi\" Junger. As John Gribbin recounted in his 2012 biography of Schrödinger, \"As well as the maths, the lessons included 'a fair amount of petting and cuddling' and Schrödinger soon convinced himself that he was in love with Ithi\". Schrödinger assured Junger she would not become pregnant, and raped her at 17. She later became pregnant and had an abortion that left her sterile. Schrödinger left her soon after and moved on to other targets. Kate Nolan, a pseudonym used by surviving family to protect the victim, was also impregnated by Schrödinger amid claims of a lack of consent.",
"title": "Personal life"
},
{
"paragraph_id": 21,
"text": "Carlo Rovelli notes in his book Helgoland that Schrödinger \"always kept a number of relationships going at once – and made no secret of his fascination with preadolescent girls\". In Ireland, Rovelli writes, he had one child each from two students identified in a Der Standard article as being a 26-year-old and a married political activist of unknown age. While carrying out research into a family tree, Bernard Biggar uncovered reports of Schrödinger grooming his cousin, Barbara MacEntee, when she was 12 years old. Apparently, her uncle, the mathematician and priest Pádraig de Brún, advised Schrödinger to no longer pursue her, and Schrödinger later wrote in his journal that she was one of his \"unrequited loves\". MacEntee died in 1995, with the accounts emerging posthumously.",
"title": "Personal life"
},
{
"paragraph_id": 22,
"text": "Walter Moore's biography of the scientist outlined that Schrödinger's attitude towards women was \"essentially that of a male supremacist\", an assessment corroborated by Helge Kragh in his review of Moore's biography, \"The conquest of women, especially very young women, was the salt of life for this sincere romantic and male chauvinist\". Walter Moore used Schrödinger's relationships with girls to characterise what Moore called Schrödinger's \"Lolita Complex\". Schrödinger's grandson and his mother were unhappy with the accusation made by Moore, and once the biography was published, their family broke off contact with him.",
"title": "Personal life"
},
{
"paragraph_id": 23,
"text": "In a 2021 Irish Times article, Schrödinger's pattern of serial abuse was identified by the paper as a \"behaviour [that] fitted the profile of a paedophile in the widely understood sense of that term.\" The physics department of Trinity College Dublin announced in January 2022 that they would recommend a lecture theatre that had been named for Schrödinger since the 1990s be renamed in light of his history of sexual abuse, while a picture of the scientist would be removed, and the renaming of an eponymous lecture series would be considered. The College's webpage \"The History of the School of Physics\" currently has a photo labeled, \"View of the front desk and blackboard at the Physics Lecture Theatre\".",
"title": "Personal life"
},
{
"paragraph_id": 24,
"text": "Early in his life, Schrödinger experimented in the fields of electrical engineering, atmospheric electricity, and atmospheric radioactivity, but he usually worked with his former teacher Franz Exner. He also studied vibrational theory, the theory of Brownian motion, and mathematical statistics. In 1912, at the request of the editors of the Handbook of Electricity and Magnetism, Schrödinger wrote an article titled Dielectrism. That same year, Schrödinger gave a theoretical estimate of the probable height distribution of radioactive substances, which is required to explain the observed radioactivity of the atmosphere, and in August 1913 executed several experiments in Zeehame that confirmed his theoretical estimate and those of Victor Franz Hess. For this work, Schrödinger was awarded the 1920 Haitinger Prize (Haitinger-Preis) of the Austrian Academy of Sciences. Other experimental studies conducted by the young researcher in 1914 were checking formulas for capillary pressure in gas bubbles and the study of the properties of soft beta radiation produced by gamma rays striking a metal surface. The last work he performed together with his friend Fritz Kohlrausch. In 1919, Schrödinger performed his last physical experiment on coherent light and subsequently focused on theoretical studies.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 25,
"text": "In the first years of his career, Schrödinger became acquainted with the ideas of the old quantum theory, developed in the works of Max Planck, Albert Einstein, Niels Bohr, Arnold Sommerfeld, and others. This knowledge helped him work on some problems in theoretical physics, but the Austrian scientist at the time was not yet ready to part with the traditional methods of classical physics.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 26,
"text": "Schrödinger's first publications about atomic theory and the theory of spectra began to emerge only from the beginning of the 1920s, after his personal acquaintance with Sommerfeld and Wolfgang Pauli and his move to Germany. In January 1921, Schrödinger finished his first article on this subject, about the framework of the Bohr-Sommerfeld effect of the interaction of electrons on some features of the spectra of the alkali metals. Of particular interest to him was the introduction of relativistic considerations in quantum theory. In autumn 1922, he analyzed the electron orbits in an atom from a geometric point of view, using methods developed by the mathematician Hermann Weyl (1885–1955). This work, in which it was shown that quantum orbits are associated with certain geometric properties, was an important step in predicting some of the features of wave mechanics. Earlier in the same year, he created the Schrödinger equation of the relativistic Doppler effect for spectral lines, based on the hypothesis of light quanta and considerations of energy and momentum. He liked the idea of his teacher Exner on the statistical nature of the conservation laws, so he enthusiastically embraced the articles of Bohr, Kramers, and Slater, which suggested the possibility of violation of these laws in individual atomic processes (for example, in the process of emission of radiation). Although the experiments of Hans Geiger and Walther Bothe soon cast doubt on this, the idea of energy as a statistical concept was a lifelong attraction for Schrödinger, and he discussed it in some reports and publications.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 27,
"text": "In January 1926, Schrödinger published in Annalen der Physik the paper \"Quantisierung als Eigenwertproblem\" (Quantization as an Eigenvalue Problem) on wave mechanics and presented what is now known as the Schrödinger equation. In this paper, he gave a \"derivation\" of the wave equation for time-independent systems and showed that it gave the correct energy eigenvalues for a hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century and created a revolution in most areas of quantum mechanics and indeed of all physics and chemistry. A second paper was submitted just four weeks later that solved the quantum harmonic oscillator, rigid rotor, and diatomic molecule problems and gave a new derivation of the Schrödinger equation. A third paper, published in May, showed the equivalence of his approach to that of Heisenberg and gave the treatment of the Stark effect. A fourth paper in this series showed how to treat problems in which the system changes with time, as in scattering problems. In this paper, he introduced a complex solution to the wave equation in order to prevent the occurrence of fourth- and sixth-order differential equations. Schrödinger ultimately reduced the order of the equation to one. (This was arguably the moment when quantum mechanics switched from real to complex numbers.) These papers were his central achievement and were at once recognized as having great significance by the physics community.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 28,
"text": "Schrödinger was not entirely comfortable with the implications of quantum theory referring to his theory as \"wave mechanics\". He wrote about the probability interpretation of quantum mechanics, saying, \"I don't like it, and I'm sorry I ever had anything to do with it.\" (Just in order to ridicule the Copenhagen interpretation of quantum mechanics, he contrived the famous thought experiment called Schrödinger's cat paradox and was said to have angrily complained to his students that \"now the damned Göttingen physicists use my beautiful wave mechanics for calculating their shitty matrix elements.\")",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 29,
"text": "Following his work on quantum mechanics, Schrödinger devoted considerable effort to working on a unified field theory that would unite gravity, electromagnetism, and nuclear forces within the basic framework of general relativity, doing the work with an extended correspondence with Albert Einstein. In 1947, he announced a result, \"Affine Field Theory\", in a talk at the Royal Irish Academy, but the announcement was criticized by Einstein as \"preliminary\" and failed to lead to the desired unified theory. Following the failure of his attempt at unification, Schrödinger gave up his work on unification and turned to other topics.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 30,
"text": "Schrödinger had a strong interest in psychology, in particular color perception and colorimetry (German: Farbenmetrik). He spent quite a few years of his life working on these questions and published a series of papers in this area:",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 31,
"text": "His work on the psychology of color perception follows the step of Newton, Maxwell and von Helmholtz in the same area. Some of these papers have been translated into English and can be found in: Sources of Colour Science, Ed. David L. MacAdam, MIT Press (1970) and in Erwin Schrödinger’s Color Theory, Translated with Modern Commentary, Ed. Keith K. Niall, Springer (2017). ISBN 978-3-319-64619-0 doi:10.1007/978-3-319-64621-3.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 32,
"text": "Schrödinger had a deep interest in philosophy, and was influenced by the works of Arthur Schopenhauer and Baruch Spinoza. In his 1956 lecture \"Mind and Matter\", he said that \"The world extended in space and time is but our representation.\" This is a repetition of the first words of Schopenhauer's main work. Schopenhauer's works also introduced him to Indian philosophy, more specifically to the Upanishads and Advaita Vedanta’s interpretation. He once took on a particular line of thought: \"If the world is indeed created by our act of observation, there should be billions of such worlds, one for each of us. How come your world and my world are the same? If something happens in my world, does it happen in your world, too? What causes all these worlds to synchronize with each other?\".",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 33,
"text": "\"There is obviously only one alternative, namely the unification of minds or consciousnesses. Their multiplicity is only apparent, in truth there is only one mind. This is the doctrine of the Upanishads.\"",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 34,
"text": "Schrödinger discussed topics such as consciousness, the mind–body problem, sense perception, free will, and objective reality in his lectures and writings.",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 35,
"text": "Schrödinger’s attitude with respect to the relations between Eastern and Western thought was one of prudence, expressing appreciation for Eastern philosophy while also admitting that some of the ideas did not fit with empirical approaches to natural philosophy. Some commentators have suggested that Schrödinger was so deeply immersed in a non-dualist Vedântic-like view that it may have served as a broad framework or subliminal inspiration for much of his work including that in theoretical physics. Schrödinger expressed sympathy for the idea of tat tvam asi, stating \"you can throw yourself flat on the ground, stretched out upon Mother Earth, with the certain conviction that you are one with her and she with you.\"",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 36,
"text": "Schrödinger said that \"Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.\"",
"title": "Academic interests and life of the mind"
},
{
"paragraph_id": 37,
"text": "The philosophical issues raised by Schrödinger's cat are still debated today and remain his most enduring legacy in popular science, while Schrödinger's equation is his most enduring legacy at a more technical level. Schrödinger is one of several individuals who have been called \"the father of quantum mechanics\". The large crater Schrödinger, on the far side of the Moon, is named after him. The Erwin Schrödinger International Institute for Mathematical Physics was founded in Vienna in 1992.",
"title": "Legacy"
},
{
"paragraph_id": 38,
"text": "Schrödinger's portrait was the main feature of the design of the 1983–97 Austrian 1000-schilling banknote, the second-highest denomination.",
"title": "Legacy"
},
{
"paragraph_id": 39,
"text": "A building is named after him at the University of Limerick, in Limerick, Ireland, as is the 'Erwin Schrödinger Zentrum' at Adlershof in Berlin and the Route Schrödinger at CERN, Prévessin, France.",
"title": "Legacy"
},
{
"paragraph_id": 40,
"text": "Schrödinger also has a lecture hall in Trinity College Dublin dedicated to him. In January 2022, the head of the school of physics stated there would be a recommendation to drop Schrödinger lecture theatre name due to Schrödinger's \"history of sexually abusing women and children\".",
"title": "Legacy"
},
{
"paragraph_id": 41,
"text": "Schrödinger's 126th birthday anniversary in 2013 was celebrated with a Google Doodle.",
"title": "Legacy"
},
{
"paragraph_id": 42,
"text": "Schrödinger's cat is named in his honour, see also: List of things named after Erwin Schrödinger.",
"title": "Honors and awards"
}
]
| Erwin Rudolf Josef Alexander Schrödinger, sometimes written as Schroedinger or Schrodinger, was a Nobel Prize–winning Austrian and naturalized Irish physicist who developed fundamental results in quantum theory. In particular, he is recognized for postulating the Schrödinger equation, an equation that provides a way to calculate the wave function of a system and how it changes dynamically in time. He coined the term "quantum entanglement", and was the earliest to discuss it, doing so in 1932. In addition, he wrote many works on various aspects of physics: statistical mechanics and thermodynamics, physics of dielectrics, colour theory, electrodynamics, general relativity, and cosmology, and he made several attempts to construct a unified field theory. In his book What Is Life? Schrödinger addressed the problems of genetics, looking at the phenomenon of life from the point of view of physics. He also paid great attention to the philosophical aspects of science, ancient and oriental philosophical concepts, ethics, and religion. He also wrote on philosophy and theoretical biology. In popular culture, he is best known for his "Schrödinger's cat" thought experiment. Spending most of his life as an academic with positions at various universities, Schrödinger, along with Paul Dirac, won the Nobel Prize in Physics in 1933 for his work on quantum mechanics, the same year he left Germany due to his opposition to Nazism. In his personal life, he lived with both his wife and his mistress, which may have led to problems causing him to leave his position at Oxford. Subsequently, until 1938, he had a position in Graz, Austria, until the Nazi takeover, when he fled, finally finding a long-term arrangement in Dublin, where he remained until retirement in 1955. He died in Vienna of tuberculosis when he was 73. | 2001-10-17T17:40:39Z | 2023-12-05T13:32:03Z | [
"Template:Use dmy dates",
"Template:Infobox scientist",
"Template:IPAc-en",
"Template:IPA-de",
"Template:Citation needed",
"Template:Cite web",
"Template:Pp-semi-indef",
"Template:Interlanguage link",
"Template:Cite news",
"Template:20th Century Press Archives",
"Template:Nobel Prize in Physics",
"Template:Not a typo",
"Template:Linktext",
"Template:Equation box 1",
"Template:MacTutor",
"Template:Short description",
"Template:Redirect",
"Template:Doi",
"Template:Cite journal",
"Template:Harvnb",
"Template:Lang",
"Template:ISBN",
"Template:Reflist",
"Template:Cite book",
"Template:Commons category",
"Template:YouTube",
"Template:In lang",
"Template:1933 Nobel Prize winners",
"Template:Sfn",
"Template:Webarchive",
"Template:Citation",
"Template:Wikiquote",
"Template:Nobelprize",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Erwin_Schr%C3%B6dinger |
9,944 | Episome | An episome is a special type of plasmid, which remains as a part of the eukaryotic genome without integration. Episomes manage this by replicating together with the rest of the genome and subsequently associating with metaphase chromosomes during mitosis. Episomes do not degrade, unlike standard plasmids, and can be designed so that they are not epigenetically silenced inside the eukaryotic cell nucleus. Episomes can be observed in nature in certain types of long-term infection by adeno-associated virus or Epstein-Barr virus. In 2004, it was proposed that non-viral episomes might be used in genetic therapy for long-term change in gene expression.
As of 1999, there were many known sequences of DNA (deoxyribonucleic acid) that allow a standard plasmid to become episomally retained. One example is the S/MAR sequence.
The length of episomal retention is fairly variable between different genetic constructs and there are many known features in the sequence of an episome which will affect the length and stability of genetic expression of the carried transgene. Among these features is the number of CpG sites which contribute to epigenetic silencing of the transgene carried by the episome.
The mechanism behind episomal retention in the case of S/MAR episomes is generally still uncertain. As of 1985, in the case of latent Epstein-Barr virus infection, episomes seemed to be associated with nuclear proteins of the host cell through a set of viral proteins.
Episomes in prokaryotes are special sequences which can mitotically divide either separate from or integrated into the prokaryotic chromosome. | [
{
"paragraph_id": 0,
"text": "An episome is a special type of plasmid, which remains as a part of the eukaryotic genome without integration. Episomes manage this by replicating together with the rest of the genome and subsequently associating with metaphase chromosomes during mitosis. Episomes do not degrade, unlike standard plasmids, and can be designed so that they are not epigenetically silenced inside the eukaryotic cell nucleus. Episomes can be observed in nature in certain types of long-term infection by adeno-associated virus or Epstein-Barr virus. In 2004, it was proposed that non-viral episomes might be used in genetic therapy for long-term change in gene expression.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As of 1999, there were many known sequences of DNA (deoxyribonucleic acid) that allow a standard plasmid to become episomally retained. One example is the S/MAR sequence.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The length of episomal retention is fairly variable between different genetic constructs and there are many known features in the sequence of an episome which will affect the length and stability of genetic expression of the carried transgene. Among these features is the number of CpG sites which contribute to epigenetic silencing of the transgene carried by the episome.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The mechanism behind episomal retention in the case of S/MAR episomes is generally still uncertain. As of 1985, in the case of latent Epstein-Barr virus infection, episomes seemed to be associated with nuclear proteins of the host cell through a set of viral proteins.",
"title": "Mechanism of episomal retention"
},
{
"paragraph_id": 4,
"text": "Episomes in prokaryotes are special sequences which can mitotically divide either separate from or integrated into the prokaryotic chromosome.",
"title": "Episomes in prokaryotes"
}
]
| An episome is a special type of plasmid, which remains as a part of the eukaryotic genome without integration. Episomes manage this by replicating together with the rest of the genome and subsequently associating with metaphase chromosomes during mitosis. Episomes do not degrade, unlike standard plasmids, and can be designed so that they are not epigenetically silenced inside the eukaryotic cell nucleus. Episomes can be observed in nature in certain types of long-term infection by adeno-associated virus or Epstein-Barr virus. In 2004, it was proposed that non-viral episomes might be used in genetic therapy for long-term change in gene expression. As of 1999, there were many known sequences of DNA that allow a standard plasmid to become episomally retained. One example is the S/MAR sequence. The length of episomal retention is fairly variable between different genetic constructs and there are many known features in the sequence of an episome which will affect the length and stability of genetic expression of the carried transgene. Among these features is the number of CpG sites which contribute to epigenetic silencing of the transgene carried by the episome. | 2002-02-25T15:43:11Z | 2023-08-29T17:56:46Z | [
"Template:Short description",
"Template:Reflist",
"Template:Cite journal",
"Template:Citation"
]
| https://en.wikipedia.org/wiki/Episome |
9,945 | EasyWriter | EasyWriter was a word processor first written for the Apple II series computer in 1979, the first word processor for that platform.
Published by Information Unlimited Software (IUS), it was written by John Draper's Cap'n Software, which also produced a version of Forth, which EasyWriter was developed in. Draper developed EasyWriter while serving nights in the Alameda County Jail under a work furlough program.
It was later ported to the IBM PC and released with the new computer in August 1981 as a launch title. Many criticized EasyWriter 1.0, distributed by IBM, for being buggy and hard to use; PC Magazine told the company as early as December 1981 that subscribers "wish IBM had provided better word processing". The company quickly persuaded IUS to develop a new version. (When founder William Baker later sent "I Survived EasyWriter" T-shirts, IBM returned them stating that it did not accept gifts.) IBM offered a free upgrade to version 1.10 to version 1.0 owners, but EasyWriter's poor quality had caused others to quickly provide alternatives, such as Camilo Wilson's Volkswriter.
IUS released a separate application, EasyWriter II. Completely rewritten by Basic Software Group, IUS emphasized that II—developed with C instead of Forth—"is not an updated version of the original IBM selection or its upgrade".
BYTE in 1981 reviewed EasyWriter and EasyWriter Professional for the Apple II, stating that "editing is a pleasure with either version", and approving of their features, user interface, and documentation. In an early review of the IBM PC, however, the magazine in 1982 stated that EasyWriter for it or the Apple II "didn't seem to be of the same caliber as, say, VisiCalc or the Peachtree business packages", citing the lack of ease of use and slow scrolling as flaws, and advised those who planned to use the IBM PC primarily for word processing to buy another computer until alternative software became available. Andrew Fluegelman wrote in PC Magazine that although EasyWriter 1.0 appeared to be an easy-to-use word processor for casual users, it "contains a few very annoying inconveniences and some very serious traps". He cited several bugs, slow performance, and user-interface issues, and later called it "pretty much a lemon".
IBM's Don Estridge admitted in 1983 that he "tried to use EasyWriter 1.0 and had the same experience everybody else had". EasyWriter 1.10 resolved most of Fluegelman's complaints. He reported that it "performs smoothly, will handle most any routine writing and printing job, and is easy to learn and operate", and that if IBM had released 1.10 first EasyWriter would likely have become the standard PC word processor.
BYTE criticized EasyWriter II for running as a booter instead of using DOS, requiring specially formatted disks for storage and a utility to convert to DOS-formatted disks, not being compatible with double-sided drives, and using a heavily modal editing interface. | [
{
"paragraph_id": 0,
"text": "EasyWriter was a word processor first written for the Apple II series computer in 1979, the first word processor for that platform.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Published by Information Unlimited Software (IUS), it was written by John Draper's Cap'n Software, which also produced a version of Forth, which EasyWriter was developed in. Draper developed EasyWriter while serving nights in the Alameda County Jail under a work furlough program.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "It was later ported to the IBM PC and released with the new computer in August 1981 as a launch title. Many criticized EasyWriter 1.0, distributed by IBM, for being buggy and hard to use; PC Magazine told the company as early as December 1981 that subscribers \"wish IBM had provided better word processing\". The company quickly persuaded IUS to develop a new version. (When founder William Baker later sent \"I Survived EasyWriter\" T-shirts, IBM returned them stating that it did not accept gifts.) IBM offered a free upgrade to version 1.10 to version 1.0 owners, but EasyWriter's poor quality had caused others to quickly provide alternatives, such as Camilo Wilson's Volkswriter.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "IUS released a separate application, EasyWriter II. Completely rewritten by Basic Software Group, IUS emphasized that II—developed with C instead of Forth—\"is not an updated version of the original IBM selection or its upgrade\".",
"title": "History"
},
{
"paragraph_id": 4,
"text": "BYTE in 1981 reviewed EasyWriter and EasyWriter Professional for the Apple II, stating that \"editing is a pleasure with either version\", and approving of their features, user interface, and documentation. In an early review of the IBM PC, however, the magazine in 1982 stated that EasyWriter for it or the Apple II \"didn't seem to be of the same caliber as, say, VisiCalc or the Peachtree business packages\", citing the lack of ease of use and slow scrolling as flaws, and advised those who planned to use the IBM PC primarily for word processing to buy another computer until alternative software became available. Andrew Fluegelman wrote in PC Magazine that although EasyWriter 1.0 appeared to be an easy-to-use word processor for casual users, it \"contains a few very annoying inconveniences and some very serious traps\". He cited several bugs, slow performance, and user-interface issues, and later called it \"pretty much a lemon\".",
"title": "Reception"
},
{
"paragraph_id": 5,
"text": "IBM's Don Estridge admitted in 1983 that he \"tried to use EasyWriter 1.0 and had the same experience everybody else had\". EasyWriter 1.10 resolved most of Fluegelman's complaints. He reported that it \"performs smoothly, will handle most any routine writing and printing job, and is easy to learn and operate\", and that if IBM had released 1.10 first EasyWriter would likely have become the standard PC word processor.",
"title": "Reception"
},
{
"paragraph_id": 6,
"text": "BYTE criticized EasyWriter II for running as a booter instead of using DOS, requiring specially formatted disks for storage and a utility to convert to DOS-formatted disks, not being compatible with double-sided drives, and using a heavily modal editing interface.",
"title": "Reception"
}
]
| EasyWriter was a word processor first written for the Apple II series computer in 1979, the first word processor for that platform. | 2002-02-25T15:43:11Z | 2023-09-09T15:37:22Z | [
"Template:Infobox software",
"Template:R",
"Template:Reflist",
"Template:Cite news",
"Template:Cite magazine",
"Template:Word processors"
]
| https://en.wikipedia.org/wiki/EasyWriter |
9,946 | Ed Sullivan | Edward Vincent Sullivan (September 28, 1901 – October 13, 1974) was an American television host, impresario, sports and entertainment reporter, and syndicated columnist for the New York Daily News and the Chicago Tribune New York News Syndicate. He was the creator and host of the television variety program The Toast of the Town, which in 1955 was renamed The Ed Sullivan Show. Broadcast from 1948 to 1971, it set a record as the longest-running variety show in U.S. broadcast history. "It was, by almost any measure, the last great American TV show", said television critic David Hinckley. "It's one of our fondest, dearest pop culture memories."
Sullivan was a broadcasting pioneer during the early years of American television. As critic David Bianculli wrote, "Before MTV, Sullivan presented rock acts. Before Bravo, he presented jazz and classical music and theater. Before the Comedy Channel, even before there was The Tonight Show, Sullivan discovered, anointed and popularized young comedians. Before there were 500 channels, before there was cable, Ed Sullivan was where the choice was. From the start, he was indeed 'the Toast of the Town'." In 1996, Sullivan was ranked number 50 on TV Guide's "50 Greatest TV Stars of All Time".
Sullivan was born on September 28, 1901, in Harlem, New York City, to Elizabeth F. (née Smith) and Peter Arthur Sullivan, a customs house employee. His twin brother Daniel was sickly and lived only a few months. Sullivan was raised in Port Chester, New York, where the family lived in a small red brick home at 53 Washington Street. He was of Irish descent. The family loved music, frequently playing the piano, singing and playing phonograph records. Sullivan was a gifted athlete in high school, earning 12 athletic letters at Port Chester High School. He played football as a halfback, basketball as a guard and track as a sprinter. With the baseball team, Sullivan was a catcher and the team's captain, leading the team to several championships. Sullivan noted that, in the state of New York, integration was taken for granted in high-school sports: "When we went up into Connecticut, we ran into clubs that had Negro players. In those days this was accepted as commonplace; and so, my instinctive antagonism years later to any theory that a Negro wasn't a worthy opponent or was an inferior person. It was just as simple as that."
Sullivan landed his first job at The Port Chester Daily Item, a local newspaper for which he had written sports news while in high school and which he joined full-time after graduation. In 1919, he joined The Hartford Post, but the newspaper folded in his first week there. He next worked for The New York Evening Mail as a sports reporter. After the newspaper closed in 1923, he bounced through a series of news jobs with the Associated Press, the Philadelphia Bulletin, The Morning World, The Morning Telegraph, The New York Bulletin and The Leader. In 1927, Sullivan joined The New York Evening Graphic, first as a sports writer and then as a sports editor.
In 1929, when Walter Winchell moved to The Daily Mirror, Sullivan was named the New York Evening Graphic's Broadway columnist. He left the paper for the city's largest tabloid, the New York Daily News. His column, "Little Old New York", concentrated on Broadway shows and gossip, and Sullivan also delivered showbusiness news broadcasts on radio. In 1933, Sullivan wrote and starred in the film Mr. Broadway, in which he guided the audience around New York nightspots to meet entertainers and celebrities. Sullivan soon became a powerful force in the entertainment world and one of Winchell's main rivals, setting the El Morocco nightclub in New York as his unofficial headquarters against Winchell's seat of power at the nearby Stork Club. Sullivan continued writing for the New York Daily News throughout his broadcasting career, and his popularity long outlived that of Winchell. In the late 1960s, Sullivan praised Winchell's legacy in a magazine interview, leading to a major reconciliation between the longtime adversaries.
Throughout his career as a columnist, Sullivan had dabbled in entertainment, producing vaudeville shows with which he appeared as master of ceremonies in the 1920s and 1930s, directing a radio program over the original WABC and organizing benefit reviews for various causes.
In 1941, Sullivan became host of the Summer Silver Theater, a variety program on CBS, with Will Bradley as bandleader and a guest star featured each week.
In 1948, producer Marlo Lewis convinced CBS to hire Sullivan to host a weekly Sunday-night television variety show, Toast of the Town, which later became The Ed Sullivan Show. Debuting in June 1948, the show was originally broadcast from Maxine Elliott's Theatre on West 39th Street in New York. In January 1953, it moved to CBS-TV Studio 50 at 1697 Broadway, a former CBS Radio playhouse that in 1967 was renamed the Ed Sullivan Theater (and was later the home of the Late Show with David Letterman and The Late Show with Stephen Colbert).
Television critics gave the new show and its host poor reviews. Harriet Van Horne alleged that "he got where he is not by having a personality, but by having no personality." (The host wrote to the critic, "Dear Miss Van Horne: You bitch. Sincerely, Ed Sullivan.") Sullivan had little acting ability; in 1967, 20 years after his show's debut, Time magazine asked, "What exactly is Ed Sullivan's talent?" His mannerisms on camera were so awkward that some viewers believed the host suffered from Bell's palsy. Time in 1955 stated that Sullivan resembled
a cigar-store Indian, the Cardiff Giant and a stone-faced monument just off the boat from Easter Island. He moves like a sleepwalker; his smile is that of a man sucking a lemon; his speech is frequently lost in a thicket of syntax; his eyes pop from their sockets or sink so deep in their bags that they seem to be peering up at the camera from the bottom of twin wells.
"Yet," the magazine concluded, "instead of frightening children, Ed Sullivan charms the whole family." Sullivan appeared to the audience as an average guy who brought the great acts of show business to their home televisions. "Ed Sullivan will last", comedian Fred Allen said, "as long as someone else has talent." Frequent guest Alan King said, "Ed does nothing, but he does it better than anyone else in television." A typical show would feature a vaudeville act (such as acrobats, jugglers or magicians), one or two popular comedians, a singing star, a figure from the legitimate theater, an appearance by puppet Topo Gigio or a popular athlete. The bill was often international in scope, with many European performers appearing along with the American artists.
Sullivan had a healthy sense of humor about himself and permitted and even encouraged impersonators such as John Byner, Frank Gorshin, Rich Little and especially Will Jordan to imitate him on his show. Johnny Carson also performed a fair impression, and even Joan Rivers imitated Sullivan's unique posture. The impressionists exaggerated his stiffness, raised shoulders and nasal tenor phrasing, along with some of his commonly used introductions, such as "And now, right here on our stage ...", "For all you youngsters out there ..." and "a really big shew" (his pronunciation of the word "show"). The latter phrase was in fact in the exclusive domain of his impressionists, as Sullivan never actually spoke the phrase "really big show" during the opening introduction of any episode in the entire history of the series. Jordan portrayed Sullivan in the films I Wanna Hold Your Hand, The Buddy Holly Story, The Doors, Mr. Saturday Night, Down with Love and in the 1979 television movie Elvis.
Sullivan played himself, parodying his mannerisms as directed by Jerry Lewis, in Lewis' 1964 film The Patsy.
Sullivan inspired a song in the musical Bye Bye Birdie and in 1963 appeared as himself in the film.
In 1954, Sullivan appeared as a cohost on the television musical special General Foods 25th Anniversary Show: A Salute to Rodgers and Hammerstein.
Sullivan was quoted as saying: "In the conduct of my own show, I've never asked a performer his religion, his race or his politics. Performers are engaged on the basis of their abilities. I believe that this is another quality of our show that has helped win it a wide and loyal audience." Although Sullivan was wary of Elvis Presley's image and initially said that he would never book him, Presley became too big a name to ignore; in 1956, Sullivan signed him for three appearances. Six weeks earlier in August 1956, Sullivan and his son-in-law, the producer of the show, Robert Precht, were in a near fatal car accident near Sullivan's Connecticut country home in Southbury, Connecticut, and missed Presley's first appearance on September 9, when Charles Laughton introduced Presley. After Sullivan came to know Presley personally, he made amends by telling his audience, "This is a real decent, fine boy."
Sullivan's failure to scoop the TV industry with Presley made him determined to book the next big sensation first. In November 1963, while at Heathrow Airport, Sullivan witnessed the Beatlemania spectacle as the band returned from Sweden and the terminal was overrun by screaming teens. At first Sullivan was reluctant to book the Beatles because the band did not yet have a commercially successful single in the U.S., but at the behest of his friend Sid Bernstein, Sullivan signed the group. Their initial Sullivan show appearance on February 9, 1964, was the most-watched program in TV history to that point. The Beatles appeared three more times in person and submitted filmed performances afterwards. The Dave Clark Five, who claimed a "cleaner" image than the Beatles, made 13 appearances on the show, more than any other UK group.
Unlike many shows of the time, Sullivan asked that most musical acts perform their music live, rather than lip-synching to their recordings. However, exceptions were made, such as when a microphone could not be placed close enough to a performer for technical reasons. An example was B.J. Thomas' 1969 performance of "Raindrops Keep Fallin' on My Head", in which water was sprinkled on him as a special effect. In 1969, Sullivan presented the Jackson 5 with their first single "I Want You Back", which ousted Thomas' song from the top spot of the Billboard Hot 100.
Sullivan had an appreciation for black entertainers. According to biographer Gerald Nachman, "Most TV variety shows welcomed 'acceptable' black superstars like Louis Armstrong, Pearl Bailey and Sammy Davis Jr. ... but in the early 1950s, long before it was fashionable, Sullivan was presenting the much more obscure black entertainers he had enjoyed in Harlem on his uptown rounds — legends like Peg Leg Bates, Pigmeat Markham and Tim Moore ... strangers to white America." He hosted pioneering TV appearances by Bo Diddley, the Platters, Brook Benton, Jackie Wilson, Fats Domino and numerous Motown acts including the Supremes, who appeared 17 times. As the critic John Leonard wrote, "There wasn't an important black artist who didn't appear on Ed's show."
Sullivan defied pressure to exclude black entertainers or to avoid interacting with them on screen. "Sullivan had to fend off his hard-won sponsor, Ford's Lincoln dealers, after kissing Pearl Bailey on the cheek and daring to shake Nat King Cole's hand," Nachman wrote. According to biographer Jerry Bowles, "Sullivan once had a Ford executive thrown out of the theatre when he suggested that Sullivan stop booking so many black acts. And a dealer in Cleveland told him 'We realize that you got to have niggers on your show. But do you have to put your arm around Bill 'Bojangles' Robinson at the end of his dance?' Sullivan had to be physically restrained from beating the man to a pulp." Sullivan later raised money to help pay for Robinson's funeral. He said: "As a Catholic, it was inevitable that I would despise intolerance, because Catholics suffered more than their share of it. As I grew up, the causes of minorities were part and parcel of me. Negroes and Jews were the minority causes closest at hand. I need no urging to take a plunge in and help."
At a time when television had not yet embraced country and western music, Sullivan featured Nashville performers on his program. This in turn paved the way for shows such as Hee Haw and variety shows hosted by Johnny Cash, Glen Campbell and other country singers.
The Canadian comedy duo Wayne and Shuster made the most appearances of any act throughout the show's run with 67 appearances between 1958 and 1969.
Sullivan appeared as himself on other television programs, including an April 1958 episode of the Howard Duff and Ida Lupino CBS situation comedy Mr. Adams and Eve. On September 14, 1958, Sullivan appeared on What's My Line? as a mystery guest. In 1961, Sullivan substituted for Red Skelton on The Red Skelton Show. Sullivan took Skelton's roles in the various comedy sketches, with Skelton's hobo character Freddie the Freeloader renamed Eddie the Freeloader.
Sullivan was quick to take offense if he felt that he had been crossed, and he could hold a grudge for a long time. As he told biographer Gerald Nachman, "I'm a pop-off. I flare up, then I go around apologizing." "Armed with an Irish temper and thin skin," wrote Nachman, "Ed brought to his feuds a hunger for combat fed by his coverage of, and devotion to, boxing." Bo Diddley, Buddy Holly, Jackie Mason, and Jim Morrison were parties to some of Sullivan's most storied conflicts.
For his second Sullivan appearance in 1955, Bo Diddley planned to sing his namesake hit, "Bo Diddley", but Sullivan told him to perform Tennessee Ernie Ford's song "Sixteen Tons". "That would have been the end of my career right there," Diddley told his biographer, so he sang "Bo Diddley" anyway. Sullivan was enraged: "You're the first black boy that ever double-crossed me on the show," Diddley quoted him as saying. "We didn't have much to do with each other after that." Later, Diddley resented that Elvis Presley, whom he accused of copying his revolutionary style and beat, received the attention and accolades on Sullivan's show that he felt were rightfully his. "I am owed," he said, "and I never got paid." "He might have," wrote Nachman, "had things gone smoother with Sullivan."
Buddy Holly and the Crickets first appeared on the Sullivan show in 1957 to an enthusiastic response. For their second appearance in January 1958, Sullivan considered the lyrics of their chosen number "Oh, Boy!" too suggestive, and ordered Holly to substitute another song. Holly responded that he had already told his hometown friends in Texas that he would be singing "Oh, Boy!" for them. Sullivan, unaccustomed to having his instructions questioned, angrily repeated them, but Holly refused to back down. Later, when the band was slow to respond to a summons to the rehearsal stage, Sullivan commented, "I guess the Crickets are not too excited to be on The Ed Sullivan Show." Holly, still annoyed by Sullivan's attitude, replied, "I hope they're damn more excited than I am." Sullivan retaliated by cutting them from two numbers to one, then mispronounced Holly's name during the introduction. He also saw to it that Holly's guitar amplifier volume was barely audible, except during his guitar solo. Nevertheless, the band was so well-received that Sullivan was forced to invite them back; Holly responded that Sullivan did not have enough money. Archival photographs taken during the appearance show Holly smirking and ignoring a visibly angry Sullivan.
During Jackie Mason's October 1964 performance on a show that had been shortened by ten minutes due to an address by President Lyndon Johnson, Sullivan—on-stage but off-camera—signaled Mason that he had two minutes left by holding up two fingers. Sullivan's signal distracted the studio audience, and to television viewers unaware of the circumstances, it seemed as though Mason's jokes were falling flat. Mason, in a bid to regain the audience's attention, cried, "I'm getting fingers here!" and made his own frantic hand gesture: "Here's a finger for you!" Videotapes of the incident are inconclusive as to whether Mason's upswept hand (which was just off-camera) was intended to be an indecent gesture, but Sullivan was convinced that it was, and banned Mason from future appearances on the program. Mason later insisted that he did not know what the "middle finger" meant, and that he did not make the gesture anyway. In September 1965, Sullivan—who, according to Mason, was "deeply apologetic"—brought Mason on the show for a "surprise grand reunion". "He said they were old pals," Nachman wrote, "news to Mason, who never got a repeat invitation." Mason added that his earning power "... was cut right in half after that. I never really worked my way back until I opened on Broadway in 1986."
When the Byrds performed on December 12, 1965, David Crosby got into a shouting match with the show's director. They were never asked to return.
Sullivan decided that "Girl, we couldn't get much higher", from the Doors' signature song "Light My Fire", was too overt a reference to drug use, and directed that the lyric be changed to "Girl, we couldn't get much better" for the group's September 1967 appearance. The band members "nodded their assent", according to Doors biographer Ben Fong-Torres, then sang the song as written. After the broadcast, producer Bob Precht told the group, "Mr. Sullivan wanted you for six more shows, but you'll never work the Ed Sullivan Show again." Jim Morrison replied, "Hey, man, we just did the Ed Sullivan Show."
The Rolling Stones famously capitulated during their fifth appearance on the show, in 1967, when Mick Jagger was told to change the titular lyric of "Let's Spend the Night Together" to "Let's spend some time together". "But Jagger prevailed," wrote Nachman, by deliberately calling attention to the censorship, rolling his eyes, mugging, and drawing out the word "t-i-i-i-me" as he sang the revised lyric. Sullivan was angered by the insubordination, but the Stones did make one additional appearance on the show, in 1969.
Moe Howard of the Three Stooges recalled in 1975 that Sullivan had a memory problem of sorts: "Ed was a very nice man, but for a showman, quite forgetful. On our first appearance, he introduced us as the Three Ritz Brothers. He got out of it by adding, 'who look more like the Three Stooges to me'." Joe DeRita, who worked with the Stooges after 1959, had commented that Sullivan had a personality "like the bottom of a bird cage."
Diana Ross, who was very fond of Sullivan, later recalled Sullivan's forgetfulness during the many occasions the Supremes performed on his show. In a 1995 appearance on the Late Show with David Letterman (taped in the Ed Sullivan Theater), Ross stated, "he could never remember our names. He called us 'the girls'."
In a 1990 press conference, Paul McCartney recalled meeting Sullivan again in the early 1970s. Sullivan apparently had no idea who McCartney was. McCartney tried to remind Sullivan that he was one of the Beatles, but Sullivan obviously could not remember, and nodding and smiling, simply shook McCartney's hand and left. In an interview with Howard Stern around 2012, Joan Rivers said that Sullivan had been suffering from dementia toward the end of his life.
Sullivan, like many American entertainers, was pulled into the Cold War anticommunism of the late 1940s and 1950s. Tap dancer Paul Draper's scheduled January 1950 appearance on Toast of the Town met with opposition from Hester McCullough, an activist in the hunt for "subversives". Branding Draper a Communist Party "sympathizer", she demanded that Sullivan's lead sponsor, the Ford Motor Company, cancel Draper's appearance. Draper denied the charge, and appeared on the show as scheduled. Ford received over a thousand angry letters and telegrams, and Sullivan was obliged to promise Ford's advertising agency, Kenyon & Eckhardt, that he would avoid controversial guests going forward. Draper was forced to move to Europe to earn a living.
After the Draper incident, Sullivan began to work closely with Theodore Kirkpatrick of the anti-Communist Counterattack newsletter. He would consult Kirkpatrick if any questions came up regarding a potential guest's political leanings. Sullivan wrote in his June 21, 1950, Daily News column that "Kirkpatrick has sat in my living room on several occasions and listened attentively to performers eager to secure a certification of loyalty."
Cold War repercussions manifested in a different way when Bob Dylan was booked to appear in May 1963. His chosen song was "Talkin' John Birch Paranoid Blues", which poked fun at the ultraconservative John Birch Society and its tendency to see Communist conspiracies in many situations. No concern was voiced by anyone, including Sullivan, during rehearsals; but on the day of the broadcast, CBS's Standards and Practices department rejected the song, fearing that lyrics equating the Society's views with those of Adolf Hitler might trigger a defamation lawsuit. Dylan was offered the opportunity to perform a different song, but he responded that if he could not sing the number of his choice, he would rather not appear at all. The story generated widespread media attention in the days that followed; Sullivan denounced the network's decision in published interviews.
Sullivan butted heads with Standards and Practices on other occasions, as well. In 1956, Ingrid Bergman—who had been living in "exile" in Europe since 1950 in the wake of her scandalous love affair with director Roberto Rossellini while they were both married—was planning a return to Hollywood as the star of Anastasia. Sullivan, confident that the American public would welcome her back, invited her to appear on his show and flew to Europe to film an interview with Bergman, Yul Brynner, and Helen Hayes on the Anastasia set. When he arrived back in New York, Standards and Practices informed Sullivan that under no circumstances would Bergman be permitted to appear on the show, either live or on film. Sullivan's prediction later proved correct, as Bergman won her second Academy Award for her portrayal, as well as the forgiveness of her fans.
Sullivan was engaged to champion swimmer Sybil Bauer, but she died of cancer in 1927 at the age of 23.
In 1926, Sullivan met and began dating Sylvia Weinstein. Initially she told her family that she was dating a Jewish man named Ed Solomon, but her brother discovered it was Sullivan, who was Catholic. Both their families were strongly opposed to interfaith marriage, which resulted in a discontinuous relationship for the next three years. They were finally married on April 28, 1930, in a City Hall ceremony. Eight months later Sylvia gave birth to Elizabeth ("Betty"), named after Sullivan's mother, who had died that year. In 1952, Betty Sullivan married the Ed Sullivan Show's producer, Bob Precht.
The Sullivans rented a suite of rooms at the Hotel Delmonico in 1944 after living at the Hotel Astor on Times Square for many years. Sullivan rented a suite next door to the family suite, which he used as an office until The Ed Sullivan Show was canceled in 1971. Sullivan habitually called his wife after every program to get her critique.
The Sullivans regularly dined and socialized at New York City's best-known clubs and restaurants including the Stork Club, Danny's Hide-A-Way, and Jimmy Kelly's. His friends included celebrities and U.S. presidents. He also received audiences with popes.
Sylvia Sullivan was a financial advisor for her husband. She died on March 16, 1973, at Mount Sinai Hospital from a ruptured aorta.
In the fall of 1965, CBS began televising its weekly programs in color. Although the Sullivan show was seen live in the Central and Eastern time zones, it was taped for airing in the Pacific and Mountain time zones. Excerpts have been released on home video, and posted on the official Ed Sullivan Show YouTube Channel.
By 1971, the show's ratings had plummeted. In an effort to refresh the CBS lineup, CBS cancelled the program in March 1971, along with some of its other long-running shows throughout the 1970–1971 season (later known as the rural purge). Angered, Sullivan refused to host three more months of scheduled shows. They were replaced by reruns, and a final program without him aired in June. He remained with the network in various other capacities and hosted a 25th anniversary special in June 1973.
In early September 1974, Sullivan was diagnosed with an advanced stage of esophageal cancer. Doctors gave him very little time to live, and the family chose to keep the diagnosis secret from him. Sullivan, a lifelong smoker, believed his ailment to be yet another complication from a long-standing battle with gastric ulcers. Sullivan died on October 13, 1974, at New York's Lenox Hill Hospital. His funeral was attended by 2,000 people at St. Patrick's Cathedral, New York, on a cold, rainy day. Sullivan is interred in a crypt at the Ferncliff Cemetery in Hartsdale, New York.
Sullivan has a star on the Hollywood Walk of Fame at 6101 Hollywood Blvd. In 1985, Sullivan was welcomed to the Television Academy Hall of Fame. | [
{
"paragraph_id": 0,
"text": "Edward Vincent Sullivan (September 28, 1901 – October 13, 1974) was an American television host, impresario, sports and entertainment reporter, and syndicated columnist for the New York Daily News and the Chicago Tribune New York News Syndicate. He was the creator and host of the television variety program The Toast of the Town, which in 1955 was renamed The Ed Sullivan Show. Broadcast from 1948 to 1971, it set a record as the longest-running variety show in U.S. broadcast history. \"It was, by almost any measure, the last great American TV show\", said television critic David Hinckley. \"It's one of our fondest, dearest pop culture memories.\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "Sullivan was a broadcasting pioneer during the early years of American television. As critic David Bianculli wrote, \"Before MTV, Sullivan presented rock acts. Before Bravo, he presented jazz and classical music and theater. Before the Comedy Channel, even before there was The Tonight Show, Sullivan discovered, anointed and popularized young comedians. Before there were 500 channels, before there was cable, Ed Sullivan was where the choice was. From the start, he was indeed 'the Toast of the Town'.\" In 1996, Sullivan was ranked number 50 on TV Guide's \"50 Greatest TV Stars of All Time\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "Sullivan was born on September 28, 1901, in Harlem, New York City, to Elizabeth F. (née Smith) and Peter Arthur Sullivan, a customs house employee. His twin brother Daniel was sickly and lived only a few months. Sullivan was raised in Port Chester, New York, where the family lived in a small red brick home at 53 Washington Street. He was of Irish descent. The family loved music, frequently playing the piano, singing and playing phonograph records. Sullivan was a gifted athlete in high school, earning 12 athletic letters at Port Chester High School. He played football as a halfback, basketball as a guard and track as a sprinter. With the baseball team, Sullivan was a catcher and the team's captain, leading the team to several championships. Sullivan noted that, in the state of New York, integration was taken for granted in high-school sports: \"When we went up into Connecticut, we ran into clubs that had Negro players. In those days this was accepted as commonplace; and so, my instinctive antagonism years later to any theory that a Negro wasn't a worthy opponent or was an inferior person. It was just as simple as that.\"",
"title": "Early life and career"
},
{
"paragraph_id": 3,
"text": "Sullivan landed his first job at The Port Chester Daily Item, a local newspaper for which he had written sports news while in high school and which he joined full-time after graduation. In 1919, he joined The Hartford Post, but the newspaper folded in his first week there. He next worked for The New York Evening Mail as a sports reporter. After the newspaper closed in 1923, he bounced through a series of news jobs with the Associated Press, the Philadelphia Bulletin, The Morning World, The Morning Telegraph, The New York Bulletin and The Leader. In 1927, Sullivan joined The New York Evening Graphic, first as a sports writer and then as a sports editor.",
"title": "Early life and career"
},
{
"paragraph_id": 4,
"text": "In 1929, when Walter Winchell moved to The Daily Mirror, Sullivan was named the New York Evening Graphic's Broadway columnist. He left the paper for the city's largest tabloid, the New York Daily News. His column, \"Little Old New York\", concentrated on Broadway shows and gossip, and Sullivan also delivered showbusiness news broadcasts on radio. In 1933, Sullivan wrote and starred in the film Mr. Broadway, in which he guided the audience around New York nightspots to meet entertainers and celebrities. Sullivan soon became a powerful force in the entertainment world and one of Winchell's main rivals, setting the El Morocco nightclub in New York as his unofficial headquarters against Winchell's seat of power at the nearby Stork Club. Sullivan continued writing for the New York Daily News throughout his broadcasting career, and his popularity long outlived that of Winchell. In the late 1960s, Sullivan praised Winchell's legacy in a magazine interview, leading to a major reconciliation between the longtime adversaries.",
"title": "Early life and career"
},
{
"paragraph_id": 5,
"text": "Throughout his career as a columnist, Sullivan had dabbled in entertainment, producing vaudeville shows with which he appeared as master of ceremonies in the 1920s and 1930s, directing a radio program over the original WABC and organizing benefit reviews for various causes.",
"title": "Early life and career"
},
{
"paragraph_id": 6,
"text": "In 1941, Sullivan became host of the Summer Silver Theater, a variety program on CBS, with Will Bradley as bandleader and a guest star featured each week.",
"title": "Radio"
},
{
"paragraph_id": 7,
"text": "In 1948, producer Marlo Lewis convinced CBS to hire Sullivan to host a weekly Sunday-night television variety show, Toast of the Town, which later became The Ed Sullivan Show. Debuting in June 1948, the show was originally broadcast from Maxine Elliott's Theatre on West 39th Street in New York. In January 1953, it moved to CBS-TV Studio 50 at 1697 Broadway, a former CBS Radio playhouse that in 1967 was renamed the Ed Sullivan Theater (and was later the home of the Late Show with David Letterman and The Late Show with Stephen Colbert).",
"title": "Television"
},
{
"paragraph_id": 8,
"text": "Television critics gave the new show and its host poor reviews. Harriet Van Horne alleged that \"he got where he is not by having a personality, but by having no personality.\" (The host wrote to the critic, \"Dear Miss Van Horne: You bitch. Sincerely, Ed Sullivan.\") Sullivan had little acting ability; in 1967, 20 years after his show's debut, Time magazine asked, \"What exactly is Ed Sullivan's talent?\" His mannerisms on camera were so awkward that some viewers believed the host suffered from Bell's palsy. Time in 1955 stated that Sullivan resembled",
"title": "Television"
},
{
"paragraph_id": 9,
"text": "a cigar-store Indian, the Cardiff Giant and a stone-faced monument just off the boat from Easter Island. He moves like a sleepwalker; his smile is that of a man sucking a lemon; his speech is frequently lost in a thicket of syntax; his eyes pop from their sockets or sink so deep in their bags that they seem to be peering up at the camera from the bottom of twin wells.",
"title": "Television"
},
{
"paragraph_id": 10,
"text": "\"Yet,\" the magazine concluded, \"instead of frightening children, Ed Sullivan charms the whole family.\" Sullivan appeared to the audience as an average guy who brought the great acts of show business to their home televisions. \"Ed Sullivan will last\", comedian Fred Allen said, \"as long as someone else has talent.\" Frequent guest Alan King said, \"Ed does nothing, but he does it better than anyone else in television.\" A typical show would feature a vaudeville act (such as acrobats, jugglers or magicians), one or two popular comedians, a singing star, a figure from the legitimate theater, an appearance by puppet Topo Gigio or a popular athlete. The bill was often international in scope, with many European performers appearing along with the American artists.",
"title": "Television"
},
{
"paragraph_id": 11,
"text": "Sullivan had a healthy sense of humor about himself and permitted and even encouraged impersonators such as John Byner, Frank Gorshin, Rich Little and especially Will Jordan to imitate him on his show. Johnny Carson also performed a fair impression, and even Joan Rivers imitated Sullivan's unique posture. The impressionists exaggerated his stiffness, raised shoulders and nasal tenor phrasing, along with some of his commonly used introductions, such as \"And now, right here on our stage ...\", \"For all you youngsters out there ...\" and \"a really big shew\" (his pronunciation of the word \"show\"). The latter phrase was in fact in the exclusive domain of his impressionists, as Sullivan never actually spoke the phrase \"really big show\" during the opening introduction of any episode in the entire history of the series. Jordan portrayed Sullivan in the films I Wanna Hold Your Hand, The Buddy Holly Story, The Doors, Mr. Saturday Night, Down with Love and in the 1979 television movie Elvis.",
"title": "Television"
},
{
"paragraph_id": 12,
"text": "Sullivan played himself, parodying his mannerisms as directed by Jerry Lewis, in Lewis' 1964 film The Patsy.",
"title": "Television"
},
{
"paragraph_id": 13,
"text": "Sullivan inspired a song in the musical Bye Bye Birdie and in 1963 appeared as himself in the film.",
"title": "Television"
},
{
"paragraph_id": 14,
"text": "In 1954, Sullivan appeared as a cohost on the television musical special General Foods 25th Anniversary Show: A Salute to Rodgers and Hammerstein.",
"title": "Television"
},
{
"paragraph_id": 15,
"text": "Sullivan was quoted as saying: \"In the conduct of my own show, I've never asked a performer his religion, his race or his politics. Performers are engaged on the basis of their abilities. I believe that this is another quality of our show that has helped win it a wide and loyal audience.\" Although Sullivan was wary of Elvis Presley's image and initially said that he would never book him, Presley became too big a name to ignore; in 1956, Sullivan signed him for three appearances. Six weeks earlier in August 1956, Sullivan and his son-in-law, the producer of the show, Robert Precht, were in a near fatal car accident near Sullivan's Connecticut country home in Southbury, Connecticut, and missed Presley's first appearance on September 9, when Charles Laughton introduced Presley. After Sullivan came to know Presley personally, he made amends by telling his audience, \"This is a real decent, fine boy.\"",
"title": "Legacy"
},
{
"paragraph_id": 16,
"text": "Sullivan's failure to scoop the TV industry with Presley made him determined to book the next big sensation first. In November 1963, while at Heathrow Airport, Sullivan witnessed the Beatlemania spectacle as the band returned from Sweden and the terminal was overrun by screaming teens. At first Sullivan was reluctant to book the Beatles because the band did not yet have a commercially successful single in the U.S., but at the behest of his friend Sid Bernstein, Sullivan signed the group. Their initial Sullivan show appearance on February 9, 1964, was the most-watched program in TV history to that point. The Beatles appeared three more times in person and submitted filmed performances afterwards. The Dave Clark Five, who claimed a \"cleaner\" image than the Beatles, made 13 appearances on the show, more than any other UK group.",
"title": "Legacy"
},
{
"paragraph_id": 17,
"text": "Unlike many shows of the time, Sullivan asked that most musical acts perform their music live, rather than lip-synching to their recordings. However, exceptions were made, such as when a microphone could not be placed close enough to a performer for technical reasons. An example was B.J. Thomas' 1969 performance of \"Raindrops Keep Fallin' on My Head\", in which water was sprinkled on him as a special effect. In 1969, Sullivan presented the Jackson 5 with their first single \"I Want You Back\", which ousted Thomas' song from the top spot of the Billboard Hot 100.",
"title": "Legacy"
},
{
"paragraph_id": 18,
"text": "Sullivan had an appreciation for black entertainers. According to biographer Gerald Nachman, \"Most TV variety shows welcomed 'acceptable' black superstars like Louis Armstrong, Pearl Bailey and Sammy Davis Jr. ... but in the early 1950s, long before it was fashionable, Sullivan was presenting the much more obscure black entertainers he had enjoyed in Harlem on his uptown rounds — legends like Peg Leg Bates, Pigmeat Markham and Tim Moore ... strangers to white America.\" He hosted pioneering TV appearances by Bo Diddley, the Platters, Brook Benton, Jackie Wilson, Fats Domino and numerous Motown acts including the Supremes, who appeared 17 times. As the critic John Leonard wrote, \"There wasn't an important black artist who didn't appear on Ed's show.\"",
"title": "Legacy"
},
{
"paragraph_id": 19,
"text": "Sullivan defied pressure to exclude black entertainers or to avoid interacting with them on screen. \"Sullivan had to fend off his hard-won sponsor, Ford's Lincoln dealers, after kissing Pearl Bailey on the cheek and daring to shake Nat King Cole's hand,\" Nachman wrote. According to biographer Jerry Bowles, \"Sullivan once had a Ford executive thrown out of the theatre when he suggested that Sullivan stop booking so many black acts. And a dealer in Cleveland told him 'We realize that you got to have niggers on your show. But do you have to put your arm around Bill 'Bojangles' Robinson at the end of his dance?' Sullivan had to be physically restrained from beating the man to a pulp.\" Sullivan later raised money to help pay for Robinson's funeral. He said: \"As a Catholic, it was inevitable that I would despise intolerance, because Catholics suffered more than their share of it. As I grew up, the causes of minorities were part and parcel of me. Negroes and Jews were the minority causes closest at hand. I need no urging to take a plunge in and help.\"",
"title": "Legacy"
},
{
"paragraph_id": 20,
"text": "At a time when television had not yet embraced country and western music, Sullivan featured Nashville performers on his program. This in turn paved the way for shows such as Hee Haw and variety shows hosted by Johnny Cash, Glen Campbell and other country singers.",
"title": "Legacy"
},
{
"paragraph_id": 21,
"text": "The Canadian comedy duo Wayne and Shuster made the most appearances of any act throughout the show's run with 67 appearances between 1958 and 1969.",
"title": "Legacy"
},
{
"paragraph_id": 22,
"text": "Sullivan appeared as himself on other television programs, including an April 1958 episode of the Howard Duff and Ida Lupino CBS situation comedy Mr. Adams and Eve. On September 14, 1958, Sullivan appeared on What's My Line? as a mystery guest. In 1961, Sullivan substituted for Red Skelton on The Red Skelton Show. Sullivan took Skelton's roles in the various comedy sketches, with Skelton's hobo character Freddie the Freeloader renamed Eddie the Freeloader.",
"title": "Legacy"
},
{
"paragraph_id": 23,
"text": "Sullivan was quick to take offense if he felt that he had been crossed, and he could hold a grudge for a long time. As he told biographer Gerald Nachman, \"I'm a pop-off. I flare up, then I go around apologizing.\" \"Armed with an Irish temper and thin skin,\" wrote Nachman, \"Ed brought to his feuds a hunger for combat fed by his coverage of, and devotion to, boxing.\" Bo Diddley, Buddy Holly, Jackie Mason, and Jim Morrison were parties to some of Sullivan's most storied conflicts.",
"title": "Personality"
},
{
"paragraph_id": 24,
"text": "For his second Sullivan appearance in 1955, Bo Diddley planned to sing his namesake hit, \"Bo Diddley\", but Sullivan told him to perform Tennessee Ernie Ford's song \"Sixteen Tons\". \"That would have been the end of my career right there,\" Diddley told his biographer, so he sang \"Bo Diddley\" anyway. Sullivan was enraged: \"You're the first black boy that ever double-crossed me on the show,\" Diddley quoted him as saying. \"We didn't have much to do with each other after that.\" Later, Diddley resented that Elvis Presley, whom he accused of copying his revolutionary style and beat, received the attention and accolades on Sullivan's show that he felt were rightfully his. \"I am owed,\" he said, \"and I never got paid.\" \"He might have,\" wrote Nachman, \"had things gone smoother with Sullivan.\"",
"title": "Personality"
},
{
"paragraph_id": 25,
"text": "Buddy Holly and the Crickets first appeared on the Sullivan show in 1957 to an enthusiastic response. For their second appearance in January 1958, Sullivan considered the lyrics of their chosen number \"Oh, Boy!\" too suggestive, and ordered Holly to substitute another song. Holly responded that he had already told his hometown friends in Texas that he would be singing \"Oh, Boy!\" for them. Sullivan, unaccustomed to having his instructions questioned, angrily repeated them, but Holly refused to back down. Later, when the band was slow to respond to a summons to the rehearsal stage, Sullivan commented, \"I guess the Crickets are not too excited to be on The Ed Sullivan Show.\" Holly, still annoyed by Sullivan's attitude, replied, \"I hope they're damn more excited than I am.\" Sullivan retaliated by cutting them from two numbers to one, then mispronounced Holly's name during the introduction. He also saw to it that Holly's guitar amplifier volume was barely audible, except during his guitar solo. Nevertheless, the band was so well-received that Sullivan was forced to invite them back; Holly responded that Sullivan did not have enough money. Archival photographs taken during the appearance show Holly smirking and ignoring a visibly angry Sullivan.",
"title": "Personality"
},
{
"paragraph_id": 26,
"text": "During Jackie Mason's October 1964 performance on a show that had been shortened by ten minutes due to an address by President Lyndon Johnson, Sullivan—on-stage but off-camera—signaled Mason that he had two minutes left by holding up two fingers. Sullivan's signal distracted the studio audience, and to television viewers unaware of the circumstances, it seemed as though Mason's jokes were falling flat. Mason, in a bid to regain the audience's attention, cried, \"I'm getting fingers here!\" and made his own frantic hand gesture: \"Here's a finger for you!\" Videotapes of the incident are inconclusive as to whether Mason's upswept hand (which was just off-camera) was intended to be an indecent gesture, but Sullivan was convinced that it was, and banned Mason from future appearances on the program. Mason later insisted that he did not know what the \"middle finger\" meant, and that he did not make the gesture anyway. In September 1965, Sullivan—who, according to Mason, was \"deeply apologetic\"—brought Mason on the show for a \"surprise grand reunion\". \"He said they were old pals,\" Nachman wrote, \"news to Mason, who never got a repeat invitation.\" Mason added that his earning power \"... was cut right in half after that. I never really worked my way back until I opened on Broadway in 1986.\"",
"title": "Personality"
},
{
"paragraph_id": 27,
"text": "When the Byrds performed on December 12, 1965, David Crosby got into a shouting match with the show's director. They were never asked to return.",
"title": "Personality"
},
{
"paragraph_id": 28,
"text": "Sullivan decided that \"Girl, we couldn't get much higher\", from the Doors' signature song \"Light My Fire\", was too overt a reference to drug use, and directed that the lyric be changed to \"Girl, we couldn't get much better\" for the group's September 1967 appearance. The band members \"nodded their assent\", according to Doors biographer Ben Fong-Torres, then sang the song as written. After the broadcast, producer Bob Precht told the group, \"Mr. Sullivan wanted you for six more shows, but you'll never work the Ed Sullivan Show again.\" Jim Morrison replied, \"Hey, man, we just did the Ed Sullivan Show.\"",
"title": "Personality"
},
{
"paragraph_id": 29,
"text": "The Rolling Stones famously capitulated during their fifth appearance on the show, in 1967, when Mick Jagger was told to change the titular lyric of \"Let's Spend the Night Together\" to \"Let's spend some time together\". \"But Jagger prevailed,\" wrote Nachman, by deliberately calling attention to the censorship, rolling his eyes, mugging, and drawing out the word \"t-i-i-i-me\" as he sang the revised lyric. Sullivan was angered by the insubordination, but the Stones did make one additional appearance on the show, in 1969.",
"title": "Personality"
},
{
"paragraph_id": 30,
"text": "Moe Howard of the Three Stooges recalled in 1975 that Sullivan had a memory problem of sorts: \"Ed was a very nice man, but for a showman, quite forgetful. On our first appearance, he introduced us as the Three Ritz Brothers. He got out of it by adding, 'who look more like the Three Stooges to me'.\" Joe DeRita, who worked with the Stooges after 1959, had commented that Sullivan had a personality \"like the bottom of a bird cage.\"",
"title": "Personality"
},
{
"paragraph_id": 31,
"text": "Diana Ross, who was very fond of Sullivan, later recalled Sullivan's forgetfulness during the many occasions the Supremes performed on his show. In a 1995 appearance on the Late Show with David Letterman (taped in the Ed Sullivan Theater), Ross stated, \"he could never remember our names. He called us 'the girls'.\"",
"title": "Personality"
},
{
"paragraph_id": 32,
"text": "In a 1990 press conference, Paul McCartney recalled meeting Sullivan again in the early 1970s. Sullivan apparently had no idea who McCartney was. McCartney tried to remind Sullivan that he was one of the Beatles, but Sullivan obviously could not remember, and nodding and smiling, simply shook McCartney's hand and left. In an interview with Howard Stern around 2012, Joan Rivers said that Sullivan had been suffering from dementia toward the end of his life.",
"title": "Personality"
},
{
"paragraph_id": 33,
"text": "Sullivan, like many American entertainers, was pulled into the Cold War anticommunism of the late 1940s and 1950s. Tap dancer Paul Draper's scheduled January 1950 appearance on Toast of the Town met with opposition from Hester McCullough, an activist in the hunt for \"subversives\". Branding Draper a Communist Party \"sympathizer\", she demanded that Sullivan's lead sponsor, the Ford Motor Company, cancel Draper's appearance. Draper denied the charge, and appeared on the show as scheduled. Ford received over a thousand angry letters and telegrams, and Sullivan was obliged to promise Ford's advertising agency, Kenyon & Eckhardt, that he would avoid controversial guests going forward. Draper was forced to move to Europe to earn a living.",
"title": "Politics"
},
{
"paragraph_id": 34,
"text": "After the Draper incident, Sullivan began to work closely with Theodore Kirkpatrick of the anti-Communist Counterattack newsletter. He would consult Kirkpatrick if any questions came up regarding a potential guest's political leanings. Sullivan wrote in his June 21, 1950, Daily News column that \"Kirkpatrick has sat in my living room on several occasions and listened attentively to performers eager to secure a certification of loyalty.\"",
"title": "Politics"
},
{
"paragraph_id": 35,
"text": "Cold War repercussions manifested in a different way when Bob Dylan was booked to appear in May 1963. His chosen song was \"Talkin' John Birch Paranoid Blues\", which poked fun at the ultraconservative John Birch Society and its tendency to see Communist conspiracies in many situations. No concern was voiced by anyone, including Sullivan, during rehearsals; but on the day of the broadcast, CBS's Standards and Practices department rejected the song, fearing that lyrics equating the Society's views with those of Adolf Hitler might trigger a defamation lawsuit. Dylan was offered the opportunity to perform a different song, but he responded that if he could not sing the number of his choice, he would rather not appear at all. The story generated widespread media attention in the days that followed; Sullivan denounced the network's decision in published interviews.",
"title": "Politics"
},
{
"paragraph_id": 36,
"text": "Sullivan butted heads with Standards and Practices on other occasions, as well. In 1956, Ingrid Bergman—who had been living in \"exile\" in Europe since 1950 in the wake of her scandalous love affair with director Roberto Rossellini while they were both married—was planning a return to Hollywood as the star of Anastasia. Sullivan, confident that the American public would welcome her back, invited her to appear on his show and flew to Europe to film an interview with Bergman, Yul Brynner, and Helen Hayes on the Anastasia set. When he arrived back in New York, Standards and Practices informed Sullivan that under no circumstances would Bergman be permitted to appear on the show, either live or on film. Sullivan's prediction later proved correct, as Bergman won her second Academy Award for her portrayal, as well as the forgiveness of her fans.",
"title": "Politics"
},
{
"paragraph_id": 37,
"text": "Sullivan was engaged to champion swimmer Sybil Bauer, but she died of cancer in 1927 at the age of 23.",
"title": "Personal life"
},
{
"paragraph_id": 38,
"text": "In 1926, Sullivan met and began dating Sylvia Weinstein. Initially she told her family that she was dating a Jewish man named Ed Solomon, but her brother discovered it was Sullivan, who was Catholic. Both their families were strongly opposed to interfaith marriage, which resulted in a discontinuous relationship for the next three years. They were finally married on April 28, 1930, in a City Hall ceremony. Eight months later Sylvia gave birth to Elizabeth (\"Betty\"), named after Sullivan's mother, who had died that year. In 1952, Betty Sullivan married the Ed Sullivan Show's producer, Bob Precht.",
"title": "Personal life"
},
{
"paragraph_id": 39,
"text": "The Sullivans rented a suite of rooms at the Hotel Delmonico in 1944 after living at the Hotel Astor on Times Square for many years. Sullivan rented a suite next door to the family suite, which he used as an office until The Ed Sullivan Show was canceled in 1971. Sullivan habitually called his wife after every program to get her critique.",
"title": "Personal life"
},
{
"paragraph_id": 40,
"text": "The Sullivans regularly dined and socialized at New York City's best-known clubs and restaurants including the Stork Club, Danny's Hide-A-Way, and Jimmy Kelly's. His friends included celebrities and U.S. presidents. He also received audiences with popes.",
"title": "Personal life"
},
{
"paragraph_id": 41,
"text": "Sylvia Sullivan was a financial advisor for her husband. She died on March 16, 1973, at Mount Sinai Hospital from a ruptured aorta.",
"title": "Personal life"
},
{
"paragraph_id": 42,
"text": "In the fall of 1965, CBS began televising its weekly programs in color. Although the Sullivan show was seen live in the Central and Eastern time zones, it was taped for airing in the Pacific and Mountain time zones. Excerpts have been released on home video, and posted on the official Ed Sullivan Show YouTube Channel.",
"title": "Later years and death"
},
{
"paragraph_id": 43,
"text": "By 1971, the show's ratings had plummeted. In an effort to refresh the CBS lineup, CBS cancelled the program in March 1971, along with some of its other long-running shows throughout the 1970–1971 season (later known as the rural purge). Angered, Sullivan refused to host three more months of scheduled shows. They were replaced by reruns, and a final program without him aired in June. He remained with the network in various other capacities and hosted a 25th anniversary special in June 1973.",
"title": "Later years and death"
},
{
"paragraph_id": 44,
"text": "In early September 1974, Sullivan was diagnosed with an advanced stage of esophageal cancer. Doctors gave him very little time to live, and the family chose to keep the diagnosis secret from him. Sullivan, a lifelong smoker, believed his ailment to be yet another complication from a long-standing battle with gastric ulcers. Sullivan died on October 13, 1974, at New York's Lenox Hill Hospital. His funeral was attended by 2,000 people at St. Patrick's Cathedral, New York, on a cold, rainy day. Sullivan is interred in a crypt at the Ferncliff Cemetery in Hartsdale, New York.",
"title": "Later years and death"
},
{
"paragraph_id": 45,
"text": "Sullivan has a star on the Hollywood Walk of Fame at 6101 Hollywood Blvd. In 1985, Sullivan was welcomed to the Television Academy Hall of Fame.",
"title": "Later years and death"
}
]
| Edward Vincent Sullivan was an American television host, impresario, sports and entertainment reporter, and syndicated columnist for the New York Daily News and the Chicago Tribune New York News Syndicate. He was the creator and host of the television variety program The Toast of the Town, which in 1955 was renamed The Ed Sullivan Show. Broadcast from 1948 to 1971, it set a record as the longest-running variety show in U.S. broadcast history. "It was, by almost any measure, the last great American TV show", said television critic David Hinckley. "It's one of our fondest, dearest pop culture memories." Sullivan was a broadcasting pioneer during the early years of American television. As critic David Bianculli wrote, "Before MTV, Sullivan presented rock acts. Before Bravo, he presented jazz and classical music and theater. Before the Comedy Channel, even before there was The Tonight Show, Sullivan discovered, anointed and popularized young comedians. Before there were 500 channels, before there was cable, Ed Sullivan was where the choice was. From the start, he was indeed 'the Toast of the Town'." In 1996, Sullivan was ranked number 50 on TV Guide's "50 Greatest TV Stars of All Time". | 2001-10-18T18:09:17Z | 2023-12-05T12:59:34Z | [
"Template:Infobox person",
"Template:Find a Grave",
"Template:1985 Television Hall of Fame",
"Template:About",
"Template:Use American English",
"Template:Main",
"Template:Cite web",
"Template:Gilliland",
"Template:Cite magazine",
"Template:IMDb name",
"Template:Short description",
"Template:Use mdy dates",
"Template:Sfn",
"Template:Blockquote",
"Template:Nbsp",
"Template:Cite journal",
"Template:Cite book",
"Template:Commons category-inline",
"Template:Authority control",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite AV media",
"Template:Cite news",
"Template:Portal",
"Template:Academy of Magical Arts Special Fellowship"
]
| https://en.wikipedia.org/wiki/Ed_Sullivan |
9,948 | Élisabeth Vigée Le Brun | Élisabeth Louise Vigée Le Brun (French: [elizabɛt lwiz viʒe lə bʁœ̃]; 16 April 1755 – 30 March 1842), also known as Louise Élisabeth Vigée Le Brun or simply as Madame Le Brun, was a French painter who mostly specialized in portrait painting, in the late 18th and early 19th centuries.
Her artistic style is generally considered part of the aftermath of Rococo with elements of an adopted Neoclassical style. Her subject matter and color palette can be classified as Rococo, but her style is aligned with the emergence of Neoclassicism. Vigée Le Brun created a name for herself in Ancien Régime society by serving as the portrait painter to Marie Antoinette. She enjoyed the patronage of European aristocrats, actors, and writers, and was elected to art academies in ten cities. Some famous contemporary artists, such as Joshua Reynolds, viewed her as one of the greatest portraitists of her time, comparing her with the old Dutch masters.
Vigée Le Brun created 660 portraits and 200 landscapes. In addition to many works in private collections, her paintings are owned by major museums, such as the Louvre in Paris, Hermitage Museum in Saint Petersburg, National Gallery in London, Metropolitan Museum of Art in New York, and many other collections in Europe and the United States. She was noted personally for a heightened sensitivity to sound, sight and smell.
Between 1835 and 1837, when Vigée Le Brun was in her eighties, with the help of her nieces Caroline Rivière and Eugénie Tripier Le Franc, she published her memoirs in three volumes (Souvenirs), some of which are in epistolary format. They also contain many pen portraits as well as advice for young portraitists.
Born in Paris on 16 April 1755, Élisabeth Louise Vigée was the daughter of Jeanne (née Maisin; 1728–1800), a hairdresser from a peasant background, and Louis Vigée, a portraitist, pastellist and member of the Académie de Saint-Luc, who mostly specialized in painting with oils. Élisabeth exhibited artistic inclinations from her childhood, making a sketch of a bearded man at the age of seven or eight; when he first saw her sketches, her father was jubilant and exclaimed, "You will be a painter, my child, if there ever was one", and started to give her lessons in art. In 1760, at the age of five, she entered a convent, where she remained until 1766. She then worked as an assistant to her father's friend, the painter and poet Pierre Davesne, with whom she learned more about painting. Her father died when she was 12 years old, from infections after several surgical operations. In 1768, her mother married a wealthy but mean jeweller, Jacques-François Le Sèvre, and shortly after, the family moved to the Rue Saint-Honoré, close to the Palais Royal. In her memoir, Vigée Le Brun directly stated her feelings about her stepfather: "I hated this man; even more so since he made use of my father's personal possessions. He wore his clothes, just as they were, without altering them to fit his figure." During this period, Élisabeth benefited from the advice of Gabriel François Doyen, Jean-Baptiste Greuze, and Joseph Vernet, whose influence is evident in her portrait of her younger brother, playwright and poet Étienne Vigée. After her father's death, her mother sought to raise her spirits by taking her to the Palais de Luxembourg's art gallery; seeing the works of Peter Paul Rubens and other old masters left a great impression on her. She also visited numerous private galleries, including those of Randon de Boisset, the Duc de Praslin, and the Marquis de Levis; the artist took notes and copied the works of old masters such as Van Dyck, Rubens and Rembrandt to improve her art.
At an early age, she reversed the order of her given names, and was known among her inner circle as 'Louise'. For most of her life, she signed her paintings, documents and letters as "Louise Élisabeth Vigée Le Brun", although she acknowledged later in life that the correct baptismal order would be Élisabeth Louise.
By the time she was in her early teens, Élisabeth was painting portraits professionally. She greatly disliked the contemporary High Rococo fashion, and often solicited her sitters to allow her to alter their apparel. Inspired by Raphael and Domenichino, she often draped her subjects in shawls and long scarves; these styles would later become ubiquitous in her portraiture. After her studio was seized for practicing without a license, she applied to the Académie de Saint-Luc, which unwittingly exhibited her works in its Salon. In 1774, she was made a member of the Académie. Her studio's reputation saw a meteoric rise, and her renown spread outside France. By 1774, she had painted portraits which included those of the Comte Orloff, Comte Pierre Chouvaloff (one of Empress Elizabeth's favorites), the Comtesse de Brionne, the Duchess of Orléans (future mother of King Louis Philippe), the Marquis de Choiseul, and the Chancellor d'Aguesseau, among many others. In 1776, she received her first royal commission, to paint the portrait of the Comte de Provence (the future King Louis XVIII).
After her stepfather retired from his business, he moved his family to the Hôtel de Lubert in Paris where she met Jean-Baptiste-Pierre Le Brun, a painter, art dealer and relation of the painter Charles Le Brun, on the Rue de Cléry where they lodged. Élisabeth visited M. Le Brun's apartments frequently to view his private collection of paintings, which included examples from many different schools. He agreed to her request to borrow some of the paintings in order to copy them and improve her skills, which she saw as one of the greatest boons of artistic instruction she had received. After residing in the Hôtel de Lubert for six months, M. Le Brun asked for the artist's hand in marriage. Élisabeth was in a dilemma as to whether to agree or refuse the offer; she had a steady source of income from her rising career as an artist and her future was secure; as such, she wrote, she had never contemplated marriage. On her mother's urging and goaded by her desire to be separated from her stepfather's worsening temperament, Élisabeth agreed, though her doubts were such that she was still hesitant on her wedding day on 11 January 1776; she was twenty years old. The wedding took place in great privacy in the Saint-Eustache church, with only two banns being read, and was kept secret for some time at the request of her husband, who was officially engaged to another woman at the time in an attempt to secure a lucrative art deal with a Dutch art dealer. Élisabeth acceded to his request as she was reluctant to give up her now famous maiden name. In 1778, she and her husband contracted to purchase the Hôtel de Lubert. In this same year she became the official painter to the Queen.
During the two weeks after the wedding had taken place in secret, the artist was visited by a stream of people giving her ominous news regarding her husband, in the belief that she had not yet agreed to his proposal. These visitors started with the court jeweller, followed by the Duchesse d'Arenberg and Mme. de Souza, the Portuguese ambassadress, who passed on stories of M. Le Brun's habits as a spendthrift and womanizer. Élisabeth would later regret this match as she found these rumors to be true, though she wrote that in spite of his faults he was still an agreeable and obliging man with a sweet nature. However, she frequently condemned his gambling and adulterous habits in her memoirs, as these left her in a financially critical position at the time of her flight from France. Her relationship with him later deteriorated so much that she demanded the refund of her dowry from M. Le Brun in 1802. Vigée Le Brun began exhibiting her work at their home, and the salons she held there supplied her with many new and important contacts. Her husband's great-great-uncle was Charles Le Brun, the first director of the French Academy under Louis XIV. Her husband appropriated most of her income and pressed her to also take on the role of a private tutor to increase his income from her. The artist found tutoring to be frustrating due to her inability to assert authority over her pupils, most of whom were older than her, and found the distraction from her work irritating; she renounced tutoring soon after she had begun.
After two years of marriage, Vigée Le Brun became pregnant, and on 12 February 1780, she gave birth to a daughter, Jeanne Lucie Louise, whom she called Julie and nicknamed "Brunette". In 1784, she gave birth to a second child who died in infancy.
In 1781, she and her husband toured Flanders, Brussels and the Netherlands, where seeing the works of the Flemish masters inspired her to try new techniques. Her Self-portrait in a Straw Hat (1782) was a "free imitation" of Rubens's Le Chapeau de Paille. Dutch and Flemish influences have also been noted in The Comte d'Espagnac (1786) and Madame Perregaux (1789). It was also in Brussels that she met a longtime friend, the Prince de Ligne.
In yet another of the scandals that marked her early career, her portrait of Louis XVI's minister of finance, M. de Calonne, became the subject of public controversy after it was exhibited in the Salon of 1785. Rumors circulated that the minister had paid the artist a very large sum of money, while other rumors claimed that she had had an affair with de Calonne. The famous Paris Opera soprano Mlle Arnould commented on the portrait: "Madame Le Brun had cut off his legs so he could not escape". More rumors and scandals followed soon after as, to the painter's dismay, M. Le Brun began building a mansion on the Rue du Gros-Chenet, with the public claiming that de Calonne was financing the new home, although her husband did not finish constructing the house until 1801, shortly before her return to France after her long exile. She was also rumored to have had another affair, with the Comte de Vaudreuil, who was one of her most devoted patrons. Their correspondence, published later, strongly supports the existence of this affair. These rumors spiraled into an extensive defamation campaign targeting the painter throughout 1785.
In 1787, she caused a minor public scandal when her Self-portrait with her Daughter Julie was exhibited at that year's Salon showing her smiling and open-mouthed, which was in direct contravention of traditional painting conventions going back to antiquity. The court gossip-sheet Mémoires secrets commented: "An affectation which artists, art-lovers and persons of taste have been united in condemning, and which finds no precedent among the Ancients, is that in smiling, [Madame Vigée LeBrun] shows her teeth." In light of this and her other Self-portrait with her Daughter Julie (1789), Simone de Beauvoir dismissed Vigée Le Brun as narcissistic in The Second Sex (1949): "Madame Vigée-Lebrun never wearied of putting her smiling maternity on her canvases."
In 1788, Vigée Le Brun was impressed with the faces of the Mysorean ambassadors of Tipu Sultan, and sought their permission to paint their portraits. The ambassador responded by saying he would only agree if the request came from the King, which Vigée Le Brun procured, and she proceeded to paint the portrait of Dervish Khan, followed by a group portrait of the ambassador and his son. After finishing the portraits and leaving them with the ambassadors to dry, Vigée Le Brun sought their return in order to exhibit them in the Salon; one of the ambassadors refused the request, stating that a painting "needs a soul", and hid the paintings behind his bed. Vigée Le Brun managed to secure the portraits through the ambassador's valet, which enraged the ambassador to the point that he wished to kill his valet, but he was dissuaded from doing so as "it was not custom in Paris to kill one's valet". She falsely convinced the ambassador that the King wanted the portraits, and they were exhibited in the Salon of 1789. Unknown to the artist, these ambassadors were later executed upon their return to Mysore for failing in their mission to forge a military alliance with Louis XVI. After her husband's death, the paintings were sold along with the remnants of his estate, and Vigée Le Brun did not know who possessed them at the time she wrote her memoirs.
As her career blossomed, Vigée Le Brun was granted patronage by Marie Antoinette. She painted more than 30 portraits of the Queen and her family, leading to the common perception that she was the official portraitist of Marie Antoinette. At the Salon of 1783, Vigée Le Brun exhibited Marie-Antoinette in a Muslin Dress (1783), sometimes called Marie-Antoinette en gaulle, in which the Queen chose to be shown in a simple, informal cotton muslin dress, worn as an undergarment. The resulting scandal was prompted by both the informality of the attire and the Queen's decision to be shown in that way. Vigée Le Brun immediately had the portrait removed from the Salon and quickly repainted it, this time with the Queen in more formal attire. After this scandal, the prices of Vigée Le Brun's paintings soared.
Vigée Le Brun's later Marie Antoinette and her Children (1787) was evidently an attempt to improve the Queen's image by making her more relatable to the public, in the hopes of countering the bad press and negative judgments that Marie Antoinette had recently received. The portrait shows the Queen at home in the Palace of Versailles, engaged in her official function as the mother of the King's children, but also suggests Marie Antoinette's uneasy identity as a foreign-born queen whose maternal role was her only true function under Salic law. The child on the right, Louis Joseph, points to an empty cradle, which signified the Queen's recent loss of a child, further emphasizing Marie Antoinette's role as a mother. Vigée Le Brun was initially afraid of displaying this portrait, due to the Queen's unpopularity and fear of another negative reaction, to such a degree that she locked herself in at home and prayed incessantly for its success. However, she was soon greatly pleased at the positive reception for this group portrait, which was presented to the King by M. d'Angiviller, Louis XVI's minister of arts. Vigée Le Brun herself was also presented to the King, who praised the painting and told her "I know nothing about painting, but I grow to love it through you". The portrait was hung in the halls of Versailles, so that Marie Antoinette passed it on her way to mass, but it was taken down after the Dauphin's death in 1789.
Later on, during the First Empire, she painted a posthumous portrait of the Queen ascending to heaven with two angels, alluding to the two children she had lost, and Louis XVI seated on two clouds. This painting was titled The Apotheosis of the Queen. It was displayed in the chapel of the Infirmerie Marie-Thérèse, rue Denfert-Rochereau, but vanished at some point in the 20th century. She also painted numerous other posthumous portraits of the Queen, and of King Louis XVI.
On 31 May 1783, Vigée Le Brun was received as a member of the Académie royale de peinture et de sculpture. She was one of only 15 women to be granted full membership in the Académie between 1648 and 1793. Her rival, Adélaïde Labille-Guiard, was admitted on the same day. Vigée Le Brun was initially refused on the grounds that her husband was an art dealer, but eventually the Académie was overruled by an order from Louis XVI because Marie Antoinette put considerable pressure on the King on behalf of her portraitist. As her reception piece, Vigée Le Brun submitted an allegorical painting, Peace Bringing Back Abundance (La Paix ramenant l'Abondance), instead of a portrait, even though she was not asked for a reception piece. As a consequence, the Académie did not place her work within a standard category of painting—either history or portraiture. Vigée Le Brun's membership in the Académie was dissolved after the French Revolution because the category of female academicians was abolished.
Vigée Le Brun witnessed many of the events that accelerated the already rapid deterioration of the Ancien Régime. While travelling to Romainville to visit the Maréchal de Ségur in July 1788, the artist experienced the massive hailstorm that swept the country, and observed the resultant devastation of crops. As the turmoil of the French Revolution grew, the artist's house on the Rue du Gros-Chenet was harassed by sans-culottes due to her association with Marie Antoinette. Stricken with intense anxiety, Vigée Le Brun saw her health deteriorate. M. and Mme. Brongniart pleaded with her to stay with them to convalesce and recover her health; she agreed and spent several days in their apartment at Les Invalides. Later in her life, in a letter to the Princess Kourakin, the artist wrote:
Society seemed to be in a state of complete chaos, and honest people were left to fend for themselves, for the National Guard was made up of a strange crew, a mixture of bizarre and even frightening types. Everyone seemed to be suffering from fear; I grieved for the pregnant women who passed; the faces of most of them were sallow with worry. I noticed besides that the generation born during the Revolution was, in general, a lot less healthy than the previous one; indeed most of the children born in this sad time were weak and suffering!
As the situation in Paris and France continued to deteriorate with the rising tide of the revolution, the artist decided to leave Paris, and obtained passports for herself, her daughter and their governess. The very next day a large band of national guards entered her house and ordered her not to leave or else face punishment. Two sympathetic national guards from her neighborhood later returned to her house, and advised her to leave the city as fast as possible, but to take the stagecoach instead of her carriage. Vigée Le Brun then ordered three places on the stagecoach out of Paris, but had to wait two weeks to obtain seats as there were many people departing the city. Vigée Le Brun visited her mother before leaving. On 5 October 1789, the King and Queen were driven from Versailles to the Tuileries by a large crowd of Parisians – mostly women. Vigée Le Brun's stagecoach departed at midnight of the same day, with her brother and husband accompanying them to the Barrière du Trône. She, her daughter and governess dressed shabbily to avoid attracting attention. Vigée Le Brun travelled to Lyon, where she stayed for three days with acquaintances (Mme. and M. de Artaut) and was barely recognized due to her changed features and shabby clothes. She then continued her journey across the Beauvoisin bridge and was relieved to be finally out of France, although throughout her journey she was accompanied by Jacobin spies who tracked her movements. Her husband, who remained in Paris, claimed that Vigée Le Brun went to Italy "to instruct and improve herself", but she feared for her own safety. In her 12-year absence from France, she lived and worked in Italy (1789–1792), Austria (1792–1795), Russia (1795–1801) and Germany (1801), and remained a committed royalist throughout her life.
The artist arrived in Turin after crossing the Savoyard Alps. In Turin she met the famous engraver Porporati, who was now a professor in the city's academy. Porporati and his daughter received the artist for five or six days until she resumed her journey southwards to Parma, where she met the Comte de Flavigny (then minister plenipotentiary of Louis XVI), who generously accommodated her during her stay there. While staying in Parma, she sought out churches and galleries that possessed works of the old master Correggio, whose painting The Manger, or Nativity, had captivated her when she first saw it in the Louvre. She visited the church of San Giovanni to observe the ceiling and alcove paintings by Correggio, and then the church of San Antonio. She also visited the library of Parma, where she found ancient artifacts and sculptures. The Comte de Flavigny then introduced Vigée Le Brun to Marie Antoinette's older sister, the bereaved Infanta and Duchess of Parma, Maria Amalia, while she was in mourning for her recently deceased brother Emperor Joseph II. The artist regarded her as lacking in Marie Antoinette's beauty and grace, and being as pallid as a ghost, and criticized her way of life as being "like that of a man", although she praised the warm welcome the Infanta had given her. Vigée Le Brun did not stay long in Parma, wishing to cross the mountains southwards before the seasons changed. De Flavigny postponed Vigée Le Brun's departure from Parma by two days so that she and her daughter could be escorted by one of his trusted men, the Vicomte de Lespignière, whose carriage accompanied her all the way to Rome.
She first arrived in Modena, where she visited the local Palazzo, and saw several old master paintings by Raphael, Romano and Titian. She also visited the library and the theater there. From Modena, she departed for Bologna. The journey over the mountains was tortuous enough that she walked part of the way, and arrived in Bologna very tired. She wished to stay there at least one week to visit the local galleries and the Bologna arts school, which hosted some of the finest collections of old master paintings, but the innkeeper where she was residing had noticed her unloading her luggage, and informed her that her efforts were in vain, as French citizens were "allowed to reside in that city for only one night". Vigée Le Brun despaired at this news, and was fearful when a man clad in black, whom she recognized as a papal messenger, arrived at the inn; she assumed he was delivering an order to leave within the next twenty-four hours. She was surprised and elated when she realized that the missive he carried was permission for her to stay in Bologna as long as she pleased. At this juncture, Vigée Le Brun became aware that the Papal government was informed of all French travelers who entered Italy.
She visited the church of Sant'Agnese, of which she wrote:
I went immediately to the church of Sant'Agnese, where this saint's martyrdom is represented in a painting by Domenichino. The youth and innocence of Saint Agnes is so well captured on her beautiful face and the features of the torturer striking her with his sword form such a cruel contrast to her divine nature, that I was overwhelmed with pious admiration. As I knelt before the masterpiece, someone played the overture to Iphigenia on the organ. The involuntary link that I made between the young pagan victim of that story and the young Christian victim, the memory of the peaceful, happy time when I had last listened to that piece of music, and the sad thought of all the evils pressing upon my unhappy country, weighed down my heart to the point where I began to cry bitterly and to pray to God on behalf of France. Fortunately I was alone in the church and I was able to remain there for some time, giving vent to those painful emotions which took control of my soul.
She then visited several Palazzi, where she viewed some of the finest examples of the Bologna art school. She also visited the Palazzo Caprara, the Palazzo Bonfiglioli and the Palazzo Sampieri, viewing paintings by many old masters. Within three days of her arrival in Bologna, on 3 November 1789, she was received as a member of the academy and the institute of Bologna, with the academy director M. Bequetti personally delivering the letters of admission to her.
Soon after, she crossed the Apennines and arrived in the Tuscan countryside, and from there travelled to Florence. The artist was initially disappointed with its position at the bottom of a wide valley, having a preference for elevated views, but was soon charmed by the city's beauty. She lodged in a hotel that had been recommended to her.
While in Florence, she visited the famous Medici gallery, where she saw the widely celebrated Venus de' Medici and the room of the Niobids. She then visited the Pitti Palace, where she was enamored of several paintings by old masters, including Raphael's Madonna della Sedia, Titian's portrait of Paul III, Rembrandt's Portrait of a Philosopher, Carracci's The Holy Family and many others. She then visited the town's most beautiful landmarks, including the Florence Baptistery, where she saw the Gates of Paradise by Ghiberti, the Church of San Lorenzo, and Michelangelo's mausoleum at the Santa Croce. She also visited the Santissima Annunziata, where she entered the cloister and was enthralled by Andrea del Sarto's Madonna del Sacco, comparing it to Raphael's paintings, but also lamented the state of neglect of the lunettes. She also visited the Palazzo Altoviti, where she saw the self-portrait of Raphael, praising his countenance and expression as that of a "man who was obviously a keen observer of life", but also stated that the painting's protective glass had made its shadows darker. She then visited the Medici library, and later a gallery containing numerous self-portraits by famous artists, where she was asked to present her own self-portrait to the collection, and promised to do so as soon as she reached Rome. During her stay in Florence, Vigée Le Brun made the acquaintance of another French lady, the Marquise de Venturi, who took her on excursions along the Arno. She soon left Florence and departed for Rome, arriving there in late November 1789.
As she arrived in Rome, she was surprised by how filthy the famous Tiber was. She headed to the French Academy in the Via del Corso, where the director of the academy, M. de Ménageot, went down to receive her. She requested lodging of him, and he quickly furnished her, her daughter and her governess with a nearby apartment. He took her to see Saint Peter's on the very same day, where she was underwhelmed by its size, which did not match the lavish descriptions she had heard of it, although its vastness became apparent to her upon walking around the structure. She stated to de Ménageot that she would have preferred for it to be supported by columns instead of enormous pillars, to which he replied that it was originally planned as such but this was found not to be feasible, later showing her some of the original plans for the Basilica.
She later climbed the steps of the Sistine Chapel to see Michelangelo's much-criticized The Last Judgement, for which she expressed great praise, writing in a letter to the painter Robert:
I also climbed the steps to the Sistine chapel, to admire the dome with a fresco by Michelangelo as well as his painting of The Last Judgement. Despite all the criticisms of this painting, I thought it a masterpiece of the first order for the expression and the boldness of the foreshortened figures. There is a sublime quality in both the composition and in the execution. As for the general air of chaos, I believe it to be totally justified by the subject matter.
On the next day, she visited the Vatican museum; of her visit, she wrote to Robert:
The following day I went to the Vatican Museum. There is really nothing to compare with the classical masterpieces either in shape, style or execution. The Greeks, in particular, created a complete and perfect unison between truth and beauty. Looking at their work, there is no doubt that they possessed exceptional models, or that the men and women of Greece discovered an ideal of beauty long, long ago. As yet I have made only a superficial study of the museum's contents, but the Apollo, The Dying Gladiator, The Laocoon, the magnificent altars, the splendid candelabras, indeed all the beautiful things that I saw have left a permanent impression on my memory.
On the same day, she was summoned by the members of the Academy of Painting, including Girodet; they presented her with the palette of the greatly talented deceased painter Jean Germain Drouais. In exchange, they asked her for her own palette, a request she obliged. She later visited the Flavian Amphitheater, where she saw the cross placed on one of its high points by Robert. While in Rome, she was very keen to seek out the famous female painter Angelica Kaufmann, with whom she spent two evenings. Kaufmann showed Vigée Le Brun her gallery and sketches, and they engaged in long conversations. Vigée Le Brun praised her wit and intellect, although she found little inspiration in these evenings, citing Kaufmann's lack of enthusiasm and her own dearth of knowledge. For the first three days of her stay in Rome, she visited the home of Cardinal Bernis, who was a gracious host to her.
Vigée Le Brun was very sensitive to sound while sleeping; this was a lifelong burden for her, and when traveling to new locations or cities, she customarily moved lodgings until she found a suitably quiet residence. Due to the racket of coachmen and horses near her apartment in the French Academy and the nightly music of the Calabrians to a nearby Madonna, she searched for other lodgings, which she found in the home of the painter Simon Denis in the Piazza di Spagna, but soon afterwards left this apartment due to the habit of young men and women of singing in the streets at night. She departed and found a third home, which she carefully scrutinized, then paid one month's worth of rent in advance. On her first night there, she was awoken by a loud noise behind her bed caused by water being pumped through pipes to wash the laundry, a nightly occurrence. She quickly left this home as well to continue her search for quiet lodgings. After a painstaking search, she found a private mansion where she was told she might be able to rent an apartment. She lodged herself there but found it completely unsavory due to the filthiness of its rooms, its poor insulation and a rat infestation in the wooden paneling. Finding herself at her wit's end, she was forced to stay there for six weeks before seeking a new home suitable to her needs. She eventually found a house which seemed perfect, but she refused to pay rent until she had spent a night there; she was immediately woken up by noise caused by a worm infestation in the joists of her room. She left this house as well, later writing: "regretfully I had to abandon the idea of living there. No-one, I am sure, could have changed lodgings as often as I did during my various visits to the capital; I remain convinced that the most difficult thing to find in Rome is a place to live."
Soon after her arrival in Rome, she dispatched the promised self-portrait to Florence. In this portrait, she depicted herself in the act of painting, with the Queen's face on her canvas. She made numerous copies of this portrait later on. The Rome Academy also requested her self-portrait, which she duly presented. While in Rome, she attended the pope's Easter blessing, delivered by Pope Pius VI in Saint Peter's. Vigée Le Brun found his features stunning, describing them as "not showing any signs of age".
She worked hard during her three-year residency in Rome, painting numerous subjects including Miss Pitt, Lord Bristol, Countess Potocka, Lady Hamilton, Mme. Roland and many others. She toured Rome's landmarks extensively, visiting San Pietro in Vincoli, San Lorenzo fuori le Mura, St. John Lateran, and San Paolo fuori le Mura, which she found to be, architecturally, the most beautiful church in Rome. She also visited Santa Maria della Vittoria, where she saw Bernini's notorious Ecstasy of Saint Teresa, writing of it "...whose scandalous expression defies description".
Apart from her fellow female artist Kaufmann, Vigée Le Brun found company in the Duchesse de Fleury, with whom she became close friends. She also moved in the social circles of the exiled French aristocracy who came to Rome, embedding herself there, as most exiled French had done, rather than congregating with the Italian aristocracy. She spent many evenings hosted by de Ménageot or by the Prince Camille de Rohan, ambassador to Malta, who entertained many other exiled French aristocrats. Many of these evenings she attended with her close friend the Duchesse de Fleury, on whom she greatly doted. She soon found one of her oldest friends, M. d'Agincourt, who had lent her art pieces from his gallery to copy when she was very young; she had last met him fourteen years previously in Paris, before his departure. She also met the Abbé Maury before he became Cardinal, who informed her that the pope wished her to paint his portrait. She was greatly flattered by the offer, but politely declined, fearing that she would fumble the portrait since she would be forced to wear a veil while painting the pope. Soon afterwards she was taken by de Ménageot, along with the painter Denis, on an excursion to Tivoli. There she visited the Temple of the Sibyl and then Neptune's Grotto. De Ménageot also took her to see the Villa Aldobrandini and the ancient ruins of the Roman town of Tusculum, which "evoked many sad thoughts". The entourage continued to Monte Cavo, seeking out the Temple of Jupiter built there. She visited numerous villas, including the Villa Conti, the Villa Palavicina and the ruins of Hadrian's Villa. She also made frequent excursions to the summit of Monte Mario to enjoy the view it offered of the Apennines, and visited the Villa Mellini there. In the summer months, she and the Duchesse de Fleury rented an apartment in the home of the painter Carlo Maratta in the Genazzano countryside. She and the Duchess toured the countryside there regularly, visiting Nemi and Albano among other places. One of these excursions, around Ariccia, caused an incident in which she and the Duchess fled for their lives from a man they suspected to be a rogue following them, of which she wrote "I have never discovered whether the man who caused our exhaustion was a real villain or the most innocent man in the world".
After a residency of eight months in Rome, the painter planned to follow most of French polite society as it moved to Naples. She informed Cardinal Bernis, who approved of her decision to go but told her not to travel alone; to that end, he referred her to M. Duvivier, the husband of Mme. Mignot, widow of the painter Denis and Voltaire's niece. She traveled in his spacious carriage to Naples, stopping at an inn in Terracina on the way. As she arrived in Naples she was captivated by the view of the city, the distant plumes of smoke from Mount Vesuvius, the rolling hills of the countryside, and its citizens, writing "...even the people, so lively, so boisterous, so different from the people of Rome, that one would think a thousand leagues lay between the two cities". Her first residency in Naples lasted for six months, although it was originally planned to be six weeks.
She initially lodged in Chiaia, in the Hotel de Maroc. Her neighbor, the ailing Count Scavronsky, Russian Minister Plenipotentiary to Naples, sent a missive to inquire after her shortly after her arrival, and sent her a lavish dinner. She visited him and his wife, Countess Catherine Skavronskaïa, the same night in their mansion, where she found amiable company with the couple, who invited her again on many evenings. The Count made Vigée Le Brun promise to paint his wife before anyone else in Naples, and she set to painting her portrait two days after her arrival. Soon afterwards, Sir William Hamilton, the English envoy extraordinary to the Kingdom of Naples, visited Vigée Le Brun while the Countess was sitting for her, requesting that the artist paint his mistress, Emma Hart, as her first portrait in the city, unaware that she had already promised Count Scavronsky that she would paint his wife. She later painted Emma Hart as a bacchante, and was captivated by her beauty and long chestnut hair. Sir William also commissioned a portrait of himself, which she completed later. The artist noticed that Sir William had a mercantile inclination towards art, frequently selling paintings and portraits he had commissioned for profit. On her later visit to England, she found that he had sold her portrait of him for 300 guineas. She also met Lord Bristol again and painted a second portrait of him. While in Naples, she also painted portraits of the Queen of Naples, Maria Carolina of Austria (sister of Marie Antoinette), and her four eldest living children: Maria Teresa, Francesco, Luisa and Maria Cristina. She later recalled that Luisa "was extremely ugly, and pulled such faces that I was most reluctant to finish her portrait."
She visited the French ambassador to Naples, the Baron de Talleyrand, and while being hosted by him she met Mme. Silva, a Portuguese woman. Vigée Le Brun then decided to visit the island of Capri to see the palatial Roman ruins there. Her entourage included Mme. Silva, the Comte de la Roche-Aymon and the young son of Baron de Talleyrand. The voyage to the island was turbulent due to rough waters. Soon afterwards, she made multiple trips to the summit of Vesuvius. Her entourage included Mme. Silva and the Abbé Bertrand on the first journey, which was hampered by severe rain. The next day, with clear weather, she climbed the volcano again, with M. de la Chesnaye joining her. The party observed the erupting volcano, with plumes of smoke and ash rising from it.
Of her visit to Mt. Vesuvius, she wrote in a letter to the architect Brongniart:
We also went up to the mountain refuge. The sun set and we watched its rays disappear behind the islands of Ischia and Procida: what a view! Eventually night fell, and the smoke turned into flames, the most magnificent I have ever seen in my life. Great jets of fire shot up from the craters in quick succession, throwing red hot rocks noisily on all sides. At the same time a cascade of fire ran down from the summit, covering an area of four to five miles. Another lower mouth of the volcano was also alight; this crater churned out a red and gold smoke, rounding off the frightening but wonderful spectacle. The thunderous noise that seemed to come from deep inside the volcano echoed around us, and the ground shook beneath our feet. I was quite frightened, but tried to hide my fear for the sake of my poor little daughter who was crying, 'Maman, should I be afraid?'. But there was so much to admire that I soon forgot my fear. Imagine looking down over countless furnaces, whole fields swallowed by the blaze that followed in the wake of the lava. I saw bushes, trees, vines, consumed by this terrible rolling fire; I saw the fire rise up and die out, and I heard it eat its way through the surrounding undergrowth. This powerful scene of destruction is both painful and impressive, and stirs deep feelings within one's soul; I could not speak for a while on my return to Naples; on the road, I kept turning around to see the sparks and that river of fire once more. I was sad to leave such a spectacle; but I have the memory still, and every day I think on different aspects of what I saw. I have four drawings which I shall bring to Paris to show you. Two have already been mounted; we are very happy here.
She returned to the volcano several times, visiting it with the painter Lethière, former director of the French Academy of painting in Rome. Soon afterwards, she was invited by Sir William to visit the Islands of Ischia and Procida. This voyage included his mistress Emma Hart and her mother. Vigée Le Brun was instantly mesmerized by the island and its inhabitants, writing of its women "I was instantly struck by the beauty of the women we encountered on the road. They were nearly all tall and statuesque, their costume as well as their build reminding me of the ancient women of Greece".
The party departed from Procida the same day, bound for Ischia, arriving there in the late evening. On the next day, they were taken by General Baron de Salis with a party of twenty to visit the summit of Monte San Nicola. The journey was perilous, and Vigée Le Brun was separated from the party by dense fog, but soon afterwards found her way to the refuge at the summit of the mountain. After returning to Naples, the artist visited the ancient ruins of Paestum, Herculaneum and Pompeii, and the museum at Portici. Shortly before the new year, she moved to another home due to problems with her previous residence. It was there that she also met the famous composer Paisiello and painted his portrait while he was in the process of composing. She frequented Mt. Posillipo during her stay in Naples, including the ancient ruins there and Virgil's grave, and it became one of her favorite landmarks.
She returned to Rome afterwards, just in time to find the Queen of Naples arriving from her visit to Austria. The Queen espied the artist in a large crowd, went to her and pressed her to return to Naples to paint her portrait; Le Brun agreed. Upon her return to Naples, she was taken by Sir William to the widely popular local festivals of the Madonna di Piedigrotta and the Madonna dell'Arco. She also visited the Solfatara volcano with M. Amaury Duval and Sacaut. While in Naples, the artist was also fascinated by the local culture of the Lazzaroni.
Upon finishing her portrait of the Queen, she was offered her summerhouse near the coast to entice her to spend more time in Naples, but Le Brun insisted on leaving. Before departing, the Queen gave her a luxurious lacquered box bearing her monogram surrounded by fine gems. She returned to Rome once again, undertaking many commissions there, including those of Louis XVI's aunts, Mesdames Victoire and Adélaïde. She left Rome on 14 April 1792 for Venice, writing later that she wept bitterly as she left, having grown very attached to the city. She was accompanied by M. Auguste Rivière, occasional diplomat and painter and the brother of Le Brun's sister-in-law; he would be the artist's travelling companion for nine years, often copying her portraits. Le Brun spent the first night on the road at Civita Castellana, then continued her journey over precipitous and craggy roads, describing the landscape there as gloomy and 'the saddest in the world'. She then arrived in Narni, where she was charmed by the countryside. From there she continued on to Terni, where she toured the countryside and hiked up the local mountains. She resumed her journey over Monte Somma across the Apennines and on to Spoleto. In this town, she saw Raphael's partially completed Adoration of the Magi, from which she gained valuable insight into his painting techniques, observing that he painted hands and faces first and experimented frequently with different tints during the early drafting process. While in Spoleto she also visited the Temple of Concord in the mountains, and the ruins of the ancient town there. She continued on towards Venice, passing Trevi, Cetri and Foligno. In the latter town she found Raphael's Madonna di Foligno, which gained her complete admiration. She continued to Perugia, passing by Lake Trasimene, and then on to Lise, Combuccia, Arezzo, Levana and Pietre-Fonte, finally arriving in Florence, where she had resided for a short while after her flight from France.
Upon her arrival in Florence, she had a memorable meeting with the Abbé Fontana, then a renowned anatomist. Fontana showed Le Brun his study, filled with wax figures of human organs. The intricacy of the details on some of the replicas made the artist feel that only divine power could have created the human body. Fontana then showed Le Brun a life-sized figure of a human female, with an exposed cutaway of the intestines. Vigée Le Brun was nearly sick at this sight, and was haunted by it for a long time, later writing to Fontana for advice on relieving herself of the stress and consequences of having seen the internal anatomy of the human body, to which he replied: "That which you describe as a weakness and a misfortune, is in fact the source of your strength and talent; moreover, if you wish to diminish the inconvenience caused by this sensitivity, then stop painting".
After departing Florence she travelled to Siena, where she remained for a few days, making frequent excursions into its countryside and visiting local churches and galleries. From Siena she left for Parma, where she was welcomed as a member of the Academy of Fine Arts of Parma, and donated a portrait of her daughter. During her stay there, she was visited by a small group of art students from the academy who wished to acquaint themselves with her work:
I was told that there were seven or eight art students downstairs who wished to see me. They were ushered into the room where I had placed my Sibyl and a few minutes later I went to receive them. Having spoken of their desire to meet me, they continued by saying that they would very much like to see one of my paintings. 'Here is one I have recently completed,' I replied, pointing to the Sibyl. At first their surprise held them silent: I considered this far more flattering than the most fulsome praise: several then said that they had thought the painting the work of one of the masters of their school: one actually threw himself at my feet, his eyes full of tears. I was even more moved, even more delighted with their admiration since the Sibyl had always been one of my favourite works.
After a few days in Parma, during which she revisited numerous churches, local landmarks and galleries, she finally departed in July 1792, visiting Mantua on her way to Venice. In Mantua she visited the local cathedral, the ducal palace, the house of Giulio Romano, the Church of Sant'Andrea, the Palazzo del Te and numerous other local landmarks.
She arrived in Venice on the eve of Ascension Day. She was surprised by the city's partially submerged aspect, and it was some time before she became accustomed to the modes of transportation on the city's canals. She was received by M. Denon, a fellow artist whom she knew from Paris, who acted as her cicerone, touring the city's landmarks with her. She subsequently witnessed the Marriage of Venice and the Sea ceremony. During the celebrations, she met Prince Augustus of England and the Princess de Monaco, whom she found pining to return to France to see her children; this was to be her last meeting with the princess, who was later executed during the Reign of Terror.
While in Venice, she visited the churches of Santi Giovanni e Paolo, the Church of Saint Mark and the square there, and the local cemetery. While residing in Venice, she often kept the company of the Spanish ambassadress, with whom she attended Pacchierotti's last concert. She soon departed Venice for Milan, stopping at Vicenza, where she was received lavishly and toured its palaces and landmarks. She then visited Padua, where she visited the Church of the Eremitani, praising the church's frescoes by Mantegna, and also visited the Basilica del Santo and the church of St. John the Baptist. After departing Padua, she spent a week in Verona, touring the ruins of the Amphitheatre, San Giorgio in Braida, the Church of Sant'Anastasia and the Church of San Zeno, before leaving the city, hoping to return to France by way of Turin.
In Turin she presented herself to the Queen of Sardinia, bearing letters of introduction from the Queen's aunts, the Mesdames of France, whom she had painted in Rome and who requested that the artist paint their niece on her way to France. When she presented this request, the bereaved Queen politely refused, stating that she had given up all worldly matters and taken up an austere life, which the Queen's disheveled appearance confirmed to the painter. While visiting the Queen she also made the acquaintance of her husband, the King of Sardinia, finding that he had become increasingly reclusive and very thin, and had delegated most of his duties to the Queen.
After meeting the Queen of Sardinia, Le Brun visited Madame, the wife of the Comte de Provence, the future Louis XVIII, and thus a future queen of France in exile. She made frequent excursions to the countryside with her and her lady-in-waiting, Mme. de Gourbillon. She soon met the engraver Porporati again, who recommended that she lodge in a quiet inn in the countryside; she travelled there and was greatly pleased by the quietude and charming views it offered. Not long afterwards Vigée Le Brun received news of the storming of the Tuileries on 10 August. Beset with despair, she returned to Turin, where she found the town filled with French refugees as the turmoil of the French Revolution intensified, a cruel spectacle for the artist. She subsequently rented a small home on the Moncalieri hillside overlooking the Po river, where she lived in solitude with M. de Rivière, who had recently arrived after narrowly escaping the revolutionary violence sweeping the countryside. Soon after, she was frequently visited there by the Prince Ysoupoff. She soon decided to leave for Milan, but not before repaying the kindness Porporati had extended to her by painting his daughter's portrait; he was greatly pleased with it, made several engravings of the painting, and sent some of them to Le Brun.
During her stay in Venice she lost yet another fortune, amounting to 35,000 francs, most of it accumulated from her commissions in Italy, which she had deposited in the bank of Venice; it was lost when French troops, campaigning under the command of the rising general Napoleon Buonaparte, captured the city shortly after she had left it. Le Brun had been repeatedly warned by M. Sacaut, the embassy secretary, to withdraw her money from the bank, as he foresaw that French Republican troops might attack the city. The artist dismissed his warnings, insisting that 'a republic would never attack another republic'; nevertheless Napoleon later issued an ultimatum to the city to submit, and French troops entered it. As Venice was looted, General Buonaparte instructed the banker to spare Le Brun's deposit and afford her an annuity, but the orders were not carried out in the chaotic predicament of the city, and all that reached Vigée Le Brun were two hundred and fifty francs out of an original deposit of 40,000. During her travels in Italy, her name was added to the list of émigrés; she lost her French citizenship and her property was scheduled for confiscation. M. Le Brun attempted at this juncture to have his wife's name struck from the list of émigrés by appealing to the Assemblée législative, to no avail, and he, along with Etienne, Vigée Le Brun's brother, was briefly incarcerated in 1793, shortly before the Terror began. Soon after, M. Le Brun, attempting to protect himself and their properties from confiscation, sued for divorce from his wife. The decree of divorce was issued on 3 June 1793.
Halfway through her journey to Milan, she was detained for two days due to her nationality. She sent a letter to Count Wilsheck, the Austrian ambassador in the town, who secured her release. The count convinced Vigée Le Brun to travel to Vienna, and she decided to go there after her visit to Milan.
The artist received a warm welcome in Milan, with many young men and women from noble families serenading her outside her window, which persuaded her to extend her stay in Milan by a few days. It was during this time that she visited Santa Maria delle Grazie and saw Leonardo da Vinci's famous Last Supper, writing of it:
I visited the refectory of the monastery known as Santa Maria delle Grazie with its famous Last Supper fresco by Leonardo da Vinci. It is one of the great masterpieces of the Italian school: yet in admiring this nobly portrayed Christ and all the other characters painted with such truth and such feeling, I groaned to see the extent to which this superb painting had been defaced: to begin with it had been covered with plaster, and then repainted in several parts. Nevertheless it was possible to judge what this beautiful work had been like prior to these disasters, for the effect, when viewed from a little distance, was still admirable. Since then I have learnt of a completely different cause of its poor condition. I was told that during the wars with Bonaparte in Italy, the soldiers would amuse themselves by firing musket balls at Leonardo's Last Supper! May these Barbarians be cursed!
She also saw various cartoons of Raphael's School of Athens, and various other drawings and sketches by Raphael, da Vinci and numerous other artists at the Biblioteca Ambrosiana. She visited the Madonna del Monte, enjoying its commanding view, and sketched the countryside frequently. She later visited Lake Maggiore and resided on one of the two islands in the lake, the Isola Bella, having been granted permission by the Prince Borromeo to lodge on the estate there. She soon attempted to visit the other isle, the Isola Madre, but stormy weather cut short her journey and she returned. It was during this period that she met the Countess Bistri, who would become one of her close friends. She informed the countess of her desire to travel to Vienna, and the countess replied that she and her husband were travelling there soon; wishing to accompany the artist, the count and countess brought forward their date of departure. Vigée Le Brun praised the great care they took of her, and she finally left Milan for Austria. She would later describe Milan as being very similar to Paris.
While in Italy, Vigée Le Brun was elected to the Academy in Parma (1789) and the Accademia di San Luca in Rome (1790). Vigée Le Brun also painted allegorical portraits of Emma Hamilton as Ariadne (1790) and as a Bacchante (1792). Lady Hamilton was similarly the model for Vigée Le Brun's Sibyl (1792), which was inspired by the painted sibyls of Domenichino. The painting represents the Cumaean Sibyl, as indicated by the Greek inscription on the figure's scroll, which is taken from Virgil's fourth Eclogue. The Sibyl was Vigée Le Brun's favorite work; it is mentioned in her memoir more than any other work. She displayed it while in Venice (1792), Vienna (1792), Dresden (1794) and Saint Petersburg (1795); she also sent it to be shown at the Salon of 1798. It was perhaps her most successful painting, consistently garnering praise and attracting many viewers wherever it was displayed. Like her reception piece, Peace Bringing Back Abundance, Vigée Le Brun regarded her Sibyl as a history painting, the most elevated category in the Académie's hierarchy.
In addition to the Countess Bistri and her husband, she travelled to Vienna with two other French refugees of more modest means whom the couple had taken in. The artist found their company invaluable and lodged with them in Vienna, though the travelling party's composition made procuring a residence somewhat difficult. This was the beginning of two and a half years of residency in Austria. Once settled, she finished her painting of the Countess Bistri, praising her as a "truly beautiful woman", and then presented herself to the Countess Thoun, armed with letters of introduction given to her by Count Wilsheck. The artist found a large number of elegant ladies in the countess's salon, and while there met the Countess Kinska, by whose beauty she was completely enraptured. Vigée Le Brun proceeded to tour the city's galleries, as was her custom when visiting new cities. She first paid a visit to the gallery of the famous painter of battles, Casanova. She found him in the middle of undertaking several paintings and found him quite active despite being about sixty and "having the habit of wearing two or three spectacles, atop one another"; she commented on his 'unusual and sharp mind' and his rich imagination when retelling stories or recounting past events during the dinners they spent with the Prince Kaunitz. Vigée Le Brun praised his composition, though she noted that many of the works she saw remained unfinished.
After meeting Casanova, she presented herself to the aging Prince Kaunitz at his palace. She found dinners hosted by the prince uncomfortable due to the late hour at which he dined and the large number of people often present at his table, and subsequently decided to dine at home most days. On days when she did accept his invitations, she would dine at home before leaving and eat very little at his table. The prince noticed this and was offended by it and by her frequent refusal of his invitations, leading to a short quarrel between the two, but they were soon reconciled. The Prince continued to host the artist and exhibited her Sibyl in his gallery, and she praised the kindness and sweetness he extended to her during her stay. When the Prince died shortly afterwards, Vigée Le Brun was upset by the indifference shown by the city's residents and aristocracy, and was further shocked when she visited the wax museum and found the Prince lying in state, his hair and clothes dressed exactly as they had always been. This sight made a sorrowful impression on her.
While in Vienna, Vigée Le Brun was commissioned to paint Princess Maria Josepha Hermengilde Esterházy as Ariadne and Princess Karoline von Liechtenstein as Iris, among many others; the latter portrait caused a minor scandal among the Princess's relatives. The portraits depict the Liechtenstein sisters-in-law in unornamented Roman-inspired garments that show the influence of Neoclassicism, and which may have been a reference to the virtuous republican Roman matron Cornelia, mother of the Gracchi. In Vienna the artist met for the second time one of her greatest friends, the Prince de Ligne, whom she had first met in Brussels in 1781. It was at his urging that Vigée Le Brun wished so much to meet the Russian sovereign Catherine the Great and to visit Russia. The Prince de Ligne urged her to stay at his former convent atop the Kahlenberg, with its commanding view of the countryside, to which she agreed. During Vigée Le Brun's stay on the Kahlenberg, de Ligne wrote a passionate poem about her. After two and a half years in Vienna, the artist departed for Saint Petersburg on 19 April 1795, via Prague. On her way she also visited Dresden and the Königsberg fortress, where she made the acquaintance of Prince Henry, who was very hospitable to her. While visiting Dresden on her way to Russia, Vigée Le Brun visited the famous Dresden gallery, writing that it was without doubt the most extensive in all of Europe. It was there that she saw Raphael's Madonna di San Sisto. She was completely enamored of the painting, and wrote:
Suffice to say I came to the conclusion that Raphael is the greatest master of them all. I had just visited several rooms within the gallery when I found myself standing in front of a painting which aroused in me an admiration far more intense than that normally inspired by the art of painting. It showed the Virgin, sitting among the clouds, holding the infant Jesus in her arms. Her face is so beautiful and so noble that it is worthy of the divine brush that painted it. The face of the child, which is charming, bears an expression both innocent and celestial; the robes are accurately drawn and painted in the most magnificent colours. To the right of the Virgin stands a saint who seems quite real; his hands in particular merit admiration. To the left stands a young saint, her head bowed, watching two angels at the base of the painting. Her figure is full of beauty, candour and modesty. The two small angels lean upon their hands, their eyes lifted to the characters above them, and their heads bear an ingenuity and sensitivity that words alone cannot express. Having stood for some time gazing in awe at this painting, I had to pass it yet again on my way out, returning by the same route. The best paintings by the great masters had lost some of their perfection in my eyes, for I carried the image of that wonderful composition and that divine figure of the Virgin about with me! In Art nothing can compete with noble simplicity, and all the faces I viewed subsequently seemed to wear a sort of grimace.
In Russia, where she stayed from 1795 until 1801, she was well received by the nobility and painted numerous aristocrats, including the former King of Poland, Stanisław August Poniatowski, with whom she became well acquainted, and members of the family of Catherine the Great. Vigée Le Brun painted Catherine's granddaughters (daughters of Paul I), Elena and Alexandra Pavlovna, in Grecian tunics with exposed arms. The Empress's favorite, Platon Zubov, told Vigée Le Brun that the painting had scandalized the Empress due to the amount of bare skin the short sleeves revealed. Greatly worried and hurt by this remark, Vigée Le Brun replaced the tunics with the muslin dresses the princesses usually wore, and added long sleeves (called Amadis in Russia). She was later reassured in a conversation with Catherine that the Empress had made no such remark, but by then the damage had already been done. When Paul later became Emperor, he expressed having been upset by the alterations Vigée Le Brun had made to the painting. When she told him what Zubov had said to her, he shrugged and said "They played a joke on you".
Vigée Le Brun painted many other people during her stay in Russia, including the emperor Paul and his consort.
Catherine herself also agreed to sit for Vigée Le Brun, but she died the very next day, the day on which she had promised to sit for the artist. While in Russia, Vigée Le Brun was made a member of the Academy of Fine Arts of Saint Petersburg. Much to her dismay, her daughter Julie married Gaétan Bernard Nigris, secretary to the Director of the Imperial Theaters of Saint Petersburg. Vigée Le Brun attempted everything in her power to prevent the match, viewing it as a scheme concocted by her enemies and Julie's governess to separate her from her daughter. However, as Julie's remonstrations and pressure on her mother grew, Vigée Le Brun relented and gave her approval for the wedding, though she was greatly distressed at the prospect, and soon found that her stay in Russia, hitherto so enjoyable, had become suffocating; she decided to return to Paris. She wrote:
As for myself, all the charm of my life seemed to have disappeared forever. I could not find the same pleasure in loving my daughter, and yet God knows how much I still love her, despite her faults. Only mothers will understand me when I say this. Shortly after her marriage she caught smallpox. Although I had never had this dreadful illness, no-one could stop me from running to her bedside. I found her face so swollen that I was seized with fright; but I was only frightened for her sake; as long as the malady lasted, I did not think of myself for one moment. To my joy she recovered without the least disfigurement. I needed to travel. I needed to leave Saint Petersburg, where I had suffered so much that my health had deteriorated. However those cruel remarks that had arisen as a result of this affair were soon retracted after the marriage. The men who had offended me the most were sorry indeed at the injustice.
Before departing for France, Vigée Le Brun decided to visit Moscow. Halfway through her journey to the city, news of the assassination of Paul I reached her. The journey was extremely difficult due to the melting snow; the carriage often got stuck in the infamous Russian mud, and her journey was further delayed when most horses were taken by couriers spreading the news of the death of Paul and the accession of Alexander. Vigée Le Brun enjoyed her stay in Moscow and painted many portraits there. Upon her return to Saint Petersburg she met the new Emperor Alexander I and the Empress Louise, who urged her to stay in Saint Petersburg. When she told the Emperor of her poor health and of her physician's prescription that she take the waters near Karlsbad to cure an internal obstruction, the Emperor replied "Do not go there, there is no need to go so far to find a remedy; I shall give you the Empress's horse, a few rides will have you cured". Vigée Le Brun was touched by this, but replied that she did not know how to ride, to which the Emperor said "Well, I will give you a riding instructor, he will teach you". The artist remained adamant about leaving Russia; despite the efforts of her closest friends, Count Stroganoff, M. de Rivière, the princesses Dolgorouky and Kourakin, and others, to persuade her to stay in Saint Petersburg, she left after residing there for six years. Julie predeceased her mother in 1819, by which time they had reconciled.
It was in Russia that Vigée Le Brun formed several of her longest-lasting and most intimate friendships, with the princesses Dolgorouky and Kourakin, and the Count Stroganoff.
After her departure from Saint Petersburg, Vigée Le Brun travelled – with some difficulty – through Prussia, visiting Berlin after an exhausting journey. The Queen of Prussia invited Vigée Le Brun to Potsdam to meet her; the Queen then commissioned a portrait of herself. The Queen invited the artist to reside in the Potsdam palace until she finished her portrait, but Vigée Le Brun, not wishing to intrude on the Queen's ladies-in-waiting, chose to reside in a nearby hotel, where her stay was uncomfortable.
The pair soon became friends. During a conversation, Vigée Le Brun complimented the Queen on her bracelets of antique design, which the Queen then took off and placed around Vigée Le Brun's arms. Vigée Le Brun considered this gift one of her most valued possessions for the rest of her life, and wore the bracelets almost everywhere. At the Queen's urging, Vigée Le Brun visited the Queen's Peacock Island, where she enjoyed the countryside.
Aside from two pastel portraits commissioned by the Queen, Vigée Le Brun also painted other pastel portraits of Prince Ferdinand's family.
During her stay in Berlin, she met with the General Plenipotentiary Bournonville, hoping to procure a passport to return to France. The general encouraged Vigée Le Brun to return and assured her that order and safety had been restored. Her brother and husband had already had her name struck from the list of émigrés with ease, and her French status restored. Shortly before her departure from Berlin, the General Director of the Academy of Painting visited her, bringing the diploma of her admission to that academy. After leaving Berlin, she visited Dresden and painted the several copies of Emperor Alexander which she had promised earlier, and also visited Brunswick, where she stayed for six days with the Rivière family and was sought out by the Duke of Brunswick, who wished to make her acquaintance. She also passed through Weimar and Frankfurt on her way.
After a sustained campaign by her ex-husband and other family members to have her name removed from the list of counter-revolutionary émigrés, Vigée Le Brun was finally able to return to France in January 1802. The artist received a rapturous welcome in her home on the Rue du Gros-Chenet and was greatly hailed by the press. Three days after her arrival, a letter arrived from the Comédie-Française containing a decree reinstating her as a member of the theater. The leading members of the theater also wished to perform a comedy at her house to celebrate her return, which she politely refused. Soon afterwards, the artist was taken to witness the First Consul's routine military ceremony at the Tuileries, where she saw Napoleon Bonaparte for the first time, from a window inside the Louvre. The artist found it difficult to recognize the short figure as the man she had heard so much about; as with Catherine the Great, she had imagined a tall figure. A few days later, Bonaparte's brothers visited her gallery to view her works, with Lucien Bonaparte greatly complimenting her famous Sibyl. Vigée Le Brun was surprised and dismayed by the greatly changed social customs of Parisian society upon her return. She soon visited the famous painter M. Vien, the former Premier peintre du Roi; then 82 years old and a senator, he gave her an enthusiastic welcome and showed her some of his newest sketches. She met her friend from Saint Petersburg, Princess Dolgorouky, and saw her almost daily. In 1802, she demanded the refund of her dowry from her husband, whose gambling habits had dissipated a significant portion of the wealth she had accumulated in her early career as a portraitist. The artist soon felt mentally tormented in Paris, mainly by memories of the early days of the Revolution, and decided to move to a secluded house in the Meudon forest. She was visited there by her neighbors, the Duchesse de Fleury, whom she now met for the first time since their friendship in Rome, and Adèle de Bellegarde, the famous dissident pair and Merveilleuses of the Directory period; time spent with them restored her spirits. Shortly thereafter, Vigée Le Brun decided to travel to England, and departed from Paris on 15 April 1802.
Vigée Le Brun arrived at Dover, where she took the stagecoach to London, accompanied by the woman who would become her lifelong friend and chambermaid, Mme. Adélaïde, who later married M. Contat, Vigée Le Brun's accountant. Vigée Le Brun was confused by the large crowd at the quays, but was reassured that it was common for crowds of curious people to observe disembarking travelers in England. She had been told that highwaymen were common in England, and so hid her diamonds in her stocking. During her ride to London she was greatly frightened by two riders who approached the stagecoach and whom she took for bandits, but nothing came of it.
Upon her arrival in London she lodged at the Brunet hotel in Leicester Square. She could not sleep during her first night due to noise from her upstairs neighbor, whom she found the next morning to be none other than the poet M. François-Auguste Parseval-Grandmaison, whom she had known in Paris; he always paced while reading or reciting his poetry. He promised to take care not to disturb her sleep, and she was able to rest well the following night.
Wishing to find more permanent lodgings, she was directed by a compatriot named Charmilly to a house in Beck Street, which overlooked the Royal Guards barracks. Vigée Le Brun terminated her residence there because of the noise from the barracks; in her words, "...every morning between three and four o'clock there was a trumpet blast so loud that it could have served for the day of judgement. The noise of the trumpet, together with that of the horses whose stables lay directly beneath my window, prevented me from catching any sleep at all. In the daytime there was a constant din made by the neighbor's children...". Vigée Le Brun then moved to a beautiful house in Portman Square. After closely scrutinizing the house's surroundings for any acoustic nuisance, she took up lodging there, only to be awakened at daybreak by the screeching of a large bird owned by her neighbor. Later, she also discovered that the former residents had buried two of their slaves in the cellar, where the bodies remained, and once again she decided to move, this time to a very damp building in Maddox Street. Although this was far from perfect, the artist was exhausted from constant moving and decided to remain there, though the dampness of the house, combined with London's humid weather, which she greatly disliked, hindered her painting. Vigée Le Brun found London lacking in inspiration for an artist due to its lack of public galleries at that time. She visited monuments, including Westminster Abbey, where she was greatly affected by the tomb of Mary, Queen of Scots, and saw the monuments to the poets Shakespeare, Chatterton and Pope. She also visited St. Paul's Cathedral, the Tower of London and the London Museum. She greatly disliked the austere social customs of the English, particularly how quiet and empty the city was on Sundays, when all shops were closed and no social gatherings took place; the only pastime was the city's long walks. The artist also did not enjoy the local soirée equivalent, known as routs (or rout-parties), describing them as stuffy and dour. She sought out the tree under which the poet Milton was said to have composed Paradise Lost, but was surprised to find that it had been cut down.
The artist visited the galleries of several prominent artists while in London, starting with the studio of Benjamin West. She also viewed some works by Joshua Reynolds. Vigée Le Brun was surprised to find that it was customary in England for visitors to artists' studios to pay a small fee to the artist. She did not adhere to this local custom, and allowed her servant to pocket the toll. She was greatly pleased to meet one of the most famous actresses and tragediennes of her era, Sarah Siddons, who visited her studio in Maddox Street. During her stay in London, the English portraitist John Hoppner published a speech that viciously criticized her, her art and French artists in general, to which she made a scathing reply by letter, which she published later in her life as part of her memoirs.
Vigée Le Brun continued to hold soirées and receptions in her house, which, although damp, was beautiful. She received many people, including the Prince of Wales, Lady Hertford, Lord Borington and the famous singer Mme. Grassini, among others. She sought out other compatriots during her stay in England, and cultivated a social circle of émigrés that included the Comte d'Artois (the future King Charles X) and his son the Duc de Berri, the Duc de Serant and the Duc de Rivière.
Shortly after her arrival in London, the Treaty of Amiens was abrogated and hostilities between France and the United Kingdom resumed. The British Government ordered all French people who had not resided more than a year in the country to depart immediately. The Prince of Wales reassured Vigée Le Brun that this would not affect her, and that she might reside in England as long as she pleased. The necessary permit from the King was difficult to procure, but the Prince of Wales delivered it to her personally.
Vigée Le Brun toured the countryside during her stay in England. She started with a visit to Margaret Chinnery at Gilwell Hall, where she received a "charming welcome" and met the famous musician Viotti, who composed a song for her which was sung by Mrs. Chinnery's daughter. She painted Mrs. Chinnery and her children whilst there, departing for Windsor after staying at Gilwell for a fortnight. She also visited Windsor Park and Hampton Court on the outskirts of London before leaving to visit Bath, where she greatly enjoyed the picturesque architecture of the city, its rolling hills and the countryside; but much like London, she found its society and weather dreary. She found some of her Russian friends from Saint Petersburg there, and went to visit the astronomer siblings William Herschel and Caroline Herschel. William Herschel showed Vigée Le Brun detailed maps of the moon, among other things.
The artist greatly enjoyed the English countryside, describing Matlock as being as picturesque as the Swiss countryside. Vigée Le Brun also visited the Duchess of Dorset at Knole House in Kent, which had once been owned by Elizabeth I. She returned to London, where she found the Comte de Vaudreuil, and then went to Twickenham, where she visited Mme. la Comtesse de Vaudreuil and the Duc de Montpensier, with whom she became well acquainted; they enjoyed painting the countryside together. She was subsequently received by the Duc d'Orléans (the future King Louis Philippe). She then visited the Margravine of Brandenburg-Ansbach, the Baroness Craven, whom she painted and whose company she came to enjoy greatly, spending three weeks at her estate. Together, they visited the Isle of Wight, where Vigée Le Brun was mesmerized by the beauty of the countryside and the amiability of its inhabitants, writing later that, along with the Isle of Ischia (near Naples), these were the only two places where she would happily spend her entire life.
She visited Mary Elizabeth Grenville, Marchioness of Buckingham, at Stowe. She also went to the home of Lord Moira and his sister Charlotte Adelaide Constantia Rawdon, where she further experienced the stern social milieu of the English aristocracy; she spent part of the winter there. She then departed for Warwick Castle, eager to see it after hearing it praised so highly. Vigée Le Brun attempted to visit the area incognito to avoid any awkwardness with Lord Warwick, as he would receive foreigners only if he knew their name. When he became aware that she was visiting, he went to her in person and gave her a decorous reception. After introducing the artist to his wife, he took her on a tour of the castle, looking over the lavish art collection there. He presented her with two drawings which she had sketched in Sir William Hamilton's summerhouse during her stay in Italy, telling her that he had paid a high price to buy them from his uncle; Vigée Le Brun later wrote that she had never sold them to Sir William in the first place. He also showed her the famous Warwick Vase, which he had likewise purchased from Sir William. Vigée Le Brun then ended her tour by visiting Blenheim Palace before returning to London and preparing to depart for France after a stay of nearly three years in England. When her imminent departure became known, many of her acquaintances tried to persuade her to extend her stay with them, but to no avail, as Vigée Le Brun wanted to see her daughter, who was in Paris at the time. As she prepared to leave London, Mme. Grassini arrived and accompanied her, staying with her until her ship departed for Rotterdam, ending a trip that had originally been intended to last only five months.
Her ship arrived in Rotterdam, where she first visited François de Beauharnais, the prefect of Rotterdam and brother-in-law of the Empress Joséphine de Beauharnais (brother of the late Alexandre de Beauharnais, who had been executed during the Terror). The artist was ordered to remain for eight to ten days in Rotterdam, as she had arrived from hostile soil, and to appear before General Oudinot, who was hospitable to her. After residing in Rotterdam for ten days, she received her passport and started for Paris. She visited Antwerp on her way and was received by its prefect, the Comte d'Hédouville, touring the city with him and his wife and visiting a sick young painter who wished to make her acquaintance.
She arrived in Paris and rejoiced to find her brother there, along with her daughter's husband, who had been charged with recruiting artists for Saint Petersburg. He departed a few months later for Saint Petersburg, but Julie remained behind because of their failing union, and Vigée Le Brun's relationship with her daughter continued to be a torment to her. She made the acquaintance of one of the most famous singers of her time, Angelica Catalani; she painted her and kept the portrait, along with that of Mme. Grassini, for the rest of her life. She continued to host soirées in her home as she always had, at which Mme. Catalani was a regular.
Shortly after her arrival in Paris, Vigée Le Brun was commissioned by Denon to paint a portrait of the Emperor's sister Caroline Bonaparte, though she had heard that her journey to England had displeased Napoleon, who had allegedly said "Madame Le Brun has gone to England to see her friends." Vigée Le Brun accepted the commission despite being paid only 1,800 francs, less than half her customary asking price, and later also included Mme. Murat's daughter in the portrait without raising the fee. She later described this commission as "torture", and wrote in her memoirs:
It would be impossible to describe all the vexations and torment I had to suffer while painting this portrait. First of all Mme Murat arrived with two ladies in waiting who proceeded to dress her hair as I tried to paint her. When I observed that it would be impossible to capture a likeness if I allowed them to continue, she eventually agreed to send the two women away. Added to this inconvenience, she almost always broke our appointments, which meant my staying in Paris for the whole summer waiting, usually in vain, for her to appear, for I was eager to finish the painting; I cannot tell you how this woman tried my patience. Moreover the gap between sittings was so long, that each time she did appear, her hair was dressed differently. At the beginning, for example, she had curls falling onto her cheek and I painted them accordingly; but a little later this style had gone out of fashion and she returned with a completely different one; I then had to rub out the curls as well as the pearls on her bandeau and replace them with cameos. The same thing happened with the dresses. The first dress I painted was rather open, as was the fashion then, and had a great deal of bold embroidery; when the fashion changed and the embroidery became more delicate, I had to enlarge the dress in order not to lose the detail. Eventually all these irritations reached a pitch, and I became very bad tempered as a result; one day she happened to be in my studio and I said to M. Denon, in a voice loud enough for her to overhear: 'When I painted real princesses they never gave me any trouble and never kept me waiting.' Of course Mme Murat did not know that punctuality is the politeness of kings, as Louis XIV quite rightly remarked and he, at least, was no upstart.
The portrait was exhibited in the Salon of 1807, and was the only portrait the imperial government commissioned from her.
In July 1807, the artist crossed into Switzerland, arriving first at the town of Basel, where she was received by M. Ethinger, a local banker, who threw a banquet to welcome her. She proceeded to Biel on Ethinger's advice, but the roads were so hazardous that part of the journey had to be made on foot. After recuperating in Biel for a single day, she proceeded to the tiny Île Saint-Pierre to visit the home of Rousseau, which she found, to her great surprise and dismay, had become a tavern. Vigée Le Brun praised the picturesque countryside repeatedly in her letters to the Countess Vincent Potocka. After leaving the island and returning to Biel, she went on to Berne, where she was received by Mme. de Watteville, wife of the Landamann (magistrate), and by the French ambassador, General Honoré Vial. She also met Mme. de Brac, then seven months pregnant, who accompanied her to Thun and then to the Lauterbrunnen Valley, which she found dark and grim because it was hidden from sunlight on both sides by steep mountains. On her descent, she and her company encountered a group of local shepherdesses; the beauty and naivety of the local people and the wilderness where the encounter took place led her to liken the experience to something out of the Arabian Nights. She went on to visit the Staubbach Falls in the valley.
After traversing the rugged trails of the valley, she returned to Berne via Brientz, and then arrived at Schaffhausen, where she was received by the local burgomaster, who took her to see the Rhine Falls. After departing Schaffhausen, she visited the city of Zürich, where she enjoyed the hospitality of General Baron de Salis.
After taking the young daughter-in-law of de Salis with her, she departed for the small island of Ufenau in Lake Zurich, then visited Rappercheld [sic], where she continued to be mesmerized by the beauty of the countryside and the "native innocence" of the locals. After a hazardous boat ride bound for Walenstadt, the entourage turned back to Rappercheld and then visited the valley of Glarus. The artist then continued to the town of Soleure, at the foot of the Jura mountains. Seeing a solitary chalet perched atop Mount Wunchenstein [sic], she grew curious about who would live so far up, and she made a trek up the mountain after being assured that the condition of the road would support her carriage. After slightly less than an hour, the road became very rugged and far too steep, prompting her to dismount and continue the journey on foot. The trek lasted about five and a half hours, though she wrote in a letter to Countess Potocka that the view made it completely worthwhile:
to tell the truth, the view completely eliminated my fatigue. Five or six vast forests, piled one upon the other, fell away beneath my eyes; the canton of Soleure seemed no more than a plain, the town and the villages tiny specks; the fine line of glaciers which fringed the horizon became redder and redder as the sun sank; the other mountains between them formed a complete colour spectrum; gold rays stretched across the mountain to my left, each carrying a rainbow in its arc; the sun set behind the peak; red-violet mountains grew imperceptibly fainter and fainter in the distance, stretching away to the lake of Biel and the far edge of Lake Neuchatel; they stood so far apart that you could only distinguish them by two gold lines, heavy with translucent mist; I was still overlooking the deep ravines and mountains covered with thick foliage; at my feet lay wild valleys surrounded by black pine forests. As the sun set, I watched the shadows change; different points took on a more sinister character, partly because of their shape and partly because of that long silence which slips harmoniously into the day's demise. All I can tell you is that my soul gloried in such a solemn and melancholy vision.
She returned to Soleure the next day, and then departed for Vevey, which she described as "the land of my dreams". She rented a house on the banks of Lake Geneva and toured the countryside and mountains around Vevey. She walked up Mount Blonay, where the Messieurs de Blonay hosted her at Blonay castle. After descending the mountain, the artist hired the innkeeper with whom she was lodging to row her out on the lake at night. She was enthralled by the charming beauty and silence of the lake, writing of the journey later, "He was not Saint Preux and I was not Julie, but I was no less happy". Vigée Le Brun then departed for Coppet, where she met the famous dissident socialite and woman of letters Madame de Staël, who had been exiled by the Napoleonic regime. She stayed at Coppet with Madame de Staël, whom she painted as Corinne, the heroine of Mme. de Staël's most recent novel, Corinne ou l'Italie (1807).
After returning from Coppet to Geneva, where she was made an honorary member of the Société pour l'Avancement des Beaux-Arts, she departed with the de Brac family for Chamonix, intending to visit the Sallanches mountains, the Aiguille du Goûter, and Mont Blanc. The journey was perilous. The entourage visited the Bossons Glacier. On the way up, M. de Brac fell ill with catalepsy and was nursed back to health in a nearby inn, where Vigée Le Brun, the pregnant Mme. de Brac and her son were distraught and worried about his condition; he recuperated slowly over the course of a week. After eleven days in Chamonix, the artist departed alone, without the de Brac family, writing that nothing would bring her to visit the "melancholic" Chamonix again. She then left Switzerland and returned to Paris.
With her desire for travel still not sated, Vigée Le Brun re-entered Switzerland in 1808 via Neuchâtel, and then visited Lucerne, where she was enchanted by the picturesque, wild town. The artist also visited Brown [sic] and the market town of Schwyz, then Zug, where she crossed Lake Zug. She lodged at an inn from which she wished to visit the site of the infamous Goldau landslide. The artist visited the valley, once populated with several villages, now buried under rocks; heavy with sorrow, she contemplated the remains of the villages for a long time before departing for Arth. Vigée Le Brun then climbed to Kussnacht, intending to visit the spot where the legendary William Tell was said to have killed Gessler; by that time a chapel had been constructed on the spot. There, the artist observed a shepherd and shepherdess singing to each other across the valley, a local courting custom, although the two stopped singing when they noticed her. This "communication of love through melody" presented her with a delightful scene, which she would describe as an eclogue in action.
The artist then visited Untersee, where she was fortunate to arrive in time to witness the Shepherd's festival at Unspunnen castle, which took place once every century. She was hosted by M. and Mme. Konig, who welcomed all the notable visitors who came for the festival. Vigée Le Brun went to the château du Bailli to witness the start of the festivities, which had been postponed a few days due to incessant rain, and was captivated by the solemn pastoral chants and fireworks at night. The next day, she returned to see the festival taking place at half past ten in the morning; she joined the celebrations and dancing, before sitting back and watching the contests between the shepherds and shepherdesses. Vigée Le Brun recorded that she was frequently moved to tears by the enchanting atmosphere of the festival.
Coincidentally, she found Madame de Staël at the festival, and joined her in the procession that followed the Bailli and his magistrates, swelled by people from the neighboring valleys dressed in their local costume and carrying flags representing each canton or valley.
After returning to Paris from her second visit to Switzerland, Vigée Le Brun purchased a house in Louveciennes, Île-de-France, near the Seine, and invited her niece (daughter of her brother Etienne) Caroline Rivière and her husband to live with her. She doted on the newlywed couple and formed a close bond with them, and occasionally visited Paris. She had Mme. Pourat and the talented actress Comtesse de Hocquart as neighbors. She visited Madame du Barry's home, the Pavillon de Louveciennes, which she found had been looted and stripped clean of its furniture and contents. On 31 March 1814, her house was raided by Prussian troops who were advancing towards Paris in the final stages of the War of the Sixth Coalition. She had gone to bed after eleven o'clock, with no knowledge of the proximity of the allied troops, when the soldiers entered her home; they burst into her bedchamber while she lay in bed and proceeded to loot the house. Her German-speaking Swiss servant Joseph screamed at the soldiers to spare her person until his voice was hoarse. After the looting, the soldiers left. She departed as well, initially intending to head to St. Germain before learning that the road there was unsafe. Instead she decided to take refuge in a room above the pumping machine at Marly aqueduct, near Du Barry's pavilion, with many other people, having entrusted her house to Joseph. As fighting nearby intensified, Vigée Le Brun attempted to take refuge in a cave, but gave up after injuring her leg. Among those sheltering at Marly, she observed that most of the merchants were, like her, pining for the restoration of the Bourbons.
She departed for Paris as soon as she received the news, and communicated by letter with Joseph about the condition of her Louveciennes home, which had been ransacked and its garden destroyed by the Prussian troops. Her servant wrote to her: "When I beg them to be less greedy, to content themselves with whatever I give them, they reply: 'The French have done far worse things in our country'". Vigée Le Brun wrote in her memoirs: "The Prussians are right; poor Joseph and I had to answer for that."
Vigée Le Brun was exultant at the entry of the Comte d'Artois to Paris on 12 April, shortly after Napoleon had agreed to abdicate. She wrote to him asking after the King, and he replied: "His legs are still bad, but his mind is in excellent form. We will march for him, and he will think for us". She attended the euphoric reception of the King in Paris on 3 May 1814, and the restoration of the monarchy. The King personally gave her his regards when he spotted her in a crowd on his way to Sunday services.
Upon Napoleon's return from his exile in Elba, which opened the Hundred Days, she noted the contrast between the rapturous reception the Bourbons had received the previous year and the tepid welcome Napoleon found in France. Vigée Le Brun exhibited her staunch royalist sympathies in her memoirs, writing:
Without wishing to insult the memory of a great captain and many brave generals and soldiers who helped win such resounding victories, I would like nevertheless to ask where these victories led us, and whether we still own any of the land which cost us so dear? For my part, the bulletins from the Russian campaign both distressed and revolted me; one of the later ones spoke of the loss of thousands of French soldiers and added that the Emperor had never looked so well! We read this bulletin at the home of the Bellegarde ladies, and felt so angry that we threw it on to the fire. The fact that the people were tired of these interminable wars is easily attested by their lack of enthusiasm during the Hundred Days. More than once I saw Bonaparte appear at his window and then retire immediately, furious no doubt, for the acclamation of the crowd was limited to the shouts of a hundred or so boys, paid, I believe, as an act of derision to chant long live the Emperor! There is a sharp contrast between this indifference and the joyful enthusiasm which greeted the King on his entry into Paris on the 8th of July 1815; this joy was almost universal, for after the many misfortunes incurred by Bonaparte, Louis XVIII brought only peace.
Her Louveciennes home was once again looted in the Hundred Days, this time by British troops. Among the possessions lost during this incident was a lacquer box gifted to her by the Count Stroganoff during her stay at Saint Petersburg, which she had prized immensely.
Her estranged husband died in August 1813, in their old home built on the Rue de-Gros-Chenet. Though they had drifted apart for several years, she was nonetheless sorely affected by his death.
In 1819 she sold her portrait of Lady Hamilton as the Cumaean Sibyl to the Duc de Berri, despite it being her favorite, because she wished to satisfy the Duke. She also painted two portraits of the Duchesse de Berri, initially in the Tuileries, but then finishing their sittings in her home. In the same year, her daughter Julie died of syphilis, which devastated her. The next year, her brother Etienne died an alcoholic, leaving her niece Caroline as her principal heir. Her friends advised the grief-stricken artist to travel to Bordeaux to occupy her mind with something else. She traveled first to Orléans, where she resided in the Château de Méréville and was mesmerized by its elegance, beauty and architecture, designed in the English Garden style; she wrote that it "surpassed anything of its kind in England". She toured the city and sampled its architecture and landmarks, including the cathedral and the ruins surrounding the city. She then traveled to Blois where she visited the Château de Chambord, which she described as "a romantic, fairy tale place". She then visited the Château de Chanteloup, residence of the late Duc de Choiseul. Afterwards, she traveled to Tours, where the impure air forced her to quit the city after only two days. In Tours, she was received by the director of the academy, who offered to be her guide in the city. She also visited the ruins of the Marmoutier monastery. She then passed Poitiers and Angoulême on her way to Bordeaux. After arriving in Bordeaux, she stayed in the Fumel Hospice and was received there by the prefect, the Comte de Tournon-Simiane. She toured the countryside and visited the cemetery, which she praised for its sepulchral beauty and symmetrical layout; it became her second-favorite after the Père La Chaise cemetery of Paris. She also visited the synagogue of Bordeaux, styled after the temple of Solomon, and the ruins of the ancient Roman Gallien Arena. After spending a week in Bordeaux, she started back for Paris, greatly satisfied with her travels. During her journey, it was common for her to be mistaken for a noble lady owing to her expensive carriage; she later lamented in her memoirs that this often meant she had to pay more in the inns where she resided.
Her journey to Bordeaux was the last time she traveled extensively.
The artist formed an intimate friendship with Antoine-Jean Gros, whom she had known since he was seven years old, when she had painted his portrait and noticed an artistic inclination in the child. Upon her return to France she was surprised to find Gros had become a successful and famous painter, head of his own school of art. Gros was socially reclusive, and often brusque to others, but he formed a close bond with Vigée Le Brun, who wrote: "Gros was always a man of natural impulses. He was prone to feel the keenest sensations and would become equally passionate over a kind action or a beautiful work of art. He was ill at ease in society, rarely breaking the silence in a crowded place, but he listened attentively and replied with his gentle smile, or by a single word, always very apt. To appreciate Gros, one had to know him intimately. Then he would open up his heart, a kind and noble one at that; some people reproached him for having a certain brusqueness of tone, but this disappeared entirely in private. His conversation was even more fascinating because he never expressed himself in the same way as other men; always finding the most unusual and powerful images to convey a thought, you might almost say he painted with words."
She was greatly affected by his suicide in 1835; she had met him the day before and noted that he was brooding over criticism of one of his paintings.
She spent most of her time in Louveciennes, typically eight months of the year. She formed new friendships with people including the writer and man of letters M. de Briffaut, the playwright M. Despré, the writer M. Louis Aimé-Martin, the composer M. Désaugiers, the painter and antiquarian Comte de Forbin, and the famous painter Antoine-Jean Gros. She hosted and socialized with these people regularly in her countryside home or in Paris, as well as with her old friend the Princess Kourakin. She painted a Saint Geneviève for the local chapel, the saint's face being a posthumous portrait of Julie at the age of twelve; the Comtesse de Genlis graced this painting with two separate poems, one for the saint and the other for the painter. She spent her time with her nieces Caroline Rivière and Eugénie Tripier-Le Franc, whom she came to regard as her own children. She had tutored the latter in painting since childhood and was greatly pleased to see her blossom into a professional artist. Eugénie and Caroline would assist her in writing her memoirs late in her life. She died in Paris on 30 March 1842, aged 86. She was buried at the Cimetière de Louveciennes near her old home. Her tombstone epitaph says "Ici, enfin, je repose..." (Here, at last, I rest...).
During her lifetime, Vigée Le Brun's work was publicly exhibited in Paris at the Académie de Saint-Luc (1774), Salon de la Correspondance (1779, 1781, 1782, 1783) and Salon of the Académie in Paris (1783, 1785, 1787, 1789, 1791, 1798, 1802, 1817, 1824).
The first retrospective exhibition of Vigée Le Brun's work was held in 1982 at the Kimbell Art Museum in Fort Worth, Texas. The first major international retrospective exhibition of her art premiered at the Galeries nationales du Grand Palais in Paris (2015–2016) and was subsequently shown at the Metropolitan Museum of Art in New York City (2016) and the National Gallery of Canada in Ottawa (2016).
The 2014 docudrama made for French television, Le fabuleux destin d'Elisabeth Vigée Le Brun, directed by Arnaud Xainte, and starring Marlène Goulard and Julie Ravix as the young and old Elisabeth respectively, is available in English as The Fabulous Life of Elisabeth Vigée Le Brun.
In the episode "The Portrait" from the BBC series Let Them Eat Cake (1999) written by Peter Learmouth, starring Dawn French and Jennifer Saunders, Madame Vigée Le Brun (Maggie Steed) paints a portrait of the Comtesse de Vache (Jennifer Saunders) weeping over a dead canary.
Vigée Le Brun is one of only three characters in Joel Gross's Marie Antoinette: The Color of Flesh (premiered in 2007), a fictionalized historical drama about a love triangle set against the backdrop of the French Revolution.
Vigée Le Brun's portrait of Marie Antoinette is featured on the cover of the 2010 album Nobody's Daughter by Hole.
Élisabeth Vigée Le Brun is a dateable non-player character in the historically-based dating sim video game Ambition: A Minuet in Power published by Joy Manufacturing Co.
Singer-songwriter Kelly Chase released the song "Portrait of a Queen" in 2021 to accompany the History Detective Podcast, Season 2, Episode 3, Marie Antoinette's Portrait Artist: Vigée Le Brun. | [
{
"paragraph_id": 0,
"text": "Élisabeth Louise Vigée Le Brun (French: [elizabɛt lwiz viʒe lə bʁœ̃]; 16 April 1755 – 30 March 1842), also known as Louise Élisabeth Vigée Le Brun or simply as Madame Le Brun, was a French painter who mostly specialized in portrait painting, in the late 18th and early 19th centuries.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Her artistic style is generally considered part of the aftermath of Rococo with elements of an adopted Neoclassical style. Her subject matter and color palette can be classified as Rococo, but her style is aligned with the emergence of Neoclassicism. Vigée Le Brun created a name for herself in Ancien Régime society by serving as the portrait painter to Marie Antoinette. She enjoyed the patronage of European aristocrats, actors, and writers, and was elected to art academies in ten cities. Some famous contemporary artists, such as Joshua Reynolds, viewed her as one of the greatest portraitists of her time, comparing her with the old Dutch masters.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Vigée Le Brun created 660 portraits and 200 landscapes. In addition to many works in private collections, her paintings are owned by major museums, such as the Louvre in Paris, Hermitage Museum in Saint Petersburg, National Gallery in London, Metropolitan Museum of Art in New York, and many other collections in Europe and the United States. Her personal habitus was characterized by a high sensitivity to sound, sight and smell.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Between 1835 and 1837, when Vigée Le Brun was in her eighties, with the help of her nieces Caroline Rivière and Eugénie Tripier Le Franc[fr], she published her memoirs in three volumes (Souvenirs), some of which are in epistolary format. They also contain many pen portraits as well as advice for young portraitists.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Born in Paris on 16 April 1755, Élisabeth Louise Vigée was the daughter of Jeanne (née Maisin; 1728–1800), a hairdresser from a peasant background, and Louis Vigée, a portraitist, pastellist and member of the Académie de Saint-Luc, who mostly specialized in painting with oils. Élisabeth exhibited artistic inclinations from her childhood, making a sketch of a bearded man at the age of seven or eight; when he first saw her sketches her father was jubilant and exclaimed that \"You will be a painter my child, if there ever was one\", and started to give her lessons in art In 1760, at the age of five, she had entered a convent, where she remained until 1766. She then worked as an assistant to her father's friend, the painter and poet Pierre Davesne, with whom she learned more about painting. Her father died when she was 12 years old, from infections after several surgical operations. In 1768, her mother married a wealthy but mean jeweller, Jacques-François Le Sèvre, and shortly after, the family moved to the Rue Saint-Honoré, close to the Palais Royal. In her memoir, Vigée Le Brun directly stated her feelings about her stepfather: \"I hated this man; even more so since he made use of my father's personal possessions. He wore his clothes, just as they were, without altering them to fit his figure.\" During this period, Élisabeth benefited from the advice of Gabriel François Doyen, Jean-Baptiste Greuze, and Joseph Vernet, whose influence is evident in her portrait of her younger brother, playwright and poet Étienne Vigée. After her father's death, her mother sought to raise her spirits by taking her to the Palais de Luxembourg's art gallery; seeing the works of Peter Paul Rubens and other old masters left a great impression on her. She also visited numerous private galleries, including those of Rendon de Boisset, the Duc de Praslin[fr], and the Marquis de Levis; the artist took notes and copied the works of old masters such as Van Dyke, Rubens and Rembrandt to improve her art.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "At an early age, she reversed the order of her given names, and was known among her inner circle as 'Louise'. For most of her life, she signed her paintings, documents and letters as \"Louise élisabeth Vigee Le Brun\", although she acknowledged later in life that the correct baptismal order would be Élisabeth Louise.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "By the time she was in her early teens, Élisabeth was painting portraits professionally. She greatly disliked the contemporary High Rococo fashion, and often solicited her sitters to allow her to alter their apparel. Inspired by Raphael and Domenichino, she often draped her subjects in shawls and long scarves; these styles would later become ubiquitous in her portraiture. After her studio was seized for her practicing without a license, she applied to the Académie de Saint-Luc, which unwittingly exhibited her works in its Salon. In 1774, she was made a member of the Académie. Her studio's reputation saw a meteoric rise, and her renown spread outside France. By 1774, she had painted portraits which included those of the Comte Orloff, Comte Pierre Chouvaloff[ru] (one of Empress Elizabeth's favorites), the Comtesse de Brionne[fr], the Duchess of Orléans (future mother of King Louis Philippe), the Marquis de Choiseul, and the Chancellor de Aguesseau, among many others. In 1776, she received her first royal commission, to paint the portrait of the Comte de Provence (the future King Louis XVIII).",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "After her stepfather retired from his business, he moved his family to the Hôtel de Lubert in Paris where she met Jean-Baptiste-Pierre Le Brun, a painter, art dealer and relation of the painter Charles Le Brun, on the Rue de Cléry where they lodged. Élisabeth visited M. Le Brun's apartments frequently to view his private collection of paintings, which included examples from many different schools. He agreed to her request to borrow some of the paintings in order to copy them and improve her skills, which she saw as one of the greatest boons of artistic instruction she had received. After residing in the Hôtel de Lubert for six months, M. Le Brun asked for the artist's hand in marriage. Élisabeth was in a dilemma as to whether to agree or refuse the offer; she had a steady source of income from her rising career as an artist and her future was secure; as such, she wrote, she had never contemplated marriage. On her mother's urging and goaded by her desire to be separated from her stepfather's worsening temperament, Élisabeth agreed, though her doubts were such that she was still hesitant on her wedding day on 11 January 1776; she was twenty years old. The wedding took place in great privacy in the Saint-Eustache church, with only two banns being read, and was kept secret for some time at the request of her husband, who was officially engaged to another woman at the time in an attempt to secure a lucrative art deal with a Dutch art dealer. Élisabeth acceded to his request as she was reluctant to give up her now famous maiden name. In 1778, she and her husband contracted to purchase the Hôtel de Lubert. In this same year she became the official painter to the Queen.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "During the two weeks after the wedding had taken place in secret, the artist was visited by a stream of people giving her ominous news regarding her husband, these people believing that she had still not agreed to his proposal. These visitors started with the court jeweller, followed by the Duchesse de Arenberg and Mme. de Souza, the Portuguese ambassadress, who passed stories of M. Le Brun's habits as a spendthrift and womanizer. Élisabeth would later regret this match as she found these rumors to be true, though she wrote that in spite of his faults he was still an agreeable and obliging man with a sweet nature. However, she frequently condemned his gambling and adulterous habits in her memoirs, as these left her in a financially critical position at the time of her flight from France. Her relationship with him deteriorated later so much that she demanded the refund of her dowry from M. Le Brun in 1802. Vigée Le Brun began exhibiting her work at their home, and the salons she held there supplied her with many new and important contacts. Her husband's great-great-uncle was Charles Le Brun, the first director of the French Academy under Louis XIV. Her husband appropriated most of her income and pressed her to also take on the role of a private tutor to increase his income from her. The artist found tutoring to be frustrating due to her inability to assert authority over her pupils, most of whom were older than her, and found the distraction from her work irritating; she renounced tutoring soon after she had begun.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "After two years of marriage, Vigée Le Brun became pregnant, and on 12 February 1780, she gave birth to a daughter, Jeanne Lucie Louise, whom she called Julie and nicknamed \"Brunette\". In 1784, she gave birth to a second child who died in infancy.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "In 1781, she and her husband toured Flanders, Brussels and the Netherlands, where seeing the works of the Flemish masters inspired her to try new techniques. Her Self-portrait in a Straw Hat (1782) was a \"free imitation\" of Rubens's Le Chapeau de Paille. Dutch and Flemish influences have also been noted in The Comte d'Espagnac (1786) and Madame Perregaux (1789). It was also in Brussels that she met a longtime friend, the Prince de Ligne.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "In yet another of the series of scandals that marked her early career, her 1785 portrait of Louis XVI's minister of finance, M. de Calonne, was the target of a public scandal after it was exhibited in the Salon of 1785. Rumors circulated that the minister had paid the artist a very large sum of money, while other rumors circulated that she had had an affair with de Calonne. The famous Paris Opera soprano Mlle. de de Arnould commented on the portrait \"Madame Le Brun had cut off his legs so he could not escape\". More rumors and scandals followed soon after as, to the painter's dismay, M. Le Brun began building a mansion on the Rue de-Gros-Chenet, with the public claiming that de Calonne was financing the new home - although her husband did not finish constructing the house until 1801, shortly before her return to France after her long exile. She was also rumored to have had another affair, with the Comte de Vaudreuil, who was one of her most devoted patrons. Their correspondence published later strongly affirmed the status of this affair. These rumors spiraled into an extensive defamation campaign targeting the painter throughout 1785.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "In 1787, she caused a minor public scandal when her Self-portrait with her Daughter Julie was exhibited at that year's Salon showing her smiling and open-mouthed, which was in direct contravention of traditional painting conventions going back to antiquity. The court gossip-sheet Mémoires secrets commented: \"An affectation which artists, art-lovers and persons of taste have been united in condemning, and which finds no precedent among the Ancients, is that in smiling, [Madame Vigée LeBrun] shows her teeth.\" In light of this and her other Self-portrait with her Daughter Julie (1789), Simone de Beauvoir dismissed Vigée Le Brun as narcissistic in The Second Sex (1949): \"Madame Vigée-Lebrun never wearied of putting her smiling maternity on her canvases.\"",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "In 1788, Vigée Le Brun was impressed with the faces of the Mysorean ambassadors of Tipu-Sultan, and solicited their approval to take their portraits. The ambassador responded by saying he would only agree if the request came from the King, which Vigée Le Brun procured, and she proceeded to paint the portrait of Dervish Khan, followed by a group portrait of the ambassador and his son. After finishing the portraits and leaving them with the ambassadors to dry, Vigée Le Brun sought their return in order to exhibit them in the Salon; one of the ambassadors refused the request, stating that a painting \"needs a soul\", and hid the paintings behind his bed. Vigée Le Brun managed to secure the portraits through the ambassador's valet, which enraged the ambassador to the point that he wished to kill his valet, but he was dissuaded from doing so as \"it was not custom in Paris to kill one's valet\". She falsely convinced the ambassador that the King wanted the portraits, and they were exhibited in the Salon of 1789. Unknown to the artist, these ambassadors were later executed upon their return to Mysore for failing in their mission to forge a military alliance with Louis XVI. After her husband's death, the paintings were sold along with the remnants of his estate, and Vigée Le Brun did not know who possessed them at the time she wrote her memoirs.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "As her career blossomed, Vigée Le Brun was granted patronage by Marie Antoinette. She painted more than 30 portraits of the Queen and her family, leading to the common perception that she was the official portraitist of Marie Antoinette. At the Salon of 1783, Vigée Le Brun exhibited Marie-Antoinette in a Muslin Dress (1783), sometimes called Marie-Antoinette en gaulle, in which the Queen chose to be shown in a simple, informal cotton muslin dress, worn as an undergarment. The resulting scandal was prompted by both the informality of the attire and the Queen's decision to be shown in that way. Vigée Le Brun immediately had the portrait removed from the Salon and quickly repainted it, this time with the Queen in more formal attire. After this scandal, the prices of Vigée Le Brun's paintings soared.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "Vigée Le Brun's later Marie Antoinette and her Children (1787) was evidently an attempt to improve the Queen's image by making her more relatable to the public, in the hopes of countering the bad press and negative judgments that Marie Antoinette had recently received. The portrait shows the Queen at home in the Palace of Versailles, engaged in her official function as the mother of the King's children, but also suggests Marie Antoinette's uneasy identity as a foreign-born queen whose maternal role was her only true function under Salic law. The child, Louis Joseph, on the right is pointing to an empty cradle, which signified the Queen's recent loss of a child, further emphasizing Marie Antoinette's role as a mother. Vigée Le Brun was initially afraid of displaying this portrait due to the Queen's unpopularity and fear of another negative reaction to it, to such a degree that she locked herself in at home and prayed incessantly for its success. However, she was soon greatly pleased at the positive reception for this group portrait, which was presented to the King by M. de Angevilliers, Louis XVI's minister of arts. Vigée Le Brun herself was also presented to the King, who praised the painting and told her \"I know nothing about painting, but I grow to love it through you\". The portrait was hung in the halls of Versailles, so that Marie Antoinette passed it on her way to mass, but it was taken down after the Dauphin's death in 1789.",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "Later on, during the First Empire, she painted a posthumous portrait of the Queen ascending to heaven with two angels, alluding to the two children she had lost, and Louis XVI seated on two clouds. This painting was titled The Apotheosis of the Queen. It was displayed in the chapel of the Infirmerie Marie-Thérèse, rue Denfert-Rochereau, but vanished at some point in the 20th century. She also painted numerous other posthumous portraits of the Queen, and of King Louis XVI.",
"title": "Biography"
},
{
"paragraph_id": 17,
"text": "On 31 May 1783, Vigée Le Brun was received as a member of the Académie royale de peinture et de sculpture. She was one of only 15 women to be granted full membership in the Académie between 1648 and 1793. Her rival, Adélaïde Labille-Guiard, was admitted on the same day. Vigée Le Brun was initially refused on the grounds that her husband was an art dealer, but eventually the Académie was overruled by an order from Louis XVI because Marie Antoinette put considerable pressure on the King on behalf of her portraitist. As her reception piece, Vigée Le Brun submitted an allegorical painting, Peace Bringing Back Abundance (La Paix ramenant l'Abondance), instead of a portrait, even though she was not asked for a reception piece. As a consequence, the Académie did not place her work within a standard category of painting—either history or portraiture. Vigée Le Brun's membership in the Académie was dissolved after the French Revolution because the category of female academicians was abolished.",
"title": "Biography"
},
{
"paragraph_id": 18,
"text": "Vigée Le Brun witnessed many of the events that accelerated the already rapid deterioration of the Ancien Régime. While travelling to Romainville to visit the Maréchal de Ségur in July 1788, the artist experienced the massive hailstorm that swept the country, and observed the resultant devastation of crops. As the turmoil of the French Revolution grew, the artist's house on the Rue de-Gros-Chenet was harassed by Sans-culottes due to her association with Marie Antoinette. Stricken with an intense anxiety, Vigée Le Brun's health deteriorated. M. and Mme. Brongniart pleaded with her to live with them to convalesce and recover her health, to which she agreed and spent several days in their apartment at Les Invalides. Later in her life, in a letter to the Princess Kourakin, the artist wrote:",
"title": "Biography"
},
{
"paragraph_id": 19,
"text": "Society seemed to be in a state of complete chaos, and honest people were left to fend for themselves, for the National Guard was made up of a strange crew, a mixture of bizarre and even frightening types. Everyone seemed to be suffering from fear; I grieved for the pregnant women who passed; the faces of most of them were sallow with worry. I noticed besides that the generation born during the Revolution was, in general, a lot less healthy than the previous one; indeed most of the children born in this sad time were weak and suffering!",
"title": "Biography"
},
{
"paragraph_id": 20,
"text": "As the situation in Paris and France continued to deteriorate with the rising tide of the revolution, the artist decided to leave Paris, and obtained passports for herself, her daughter and their governess. The very next day a large band of national guards entered her house and ordered her not to leave or else face punishment. Two sympathetic national guards from her neighborhood later returned to her house, and advised her to leave the city as fast as possible, but to take the stagecoach instead of her carriage. Vigée Le Brun then ordered three places on the stagecoach out of Paris, but had to wait two weeks to obtain seats as there were many people departing the city. Vigée Le Brun visited her mother before leaving. On 5 October 1789, the King and Queen were driven from Versailles to the Tuilleries by a large crowd of Parisians – mostly women. Vigée Le Brun's stagecoach departed at midnight of the same day, with her brother and husband accompanying them to the Barrière du Trône. She, her daughter and governess dressed shabbily to avoid attracting attention. Vigée Le Brun travelled to Lyon where she stayed for three days with acquaintances (Mme. and M. de Artaut), where she was barely recognized due to her changed features and shabby clothes, and then continued her journey across the Beauvoisin bridge, she was relieved to be finally out of France, although throughout her journey she was accompanied by Jacobin spies who tracked her movement. Her husband, who remained in Paris, claimed that Vigée Le Brun went to Italy \"to instruct and improve herself\", but she feared for her own safety. In her 12-year absence from France, she lived and worked in Italy (1789–1792), Austria (1792–1795), Russia (1795–1801) and Germany (1801), and remained a committed royalist throughout her life.",
"title": "Biography"
},
{
"paragraph_id": 21,
"text": "The artist arrived in Turin after crossing the Savoyard alps. In Turin she met the famous engraver Porporati, who was now a professor in the city's academy. Porporati and his daughter received the artist for five or six days until she resumed her journey southwards to Parma, where she met the Comte de Flavigny[fr] (then minister plenipotentiary of Louis XVI) who generously accommodated her during her stay there. While staying in Parma, she sought out churches and galleries that possessed works of the old master Correggio, whose painting The Manger, or Nativity had captivated her when she first saw it in the Louvre. She visited the church of San Giovanni to observe the ceilings and alcoves painting by Correggio, and then the church of San Antonio. She also visited the library of Parma where she found ancient artifacts and sculptures. The Comte de Flavigny then introduced Vigée Le Brun to Marie Antoinette's older sister, the bereaved Infanta and Duchess of Parma, Maria Amalia, while she was in mourning for her recently deceased brother Emperor Joseph II. The artist regarded her as lacking in Marie Antoinette's beauty and grace, and being as pallid as a ghost, and criticized her way of life as being \"like that of a man\", although she praised the warm welcome the Infanta had given her. Vigée Le Brun did not stay long in Parma, wishing to cross the mountains southwards before the seasons changed. De Flavigny postponed Vigée Le Brun's departure from Parma by two days so that she and her daughter could be escorted by one of his trusted men, the Vicomte de Lespignière, whose carriage accompanied her all the way to Rome.",
"title": "Biography"
},
{
"paragraph_id": 22,
"text": "She first arrived in Modena, where she visited the local Palazzo, and saw several old master paintings by Raphael, Romano and Titian. She also visited the library and the theater there. From Modena, she departed for Bologna. The journey over the mountains was tortuous enough that she walked part of the way, and arrived in Bologna very tired. She wished to stay there at least one week to visit the local galleries and the Bologna arts school, which hosted some of the finest collections of old master paintings, but the innkeeper where she was residing had noticed her unloading her luggage, and informed her that her efforts were in vain, as French citizens were \"allowed to reside in that city for only one night\". Vigée Le Brun despaired at this news, and was fearful when a man clad in black arrived at the inn whom she recognized as a papal messenger, and assumed he was delivering an order to leave within the next twenty-four hours, She was surprised and elated when she realized that the missive he carried was permission for her to stay in Bologna as long as she pleased. At this juncture, Vigée Le Brun became aware that the Papal government was informed of all French travelers who entered Italy.",
"title": "Biography"
},
{
"paragraph_id": 23,
"text": "She visited the church of Sant'Agnese, of which she wrote:",
"title": "Biography"
},
{
"paragraph_id": 24,
"text": "I went immediately to the church of Sant'Agnese, where this saint's martyrdom is represented in a painting by Domenichino. The youth and innocence of Saint Agnes is so well captured on her beautiful face and the features of the torturer striking her with his sword form such a cruel contrast to her divine nature, that I was overwhelmed with pious admiration. As I knelt before the masterpiece, someone played the overture to Iphigenia on the organ. The involuntary link that I made between the young pagan victim of that story and the young Christian victim, the memory of the peaceful, happy time when I had last listened to that piece of music, and the sad thought of all the evils pressing upon my unhappy country, weighed down my heart to the point where I began to cry bitterly and to pray to God on behalf of France. Fortunately I was alone in the church and I was able to remain there for some time, giving vent to those painful emotions which took control of my soul.",
"title": "Biography"
},
{
"paragraph_id": 25,
"text": "She then visited several Palazzi, where she viewed some of the finest examples of the Bologna art school. She also visited the Palazzo Caprara, the Palazzo Bonfigliola and the Palazzo Sampierei, perusing arts and paintings by many old masters. Within three days of her arrival in Bologna, on 3 November 1789, she was received as a member of the academy and the institute of Bologna, with the academy director M. Bequetti personally delivering the letters of admission to her.",
"title": "Biography"
},
{
"paragraph_id": 26,
"text": "Soon after, she crossed the Apennines and arrived in the Tuscan countryside, and from there to Florence. The artist was initially disappointed with its position at the bottom of a wide valley, having preference for elevated views, but was soon charmed by the city's beauty. She lodged herself in a hotel recommended to her.",
"title": "Biography"
},
{
"paragraph_id": 27,
"text": "While in Florence, she visited the famous Medici gallery, where she saw the widely-celebrated and famous Venus de' Medici and the room of the Niobids. She then visited the Pitti palace where she was enamored of several paintings by old masters, including Raphael's Madonna della Sedia, Titian's portrait of Paul III, Rembrandt's Portrait of a Philosopher, Carracci's The Holy Family and many others. She then visited the town's most beautiful landmarks, including the Florence Baptistery, where she saw the Gates of Paradise by Ghiberti, the Church of San Lorenzo, and Michelangelo's mausoleum at the Santa Croce. She also visited the Santissima Annunziata, where she entered the cloister and was enthralled by Andrea del Sarto's Madonna del Sacco, comparing it to Raphael's paintings, but also lamented the state of neglect of the lunettes. She also visited the Palazzo Altoviti, where she saw the self-portrait of Raphael, praising his countenance and expression as that of a \"man who was obviously a keen observer of life\", but also stated that the painting's protective glass had made its shadows darker. She then visited the Medici library, and later a gallery containing numerous self-portraits by famous artists, where she was asked to present her own self-portrait to the collection, promised to do so as soon as she reached Rome. During her stay in Florence, Vigée Le Brun made the acquaintance of another French lady, the Marquise de Venturi, who took her on excursions along the Arno. She soon left Florence and departed for Rome, arriving there in late November 1789.",
"title": "Biography"
},
{
"paragraph_id": 28,
"text": "As she arrived in Rome, she was surprised by how filthy the famous Tiber was. She headed to the French Academy in the Via del Corso where the director of the academy, M. de Ménageot, went down to receive her. She requested lodging of him, and he quickly furnished her, her daughter and her governess a nearby apartment. He took her to see Saint Peter's on the very same day, where she was underwhelmed by its size; not matching the lavish descriptions she had heard of it, although its vastness became apparent to her upon walking around the structure. She stated to de Ménageot that she would have preferred for it to be supported by columns instead of enormous pillars, to which he replied that it was originally planned as such but it was found not feasible, later showing her some of the original plans for the Basilica.",
"title": "Biography"
},
{
"paragraph_id": 29,
"text": "She climbed the Sistine chapel later on to see Raphael's much criticized The Last Supper, for which she expressed great praise, writing in a letter to the painter Robert:",
"title": "Biography"
},
{
"paragraph_id": 30,
"text": "I also climbed the steps to the Sistine chapel, to admire the dome with a fresco by Michelangelo as well as his painting of The Last Supper. Despite all the criticisms of this painting, I thought it a masterpiece of the first order for the expression and the boldness of the foreshortened figures. There is a sublime quality in both the composition and in the execution. As for the general air of chaos, I believe it to be totally justified by the subject matter.",
"title": "Biography"
},
{
"paragraph_id": 31,
"text": "On the next day, she visited the Vatican museum; of her visit, she wrote to Robert:",
"title": "Biography"
},
{
"paragraph_id": 32,
"text": "The following day I went to the Vatican Meusum. There is really nothing to compare with the classical masterpieces either in shape, style or execution. The Greeks, in particular, created a complete and perfect unison between truth and beauty. Looking at their work, there is no doubt that they possessed exceptional models, or that the men and women of Greece discovered an ideal of beauty long, long ago. As yet I have made only a superficial study of the museum's contents, but the Apollo, The Dying Gladiator , The Laocoon, the magnificent altars, the splendid candelabras, indeed all the beautiful things that I saw have left a permanent impression on my memory.",
"title": "Biography"
},
{
"paragraph_id": 33,
"text": "On the same day, she was summoned by the members of the Academy of Painting, including Girodet: they presented her with the palette of the greatly talented deceased painter Jean Germain Drouais, In exchange, they asked her for her own palette, which she obliged. She later visited the Flavian Amphitheater, where she saw the cross placed on one of its high points by Robert. While in Rome, she was very keen to seek out the famous female painter Angelica Kaufmann, with whom she spent two evenings. Kaufmann showed Vigée Le Brun her gallery and sketches, and they engaged in long conversations. Vigée Le Brun praised her wit and intellect, although Vigée Le Brun found little inspirations in these evenings, citing Kaufmann's lack of enthusiasm and Vigée Le Brun's own dearth of knowledge. For the first three days of her stay in Rome, she visited the home of Cardinal Bernis, who was a gracious host to her.",
"title": "Biography"
},
{
"paragraph_id": 34,
"text": "Vigée Le Brun was very sensitive to sound while sleeping; this was a lifetime burden for her, and when traveling to new locations or cities, frequent moving of lodgings was customary until she found a suitably quiet residence. Due to the racket of coachmen and horses near her apartment in the French Academy and the nightly music of the Calabrians to a nearby Madonna, she searched for other lodgings, which she found in the home of the painter Simon Denis in the Piazza di Spagna, but soon afterwards left this apartment due to the nightly habits of young men and women of singing in the streets at the night. She departed and found a third home, which she carefully scrutinized, then paid one month's worth of rent in advance. On her first night there, she was awoken by a loud noise behind her bed caused by water being pumped through pipes to wash the laundry; a nightly occurrence. She quickly left this home as well to continue her search for quiet lodgings. After a painstaking search, she found a private mansion where she was told she might be able to rent an apartment. She lodged herself there but found it completely unsavory due to the filthiness of its rooms, its poor insulation and a rat infestation in the wooden paneling. Finding herself at her wit's end, she was forced to stay there for six weeks before seeking a new home suitable to her needs. She eventually found a house which seemed perfect, but she refused to pay rent until she had spent a night there; she was immediately woken up by noise caused by a worm infestation in the joists of her room. She left this house as well, later writing; \"regretfully I had to abandon the idea of living there. No-one, I am sure, could have changed lodgings as often as I did during my various visits to the capital; I remain convinced that the most difficult thing to find in Rome is a place to live.\"",
"title": "Biography"
},
{
"paragraph_id": 35,
"text": "Soon after her arrival in Rome, she dispatched the promised self-portrait to Florence. In this portrait, she depicted herself in the act of painting, with the Queen's face on her canvas. She made numerous copies of this portrait later on. The Rome Academy also requested her self-portrait, which she presented them with. She attended the pope's blessing, delivered by Pope Pius IV during Easter Day in Saint Peter's, while in Rome. Vigée Le Brun found his features stunning, describing them as \"not showing any signs of age\".",
"title": "Biography"
},
{
"paragraph_id": 36,
"text": "She worked hard during her three-year residency in Rome, painting numerous subjects including Miss Pitt, Lord Bristol, Countess Potocka, Lady Hamilton, Mme. Roland and many others. She toured Rome's landmarks extensively, visiting the San Pietro in Vincoli, the San Lorenzo Fuori le Mura, the St. John Lateran, and the San Paolo la Fuori le Mura, which she found to be, architecturally, the most beautiful church in Rome. She also visited the Santa Maria della Vittoria, where she saw Bernini's notorious Ecstasy of Saint Teresa, writing of it \"...whose scandalous expression defies description\".",
"title": "Biography"
},
{
"paragraph_id": 37,
"text": "Apart from her fellow female artist Kaufmann, Vigée Le Brun found company in the Duchesse de Fleury, with whom she became close friends. She also found herself in the social circles of exiled French aristocracy who came to Rome, embedding herself there like most exiled French had done, instead of congregating with Italian aristocracy. She spent many evenings hosted by de Ménageot or the Prince Camille de Rohan, ambassador to Malta., who hosted many other exiled French aristocrats. Many of these she attended with her close friend the Duchesse de Fleury, on whom she greatly doted. She soon found one of her oldest friends, M. d'Agincourt, who had lent her art pieces from his gallery to copy when she was very young. She had last met him fourteen years previously in Paris, before he departed from there. She also met the Abbe Maury before he became Cardinal, who informed her that the pope wished her to paint his portrait. She was greatly flattered by the offer, but politely declined; fearing that she would fumble the portrait as she would be forced to wear a veil while painting the pope. Soon afterwards she was taken by de Ménageot, along with the painter Denis, for an excursion to Tivoli. There she visited the Temple of the Sibyl, and then Neptune's Grotto. De Ménageot also took her to see the Villa Aldobrandini, and the ancient ruins of the Roman town of Tusculum, which \"evoked many sad thoughts\". The entourage continued to Monte Cavo, seeking out the Temple of Jupiter built there. She visited numerous villas, including the Villa Conti, the Villa Palavicina and the ruins of Hadrian's Villa. She also made frequent excursions to the summit of Monte Mario to enjoy the view it offered of the Apennines, and visited the Villa Mellini there. In the summer months, she and the Duchesse de Fleury rented an apartment in the home of the painter Carlo Maratta in the Genazzano countryside. She and the Duchess toured the countryside there regularly, visiting Nemi and Albano among others. One of these excursions around Ariccia caused an incident in which she and the Duchess fled for their lives from what they suspected was a rogue following them, of which she wrote \"I have never discovered whether the man who caused our exhaustion was a real villain or the most innocent man in the world\".",
"title": "Biography"
},
{
"paragraph_id": 38,
"text": "After a residency of eight months in Rome, the painter planned to follow most of French polite society as it moved to Naples. She informed Cardinal Bernis, who approved of her decision to go, but told her to not travel alone; to that end, he referred her to M. Duvivier, the husband of Mme. Mignot, widow of the painter Denis and Voltaire's niece. She traveled in his spacious carriage to Naples, stopping at an inn in Terracina on the way. As she arrived in Naples she was captivated by the view of the city, the distant plumes of smoke from Mount Vesuvius, the rolling hills of the countryside, and its citizens, writing \"...even the people, so lively, so boisterous, so different from the people of Rome, that one would think a thousand leagues lay between the two cities\". Her first residency in Naples lasted for six months, although originally planned to be six weeks.",
"title": "Biography"
},
{
"paragraph_id": 39,
"text": "She initially lodged in Chiaia, in the Hotel de Maroc. Her neighbor, the ailing Count Scavronsky, Russian Minister Plenipotentiary to Naples, sent a missive to inquire of her shortly after her arrival, and sent her a lavish dinner. She visited him and his wife, Countess Catherine Skavronskaïa, the same night in their mansion, where she found amiable company with the couple, who invited her again on many evenings. The Count made Vigée Le Brun promise to paint his wife before anyone else in Naples, and she set to painting her portrait two days after her arrival. Soon afterwards, Sir William Hamilton, the English envoy extraordinary to the Kingdom of Naples, visited Vigée Le Brun while the Countess was sitting for her, requesting that the artist paint his mistress, Emma Hart, as her first portrait in the city, it being unknown to him that she had already promised Count Scavronsky that she would paint his wife. She later painted Emma Hart as a bacchante, and was captivated by her beauty and long chestnut hair. Sir William also commissioned a portrait of himself, which she completed later. The artist noticed that Sir William had a mercantile inclination towards art, frequently selling paintings and portraits he commissioned for profit. On her future visit to England, she found that he had sold her portrait of him for 300 guineas. She also met Lord Bristol again and painted a second portrait of him. While in Naples, she also painted portraits of the Queen of Naples, Maria Carolina of Austria (sister of Marie Antoinette) and her four eldest living children: Maria Teresa, Francesco, Luisa and Maria Cristina. She later recalled that Luisa \"was extremely ugly, and pulled such faces that I was most reluctant to finish her portrait.\"",
"title": "Biography"
},
{
"paragraph_id": 40,
"text": "She visited the French ambassador to Naples, the Baron de Talleyrand, and while being hosted by him she met Mme. Silva, a Portuguese woman. Vigée Le Brun then decided to visit the island of Capri to see the palatial Roman ruins there. Her entourage included Mme. Silva, the Comte de la Roche-Aymon[fr] and the young son of Baron de Talleyrand. The voyage to the island was turbulent due to rough waters. Soon afterwards, she made multiple trips to the summit of Vesuvius. Her entourage included Mme. Silva and Abbé Bertrand on the first journey, which was hampered by severe rain. On the next day, with clear weather, she climbed the volcano again, with M. de la Chesnaye joining. The party observed the erupting volcano, with plumes of smoke and ash rising from it.",
"title": "Biography"
},
{
"paragraph_id": 41,
"text": "Of her visit to Mt. Vesuvius, she wrote in a letter to the architect Brongniart:",
"title": "Biography"
},
{
"paragraph_id": 42,
"text": "We also went up to the mountain refuge. The sun set and we watched its rays disappear behind the islands of Ischia and Procida: what a view! Eventually night fell, and the smoke turned into flames, the most magnificent I have ever seen in my life. Great jets of fire shot up from the craters in quick succession, throwing red hot rocks noisily on all sides. At the same time a cascade of fire ran down front the summit, covering an area of four to five miles. Another lower mouth of the volcano was also alight; this crater churned out a red and gold smoke, rounding off the frightening but wonderful spectacle. The thunderous noise that seemed to come from deep inside the volcano echoed around us, and the ground shook beneath our feet. I was quite frightened, but tried to hide my fear for the sake of my poor little daughter who was crying, `Maman, should I be afraid?'. But there was so much to admire that I soon forgot my fear. Imagine looking down over countless furnaces, whole fields swallowed by the blaze that followed in the wake of the lava. I saw bushes, trees, vines, consumed by this terrible rolling fire; I saw the fire rise up and die out, and I heard it eat its way through the surrounding undergrowth. This powerful scene of destruction is both painful and impressive, and stirs deep feelings within one's soul; I could not speak for a while on my return to Naples; on the road, I kept turning around to see the sparks and that river of fire once more. I was sad to leave such a spectacle; but I have the memory still, and every day I think on different aspects of what I saw. I have four drawings which I shall bring to Paris to show you. Two have already been mounted; we are very happy here.",
"title": "Biography"
},
{
"paragraph_id": 43,
"text": "She returned to the volcano several times, visiting it with the painter Lethière, former director of the French Academy of painting in Rome. Soon afterwards, she was invited by Sir William to visit the Islands of Ischia and Procida. This voyage included his mistress Emma Hart and her mother. Vigée Le Brun was instantly mesmerized by the island and its inhabitants, writing of its women \"I was instantly struck by the beauty of the women we encountered on the road. They were nearly all tall and statuesque, their costume as well as their build reminding me of the ancient women of Greece\".",
"title": "Biography"
},
{
"paragraph_id": 44,
"text": "The party departed from Procida on the same way, bound for Ischia. They arrived there in the late evening. On the next day, they were taken by General Baron de Salis with a party of twenty to visit the summit of Monte San Nicola. The journey was perilous, and Vigée Le Brun was separated from the party due to dense fog, but soon afterwards found her way to the refuge at the summit of the mountain. After returning to Naples, the artist visited the ancient ruins of Paestum, Herculaneum, Pompeii and the museum at Portici. Shortly before the new year, she moved to another home due to problems with her previous residence. It was there that she also met the famous composer Paëisiello and painted his portrait while he was in the process of composition. She frequented Mt. Posillipo during her stay in Naples, including the ancient ruins there and Virgil's grave, and it became one of her favorite landmarks.",
"title": "Biography"
},
{
"paragraph_id": 45,
"text": "She returned to Rome afterwards, just in time to find the Queen of Naples arriving from her visit to Austria. The Queen espied the artist in a large crowd, went to her and impressed her to return to Naples to paint her portrait; Le Brun agreed to the prospect. Upon her return to Naples, she was taken by Sir William to the widely popular local festival of Madonna di Piedigrotta, the festival of Madonna dell'Arco. She also visited the Solfatara volcano with M. Amaury Duval and Sacaut. While in Naples, the artist was also fascinated by the local culture of the Lazzaroni.",
"title": "Biography"
},
{
"paragraph_id": 46,
"text": "Upon finishing her portrait of the Queen, she was offered her summerhouse near the coast to entice her to spend more time in Naples, but Le Brun insisted on leaving. Before departing, the Queen gave her a luxurious lacquered box containing her monogram surrounded by fine gems. She returned to Rome once again, undertaking many commissions there, including those of Louis XVI's aunts, mesdames Victoire and Adélaïde. She left Rome on 14 April 1792 for Venice, writing later that she wept bitterly as she left Rome, having grown very attached to that city. She was accompanied by M. Auguste Rivière, occasional diplomat and painter and the brother of Le Brun's sister-in-law. He would be the artist's travelling companion for 9 years, often copying her portraits. Le Brun spent the first night on the road at Civita Castellana, then continued her journey through precipitous and craggy roads, describing the landscape there as gloomy and 'the saddest in the world'. She then arrived in Narni, where she was charmed by the countryside. From there she continued on to Terni where she toured the countryside and hiked up local mountains. She resumed her journey over Monte Somma across the Apennines then to Spoleto. In this town, she witnessed Raphael's partially completed Adoration of the Magi, from which she gained valuable information on his painting techniques, observing that he painted hands and faces first, and experimented frequently with different tints during the early drafting process. While in Spoleto she also visited the Temple of Concord in the mountains, and the ruins of the ancient town there. She continued to Venice, passing Trevi, Cetri and Foligno. In the latter town she found Raphael's Madonna di Foligno, which gained the complete admiration of Le Brun. She continued to Perugia, passing by Lake Trasimene and hen on to Lise, Combuccia, Arezzo, Levana and Pietre-Fonte, finally arriving in Florence, where she had resided for a short while after her flight from France.",
"title": "Biography"
},
{
"paragraph_id": 47,
"text": "Upon her arrival in Florence, she had a memorable meeting with the Abbé Fontana, then a renowned anatomist. Fontana showed Le Brun his study, filled with wax figures of human organs. The intricacy of the details on some of the replicas had made the artist feel that only divine power could have created the human body. Fontana then showed Le Brun a life-sized figure of a human female, with an exposed cutaway of the intestines. Vigée Le Brun was nearly sick at this sight, and was haunted by it for a long time, later writing to Fontana for advice on relieving herself from the stress and consequences of having seen the internal anatomy of the human body, to which he replied to her; \"That which you describe as a weakness and a misfortune, is in fact the source of your strength and talent; moreover, if you wish to diminish the inconvenience caused by this sensitivity, then stop painting\".",
"title": "Biography"
},
{
"paragraph_id": 48,
"text": "After departing Florence she travelled to Siena where she remained for a few days, excursing frequently in its countryside and visiting local churches and galleries. From Siena she left for Parma, where she was welcomed as a member of the Academy of Fine Arts of Parma, and donated a portrait of her daughter. During her stay there, she was visited by a small group of art students from the academy who wished to acquaint themselves with her work;",
"title": "Biography"
},
{
"paragraph_id": 49,
"text": "I was told that there were seven or eight art students downstairs who wished to see me. They were ushered into the room where I had placed my Sibyl and a few minutes later I went to receive them. Having spoken of their desire to meet me, they continued by saying that they would very much like to see one of my paintings. 'Here is one I have recently completed,' I replied, pointing to the Sibyl. At first their surprise held them silent: I considered this far more flattering than the most fulsome praise: several then said that they had thought the painting the work of one of the masters of their school: one actually threw himself at my feet, his eyes full of tears. I was even more moved, even more delighted with their admiration since the Sibyl had always been one of my favourite works.",
"title": "Biography"
},
{
"paragraph_id": 50,
"text": "After a few days in Parma, during which she revisited numerous churches and local landmarks & galleries, she finally departed Parma in July 1792, visiting Mantua on her way to Venice. In Mantua she visited the local Cathedral, the ducal palace, the house of Giulio Romano, the Church of Sant'Andrea, the Palazzo del Te and numerous other local landmarks.",
"title": "Biography"
},
{
"paragraph_id": 51,
"text": "She arrived in Venice on the eve of Ascension day. She was surprised by the city's partially submerged aspect, and it was some time before she became accustomed to the modes of transportation in the city's canals. She was received by M. Denon, a fellow artist whom she knew from Paris, who acted as her cicerone, touring the city's landmarks with her. She subsequently witnessed the marriage of Venice and the Sea ceremony. During the celebrations, she met the Prince Augustus of England, and the Princess de Monaco, whom she found to have been pining to return to France to see her children; this was to be her last meeting with the princess, who had been later executed during Reign of Terror.",
"title": "Biography"
},
{
"paragraph_id": 52,
"text": "While in Venice, she visited the churches of Santi Giovanni e Paolo, the Church of Saint Mark and the square there, and the local cemetery. While residing in Venice, she often engaged the company of the Spanish ambassadress, with whom she attended Paccherotti's last concert. She soon departed Venice for Milan, stopping at Vicenza, touring its palaces and landmarks, where she was also received lavishly. She then visited Padua, where she visited Church of the Eremitani, praising the church's frescos that were made by Mantegna, and also visited the Basilica del Santo and the church of St. John the Baptist. After departing Padua, she visited Verona, where she spent a week, touring the ruins of the Amphitheatre, the San Giorgio in Braida, the Church of Sant'Anastasia and the Church of San Zeno. After spending a week in Verona she left the city, hoping to return to France by way of Turin.",
"title": "Biography"
},
{
"paragraph_id": 53,
"text": "In Turin she referred herself to the Queen of Sardinia, having been given letters of introduction by her aunts, the Mesdames of France whom she had painted in Rome; requesting the artist to paint their niece on her way to France. When she presented this to the bereaved Queen, she politely refused the request, stating that she has given up all worldly matters and had taken up an austere life, which the painter had confirmed from the Queen's disheveled appearance. She also made acquaintance of her husband, the King of Sardinia, while visiting the Queen, finding that had become increasingly reclusive and very thin, and delegated most of his duties to the Queen.",
"title": "Biography"
},
{
"paragraph_id": 54,
"text": "After meeting the Queen of Sardinia, Le Brun visited Madame, the wife of the Comte de Provence, future king Louis XVIII (future queen of France in-exile). She excursed frequently to the countryside with her and her lady in waiting, Mme. de Gourbillon. She soon met the engraver Porporati again, who recommended her to lodge in a quiet inn in the countryside, she travelled there and was greatly pleased by the quietude and charming views it offered. not long afterwards Vigée Le Brun received news of the storming of the Tuileries in 10th August. Beset with despair, she set back for Turin, where she found the town filled with French refugees as turmoil intensified during the French revolution, setting a cruel spectacle for the artist. She subsequently rented a small home on the Moncalieri hillside, overlooking the Po river with M. de Rivière, who had arrived recently and narrowly escaped revolutionary violence as it swept the countryside, in solitude. Soon after she was frequently visited there by the Prince Ysoupoff. She soon decided to leave for Milan, but not before repaying the kindness Porporati had extended to her by painting his daughter's portrait, with which he was greatly pleased and made several engravings of the painting, sending several of them to Le Brun.",
"title": "Biography"
},
{
"paragraph_id": 55,
"text": "During her stay in Venice she lost yet another fortune, amounting to 35,000 francs, most of which she had accumulated from her commissions in Italy - which she had deposited in the bank of Venice, when French troops, campaigning under the command of the rising general Napoleon Buonaparte, captured the city shortly after she had left it. Le Brun had been repeatedly warned by M. Sacaut, the embassy secretary, to withdraw her money from the bank, foreseeing that French Republican troops might attack the city. The artist dismissed his warnings as 'a republic would never attack another republic'; nevertheless Napoleon later issued an ultimatum to the city to submit, and French troops entered the city. As Venice was looted, General Buonaparte had instructed the banker to spare Le Brun's deposit and afford her an annuity, but the orders were not carried out in the chaotic predicament of the city, and all that reached Vigée Le Brun were two hundred and fifty francs out of an original deposit of 40,000. During her travels in Italy, her name was added to the list of émigrés, losing her French citizenship and having her property scheduled for confiscation. M. Le Brun attempted to have his wife's name struck from the list of émigrés at this juncture by appealing to the Assemblée législative to no avail, and he along with Etienne, Vigée Le Brun's brother, were both briefly incarcerated in 1793, shortly before the terror began. Soon after, M. Le Brun attempted to protect himself and their properties from confiscation and began suing for divorce from his wife. The decree of divorce was issued on 3 June 1793.",
"title": "Biography"
},
{
"paragraph_id": 56,
"text": "Halfway through her journey to Milan, she was detained for two days due to her nationality. She sent a letter to Count Wilsheck, the Austrian embassador in the town, who secured her release. The count convinced Vigée Le Brun to travel to Vienna, and she decided to go there after her visit to Milan.",
"title": "Biography"
},
{
"paragraph_id": 57,
"text": "The artist received a warm welcome in Milan, with many young men and women from noble families serenading her outside her window, which persuaded the artist to extend her stay in Milan by a few days. It was during this time that she visited the Santa Maria delle Grazie and saw Leonardo Da Vinci's famous Last Supper. Writing of it;",
"title": "Biography"
},
{
"paragraph_id": 58,
"text": "I visited the refectory of the monastery known as Santa Maria delle Grazie with its famous Last Supper fresco by Leonardo da Vinci. It is one of the great masterpieces of the Italian school: yet in admiring this nobly portrayed Christ and all the other characters painted with such truth and such feeling, I groaned to see the extent to which this superb painting had been defaced: to begin with it had been covered with plaster, and then repainted in several parts. Nevertheless it was possible to judge what this beautiful work had been like prior to these disasters. for the effect, when viewed from a little distance, was still admirable. Since then I have learnt of a completely different cause of its poor condition. I was told that during the wars with Bonaparte in Italy, the soldiers would amuse themselves by firing musket balls at Leonardo's Last Supper! May these Barbarians be cursed!",
"title": "Biography"
},
{
"paragraph_id": 59,
"text": "She also saw various cartoons of Raphael's School of Athens, and various other drawings and sketches by Raphael, Da Vinci and numerous other artists at the Biblioteca Ambrosiana. She visited the Madonna del Monte, enjoying its commanding view, and sketched the countryside frequently. She later visited Lake Maggiore and resided on one of the two islands in the lake, the Isola Bella, being granted permission by the Prince Borromeo to lodge on the estate there. She soon attempted to visit the other isle, Isola Madre, but stormy weather affected her journey and she returned. It was during this period that she met the Countess Bistri, who would become one of her close friends. She informed the countess of her desire to travel to Vienna, and the countess replied that she and her husband were travelling there soon. Wishing to accompany the artist on her travel, the count and countess brought forward their date of departure to accomplish this. Vigée Le Brun praised the great care they took of her, and she finally left Milan for Austria. Vigée Le Brun would later describe Milan as being very similar to Paris.",
"title": "Biography"
},
{
"paragraph_id": 60,
"text": "While in Italy, Vigée Le Brun was elected to the Academy in Parma (1789) and the Accademia di San Luca in Rome (1790). Vigée Le Brun also painted allegorical portraits of Emma Hamilton as Ariadne (1790) and as a Bacchante (1792). Lady Hamilton was similarly the model for Vigée Le Brun's Sibyl (1792), which was inspired by the painted sibyls of Domenichino. The painting represents the Cumaean Sibyl, as indicated by the Greek inscription on the figure's scroll, which is taken from Virgil's fourth Eclogue. The Sibyl was Vigée Le Brun's favorite work; it is mentioned in her memoir more than any other work. She displayed it while in Venice (1792), Vienna (1792), Dresden (1794) and Saint Petersburg (1795); she also sent it to be shown at the Salon of 1798. It was perhaps her most successful painting, and had always garnered the most praise and attracted many viewers wherever it was displayed. Like her reception piece, Peace Bringing Back Abundance, Vigée Le Brun regarded her Sibyl as a history painting, the most elevated category in the Académie's hierarchy.",
"title": "Biography"
},
{
"paragraph_id": 61,
"text": "As well as the Countess Bistri and her husband she travelled to Vienna with two other French refugees of poorer origin whom they had taken on. The artist found their company priceless and lodged herself with them in Vienna, with some difficulty in procuring residence due to the travelling party's composition. This would be the beginning of two and a half years of her residency in Austria. Upon lodging herself there, she finished her painting of the countess Bistri, praising her as a \"truly beautiful woman\", then she presented herself to the Countess Thoun, armed with letters of introduction given to her by Count Wilsheck. The artist found a large number of elegant ladies in the countess' salon, and while there, met the Countess Kinska, of whom Vigée Le Brun was completely enraptured with her beauty. Vigée Le Brun proceeded to tour the city's galleries as was her custom when visiting new cities. She first paid a visit to the gallery of the famous painter of battles, Casanova. She found him in the middle of undertaking several paintings, and found him to be quite active despite being about sixty and \"having the habit of wearing two or three spectacles, atop one another\", and commented on his 'unusual and sharp mind' and his rich imagination when retelling stories or recounting past events during the dinners they had spent with the Prince Kaunitz. Vigée Le Brun praised his composition, though commented that numerous of his works that she witnessed were still not finished.",
"title": "Biography"
},
{
"paragraph_id": 62,
"text": "After meeting Casanova, she presented herself to the aging Prince Kaunitz, at his palace. She found dinners hosted by the prince to be uncomfortable due to the late time in which he dined and the large number of people often present at his table, and subsequently decided to dine at home most days. On days when she would accept his invitations, she would dine at home before leaving, and ate very little at his table. The prince noticed this and was offended by this and her frequent refusal of his invitations, leading to a short quarrel between the two, but they were soon reconciled. The Prince continued to host the artist and exhibited her Sibyl in his gallery, and she praised the kindness and sweetness he had extended her during her stay. When the Prince died shortly after, Vigée Le Brun was upset by the indifference the city's residents and aristocracy showed, and was further shocked when she visited the wax museum and found the Prince lying in state, his hair and clothes dressed exactly as they had always been. This sight had made a sorrowful impression on her.",
"title": "Biography"
},
{
"paragraph_id": 63,
"text": "While in Vienna, Vigée Le Brun was commissioned to paint Princess Maria Josepha Hermengilde Esterházy as Ariadne and Princess Karoline von Liechtenstein as Iris among many others, the latter portrait causing a minor scandal among the Princess's relatives. The portraits depict the Liechtenstein sisters-in-law in unornamented Roman-inspired garments that show the influence of Neoclassicism, and which may have been a reference to the virtuous republican Roman matron Cornelia, mother of the Gracchi. The artist met for the second time in Vienna one of her greatest friends, the Prince de Ligne, whom she had first met in Brussels in 1781. It was at his urging that Vigée Le Brun wished so much to meet the Russian sovereign Catherine the Great and to visit Russia. The Prince de Ligne urged her to stay at his former convent atop Kahlenberg, with its commanding view of the countryside, to which she agreed. During Vigée Le Brun's stay in Kahlenberg, de Ligne wrote a passionate poem about her. After two and half years in Vienna, the artist departed for Saint Petersburg on 19 April 1795, via Prague. She also visited Dresden on her way, and the Königsberg fortress, where she made the acquaintance of Prince Henry, who was very hospitable to the artist. While visiting Dresden on her way to Russia, Vigée Le Brun visited the famous Dresden gallery, writing that it was without doubt the most extensive one in all of Europe. It was there that she saw Raphael's Madonna di San Sisto. She was completely enamored of the painting, and wrote:",
"title": "Biography"
},
{
"paragraph_id": 64,
"text": "Suffice to say I came to the conclusion that Raphael is the greatest master of them all. I had just visited several rooms within the gallery when I found myself standing in front of a painting which aroused in me an admiration far more intense than that normally inspired by the art of painting. It showed the Virgin, sitting among the clouds. holding the infant Jesus in her arms. Her face is so beautiful and so noble that it is worthy of the divine brush that painted it. The face of the child, which is charming, bears an expression both innocent and celestial; the robes are accurately drawn and painted in the most magnificent colours. To the right of the Virgin stands a saint who seems quite real; his hands in particular merit admiration. To the left stands a young saint, her head bowed, watching two angels at the base of the painting. Her figure is full of beauty, candour and modesty. The two small angels lean upon their hands, their eyes lifted to the characters above them, and their heads bear an ingenuity and sensitivity that words alone cannot express. Having stood for some time gazing in awe at this painting, I had to pass it yet again on my way out, returning by the same route. The best paintings by the great masters had lost some of their perfection in my eyes, for I carried the image of that wonderful composition and that divine figure of the Virgin about with me! In Art nothing can compete with noble simplicity, and all the faces I viewed subsequently seemed to wear a sort of grimace.",
"title": "Biography"
},
{
"paragraph_id": 65,
"text": "In Russia, where she stayed from 1795 until 1801, she was well-received by the nobility and painted numerous aristocrats, including the former King of Poland, Stanisław August Poniatowski, whom she became well acquainted with, and other members of the family of Catherine the Great. Vigée Le Brun painted Catherine's granddaughters (daughters of Paul I), Elena and Alexandra Pavlovna, in Grecian tunics with exposed arms. The Empress's favorite, Platon Zubov, commented to Vigée Le Brun that the painting had scandalized the Empress due to the amount of bare skin the short sleeves revealed. Vigée Le Brun was greatly worried by this and considered it a hurtful remark and replaced the tunics with the muslin dresses the princesses wore, and added long sleeves (called Amadis in Russia). Vigée Le Brun was later reassured in a conversation with Catherine that she made no such remark, but by then the damage had already been done. When Paul later became Emperor, he expressed having been upset with the alterations Vigée Le Brun made to the painting. When Vigée Le Brun told him what Zubov told her, he shrugged and said \"They played a joke on you\".",
"title": "Biography"
},
{
"paragraph_id": 66,
"text": "Vigée Le Brun painted many other people during her stay in Russia, including the emperor Paul and his consort.",
"title": "Biography"
},
{
"paragraph_id": 67,
"text": "Catherine herself also agreed to sit for Vigée Le Brun, but she died the very next day, which was when she had promised to sit for the artist. While in Russia, Vigée Le Brun was made a member of the Academy of Fine Arts of Saint Petersburg. Much to her dismay, her daughter Julie married Gaétan Bernard Nigris, secretary to the Director of the Imperial Theaters of Saint Petersburg. Vigée Le Brun attempted everything in her power to prevent this match, and viewed it as a scheme concocted by her enemies and her governess to separate her from her daughter.However, as Julie's remonstrations and pressure on her mother grew, Vigée Le Brun relented and gave her approval for the wedding, though she was greatly distressed at the prospect, and soon found her stay in Russia, hitherto so enjoyable, had become suffocating and decided to return to Paris. She wrote;",
"title": "Biography"
},
{
"paragraph_id": 68,
"text": "As for myself, all the charm of my life seemed to have disappeared forever. I could not find the same pleasure in loving my daughter, and yet God knows how much I still love her, despite her faults. Only mothers will understand me when I say this. Shortly after her marriage she caught smallpox. Although I had never had this dreadful illness, no-one could stop me from running to her bedside. I found her face so swollen that I was seized with fright; but I was only frightened for her sake; as long as the malady lasted, I did not think of myself for one moment. To my joy she recovered without the least disfigurement. I needed to travel. I needed to leave Saint Petersburg, where I had suffered so much that my health had deteriorated. However those cruel remarks that had arisen as a result of this affair were soon retracted after the marriage. The men who had offended me the most were sorry indeed at the injustice.",
"title": "Biography"
},
{
"paragraph_id": 69,
"text": "Before departing for France, Vigée Le Brun decided to visit Moscow. Halfway through her journey to the city, news of the assassination of Paul I reached her. The journey was extremely difficult due to the melting snow, and the carriage often got stuck in the infamous Russian mud, and her journey was further delayed when most horses were taken by couriers spreading the news of the death of Paul and the coronation of Alexander. Vigée Le Brun enjoyed her stay in Moscow, and painted many portraits during her stay. Upon her return to Saint Petersburg she met the newly crowned Emperor Alexander I and Empress Louise, who urged her to stay in Saint Petersburg. Upon telling the Emperor of her poor health and prescription by a physician to take the waters near Karlsbad to cure her internal obstruction, the Emperor replied \"Do not go there, there is no need to go so far to find a remedy; I shall give you the Empress's horse, a few rides will have you cured\". Vigée Le Brun was touched by this, but replied to the Emperor that she did not know how to ride, to which the Emperor said \"Well, I will give you a riding instructor, he will teach you\". The artist was still adamant about leaving Russia, despite her closest friends, the Count Stroganoff, M. de Rivière and the princesses Dologruky and Kourakin and others attempting all they could to make her stay in Saint Petersburg, she left after residing there for six years. Julie predeceased her mother in 1819, by which time they had reconciled.",
"title": "Biography"
},
{
"paragraph_id": 70,
"text": "It was in Russia that Vigée Le Brun formed several of her longest lasting and most intimate friendships, with the Princesses Dologruky and Kourakin, and the Count Stroganoff.",
"title": "Biography"
},
{
"paragraph_id": 71,
"text": "After her departure from Saint Petersburg, Vigée Le Brun travelled – with some difficulty – through Prussia, visiting Berlin after an exhausting journey. The Queen of Prussia invited Vigée Le Brun to Potsdam to meet her; the Queen then commissioned a portrait of herself. The Queen invited the artist to reside in the Potsdam palace until she finished her portrait, but Vigée Le Brun, not wishing to intrude on the Queen's ladies-in-waiting, chose to reside in a nearby hotel, where her stay was uncomfortable.",
"title": "Biography"
},
{
"paragraph_id": 72,
"text": "The pair soon became friends. During a conversation, Vigée Le Brun complemented the Queen on her bracelets with an antique design, which the Queen then took off and put around Vigée Le Brun's arms. Vigée Le Brun considered this gift one of her most valued possessions for the rest of her life, and wore it almost everywhere. At the Queen's urging, Vigée Le Brun visited the Queen's Peacock Island, where the artist enjoyed the countryside.",
"title": "Biography"
},
{
"paragraph_id": 73,
"text": "Aside from two pastel portraits commissioned by the Queen, Vigée Le Brun also painted other pastel portraits of Prince Ferdinand's family.",
"title": "Biography"
},
{
"paragraph_id": 74,
"text": "During her stay in Berlin, she met with the General Plenipotentiary Bournonville, hoping to procure a passport to return to France. The general encouraged Vigée Le Brun to return and assured her that order and safety had been restored. Her brother and husband had already struck her name from the list of émigrés with ease, and had her French status restored. Shortly before her departure from Berlin, the General Director of the Academy of Painting visited her, bringing her the diploma for her admission to that academy. After her departure from Berlin, she visited Dresden and painted several copies of Emperor Alexander, which she had promised earlier, and also visited Brunswick where she resided for six days with the Rivière family, and was sought out by the Duke of Brunswick who wished to make her acquaintance. She also passed through Weimar and Frankfurt on her way.",
"title": "Biography"
},
{
"paragraph_id": 75,
"text": "After a sustained campaign by her ex-husband and other family members to have her name removed from the list of counter-revolutionary émigrés, Vigée Le Brun was finally able to return to France in January 1802. The artist received a rapturous welcome in her home at Rue de-Gros-Chenet and was greatly hailed by the press. Three days after her arrival, a letter arrived for her from the Comédie-Française, containing a decree reinstating her as a member of the theater. The leading members of the theater also wished to enact a comedy at her house to celebrate her return, which she politely refused. Soon afterwards, the artist was taken to witness the first consul's routine military ceremony at the Tuileries where she saw Napoleon Bonaparte for the first time, from a window inside the Louvre. The artist found it difficult to recognize the short figure as the man she had heard so much about; as with Catherine the Great, she had imagined a tall figure. A few days later, Bonaparte's brothers visited her gallery to view her works, with Lucien Bonaparte greatly complimenting her famous Sibyl. During her stay, Vigée Le Brun was surprised and dismayed by the greatly changed social customs of Parisian society upon her return there. She soon visited the famous painter M. Vien, who was the former Premier peintre du Roi; then 82 years old and a senator, he gave Vigée Le Brun an enthusiastic welcome and showed her some of his newest sketches. She met her friend from Saint Petersburg, Princess Dolgorouky, and saw her almost daily. In 1802, she demanded the refund of her dowry from her husband, whose gambling habits had dissipated a significant portion of the wealth she had accumulated in her early career as a portraitist. The artist soon felt mentally tormented in Paris, mainly due to memories of the early days of the revolution, and decided to move to a secluded house in Meudon forest. She was visited there by her neighbors, the famous dissident pair and Directory period Merveilleuses the Duchesse de Fleury, whom she met there for the first time since their friendship in Rome, and Adèle de Bellegarde; time spent with the pair restored her spirits. Shortly thereafter, Vigée Le Brun decided to travel to England, and departed from Paris on 15 April 1802.",
"title": "Biography"
},
{
"paragraph_id": 76,
"text": "Vigée Le Brun arrived at Dover, where she took the stagecoach to London, accompanied by the woman who would become her lifetime friend and chambermaid, Mme. Adélaïde, who later married M. Contat, Vigée Le Brun's accountant. Vigée Le Brun was confused by the large crowd at the quays, but was reassured that it was common for crowds of curious people to observe disembarking travelers in England. She had been told that highwaymen were common in England, and so hid her diamonds in her stocking. During her ride to London she was greatly frightened by two riders who approached the stagecoach whom she thought were bandits, but nothing came of it.",
"title": "Biography"
},
{
"paragraph_id": 77,
"text": "Upon her arrival at London she lodged at the Brunet hotel in Leicester Square. She could not sleep during her first night due to noise from her upstairs neighbor, who she found next morning was none other than the poet M. François-Auguste Parseval-Grandmaison, whom she had known from Paris. He always paced while reading or reciting his poetry. He promised her to take care not to interrupt her sleep, and she was able to rest well for the next night.",
"title": "Biography"
},
{
"paragraph_id": 78,
"text": "Wishing to find a more permanent lodging, a compatriot named Charmilly directed her to a house in Beck street, which overlooked the Royal Guards barracks. Vigée Le Brun terminated her residence there because of the noise from the barracks; in her words, \"...every morning between three and four o'clock there was a trumpet blast so loud that it could have served for the day of judgement. The noise of the trumpet, together with that of the horses whose stables lay directly beneath my window, prevented me from catching any sleep at all. In the daytime there was a constant din made by the neighbor's children...\". Vigée Le Brun then moved to a beautiful house in Portman Square. Upon closely scrutinizing the house's surroundings for any acoustic nuisance, she took up lodging there, only to be awakened at daybreak by a great screeching from a large bird owned by her neighbor. Later on, she also discovered that the former residents had buried two of their slaves in the cellar, where their bodies remained, and once again she decided to move, this time to a very damp building in Maddox Street. Although this was far from perfect, the artist was exhausted from constant moving, and decided to remain there, though the dampness of the house, combined with London's humid weather – greatly disliked by the artist – hindered her painting process. Vigée Le Brun found London lacking in inspiration for an artist due to its lack of public galleries at that time. She visited monuments, including Westminster Abbey, where she was greatly affected by the tomb of Mary, Queen of Scots, and visited the sarcophagi of the poets Shakespeare, Chatterton and Pope. She also visited St. Paul's Cathedral, the Tower of London and the London Museum. She greatly disliked the austere social customs of the English, particularly how quiet and empty the city was on Sundays, when all shops were closed and no social gatherings took place; the only pastime was the city's long walks. The artist also did not enjoy the local soiree equivalent – known as Routs (or rout-parties), describing them as stuffy and dour. The artist sought out the tree under which the famous poet Milton was said to have composed Paradise Lost, but was surprised to find that it had been cut down.",
"title": "Biography"
},
{
"paragraph_id": 79,
"text": "The artist visited the galleries of several prominent artists while in London, starting with the studio of artist Benjamin West. She also viewed some works by Joshua Reynolds. Vigée Le Brun was surprised to find that it was customary in England for visitors to the studios of artists to pay a small fee to the artist. Vigée Le Brun did not adhere to this local custom, and allowed her servant to pocket this toll. She was greatly pleased to meet one of the most famous actress and tragediennes of her era, Sarah Siddons, who visited Vigée Le Brun's studio in Maddox Street. During her stay in London, the English portraitist John Hoppner published a speech that viciously criticized her, her art and French artists in general, to which she made a scathing reply by letter which she published later in her life as part of her memoirs.",
"title": "Biography"
},
{
"paragraph_id": 80,
"text": "Vigée Le Brun continued to hold soirées and receptions in her house, which although damp, was beautiful. She received many people, including the Prince of Wales, Lady Hertford and Lord Borington and the famous actress Mme. Grassini among others. Vigée Le Brun sought out other compatriots during her stay in England, and cultivated a social circle of émigrés that included the Comte d'Artois (future King Charles X) and his son the Duc de Berri, the Duc de Serant and the Duc de Rivière.",
"title": "Biography"
},
{
"paragraph_id": 81,
"text": "Shortly after her arrival in London, the Treaty of Amiens was abrogated, and hostilities between France and the United Kingdom resumed. The British Government ordered all French people who had not resided more than a year in the UK to depart immediately. The Prince of Wales reassured Vigée Le Brun that this would not affect her, and she might reside in England however long she pleased. This permit from the King was difficult to procure, but the Prince of Wales personally delivered the permit to Vigée Le Brun.",
"title": "Biography"
},
{
"paragraph_id": 82,
"text": "Vigée Le Brun toured the countryside during her stay in England. She started with a visit to Margaret Chinnery at Gilwell Hall, where she received a \"charming welcome\" and met the famous musician Viotti, who composed a song for her which was sung by Mrs. Chinnery's daughter. She painted Mrs. Chinnery and her children whilst there, departing for Windsor after staying at Gilwell for a fortnight. She also visited Windsor Park and Hampton Court on the outskirts of London before leaving to visit Bath, where she greatly enjoyed the picturesque architecture of the city, its rolling hills and the countryside; but much like London, she found its society and weather dreary. She found some of her Russian friends from Saint Petersburg there, and went to visit the astronomer siblings William Herschel and Caroline Herschel. William Herschel showed Vigée Le Brun detailed maps of the moon, among other things.",
"title": "Biography"
},
{
"paragraph_id": 83,
"text": "The artist greatly enjoyed the English countryside, describing Matlock as being as picturesque as the Swiss countryside. Vigée Le Brun also visited the Duchess of Dorset at Knole House in Kent, which had once been owned by Elizabeth I. She returned to London, where she found the Comte de Vaudreuil, and then went to Twickenham where she visited Mme. la Comtesse de Vaudreuil and the Duc de Montpensier, with whom Vigée Le Brun became well acquainted; they enjoyed painting the countryside together. She was subsequently received by the Duc d'Orléans (the future King Louis Philippe). She then visited the Margravine of Brandenburg-Ansbach, the Baroness Craven, whom she painted and came to greatly enjoy her company, spending three weeks at her estate. Together, they visited the Isle of Wight, where Vigée Le Brun was mesmerized by the beauty of the countryside and the amiability of its inhabitants, writing later that along with the Isle of Ischia (near Naples), these were the only two places where she would happily spend her entire life.",
"title": "Biography"
},
{
"paragraph_id": 84,
"text": "She visited Mary Elizabeth Grenville, Marchioness of Buckingham, at Stowe. She also went to the home of Lord Moira and his sister Charlotte Adelaide Constantia Rawdon, where Vigée Le Brun further experienced the stern social milieu of English aristocracy; she spent some of the winter there. She then departed for Warwick Castle, eager to see this after hearing it praised so much. Vigée Le Brun attempted to visit the area incognito to avoid any awkwardness with Lord Warwick, as he would receive foreigners only if he knew their name. When he became aware that Vigée Le Brun was visiting, he went to her in person and gave her a decorous reception. After introducing the artist to his wife, he took her on a tour around the castle, looking over the lavish art collection there. He presented her with two drawings which she had sketched in Sir William Hamilton's summerhouse during her stay in Italy, telling her that he had paid a high price to buy them from his nephew. Vigée Le Brun later wrote that she had never sold them to Sir William to begin with. He also presented to her the famous Warwick vase, which he had purchased from Sir William as well. Vigée Le Brun then ended her tour by visiting Blenheim Palace before returning to London, and preparing to depart for France after staying in England for nearly three years. Upon her imminent departure becoming known, many of her acquaintances attempted to extend her residence with them, but to no avail as Vigée Le Brun wanted to see her daughter, who was in Paris at the time. As she prepared to leave London, Mme. Grassini arrived and then accompanied her, staying with her until her ship departed for Rotterdam, ending a trip that was originally intended to last only five months.",
"title": "Biography"
},
{
"paragraph_id": 85,
"text": "Her ship arrived in Rotterdam, where she first visited François de Beauharnais, the prefect of Rotterdam and brother in law to the Empress Joséphine de Beauharnais (brother to the late Alexandre de Beauharnais, who had been executed during The Terror). The artist was ordered to reside for eight to ten days in Rotterdam, as she has arrived from hostile soil, and was ordered to appear before General Oudinot, who was hospitable to her. After residing in Rotterdam for ten days, she received her passport and started for Paris. She visited Antwerp on her way to Paris and was received by its prefect, the Comte d'Hédouville[fr], and toured the city with him and his wife, and visited a sick young painter who wished to make her acquaintance.",
"title": "Biography"
},
{
"paragraph_id": 86,
"text": "She arrived in Paris and rejoiced to find her brother and her husband there, who was charged with recruiting artists for Saint Petersburg. He departed a few months later for Saint Petersburg, but Julie remained due to their failing union, though her relationship with her daughter continued to be a torment to her. She made the acquaintance of one of the most famous singers of her time, Angelica Catalani. She painted her and kept her portrait along with that of Mme. Grassini for the rest of her life, and continued to host soirées in her home as she had always had, to which Mme. Catalani was a regular.",
"title": "Biography"
},
{
"paragraph_id": 87,
"text": "Shortly after her arrival in Paris, Vigée Le Brun was commissioned by the court painter, Denon, to paint a portrait of the Emperor's sister Caroline Bonaparte, though she had heard that her journey to England had displeased Napoleon, who had allegedly said \"Madame Le Brun has gone to England to see her friends.\" Vigée Le Brun accepted the commission despite the fact that she was paid 1800 Francs, less than half the customary asking price, and later also included Mme. Murat's daughter in the portrait without raising the fee. She later described this commission as \"torture\", and wrote in her memoirs:",
"title": "Biography"
},
{
"paragraph_id": 88,
"text": "It would be impossible to describe all the vexations and torment I had to suffer while painting this portrait. First of all Mme Murat arrived with two ladies in waiting who proceeded to dress her hair as I tried to paint her. When I observed that it would be impossible to capture a likeness if I allowed them to continue, she eventually agreed to send the two women away. Added to this inconvenience, she almost always broke our appointments, which meant my staying in Paris for the whole summer waiting, usually in vain, for her to appear, for I was eager to finish the painting; I cannot tell you how this woman tried my patience. Moreover the gap between sittings was so long, that each time she did appear, her hair was dressed differently. At the beginning, for example, she had curls falling onto her cheek and I painted them accordingly; but a little later this style had gone out of fashion and she returned with a completely different one; I then had to rub out the curls as well as the pearls on her bandeau and replace them with cameos. The same thing happened with the dresses. The first dress I painted was rather open, as was the fashion then, and had a great deal of bold embroidery; when the fashion changed and the embroidery became more delicate, I had to enlarge the dress in order not to lose the detail. Eventually all these irritations reached a pitch, and I became very bad tempered as a result; one day she happened to be in my studio and I said to M. Denon, in a voice loud enough for her to overhear: 'When I painted real princesses they never gave me any trouble and never kept me waiting.' Of course Mme Murat did not know that punctuality is the politeness of kings, as Louis XIV quite rightly remarked and he, at least, was no upstart.",
"title": "Biography"
},
{
"paragraph_id": 89,
"text": "The portrait was exhibited in the Salon of 1807, and was the only portrait the imperial government commissioned from her.",
"title": "Biography"
},
{
"paragraph_id": 90,
"text": "In July 1807, the artist crossed to Switzerland, arriving first at the town of Basel, where she was received by M. Ethinger, a local banker, who threw a banquet to welcome the artist. She proceeded to Biel on the advice of Ethinger, but the roads there were so hazardous that part of the journey had to be made on foot. After recuperating in Biel for a single day, she proceeded to the tiny Île Saint-Pierre to visit the home of Rousseau, which she found, to her great surprise and dismay, had become a tavern. Vigée Le Brun praised the picturesque countryside repeatedly in her letters to Countess Vincent Potocka. After departing the island to return to Biel, she went on to Berne, where she was received by the wife of the Landamann (magistrate), Mme. de Watteville, and the General Ambassador Honoré Vial. She also met the seven-months pregnant Mme. de Brac, who accompanied her to Thun, and then to the Lauterbrunnen Valley, which she found dark and grim due to its being hidden from sunlight on both sides by steep mountains. On her descent, she and her company encountered a group of local shepherdesses; the beauty and naivete of the local people and the wilderness where the encounter took place made her liken the experience to something out of Arabian Nights. She went on to visit the Staubbach Falls in the valley.",
"title": "Biography"
},
{
"paragraph_id": 91,
"text": "After traversing the rugged trails of the valley, she returned to Berne via Brientz, and then arrived at Schaffhausen where she was received by the local Burgomeister, who took her to see the Rhine Falls. After departing from Schaffhausen, she visited the city of Zürich, where she enjoyed the hospitality of General Baron de Salis.",
"title": "Biography"
},
{
"paragraph_id": 92,
"text": "After taking the young daughter-in-law of de Salis with her, she departed for the small island of Ufenau in Lake Zurich, then visited Rappercheld [sic] where she continued to be mesmerized by the beauty of the countryside and the \"native innocence\" of the locals. After a hazardous boat ride destined for Walenstadt, the entourage turned back to Rappercheld and then visited the valley of Glarus. The artist then continued to the village of Soleure, on the Jura mountains. Seeing a solitary chalet perched atop Mount Wunchenstein [sic], her curiosity was excited by who would live so far and high, and she made a trek up the mountain after being assured that the conditions of the road would support her carriage. After slightly less than an hour, the road became very rugged and far too steep, prompting her to dismount and continue the journey on foot. The trek lasted about five and a half hours, though she wrote in a letter to Countess Potocka that the view made it completely worth it:",
"title": "Biography"
},
{
"paragraph_id": 93,
"text": "to tell the truth, the view completely eliminated my fatigue. Five or six vast forests, piled one upon the other, fell away beneath my eyes; the canton of Soleure seemed no more than a plain, the town and the villages, tiny specks; the fine line of glaciers which fringed the horizon became redder and redder as the sun sank: the other mountains between them formed a complete color spectrum; gold rays stretched across the mountain to my left, each carrying a rainbow in its arc; the sun set behind the peak; red-violet mountains grew imperceptibly fainter and fainter in the distance, stretching away to the lake of Biel and the far edge of Lake Neuchatel., they stood so far apart that you could only distinguish them by two gold lines. heavy with translucent mist; I was still overlooking the deep ravines and mountains covered with thick foliage; at my feet lay wild valleys surrounded by black pine forests. As the sun set, I watched the shadows change; different points took on a more sinister character, partly because of their shape and partly because of that long silence which slips harmoniously into the day's demise. All I can tell you is that my soul gloried in such a solemn and melancholy vision.",
"title": "Biography"
},
{
"paragraph_id": 94,
"text": "She returned to Soleure the next day, and then departed for Vevey, which she described as \"the land of my dreams\". She rented a house on the banks of Lake Geneva and toured the countryside and mountains around Vevey. She walked up Mount Blonay where the Messieurs de Blonay hosted her at Blonay castle. After descending the mountain, the artist hired the innkeeper where she was lodged to row her out on the lake at night. She was enthralled by the charming beauty and silence of the lake, and wrote of the journey later \"He was not Saint Preux and I was not Julie, but I was no less happy\". Vigée Le Brun then departed for Coppet, where she met the famous dissident socialite and woman of letters Madame de Staël, who was exiled by the Napoleonic regime. She stayed at Coppet with Madame de Staël, whom she painted as Corinne, a character from Mme. de Staël's most recent novel, Corinne ou l'Italie (1807).",
"title": "Biography"
},
{
"paragraph_id": 95,
"text": "After returning from Coppet to Geneva, where she was made an honorary member of the Société pour l'Avancement des Beaux-Arts, she departed in a group with the de Brac family for Chamonix, intending to visit the Sallanches mountains, the Aiguille du Goûter, and Mont Blanc. The journey was perilous. The entourage visited the Bossons Glacier. On the way upwards, M. de Brac fell ill with catalepsy, and was slowly nursed back to health in a nearby inn, where Vigée Le Brun, the pregnant Mme. de Brac and her son were distraught and worried about his condition, but he recuperated slowly over the course of a week. After eleven days in Chamonix, the artist departed alone without the de Brac family, writing that nothing would bring her to visit the \"melancholic\"' Chamonix again. She then left Switzerland and returned to Paris.",
"title": "Biography"
},
{
"paragraph_id": 96,
"text": "With her desire for travel still not sated, Vigée Le Brun re-entered Switzerland in 1808 via Neuchâtel, and then visited Lucerne, where she was enchanted by the picturesque and wild town. The artist also visited Brown [sic] and the market town of Schwyz, then Zug, where she crossed Lake Zug. She visited an inn where she wanted to visit the infamous landslide of Goldau. The artist visited the valley, once populated with several villages, now buried under rocks. Heavy with sorrow, she contemplated the remains of the villages for a long time before departing for Arth. Vigée Le Brun then climbed Kussnacht, intending to visit the spot where the legendary William Tell was said to have killed Gessler; at the time a chapel had been constructed on the location. There, the artist observed a shepherd and shepherdess singing to each other across the valley, a local courting custom, although the two stopped singing when they noticed her. The \"communication of love through melody\" presented her with a delightful scene, which she would describe as an eclogue in action.",
"title": "Biography"
},
{
"paragraph_id": 97,
"text": "The artist then visited Untersee, where she was fortunate to arrive in time to witness the Shepherd's festival at Unspunnen castle, which took place once every century. She was hosted by M. and Mme. Konig, who hosted all notable people who came to visit the festivals. Vigée Le Brun went to the château du Bailli to witness the start of the festival, which had been postponed a few days due to incessant rain, and was captivated by the festival's solemn pastoral chants and fireworks at night. The next day, she returned to see the festival taking place at half past ten in the morning; she joined the celebrations and dancing, before sitting back and watching the contests between the shepherds and shepherdesses. Vigée Le Brun recorded that she was frequently moved to tears by the enchanting atmosphere of the festival.",
"title": "Biography"
},
{
"paragraph_id": 98,
"text": "Coincidentally, she found Madame de Staël at the festival, and joined her in the procession that followed the Bailli and his magistrates, which was joined by people from the neighboring valleys, dressed in their local costume and carrying flags representing each canton or valley.",
"title": "Biography"
},
{
"paragraph_id": 99,
"text": "After returning to Paris from her second visit to Switzerland, Vigée Le Brun purchased a house in Louveciennes, Île-de-France near the Seine, and invited her niece (daughter of her brother Etienne) Caroline Rivière and her husband to live with her. She doted on the newlywed couple and formed a close bond with them, and occasionally visited Paris. She had Mme. Pourat and the talented actress Comtesse de Hocquart as neighbors. She visited Madame du Barry's home, the Pavillon de Louveciennes, which she found had been looted and stripped clean of its furniture and contents. On 31 March 1814, her house was raided by Prussian troops who were advancing towards Paris in the final stages of the war of the Sixth Coalition. As she prepared to go to bed after eleven o'clock, with no knowledge of the proximity of the allied troops, they entered her home, while she lay in her bed. They entered her bedchamber and proceeded to loot her home. Her German-speaking Swiss servant Joseph screamed at the soldiers to spare her person until his voice was hoarse. After the looting, the soldiers left her home. She left as well, initially intending to head to St. Germain before learning that the road there was unsafe. Instead she decided to take refuge in a room above the pumping machine at Marly aqueduct, near Du Barry's pavilion, with many other people, having entrusted her house to Joseph. As fighting nearby intensified, Vigée Le Brun attempted to take refuge in cave, but gave up after injuring her leg. There, she observed how most of the merchants taking refuge were, like her, pining for the restoration of the Bourbons.",
"title": "Biography"
},
{
"paragraph_id": 100,
"text": "She departed for Paris as soon as she received the news, and communicated by letter with Joseph about the condition of her Louveciennes home, which had been ransacked and its garden destroyed by the Prussian troops. Her servant wrote to her: \"I beg them to be less greedy, to content themselves with whatever I give them, they reply: \"The French have done far worse things in our country\". Vigée Le Brun wrote in her memoirs \"The Prussians are right; poor Joseph and I had to answer for that.\"",
"title": "Biography"
},
{
"paragraph_id": 101,
"text": "Vigée Le Brun was exultant at the entry of the Comte d'Artois to Paris on 12 April, shortly after Napoleon had agreed to abdicate. She wrote to him about the King, to which he replied: \"His legs are still bad, but his mind is in excellent form. We will march for him, and he will think for us\". She attended the euphoric reception of the King in Paris on 3 May 1814, and the restoration of the monarchy. The King personally gave her his regards while on his way to attend the Sunday services when he spotted her in a crowd.",
"title": "Biography"
},
{
"paragraph_id": 102,
"text": "Upon Napoleon's return from Elba, she noted the contrast between the rapturous reception the Bourbons had received the previous year and Napoleon's tepid welcome upon his return to France from his exile in Elba, after which he initiated the Hundred Days war. Vigée Le Brun exhibited her staunch royalist sympathies in her memoirs, writing:",
"title": "Biography"
},
{
"paragraph_id": 103,
"text": "Without wishing to insult the memory of a great captain and many brave generals and soldiers who helped win such resounding victories, I would like nevertheless to ask where these victories led us, and whether we still own any of the land which cost us so dear? For my part, the bulletins from the Russian campaign both distressed and revolted me; one of the later ones spoke of the loss of thousands of French soldiers and added that the Emperor had never looked so well! We read this bulletin at the home of the Bellegarde ladies, and felt so angry that we threw it on to the fire. The fact that the people were tired of these interminable wars is easily attested by their lack of enthusiasm during the Hundred Days. More than once I saw Bonaparte appear at his window and then retire immediately, furious no doubt, for the acclamation of the crowd was limited to the shouts of a hundred or so boys, paid, I believe, as an act of derision to chant long live the Emperor! There is a sharp contrast between this indifference and the joyful enthusiasm which greeted the King on his entry into Paris on the 8th of July 1815; this joy was almost universal, for after the many misfortunes incurred by Bonaparte, Louis XVIII brought only peace.",
"title": "Biography"
},
{
"paragraph_id": 104,
"text": "Her Louveciennes home was once again looted in the Hundred Days, this time by British troops. Among the possessions lost during this incident was a lacquer box gifted to her by the Count Stroganoff during her stay at Saint Petersburg, which she had prized immensely.",
"title": "Biography"
},
{
"paragraph_id": 105,
"text": "Her estranged husband died in August 1813, in their old home built on the Rue de-Gros-Chenet. Though they had drifted apart for several years, she was nonetheless sorely affected by his death.",
"title": "Biography"
},
{
"paragraph_id": 106,
"text": "In 1819 she sold her portrait of Lady Hamilton as the Comaean Sibyl to the Duc de Berri, despite it being her favorite, because she wished to satisfy the Duke. She also painted two portraits of the Duchesse de Berri, initially in the Tuileries, but then finishing their sittings in her home. In the same year, her daughter Julie died of syphilis, which devastated her. The next year, her brother Etienne died an alcoholic, leaving her niece Caroline her principal heir. Her friends advised the grief-stricken artist to travel to Bordeaux to occupy her mind with something else. She traveled first to Orléans, where she resided in the Château de Méréville, where she was mesmerized by its elegance, beauty and architecture, designed in the English Garden style; she wrote that it \"surpassed anything of its kind in England\". She toured the city and sampled its architecture and landmarks, including the cathedral and the ruins surrounding the city. She then traveled to Blois where she visited the Château de Chambord, which she described it as \"a romantic, fairy tale place\". She then visited the Château de Chanteloup, residence of the late Duc de Choiseul. Afterwards, she traveled to Tours, where the impure air forced her to quit the city after only two days. In Tours, she was received by the director of the academy, who offered to be her guide in the city. She also visited the ruins of the Marmoutier monastery. She then passed Poitiers and Angoulême on her way to Bordeaux. After arriving in Bordeaux, she stayed in the Fumel Hospice and was received there by the prefect, the Comte de Tournon-Simiane. She toured the countryside and visited the cemetery, which she praised for its sepulchral beauty and symmetrical layout. It became her second-favorite after the Père La Chaise cemetery of Paris. She also visited the synagogue of Bordeaux, styled after the temple of Solomon, the ruins of the ancient Roman Gallien Arena. After spending a week in Bordeaux, she started back for Paris, greatly satisfied with her travels. During her journey, it was common for her to be mistaken for a noble lady owing to her expensive carriage; she later lamented in her memoirs that this often meant she had to pay more in the inns where she resided.",
"title": "Biography"
},
{
"paragraph_id": 107,
"text": "Her journey to Bordeaux was the last time she traveled extensively.",
"title": "Biography"
},
{
"paragraph_id": 108,
"text": "The artist formed an intimate friendship with Antoine-Jean Gros, whom she had known since he was seven years old and had painted his portrait when he was at that age, during which she had noticed an artistic inclination in the child. Upon her return to France she was surprised to find Gros had become a successful and famous painter, head of his own school of art. Gros was socially reclusive, and often brusque to others, but he formed a close bond with Vigée Le Brun, who wrote: \"Gros was always a man of natural impulses. He was prone to feel the keenest sensations and would become equally passionate over a kind action or a beautiful work of art. He was ill at ease in society, rarely breaking the silence in a crowded place, but he listened attentively and replied with his gentle smile, or by a single word, always very apt. To appreciate Gros, one had to know him intimately. Then he would open up his heart, a kind and noble one at that; some people reproached him for having a certain brusqueness of tone, but this disappeared entirely in private. His conversation was even more fascinating because he never expressed himself in the same way as other men; always finding the most unusual and powerful images to convey a thought, you might almost say he painted with words.\"",
"title": "Biography"
},
{
"paragraph_id": 109,
"text": "She was greatly affected by his suicide in 1835; she had met him the day before and noted him brooding over criticism he had received over one of his paintings.",
"title": "Biography"
},
{
"paragraph_id": 110,
"text": "She spent most of her time in Louveciennes, typically eight months of the year. She formed new friendships with people including the writer and man of letters M. de Briffaut, the playwright M. Despré, the writer M. Louis Aimé-Martin, the composer M. Désaugiers, the painter and antiquarian Comte de Forbin, and the famous painter Antoine-Jean Gros. She hosted these people and socialized with them regularly in her countryside home or in Paris, as well as her old friend the Princess Kourakin. She painted Saint Geneviève, with the face being a posthumous portrait of 12-year old Julie. For the local chapel, the Comtesse de Genlis graced this painting with two separate poems; one for the saint, the other for the painter. She spent her time with her nieces Caroline Rivière and Eugénie Tripier-Le Franc, whom she came to regard as her own children. She had tutored the latter in painting since childhood and was greatly pleased to see her blossom into a professional artist. Eugénie and Caroline would assist her in writing her memoirs, late in her life. She died in Paris on 30 March 1842, aged 86. She was buried at the Cimetière de Louveciennes near her old home. Her tombstone epitaph says \"Ici, enfin, je repose...\" (Here, at last, I rest...).",
"title": "Biography"
},
{
"paragraph_id": 111,
"text": "During her lifetime, Vigée Le Brun's work was publicly exhibited in Paris at the Académie de Saint-Luc (1774), Salon de la Correspondance (1779, 1781, 1782, 1783) and Salon of the Académie in Paris (1783, 1785, 1787, 1789, 1791, 1798, 1802, 1817, 1824).",
"title": "Exhibitions"
},
{
"paragraph_id": 112,
"text": "The first retrospective exhibition of Vigée Le Brun's work was held in 1982 at the Kimbell Art Museum in Fort Worth, Texas. The first major international retrospective exhibition of her art premiered at the Galeries nationales du Grand Palais in Paris (2015—2016) and was subsequently shown at the Metropolitan Museum of Art in New York City (2016) and the National Gallery of Canada in Ottawa (2016).",
"title": "Exhibitions"
},
{
"paragraph_id": 113,
"text": "The 2014 docudrama made for French television, Le fabuleux destin d'Elisabeth Vigée Le Brun, directed by Arnaud Xainte, and starring Marlène Goulard and Julie Ravix as the young and old Elisabeth respectively, is available in English as The Fabulous Life of Elisabeth Vigée Le Brun.",
"title": "Portrayal in popular culture"
},
{
"paragraph_id": 114,
"text": "In the episode \"The Portrait\" from the BBC series Let Them Eat Cake (1999) written by Peter Learmouth, starring Dawn French and Jennifer Saunders, Madame Vigée Le Brun (Maggie Steed) paints a portrait of the Comtesse de Vache (Jennifer Saunders) weeping over a dead canary.",
"title": "Portrayal in popular culture"
},
{
"paragraph_id": 115,
"text": "Vigée Le Brun is one of only three characters in Joel Gross's Marie Antoinette: The Color of Flesh (premiered in 2007), a fictionalized historical drama about a love triangle set against the backdrop of the French Revolution.",
"title": "Portrayal in popular culture"
},
{
"paragraph_id": 116,
"text": "Vigée Le Brun's portrait of Marie Antoinette is featured on the cover of the 2010 album Nobody's Daughter by Hole.",
"title": "Portrayal in popular culture"
},
{
"paragraph_id": 117,
"text": "Élisabeth Vigée Le Brun is a dateable non-player character in the historically-based dating sim video game Ambition: A Minuet in Power published by Joy Manufacturing Co.",
"title": "Portrayal in popular culture"
},
{
"paragraph_id": 118,
"text": "Singer-songwriter Kelly Chase released the song \"Portrait of a Queen\" in 2021 to accompany the History Detective Podcast, Season 2, Episode 3 Marie Antionette's Portrait Artist: Vigée Le Brun.",
"title": "Portrayal in popular culture"
}
]
| Élisabeth Louise Vigée Le Brun, also known as Louise Élisabeth Vigée Le Brun or simply as Madame Le Brun, was a French painter who mostly specialized in portrait painting in the late 18th and early 19th centuries. Her artistic style is generally considered part of the aftermath of Rococo with elements of an adopted Neoclassical style. Her subject matter and color palette can be classified as Rococo, but her style is aligned with the emergence of Neoclassicism. Vigée Le Brun created a name for herself in Ancien Régime society by serving as the portrait painter to Marie Antoinette. She enjoyed the patronage of European aristocrats, actors, and writers, and was elected to art academies in ten cities. Some famous contemporary artists, such as Joshua Reynolds, viewed her as one of the greatest portraitists of her time, comparing her with the old Dutch masters. Vigée Le Brun created 660 portraits and 200 landscapes. In addition to many works in private collections, her paintings are owned by major museums, such as the Louvre in Paris, Hermitage Museum in Saint Petersburg, National Gallery in London, Metropolitan Museum of Art in New York, and many other collections in Europe and the United States. Her personal habitus was characterized by a high sensitivity to sound, sight and smell. Between 1835 and 1837, when Vigée Le Brun was in her eighties, with the help of her nieces Caroline Rivière and Eugénie Tripier Le Franc, she published her memoirs in three volumes (Souvenirs), some of which are in epistolary format. They also contain many pen portraits as well as advice for young portraitists. | 2001-10-19T02:33:14Z | 2023-12-28T06:32:06Z | [
"Template:Infobox artist",
"Template:Clarify",
"Template:Librivox author",
"Template:Efn",
"Template:Née",
"Template:Quote",
"Template:Citation needed",
"Template:Citation required",
"Template:Notelist",
"Template:Reflist",
"Template:Élisabeth Vigée Le Brun",
"Template:Short description",
"Template:Use dmy dates",
"Template:IPA",
"Template:Sic",
"Template:Cite book",
"Template:Cite news",
"Template:Authority control (arts)",
"Template:Blockquote",
"Template:Cite web",
"Template:Cite journal",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/%C3%89lisabeth_Vig%C3%A9e_Le_Brun |
9,949 | Epistle to the Galatians | The Epistle to the Galatians is the ninth book of the New Testament. It is a letter from Paul the Apostle to a number of Early Christian communities in Galatia. Scholars have suggested that this is either the Roman province of Galatia in southern Anatolia, or a large region defined by Galatians, an ethnic group of Celtic people in central Anatolia. The letter was originally written in Koine Greek and later translated into other languages.
In this letter, Paul is principally concerned with the controversy surrounding Gentile Christians and the Mosaic Law during the Apostolic Age. Paul argues that the Gentile Galatians do not need to adhere to the tenets of the Mosaic Law, particularly religious male circumcision, by contextualizing the role of the law in light of the revelation of Christ. The Epistle to the Galatians has exerted enormous influence on the history of Christianity, the development of Christian theology, and the study of the Apostle Paul.
The central dispute in the letter concerns the question of how Gentiles could convert to Christianity, which shows that this letter was written at a very early stage in church history, when the vast majority of Christians were Jewish or Jewish proselytes, which historians refer to as the Jewish Christians. Another indicator that the letter is early is that there is no hint in the letter of a developed organization within the Christian community at large. This puts it during the lifetime of Paul himself.
The original of the letter (autograph) is not known to survive. Papyrus 46, the earliest reasonably complete version available to scholars today, dates to approximately AD 200, around 150 years after the original was presumably drafted. This papyrus is fragmented in a few areas, causing some of the original text to be missing. But, as biblical scholar Bruce Metzger puts it, "through careful research relating to paper construction, handwriting development, and the established principles of textual criticism, scholars can be rather certain about where these errors and changes appeared and what the original text probably said."
In the past, a small number of scholars have questioned Paul's authorship of Galatians, such as Bruno Bauer, Abraham Loman, C. H. Weisse, and Frank R. McGuire. Currently, biblical scholars agree that Galatians is a true example of Paul's writing. The main arguments in favor of the authenticity of Galatians include its style and themes, which are common to the core letters of the Pauline corpus. George S. Duncan described its authenticity as "unquestioned. In every line it betrays its origin as a genuine letter of Paul." Moreover, Paul's possible description of the Council of Jerusalem gives a different point of view from the description in Acts 15:2–29, if it is, in fact, describing the Jerusalem Council.
A majority of scholars agree that Galatians was written between the late 40s and early 50s, although some date the original composition to c. 50–60. Jon Jordan notes that an interesting point to be made in the search for the dating of Galatians concerns whether or not it is a response to the Council of Jerusalem or a factor leading up to the Council. He writes, "did Paul's argument in Galatians flow out of the Jerusalem Council's decision, or did it come before the Jerusalem Council and possibly help shape that very decision?" It would have been enormously helpful to Paul's argument if he could have mentioned the decision of the Council of Jerusalem that Gentiles should not be circumcised. The absence of this argument from Paul strongly implies Galatians was written prior to the council. Since the council took place in 48–49 AD, and Paul evangelized South Galatia in 47–48 AD, the most plausible date for the writing of Galatians is 48 AD.
Paul's letter is addressed "to the churches of Galatia", but the location of these churches is a matter of debate. Most scholars agree that it is a geographical reference to the Roman province in central Asia Minor, which had been settled by immigrant Celts in the 270s BC and retained Gaulish features of culture and language in Paul's day. Acts records Paul traveling to the "region of Galatia and Phrygia", which lies immediately west of Galatia. Some scholars have argued that "Galatia" is an ethnic reference to Galatians, a Celtic people living in northern Asia Minor.
The New Testament indicates that Paul spent time personally in the cities of Galatia (Antioch of Pisidia, Iconium, Lystra and Derbe) during his missionary journeys. They seem to have been composed mainly of Gentile converts. After Paul's departure, the churches were led astray from Paul's trust/faith-centered teachings by individuals proposing "another gospel" (which centered on salvation through the Mosaic Law, so-called legalism), whom Paul saw as preaching a "different gospel" from what Paul had taught. The Galatians appear to have been receptive to the teaching of these newcomers, and the epistle is Paul's response to what he sees as their willingness to turn from his teaching.
The identity of these "opponents" is disputed. However, the majority of modern scholars view them as Jewish Christians, who taught that in order for converts to belong to the People of God, they must be subject to some or all of the Jewish Law (i.e. Judaizers). The letter indicates controversy concerning circumcision, Sabbath observance, and the Mosaic Covenant. It would appear, from Paul's response, that they cited the example of Abraham, who was circumcised as a mark of receiving the covenant blessings. They certainly appear to have questioned Paul's authority as an apostle, perhaps appealing to the greater authority of the Jerusalem church governed by James (brother of Jesus).
The North Galatian view holds that the epistle was written very soon after Paul's second visit to Galatia. In this view, the visit to Jerusalem, mentioned in Galatians 2:1–10, is identical with that of Acts 15, which is spoken of as a thing of the past. Consequently, the epistle seems to have been written after the Council of Jerusalem. The similarity between this epistle and the epistle to the Romans has led to the conclusion that they were both written at roughly the same time, during Paul's stay in Macedonia in roughly 56–57.
This third date takes the word "quickly" in Gal. 1:6 literally. John P. Meier suggests that Galatians was "written in the middle or late 50s, only a few years after the Antiochene incident he narrates". Eminent biblical scholar Helmut Koester also subscribes to the "North Galatian Hypothesis". Koester points out that the cities of Galatia in the north consist of Ankyra, Pessinus, and Gordium (of the Gordian Knot fame of Alexander the Great).
The South Galatian view holds that Paul wrote Galatians before the First Jerusalem Council, probably on his way to it, and that it was written to churches he had presumably planted during either his time in Tarsus (he would have traveled a short distance, since Tarsus is in Cilicia) after his first visit to Jerusalem as a Christian, or during his first missionary journey, when he traveled throughout southern Galatia. If it was written to the believers in South Galatia, it would likely have been written in 49.
A third theory is that Galatians 2:1–10 describes Paul and Barnabas' visit to Jerusalem described in Acts 11:30 and 12:25. This theory holds that the epistle was written before the Council was convened, possibly making it the earliest of Paul's epistles. According to this theory, the revelation mentioned (Gal. 2:2) corresponds with the prophecy of Agabus (Acts 11:27–28). This view holds that the private speaking about the gospel shared among the Gentiles precludes the Acts 15 visit, but fits perfectly with Acts 11. It further holds that continuing to remember the poor (Gal. 2:10) fits with the purpose of the Acts 11 visit, but not Acts 15.
In addition, the exclusion of any mention of the letter of Acts 15 is seen to indicate that such a letter did not yet exist, since Paul would have been likely to use it against the legalism confronted in Galatians. Finally, this view doubts Paul's confrontation of Peter (Gal. 2:11) would have been necessary after the events described in Acts 15. If this view is correct, the epistle should be dated somewhere around 47, depending on other difficult to date events, such as Paul's conversion.
Kirsopp Lake found this view less likely and wondered why it would be necessary for the Jerusalem Council (Acts 15) to take place at all if the issue were settled in Acts 11:30/12:25, as this view holds. Defenders of the view do not think it unlikely an issue of such magnitude would need to be discussed more than once. Renowned New Testament scholar J.B. Lightfoot also objected to this view since it "clearly implies that his [Paul's] Apostolic office and labours were well known and recognized before this conference."
Defenders of this view, such as Ronald Fung, disagree with both parts of Lightfoot's statement, insisting Paul received his "Apostolic Office" at his conversion (Gal. 1:15–17; Acts 9). Fung holds, then, that Paul's apostolic mission began almost immediately in Damascus (Acts 9:20). While accepting that Paul's apostolic anointing was likely only recognized by the Apostles in Jerusalem during the events described in Galatians 2/Acts 11:30, Fung does not see this as a problem for this theory.
Scholars have debated whether we can or should attempt to reconstruct both the backgrounds and arguments of the opponents to whom Paul would have been responding. Though these opponents have traditionally been designated as Judaizers, this classification has fallen out of favor in contemporary scholarship. Some instead refer to them as Agitators. While many scholars have claimed that Paul's opponents were circumcisionist Jewish followers of Jesus, the ability to make such determinations with a reasonable degree of certainty has been called into question. It has often been presumed that they traveled from Jerusalem, but some commentators have raised the question of whether they may have actually been insiders familiar with the dynamics of the community. Furthermore, some commentaries and articles have pointed out the inherent problems in mirror-reading, emphasizing that we simply do not have the evidence necessary to reconstruct the arguments of Paul's opponents. It is not sufficient to simply reverse his denials and assertions. This does not result in a coherent argument, nor can it possibly reflect the thought processes of his opponents accurately. It is nearly impossible to reconstruct the opponents from Paul's text because their representation is necessarily polemical. All we can say with certainty is that they supported a different position on Gentile relations with Jews than Paul did.
This outline is provided by Douglas J. Moo.
This epistle addresses the question of whether the Gentiles in Galatia were obligated to follow Mosaic Law to be part of the Christ community. After an introductory address, the apostle discusses the subjects which had occasioned the epistle.
In the first two chapters, Paul discusses his life before Christ and his early ministry, including interactions with other apostles in Jerusalem. This is the most extended discussion of Paul's past that we find in the Pauline letters (cf. Philippians 3:1–7). Some have read this autobiographical narrative as Paul's defense of his apostolic authority. Others, however, see Paul's telling of the narrative as making an argument to the Galatians about the nature of the gospel and the Galatians' own situation.
Chapter 3 exhorts the Galatian believers to stand fast in the faith as it is in Jesus. Paul engages in an exegetical argument, drawing upon the figure of Abraham and the priority of his faith to the covenant of circumcision. Paul explains that the law was introduced as a temporary measure, one that is no longer efficacious now that the seed of Abraham, Christ, has come. Chapter 4 then concludes with a summary of the topics discussed and with the benediction, followed by 5:1–6:10 teaching about the right use of their Christian freedom.
In the conclusion of the epistle, Paul wrote, "See with what large letters I am writing to you with my own hand." (Galatians 6:11, ESV) Regarding this conclusion, Lightfoot, in his Commentary on the epistle, says:
At this point the apostle takes the pen from his amanuensis, and the concluding paragraph is written with his own hand. From the time when letters began to be forged in his name it seems to have been his practice to close with a few words in his own handwriting, as a precaution against such forgeries ... In the present case he writes a whole paragraph, summing up the main lessons of the epistle in terse, eager, disjointed sentences. He writes it, too, in large, bold characters (Gr. pelikois grammasin), that his hand-writing may reflect the energy and determination of his soul.
Some commentators have postulated that Paul's large letters are owed to his poor eyesight, his deformed hands, or to other physical, mental, or psychological afflictions. Other commentators have attributed Paul's large letters to his poor education, his attempt to assert his authority, or his effort to emphasize his final words. Classics scholar Steve Reece has compared similar autographic subscriptions in thousands of Greek, Roman, and Jewish letters of this period and observes that large letters are a normal feature when senders of letters, regardless of their education, take the pen from their amanuensis and add a few words of greeting in their own hands.
Galatians also contains a catalogue of vices and virtues, a popular formulation of ancient Christian ethics.
Probably the most famous single statement made by Paul in the Epistle is in chapter 3, verse 28: "There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus." The debate surrounding that verse is legendary, and the two schools of thought are (1) that this applies only to the spiritual standing of people in the eyes of God and does not bear on social distinctions and gender roles on earth; and (2) that this is not just about our spiritual standing but is also about how we relate to each other and treat each other in the here and now.
Position (1) emphasises the immediate context of the verse and notes that it is embedded in a discussion about justification: our relationship with God. Position (2) reminds its critics that the "whole letter context" is very much about how people got on in the here and now together, and in fact the discussion about justification came out of an actual example of people treating other people differently (2:11ff).
Much variety exists in discussions of Paul's view of the Law in Galatians. Nicole Chibici-Revneanu noticed a difference in Paul's treatment of the Law in Galatians and Romans. In Galatians the law is described as the "oppressor", whereas in Romans Paul describes the Law as being just as much in need of the Spirit to set it free from sin as humans are. Peter Oakes argues that Galatians cannot be construed as depicting the law positively because the Law played the role it was meant to play in the scope of human history. Wolfgang Reinbold argues that, contrary to the popular reading of Paul, it was possible to keep the Law.
Regarding "under the law" (Gal. 3:23; 4:4, 5, 21; 5:18), Todd Wilson argues that "under the law" in Galatians was a "rhetorical abbreviation for 'under the curse of the law'". Regarding "works of the law" (Gal. 2:16), Robert Keith Rapa argues Paul is speaking of viewing Torah-observances as the means of salvation which he is seeking to combat in the Galatian congregation. Jacqueline de Roo noticed a similar phrase in the works found at Qumran and argues that "works of the law" is speaking of obedience to the Torah acting as a way of being atoned for. Michael Bachmann argues that this phrase is a mention of certain actions taken by Jewish people to distinguish themselves and perpetuate separation between themselves and Gentiles.
Much debate surrounds what Paul means by "law of Christ" in Galatians 6:2, a phrase that occurs only once in all of Paul's letters. As Schreiner explains, some scholars think that the "law of Christ" is the sum of Jesus's words, functioning as a "new Torah for believers". Others argue that the genitive in the "law of Christ" "should be understood as explanatory, i.e. the law which is Christ". Some focus on the relationship between the law of Christ and the Old Testament Decalogue. Still other scholars argue that whereas the "Mosaic law is abolished", the new "law of Christ fits with the Zion Torah", which "hails from Zion ... and is eschatological". Schreiner himself believes that the law of Christ is equivalent to Galatians 5:13–14's "law of love". According to Schreiner, when believers love others, "they behave as Christ did and fulfill his law".
As Thomas Schreiner explains, there is significant debate over the meaning of Peter's eating with the Gentiles—specifically, why might the eating have been considered wrong? E. P. Sanders argues that though Jews could eat in the same location with Gentiles, Jews did not want to consume food from the same vessels used by Gentiles. As Sanders explains, Galatia's Jews and Gentiles might have had to share the same cup and loaf (i.e. food from the same vessels). Other scholars such as James Dunn argue that Cephas was "already observing the basic food laws of the Torah" and then "men from James advocated an even stricter observance". Schreiner himself argues that Peter "actually ate unclean food—food prohibited by the OT law—before the men from James came". Depending on how one construes "eating with the Gentiles" in Galatians 2:12, one may reach different conclusions as to why Paul was so angry with Peter in Antioch.
There is debate about the meaning of the phrase δια πιστεως Χριστου in Galatians 2:16. Grammatically, this phrase can be interpreted either as an objective genitive "through faith in Jesus Christ" or as a subjective genitive "through the faith of Jesus Christ". There are theological ramifications to each position, but given the corpus of the Pauline literature, the majority of scholars have treated it as an objective genitive, translating it as "faith in Jesus Christ". Daniel Harrington writes, "the subjective genitive does not oppose or do away with the concept of faith in Christ. Rather, it reestablishes priorities. One is justified by the faith of Jesus Christ manifested in his obedience to God by his death upon the cross. It is on the basis of that faith that one believes in Christ".
Galatians 3:28 says, "There is no longer Jew or Greek, there is no longer slave or free, there is no longer male and female; for all of you are one in Christ Jesus." According to Norbert Baumert, Galatians 3:28 is Paul's declaration that one can be in relationship with Jesus no matter their gender. Judith Gundry-Volf argues for a more general approach, stating that one's gender does not provide any benefit or burden. Pamela Eisenbaum argues that Paul was exhorting his readers to be mindful in changing conduct in relationships that involved people of different status. Ben Witherington argues that Paul is combatting the position espoused by opponents who were attempting to influence Paul's community to return to the patriarchal standards held by the majority culture.
There are two different interpretations within the modern scholarship regarding the meaning and function of Paul's statement that "there is no longer male and female". The first interpretation states that Paul's words eliminate the biological differences between males and females and thus calls gender roles into question. Nancy Bedford says that this does not mean that there is no distinction between males and females; instead, it means that there is no room for gender hierarchy in the gospel. The second interpretation states that one must recognize the historical background of Paul's time. Jeremy Punt argues that although many scholars want to say that this verse signifies changing gender norms, it actually reflects on the patriarchal structure of Paul's time. In Paul's time, females and males were considered one sex, and the female was understood to be the inferior version of the male. It is under this one sex understanding that Paul says that "there is no longer male and female", and it therefore does not show an abolition of the boundary between genders because that boundary did not exist in Paul's time. At the same time, the women in Paul's time would also not necessarily have heard an ideology of gender equality from the message of Galatians 3:28 because of their subordinated status in society at that time. Punt argues that in Galatians 3:28, Paul's intention was to fix social conflicts rather than to alter gender norms. By stating the importance of becoming one in Christ, Paul tries to give his society a new identity, which is the identity in Christ, and he believes that this will fix the social conflicts.
Many scholars debate the meaning of the phrase "Israel of God" in Galatians 6:16, wherein Paul wishes for "peace and mercy" to be "even upon the Israel of God". As Schreiner explains, scholars debate whether "Israel of God" refers to ethnically Jewish believers "within the church of Jesus Christ", or to the church of Christ as a whole (Jewish and Gentiles all included). Those who believe that "Israel of God" only refers to ethnically Jewish believers, argue that had Paul meant the entire church, he would use the word "mercy" before "peace", because Paul "sees peace as the petition for the church, while mercy is the request for unredeemed Jews". Other scholars, such as G. K. Beale, argue that the Old Testament backdrop of Galatians 6:16—e.g. a verse such as Isaiah 54:10 wherein God promises mercy and peace to Israel—suggests that "the Israel of God" refers to a portion of the new, eschatological Israel "composed of Jews and Gentiles". Schreiner himself is sympathetic to this view, believing that treating the "Israel of God" as the church of Christ fits with the whole of Galatians, since Galatians pronounces "believers in Christ" as "the true sons of Abraham".
Luther's fundamental belief in justification by faith was formed in large part by his interpretation of Galatians. Masaki claims
At the heart of Luther's Lectures on Galatians is the doctrine of the proper distinction between law and gospel. While Luther's contemporary opponents failed to see this—whether they were the papists, enthusiasts, Anabaptists, Sacramentarians, or antinomians—law/gospel articulation defined Luther's legacy in the thinking of his colleagues, students, and generations after him.
This distinction of law and gospel has been imperative to Luther's understanding of Paul's Judaism as well, but modern scholarship has formed a new perspective on the Judaism of Paul's time. "Luther's treatment of Galatians has affected most interpretations of the letter, at least among Protestants, up to the present time... Problems with Luther's interpretations and perspectives have become evident in modern times, particularly in his understanding and treatment of Judaism in Paul's day."
This development has led to new schools of thought; Canadian religious historian Barrie Wilson, for example, points out in How Jesus Became Christian that Paul's Letter to the Galatians represents a sweeping rejection of Jewish Law (Torah). In so doing, Paul clearly takes his Christ movement out of the orbit of Judaism and into an entirely different milieu. Paul's stance constitutes a major contrast to the position of James, brother of Jesus, whose group in Jerusalem adhered to the observance of Torah.
Galatians 3:28 is one of the most controversial and influential verses in Galatians. There are three different pairs that Paul uses to elaborate his ideology. The first is "Jew or Greek", the second is "slave or free", and the third is "male and female"; Paul states that in Jesus Christ there is no longer a distinction between them. However, the meaning of this verse is not expounded upon further by Paul. In modern politics, the debate about the meaning of Galatians 3:28 is significant, as it is used by different people and scholars in order to make normative claims about sexuality, gender, and even marriage. The ongoing nature of this debate reveals that scholars still have not come to a unified conclusion regarding Paul's theology.
Online translations of the Epistle to Galatians:
Related articles: | [
{
"paragraph_id": 0,
"text": "The Epistle to the Galatians is the ninth book of the New Testament. It is a letter from Paul the Apostle to a number of Early Christian communities in Galatia. Scholars have suggested that this is either the Roman province of Galatia in southern Anatolia, or a large region defined by Galatians, an ethnic group of Celtic people in central Anatolia. The letter was originally written in Koine Greek and later translated into other languages.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In this letter, Paul is principally concerned with the controversy surrounding Gentile Christians and the Mosaic Law during the Apostolic Age. Paul argues that the Gentile Galatians do not need to adhere to the tenets of the Mosaic Law, particularly religious male circumcision, by contextualizing the role of the law in light of the revelation of Christ. The Epistle to the Galatians has exerted enormous influence on the history of Christianity, the development of Christian theology, and the study of the Apostle Paul.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The central dispute in the letter concerns the question of how Gentiles could convert to Christianity, which shows that this letter was written at a very early stage in church history, when the vast majority of Christians were Jewish or Jewish proselytes, which historians refer to as the Jewish Christians. Another indicator that the letter is early is that there is no hint in the letter of a developed organization within the Christian community at large. This puts it during the lifetime of Paul himself.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The original of the letter (autograph) is not known to survive. Papyrus 46, the earliest reasonably complete version available to scholars today, dates to approximately AD 200, around 150 years after the original was presumably drafted. This papyrus is fragmented in a few areas, causing some of the original text to be missing. But, as biblical scholar Bruce Metzger puts it, \"through careful research relating to paper construction, handwriting development, and the established principles of textual criticism, scholars can be rather certain about where these errors and changes appeared and what the original text probably said.\"",
"title": "Background"
},
{
"paragraph_id": 4,
"text": "In the past, a small number of scholars have questioned Paul's authorship of Galatians, such as Bruno Bauer, Abraham Loman, C. H. Weisse, and Frank R. McGuire. Currently, biblical scholars agree that Galatians is a true example of Paul's writing. The main arguments in favor of the authenticity of Galatians include its style and themes, which are common to the core letters of the Pauline corpus. George S. Duncan described its authenticity as \"unquestioned. In every line it betrays its origin as a genuine letter of Paul.\" Moreover, Paul's possible description of the Council of Jerusalem gives a different point of view from the description in Acts 15:2–29, if it is, in fact, describing the Jerusalem Council.",
"title": "Background"
},
{
"paragraph_id": 5,
"text": "A majority of scholars agree that Galatians was written between the late 40s and early 50s, although some date the original composition to c. 50–60. Jon Jordan notes that an interesting point to be made in the search for the dating of Galatians concerns whether or not it is a response to the Council of Jerusalem or a factor leading up to the Council. He writes, \"did Paul's argument in Galatians flow out of the Jerusalem Council's decision, or did it come before the Jerusalem Council and possibly help shape that very decision?\" It would have been enormously helpful to Paul's argument if he could have mentioned the decision of the Council of Jerusalem that Gentiles should not be circumcised. The absence of this argument from Paul strongly implies Galatians was written prior to the council. Since the council took place in 48–49 AD, and Paul evangelized South Galatia in 47–48 AD, the most plausible date for the writing of Galatians is 48 AD.",
"title": "Background"
},
{
"paragraph_id": 6,
"text": "Paul's letter is addressed \"to the churches of Galatia\", but the location of these churches is a matter of debate. Most scholars agree that it is a geographical reference to the Roman province in central Asia Minor, which had been settled by immigrant Celts in the 270s BC and retained Gaulish features of culture and language in Paul's day. Acts records Paul traveling to the \"region of Galatia and Phrygia\", which lies immediately west of Galatia. Some scholars have argued that \"Galatia\" is an ethnic reference to Galatians, a Celtic people living in northern Asia Minor.",
"title": "Background"
},
{
"paragraph_id": 7,
"text": "The New Testament indicates that Paul spent time personally in the cities of Galatia (Antioch of Pisidia, Iconium, Lystra and Derbe) during his missionary journeys. They seem to have been composed mainly of Gentile converts. After Paul's departure, the churches were led astray from Paul's trust/faith-centered teachings by individuals proposing \"another gospel\" (which centered on salvation through the Mosaic Law, so-called legalism), whom Paul saw as preaching a \"different gospel\" from what Paul had taught. The Galatians appear to have been receptive to the teaching of these newcomers, and the epistle is Paul's response to what he sees as their willingness to turn from his teaching.",
"title": "Background"
},
{
"paragraph_id": 8,
"text": "The identity of these \"opponents\" is disputed. However, the majority of modern scholars view them as Jewish Christians, who taught that in order for converts to belong to the People of God, they must be subject to some or all of the Jewish Law (i.e. Judaizers). The letter indicates controversy concerning circumcision, Sabbath observance, and the Mosaic Covenant. It would appear, from Paul's response, that they cited the example of Abraham, who was circumcised as a mark of receiving the covenant blessings. They certainly appear to have questioned Paul's authority as an apostle, perhaps appealing to the greater authority of the Jerusalem church governed by James (brother of Jesus).",
"title": "Background"
},
{
"paragraph_id": 9,
"text": "The North Galatian view holds that the epistle was written very soon after Paul's second visit to Galatia. In this view, the visit to Jerusalem, mentioned in Galatians 2:1–10, is identical with that of Acts 15, which is spoken of as a thing of the past. Consequently, the epistle seems to have been written after the Council of Jerusalem. The similarity between this epistle and the epistle to the Romans has led to the conclusion that they were both written at roughly the same time, during Paul's stay in Macedonia in roughly 56–57.",
"title": "Background"
},
{
"paragraph_id": 10,
"text": "This third date takes the word \"quickly\" in Gal. 1:6 literally. John P. Meier suggests that Galatians was \"written in the middle or late 50s, only a few years after the Antiochene incident he narrates\". Eminent biblical scholar Helmut Koester also subscribes to the \"North Galatian Hypothesis\". Koester points out that the cities of Galatia in the north consist of Ankyra, Pessinus, and Gordium (of the Gordian Knot fame of Alexander the Great).",
"title": "Background"
},
{
"paragraph_id": 11,
"text": "The South Galatian view holds that Paul wrote Galatians before the First Jerusalem Council, probably on his way to it, and that it was written to churches he had presumably planted during either his time in Tarsus (he would have traveled a short distance, since Tarsus is in Cilicia) after his first visit to Jerusalem as a Christian, or during his first missionary journey, when he traveled throughout southern Galatia. If it was written to the believers in South Galatia, it would likely have been written in 49.",
"title": "Background"
},
{
"paragraph_id": 12,
"text": "A third theory is that Galatians 2:1–10 describes Paul and Barnabas' visit to Jerusalem described in Acts 11:30 and 12:25. This theory holds that the epistle was written before the Council was convened, possibly making it the earliest of Paul's epistles. According to this theory, the revelation mentioned (Gal. 2:2) corresponds with the prophecy of Agabus (Acts 11:27–28). This view holds that the private speaking about the gospel shared among the Gentiles precludes the Acts 15 visit, but fits perfectly with Acts 11. It further holds that continuing to remember the poor (Gal. 2:10) fits with the purpose of the Acts 11 visit, but not Acts 15.",
"title": "Background"
},
{
"paragraph_id": 13,
"text": "In addition, the exclusion of any mention of the letter of Acts 15 is seen to indicate that such a letter did not yet exist, since Paul would have been likely to use it against the legalism confronted in Galatians. Finally, this view doubts Paul's confrontation of Peter (Gal. 2:11) would have been necessary after the events described in Acts 15. If this view is correct, the epistle should be dated somewhere around 47, depending on other difficult to date events, such as Paul's conversion.",
"title": "Background"
},
{
"paragraph_id": 14,
"text": "Kirsopp Lake found this view less likely and wondered why it would be necessary for the Jerusalem Council (Acts 15) to take place at all if the issue were settled in Acts 11:30/12:25, as this view holds. Defenders of the view do not think it unlikely an issue of such magnitude would need to be discussed more than once. Renowned New Testament scholar J.B. Lightfoot also objected to this view since it \"clearly implies that his [Paul's] Apostolic office and labours were well known and recognized before this conference.\"",
"title": "Background"
},
{
"paragraph_id": 15,
"text": "Defenders of this view, such as Ronald Fung, disagree with both parts of Lightfoot's statement, insisting Paul received his \"Apostolic Office\" at his conversion (Gal. 1:15–17; Acts 9). Fung holds, then, that Paul's apostolic mission began almost immediately in Damascus (Acts 9:20). While accepting that Paul's apostolic anointing was likely only recognized by the Apostles in Jerusalem during the events described in Galatians 2/Acts 11:30, Fung does not see this as a problem for this theory.",
"title": "Background"
},
{
"paragraph_id": 16,
"text": "Scholars have debated whether we can or should attempt to reconstruct both the backgrounds and arguments of the opponents to which Paul would have been responding. Though these opponents have traditionally been designated as Judaizers, this classification has fallen out of favor in contemporary scholarship. Some instead refer to them as Agitators. While many scholars have claimed that Paul's opponents were circumcisionist Jewish followers of Jesus, the ability to make such determinations with a reasonable degree of certainty has been called into question. It has often been presumed that they traveled from Jerusalem, but some commentators have raised the question of whether they may have actually been insiders familiar with the dynamics of the community. Furthermore, some commentaries and articles pointed out the inherent problems in mirror-reading, emphasizing that we simply do not have the evidence necessary to reconstruct the arguments of Paul's opponents. It is not sufficient to simply reverse his denials and assertions. This does not result in a coherent argument nor can it possibly reflect the thought processes of his opponents accurately. It is nearly impossible to reconstruct the opponents from Paul's text because their representation is necessarily polemical. All we can say with certainty is that they supported a different position of Gentile relations with Jewish folk than Paul did.",
"title": "Background"
},
{
"paragraph_id": 17,
"text": "This outline is provided by Douglas J. Moo.",
"title": "Outline"
},
{
"paragraph_id": 18,
"text": "This epistle addresses the question of whether the Gentiles in Galatia were obligated to follow Mosaic Law to be part of the Christ community. After an introductory address, the apostle discusses the subjects which had occasioned the epistle.",
"title": "Contents"
},
{
"paragraph_id": 19,
"text": "In the first two chapters, Paul discusses his life before Christ and his early ministry, including interactions with other apostles in Jerusalem. This is the most extended discussion of Paul's past that we find in the Pauline letters (cf. Philippians 3:1–7). Some have read this autobiographical narrative as Paul's defense of his apostolic authority. Others, however, see Paul's telling of the narrative as making an argument to the Galatians about the nature of the gospel and the Galatians' own situation.",
"title": "Contents"
},
{
"paragraph_id": 20,
"text": "Chapter 3 exhorts the Galatian believers to stand fast in the faith as it is in Jesus. Paul engages in an exegetical argument, drawing upon the figure of Abraham and the priority of his faith to the covenant of circumcision. Paul explains that the law was introduced as a temporary measure, one that is no longer efficacious now that the seed of Abraham, Christ, has come. Chapter 4 then concludes with a summary of the topics discussed and with the benediction, followed by 5:1–6:10 teaching about the right use of their Christian freedom.",
"title": "Contents"
},
{
"paragraph_id": 21,
"text": "In the conclusion of the epistle, Paul wrote, \"See with what large letters I am writing to you with my own hand.\" (Galatians 6:11, ESV) Regarding this conclusion, Lightfoot, in his Commentary on the epistle, says:",
"title": "Contents"
},
{
"paragraph_id": 22,
"text": "At this point the apostle takes the pen from his amanuensis, and the concluding paragraph is written with his own hand. From the time when letters began to be forged in his name it seems to have been his practice to close with a few words in his own handwriting, as a precaution against such forgeries ... In the present case he writes a whole paragraph, summing up the main lessons of the epistle in terse, eager, disjointed sentences. He writes it, too, in large, bold characters (Gr. pelikois grammasin), that his hand-writing may reflect the energy and determination of his soul.",
"title": "Contents"
},
{
"paragraph_id": 23,
"text": "Some commentators have postulated that Paul's large letters are owed to his poor eyesight, his deformed hands, or to other physical, mental, or psychological afflictions. Other commentators have attributed Paul's large letters to his poor education, his attempt to assert his authority, or his effort to emphasize his final words. Classics scholar Steve Reece has compared similar autographic subscriptions in thousands of Greek, Roman, and Jewish letters of this period and observes that large letters are a normal feature when senders of letters, regardless of their education, take the pen from their amanuensis and add a few words of greeting in their own hands.",
"title": "Contents"
},
{
"paragraph_id": 24,
"text": "Galatians also contains a catalogue of vices and virtues, a popular formulation of ancient Christian ethics.",
"title": "Contents"
},
{
"paragraph_id": 25,
"text": "Probably the most famous single statement made in the Epistle, by Paul, is in chapter 3, verse 28: \"There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus.\" The debate surrounding that verse is legend and the two schools of thought are (1) this only applies to the spiritual standing of people in the eyes of God, it does not implicate social distinctions and gender roles on earth; and (2) this is not just about our spiritual standing but is also about how we relate to each other and treat each other in the here and now.",
"title": "Contents"
},
{
"paragraph_id": 26,
"text": "Position (1) emphasises the immediate context of the verse and notes that it is embedded in a discussion about justification: our relationship with God. Position (2) reminds its critics that the \"whole letter context\" is very much about how people got on in the here and now together, and in fact the discussion about justification came out of an actual example of people treating other people differently (2:11ff).",
"title": "Contents"
},
{
"paragraph_id": 27,
"text": "Much variety exists in discussions of Paul's view of the Law in Galatians. Nicole Chibici-Revneanu noticed a difference in Paul's treatment of the Law in Galatians and Romans. In Galatians the law is described as the \"oppressor\" whereas in Romans Paul describes that the Law was as being just as much in need of the Spirit to set it free from sin as humans do. Peter Oakes argues that Galatians cannot be construed as depicting the law positively because the Law played the role it was meant to play in the scope of human history. Wolfgang Reinbold argues that, contrary to the popular reading of Paul, the Law was possible to keep.",
"title": "Major issues"
},
{
"paragraph_id": 28,
"text": "Regarding \"under the law\" (Gal. 3:23; 4:4, 5, 21; 5:18), Todd Wilson argues that \"under the law\" in Galatians was a \"rhetorical abbreviation for 'under the curse of the law'\". Regarding \"works of the law\" (Gal. 2:16), Robert Keith Rapa argues Paul is speaking of viewing Torah-observances as the means of salvation which he is seeking to combat in the Galatian congregation. Jacqueline de Roo noticed a similar phrase in the works found at Qumran and argues that \"works of the law\" is speaking of obedience to the Torah acting as a way of being atoned for. Michael Bachmann argues that this phrase is a mention of certain actions taken by Jewish people to distinguish themselves and perpetuate separation between themselves and Gentiles.",
"title": "Major issues"
},
{
"paragraph_id": 29,
"text": "Much debate surrounds what Paul means by \"law of Christ\" in Galatians 6:2, a phrase that occurs only once in all of Paul's letters. As Schreiner explains, some scholars think that the \"law of Christ\" is the sum of Jesus's words, functioning as a \"new Torah for believers\". Others argue that the \"genitive in the 'law of Christ' 'should be understood as explanatory, i.e. the law which is Christ'\". Some focus on the relationship between the law of Christ and the Old Testament Decalogue. Still other scholars argue that whereas the \"Mosaic law is abolished\", the new \"law of Christ fits with the Zion Torah\", which \"hails from Zion ... and is eschatological\". Schreiner himself believes that the law of Christ is equivalent to Galatians 5:13–14's \"law of love\". According to Schreiner, when believers love others, \"they behave as Christ did and fulfill his law\".",
"title": "Major issues"
},
{
"paragraph_id": 30,
"text": "As Thomas Schreiner explains, there is significant debate over the meaning of Peter's eating with the Gentiles—specifically, why might the eating have been considered wrong? E. P. Sanders argues that though Jews could eat in the same location with Gentiles, Jews did not want to consume food from the same vessels used by Gentiles. As Sanders explains, Galatia's Jews and Gentiles might have had to share the same cup and loaf (i.e. food from the same vessels). Other scholars such as James Dunn argue that Cephas was \"already observing the basic food laws of the Torah\" and then \"men from James advocated an even stricter observance\". Schreiner himself argues that Peter \"actually ate unclean food—food prohibited by the OT law—before the men from James came\". Depending on how one construes \"eating with the Gentiles\" in Galatians 2:12, one may reach different conclusions as to why Paul was so angry with Peter in Antioch.",
"title": "Major issues"
},
{
"paragraph_id": 31,
"text": "There is debate about the meaning of the phrase δια πιστεος Χριστου in Galatians 2:16. Grammatically, this phrase can be interpreted either as an objective genitive \"through faith in Jesus Christ\" or as a subjective genitive \"through the faith of Jesus Christ\". There are theological ramifications to each position, but given the corpus of the Pauline literature, the majority of scholars have treated as an objective genitive, translating it as \"faith in Jesus Christ\". Daniel Harrington writes, \"the subjective genitive does not oppose or do away with the concept of faith in Christ. Rather, it reestablishes priorities. One is justified by the faith of Jesus Christ manifested in his obedience to God by his death upon the cross. It is on the basis of that faith that one believes in Christ\".",
"title": "Major issues"
},
{
"paragraph_id": 32,
"text": "Galatians 3:28 says, \"There is no longer Jew or Greek, there is no longer slave or free, there is no longer male and female; for all of you are one in Christ Jesus.\" According to Norbert Baumert, Galatians 3:28 is Paul's declaration that one can be in relationship with Jesus no matter their gender. Judith Gundry-Volf argues for a more general approach, stating that one's gender does not provide any benefit or burden. Pamela Eisenbaum argues that Paul was exhorting his readers to be mindful in changing conduct in relationships that involved people of different status. Ben Witherington argues that Paul is combatting the position espoused by opponents who were attempting to influence Paul's community to return to the patriarchal standards held by the majority culture.",
"title": "Major issues"
},
{
"paragraph_id": 33,
"text": "There are two different interpretations within the modern scholarship regarding the meaning and function of Paul's statement that \"there is no longer male and female\". The first interpretation states that Paul's words eliminate the biological differences between males and females and thus calls gender roles into question. Nancy Bedford says that this does not mean that there is no distinction between males and females; instead, it means that there is no room for gender hierarchy in the gospel. The second interpretation states that one must recognize the historical background of Paul's time. Jeremy Punt argues that although many scholars want to say that this verse signifies changing gender norms, it actually reflects on the patriarchal structure of Paul's time. In Paul's time, females and males were considered one sex, and the female was understood to be the inferior version of the male. It is under this one sex understanding that Paul says that \"there is no longer male and female\", and it therefore does not show an abolition of the boundary between genders because that boundary did not exist in Paul's time. At the same time, the women in Paul's time would also not necessarily have heard an ideology of gender equality from the message of Galatians 3:28 because of their subordinated status in society at that time. Punt argues that in Galatians 3:28, Paul's intention was to fix social conflicts rather than to alter gender norms. By stating the importance of becoming one in Christ, Paul tries to give his society a new identity, which is the identity in Christ, and he believes that this will fix the social conflicts.",
"title": "Major issues"
},
{
"paragraph_id": 34,
"text": "Many scholars debate the meaning of the phrase \"Israel of God\" in Galatians 6:16, wherein Paul wishes for \"peace and mercy\" to be \"even upon the Israel of God\". As Schreiner explains, scholars debate whether \"Israel of God\" refers to ethnically Jewish believers \"within the church of Jesus Christ\", or to the church of Christ as a whole (Jewish and Gentiles all included). Those who believe that \"Israel of God\" only refers to ethnically Jewish believers, argue that had Paul meant the entire church, he would use the word \"mercy\" before \"peace\", because Paul \"sees peace as the petition for the church, while mercy is the request for unredeemed Jews\". Other scholars, such as G. K. Beale, argue that the Old Testament backdrop of Galatians 6:16—e.g. a verse such as Isaiah 54:10 wherein God promises mercy and peace to Israel—suggests that \"the Israel of God\" refers to a portion of the new, eschatological Israel \"composed of Jews and Gentiles\". Schreiner himself is sympathetic to this view, believing that treating the \"Israel of God\" as the church of Christ fits with the whole of Galatians, since Galatians pronounces \"believers in Christ\" as \"the true sons of Abraham\".",
"title": "Major issues"
},
{
"paragraph_id": 35,
"text": "Luther's fundamental belief in justification by faith was formed in large part by his interpretation of Galatians. Masaki claims",
"title": "Significance and reception"
},
{
"paragraph_id": 36,
"text": "At the heart of Luther's Lectures on Galatians is the doctrine of the proper distinction between law and gospel. While Luther's contemporary opponents failed to see this—whether they were the papists, enthusiasts, Anabaptists, Sacramentarians, or antinomians—law/gospel articulation defined Luther's legacy in the thinking of his colleagues, students, and generations after him.",
"title": "Significance and reception"
},
{
"paragraph_id": 37,
"text": "This distinction of law and gospel has been imperative to Luther's understanding of Paul's Judaism as well, but modern scholarship has formed a new perspective of the Judaism of Paul's time. \"Luther's treatment of Galatians has affected most interpretations of the letter, at least among Protestants, up to the present time... Problems with Luther's interpretations and perspectives have become evident in modern times, particularly in his understanding and treatment of Judaism in Paul's day.",
"title": "Significance and reception"
},
{
"paragraph_id": 38,
"text": "This development led to some schools of thought, such as Canadian religious historian Barrie Wilson who points out in How Jesus Became Christian, how Paul's Letter to the Galatians represents a sweeping rejection of Jewish Law (Torah). In so doing, Paul clearly takes his Christ movement out of the orbit of Judaism and into an entirely different milieu. Paul's stance constitutes a major contrast to the position of James, brother of Jesus, whose group in Jerusalem adhered to the observance of Torah.",
"title": "Significance and reception"
},
{
"paragraph_id": 39,
"text": "Galatians 3:28 is one of the most controversial and influential verses in Galatians. There are three different pairs that Paul uses to elaborate his ideology. The first one is \"Jew or Greek\", the second one is \"slave or free\", and the third one is \"male and female\", Paul states that in Jesus Christ there is no longer a distinction between them. However, the meaning of this verse is not expounded upon further by Paul. In modern politics, the debate about the meaning of Galatians 3:28 is significant, as it is used by different people and scholars in order to make normative claims about sexuality, gender, and even marriage. The ongoing nature of this debate reveals that scholars still have not come to a unified conclusion regarding Paul's theology.",
"title": "Significance and reception"
},
{
"paragraph_id": 40,
"text": "Online translations of the Epistle to Galatians:",
"title": "External links"
},
{
"paragraph_id": 41,
"text": "Related articles:",
"title": "External links"
}
]
| The Epistle to the Galatians is the ninth book of the New Testament. It is a letter from Paul the Apostle to a number of Early Christian communities in Galatia. Scholars have suggested that this is either the Roman province of Galatia in southern Anatolia, or a large region defined by Galatians, an ethnic group of Celtic people in central Anatolia. The letter was originally written in Koine Greek and later translated into other languages. In this letter, Paul is principally concerned with the controversy surrounding Gentile Christians and the Mosaic Law during the Apostolic Age. Paul argues that the Gentile Galatians do not need to adhere to the tenets of the Mosaic Law, particularly religious male circumcision, by contextualizing the role of the law in light of the revelation of Christ. The Epistle to the Galatians has exerted enormous influence on the history of Christianity, the development of Christian theology, and the study of the Apostle Paul. The central dispute in the letter concerns the question of how Gentiles could convert to Christianity, which shows that this letter was written at a very early stage in church history, when the vast majority of Christians were Jewish or Jewish proselytes, which historians refer to as the Jewish Christians. Another indicator that the letter is early is that there is no hint in the letter of a developed organization within the Christian community at large. This puts it during the lifetime of Paul himself. | 2001-10-19T05:10:58Z | 2023-12-23T13:38:03Z | [
"Template:Lang",
"Template:S-ttl",
"Template:Wikisource",
"Template:Cite EB1911",
"Template:S-aft",
"Template:S-start",
"Template:Books of the Bible",
"Template:See also",
"Template:Blockquote",
"Template:Cite journal",
"Template:Epistle to the Galatians",
"Template:Essay-like",
"Template:'\"",
"Template:Librivox book",
"Template:Bibleref",
"Template:Citation",
"Template:EBD",
"Template:Efn",
"Template:Bibleverse",
"Template:Reflist",
"Template:S-bef",
"Template:S-end",
"Template:Paul",
"Template:Circa",
"Template:S-hou",
"Template:Bibleref2",
"Template:Wikiquote",
"Template:Dead link",
"Template:Books of the New Testament",
"Template:Sfn",
"Template:Cite web",
"Template:Cite book",
"Template:Webarchive",
"Template:Authority control",
"Template:Short description",
"Template:Main",
"Template:Notelist"
]
| https://en.wikipedia.org/wiki/Epistle_to_the_Galatians |
9,950 | Epistle to the Philippians | The Epistle to the Philippians is a Pauline epistle of the New Testament of the Christian Bible. The epistle is attributed to Paul the Apostle and Timothy is named with him as co-author or co-sender. The letter is addressed to the Christian church in Philippi. Paul, Timothy, Silas (and perhaps Luke) first visited Philippi in Greece (Macedonia) during Paul's second missionary journey from Antioch, which occurred between approximately 49 and 51 AD. In the account of his visit in the Acts of the Apostles, Paul and Silas are accused of "disturbing the city".
There is a general consensus that Philippians consists of authentically Pauline material, and that the epistle is a composite of multiple letter fragments from Paul to the church in Philippi. These letters could have been written from Ephesus in 52–55 AD or Caesarea Maritima in 57–59, but the most likely city of provenance is Rome, around 62 AD, or about 10 years after Paul's first visit to Philippi.
Starting in the 1960s, a general consensus has emerged among biblical scholars that Philippians was not written as one unified letter, but is rather a compilation of fragments from three separate letters from Paul to the church in Philippi. According to Philip Sellew, Philippians contains the following letter fragments:
In support of the idea that Philippians is a composite work, scholars point to the abrupt shifts in tone and topic within the text. There also seem to be chronological inconsistencies from one chapter to the next concerning Paul's associate Epaphroditus:
Another argument against unity has been found in the swiftly changing fortunes of Epaphroditus: this associate of Paul is at the point of death in chapter two (Phil 2:25–30), where seemingly he has long been bereft of the company of the Philippian Christians; Paul says that he intended to send him back to Philippi after this apparently lengthy, or at least near-fatal separation. Two chapters later, however, at the end of the canonical letter, Paul notes that Epaphroditus had only now just arrived at Paul's side, carrying a gift from Philippi, a reference found toward the close of the "thank-you note" as a formulaic acknowledgement of receipt at Phil 4:18.
These letter fragments likely would have been edited into a single document by the first collector of the Pauline corpus, although there is no clear consensus among scholars regarding who this initial collector may have been, or when the first collection of Pauline epistles may have been published.
Today, a number of scholars believe that Philippians is a composite of multiple letter fragments. According to the theologian G. Walter Hansen, "The traditional view that Philippians was composed as one letter in the form presented in the NT [New Testament] can no longer claim widespread support." Nevertheless, many scholars continue to argue for the unity of Philippians.
Regardless of the literary unity of the letter, scholars agree that the material that was compiled into the Epistle to the Philippians was originally composed in Greek, sometime during the 50s or early 60s AD.
It is uncertain where Paul was when he wrote the letter(s) that make up Philippians. Internal evidence in the letter itself points clearly to it being composed while Paul was in custody, but it is unclear which period of imprisonment the letter refers to. If the testimony of the Acts of the Apostles is to be trusted, candidates would include the Roman imprisonment at the end of Acts, and the earlier Caesarean imprisonment. Any identification of the place of writing of Philippians is complicated by the fact that some scholars view Acts as being an unreliable source of information about the early Church.
Jim Reiher has suggested that the letters could stem from the second period of Roman imprisonment attested by early church fathers. The main reasons suggested for a later date include:
In Chapters 1 and 2 of Philippians (Letter B), Paul sends word to the Philippians of his upcoming sentence in Rome and of his optimism in the face of death, along with exhortations to imitate his capacity to rejoice in the Lord despite one's circumstances. Paul assures the Philippians that his imprisonment is actually helping to spread the Christian message, rather than hindering it. He also expresses gratitude for the devotion and heroism of Epaphroditus, whom the Philippian church had sent to visit Paul and bring him gifts. Some time during his visit with Paul, Epaphroditus apparently contracted a life-threatening, debilitating illness, but he recovered before being sent back to the Philippians.
In Chapter 3 (Letter C), Paul warns the Philippians about those Christians who insist that circumcision is necessary for salvation. He testifies that while he once was a devout Pharisee and follower of the Jewish law, he now considers these things to be worthless and worldly compared to the gospel of Jesus.
In Chapter 4, Paul urges the Philippians to resolve conflicts within their fellowship. In the latter part of the chapter (Letter A), Paul expresses his gratitude for the gifts that the Philippians had sent him, and assures them that God will reward them for their generosity.
Throughout the epistle there is a sense of optimism. Paul is hopeful that he will be released, and on this basis he promises to send Timothy to the Philippians for ministry, and also expects to pay them a personal visit.
Chapter 2 of the epistle contains a famous poem describing the nature of Christ and his act of redemption:
Who, though he was in the form of God,
But he emptied himself
And being found in appearance as a human
Therefore God highly exalted him
That at the name of Jesus
And every tongue should confess
Due to its unique poetic style, Bart D. Ehrman suggests that this passage constitutes an early Christian poem that was composed by someone else prior to Paul's writings, as early as the mid-late 30s AD and was later used by Paul in his epistle. While the passage is often called a "hymn", some scholars believe this to be an inappropriate name since it does not have a rhythmic or metrical structure in the original Greek.
The Christ poem is significant because it strongly suggests that there were very early Christians who understood Jesus to be a pre-existent celestial being, who chose to take on human form, rather than a human who was later exalted to a divine status.
Importantly, while the author of the poem did believe that Jesus existed in heaven before his physical incarnation, this does not necessarily mean that he was believed to be equal to God the Father prior to his death and resurrection. This largely depends on how the Greek word harpagmon (ἁρπαγμόν, accusative form of ἁρπαγμός) is translated in verse 6 ("Something to be grasped after / exploited"). If harpagmon is rendered as "something to be exploited," as it is in many Christian Bible translations, then the implication is that Christ was already equal to God prior to his incarnation. But Bart Ehrman and others have argued that the correct translation is in fact "something to be grasped after," implying that Jesus was not equal to God before his resurrection. Outside of this passage, harpagmon and related words were almost always used to refer to something that a person doesn't yet possess but tries to acquire.
It is widely agreed by interpreters, however, that the Christ poem depicts Jesus as equal to God after his resurrection. This is because the last two stanzas quote Isaiah 45:22–23 ("Every knee shall bow, every tongue confess"), which in the original context clearly refers to God the Father.
Online translations of the Epistle to the Philippians:
Online Study of Philippians:
Related articles: | [
{
"paragraph_id": 0,
"text": "The Epistle to the Philippians is a Pauline epistle of the New Testament of the Christian Bible. The epistle is attributed to Paul the Apostle and Timothy is named with him as co-author or co-sender. The letter is addressed to the Christian church in Philippi. Paul, Timothy, Silas (and perhaps Luke) first visited Philippi in Greece (Macedonia) during Paul's second missionary journey from Antioch, which occurred between approximately 49 and 51 AD. In the account of his visit in the Acts of the Apostles, Paul and Silas are accused of \"disturbing the city\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "There is a general consensus that Philippians consists of authentically Pauline material, and that the epistle is a composite of multiple letter fragments from Paul to the church in Philippi. These letters could have been written from Ephesus in 52–55 AD or Caesarea Maritima in 57–59, but the most likely city of provenance is Rome, around 62 AD, or about 10 years after Paul's first visit to Philippi.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Starting in the 1960s, a general consensus has emerged among biblical scholars that Philippians was not written as one unified letter, but is rather a compilation of fragments from three separate letters from Paul to the church in Philippi. According to Philip Sellew, Philippians contains the following letter fragments:",
"title": "Composition"
},
{
"paragraph_id": 3,
"text": "In support of the idea that Philippians is a composite work, scholars point to the abrupt shifts in tone and topic within the text. There also seem to be chronological inconsistencies from one chapter to the next concerning Paul's associate Epaphroditus:",
"title": "Composition"
},
{
"paragraph_id": 4,
"text": "Another argument against unity has been found in the swiftly changing fortunes of Epaphroditus: this associate of Paul is at the point of death in chapter two (Phil 2:25–30), where seemingly he has long been bereft of the company of the Philippian Christians; Paul says that he intended to send him back to Philippi after this apparently lengthy, or at least near-fatal separation. Two chapters later, however, at the end of the canonical letter, Paul notes that Epaphroditus had only now just arrived at Paul's side, carrying a gift from Philippi, a reference found toward the close of the \"thank-you note\" as a formulaic acknowledgement of receipt at Phil 4:18.",
"title": "Composition"
},
{
"paragraph_id": 5,
"text": "These letter fragments likely would have been edited into a single document by the first collector of the Pauline corpus, although there is no clear consensus among scholars regarding who this initial collector may have been, or when the first collection of Pauline epistles may have been published.",
"title": "Composition"
},
{
"paragraph_id": 6,
"text": "Today, a number of scholars believe that Philippians is a composite of multiple letter fragments. According to the theologian G. Walter Hansen, \"The traditional view that Philippians was composed as one letter in the form presented in the NT [New Testament] can no longer claim widespread support.\" Nevertheless, many scholars continue to argue for the unity of Philippians.",
"title": "Composition"
},
{
"paragraph_id": 7,
"text": "Regardless of the literary unity of the letter, scholars agree that the material that was compiled into the Epistle to the Philippians was originally composed in Greek, sometime during the 50s or early 60s AD.",
"title": "Composition"
},
{
"paragraph_id": 8,
"text": "It is uncertain where Paul was when he wrote the letter(s) that make up Philippians. Internal evidence in the letter itself points clearly to it being composed while Paul was in custody, but it is unclear which period of imprisonment the letter refers to. If the testimony of the Acts of the Apostles is to be trusted, candidates would include the Roman imprisonment at the end of Acts, and the earlier Caesarean imprisonment. Any identification of the place of writing of Philippians is complicated by the fact that some scholars view Acts as being an unreliable source of information about the early Church.",
"title": "Composition"
},
{
"paragraph_id": 9,
"text": "Jim Reiher has suggested that the letters could stem from the second period of Roman imprisonment attested by early church fathers. The main reasons suggested for a later date include:",
"title": "Composition"
},
{
"paragraph_id": 10,
"text": "In Chapters 1 and 2 of Philippians (Letter B), Paul sends word to the Philippians of his upcoming sentence in Rome and of his optimism in the face of death, along with exhortations to imitate his capacity to rejoice in the Lord despite one's circumstances. Paul assures the Philippians that his imprisonment is actually helping to spread the Christian message, rather than hindering it. He also expresses gratitude for the devotion and heroism of Epaphroditus, who the Philippian church had sent to visit Paul and bring him gifts. Some time during his visit with Paul, Epaphroditus apparently contracted some life-threatening debilitating illness. But he recovers before being sent back to the Philippians.",
"title": "Contents"
},
{
"paragraph_id": 11,
"text": "In Chapter 3 (Letter C), Paul warns the Philippians about those Christians who insist that circumcision is necessary for salvation. He testifies that while he once was a devout Pharisee and follower of the Jewish law, he now considers these things to be worthless and worldly compared to the gospel of Jesus.",
"title": "Contents"
},
{
"paragraph_id": 12,
"text": "In Chapter 4, Paul urges the Philippians to resolve conflicts within their fellowship. In the latter part of the chapter (Letter A), Paul expresses his gratitude for the gifts that the Philippians had sent him, and assures them that God will reward them for their generosity.",
"title": "Contents"
},
{
"paragraph_id": 13,
"text": "Throughout the epistle there is a sense of optimism. Paul is hopeful that he will be released, and on this basis he promises to send Timothy to the Philippians for ministry, and also expects to pay them a personal visit.",
"title": "Contents"
},
{
"paragraph_id": 14,
"text": "Chapter 2 of the epistle contains a famous poem describing the nature of Christ and his act of redemption:",
"title": "Christ poem"
},
{
"paragraph_id": 15,
"text": "Who, though he was in the form of God,",
"title": "Christ poem"
},
{
"paragraph_id": 16,
"text": "But he emptied himself",
"title": "Christ poem"
},
{
"paragraph_id": 17,
"text": "And being found in appearance as a human",
"title": "Christ poem"
},
{
"paragraph_id": 18,
"text": "Therefore God highly exalted him",
"title": "Christ poem"
},
{
"paragraph_id": 19,
"text": "That at the name of Jesus",
"title": "Christ poem"
},
{
"paragraph_id": 20,
"text": "And every tongue should confess",
"title": "Christ poem"
},
{
"paragraph_id": 21,
"text": "Due to its unique poetic style, Bart D. Ehrman suggests that this passage constitutes an early Christian poem that was composed by someone else prior to Paul's writings, as early as the mid-late 30s AD and was later used by Paul in his epistle. While the passage is often called a \"hymn\", some scholars believe this to be an inappropriate name since it does not have a rhythmic or metrical structure in the original Greek.",
"title": "Christ poem"
},
{
"paragraph_id": 22,
"text": "The Christ poem is significant because it strongly suggests that there were very early Christians who understood Jesus to be a pre-existent celestial being, who chose to take on human form, rather than a human who was later exalted to a divine status.",
"title": "Christ poem"
},
{
"paragraph_id": 23,
"text": "Importantly, while the author of the poem did believe that Jesus existed in heaven before his physical incarnation, this does not necessarily mean that he was believed to be equal to God the Father prior to his death and resurrection. This largely depends on how the Greek word harpagmon (ἁρπαγμόν, accusative form of ἁρπαγμός) is translated in verse 6 (\"Something to be grasped after / exploited\"). If harpagmon is rendered as \"something to be exploited,\" as it is in many Christian Bible translations, then the implication is that Christ was already equal to God prior to his incarnation. But Bart Ehrman and others have argued that the correct translation is in fact \"something to be grasped after,\" implying that Jesus was not equal to God before his resurrection. Outside of this passage, harpagmon and related words were almost always used to refer to something that a person doesn't yet possess but tries to acquire.",
"title": "Christ poem"
},
{
"paragraph_id": 24,
"text": "It is widely agreed by interpreters, however, that the Christ poem depicts Jesus as equal to God after his resurrection. This is because the last two stanzas quote Isaiah 45:22–23: (\"Every knee shall bow, every tongue confess\"), which in the original context clearly refers to God the Father.",
"title": "Christ poem"
},
{
"paragraph_id": 25,
"text": "Online translations of the Epistle to the Philippians:",
"title": "External links"
},
{
"paragraph_id": 26,
"text": "Online Study of Philippians:",
"title": "External links"
},
{
"paragraph_id": 27,
"text": "Related articles:",
"title": "External links"
}
]
| The Epistle to the Philippians is a Pauline epistle of the New Testament of the Christian Bible. The epistle is attributed to Paul the Apostle and Timothy is named with him as co-author or co-sender. The letter is addressed to the Christian church in Philippi. Paul, Timothy, Silas first visited Philippi in Greece (Macedonia) during Paul's second missionary journey from Antioch, which occurred between approximately 49 and 51 AD. In the account of his visit in the Acts of the Apostles, Paul and Silas are accused of "disturbing the city". There is a general consensus that Philippians consists of authentically Pauline material, and that the epistle is a composite of multiple letter fragments from Paul to the church in Philippi. These letters could have been written from Ephesus in 52–55 AD or Caesarea Maritima in 57–59, but the most likely city of provenance is Rome, around 62 AD, or about 10 years after Paul's first visit to Philippi. | 2001-10-19T05:14:44Z | 2023-11-03T15:32:18Z | [
"Template:Short description",
"Template:Wikiquote",
"Template:S-bef",
"Template:Cite journal",
"Template:Books of the New Testament",
"Template:Lang",
"Template:Cite book",
"Template:Cite EB1911",
"Template:Authority control",
"Template:Efn",
"Template:Div col",
"Template:CathEncy",
"Template:Bibleverse",
"Template:Eastons",
"Template:Books of the Bible",
"Template:Redirect-distinguish",
"Template:Cn",
"Template:Quote",
"Template:Rp",
"Template:S-ttl",
"Template:Epistle to the Philippians",
"Template:Blockquote",
"Template:Librivox book",
"Template:S-aft",
"Template:Webarchive",
"Template:S-start",
"Template:S-end",
"Template:Paul",
"Template:Notelist",
"Template:Cite web",
"Template:S-hou",
"Template:Div col end",
"Template:Reflist",
"Template:Bibleref2"
]
| https://en.wikipedia.org/wiki/Epistle_to_the_Philippians |
9,951 | Epistle to the Colossians | The Epistle to the Colossians is the twelfth book of the New Testament. It was written, according to the text, by Paul the Apostle and Timothy, and addressed to the church in Colossae, a small Phrygian city near Laodicea and approximately 100 miles (160 km) from Ephesus in Asia Minor.
Some scholars have increasingly questioned Paul's authorship and attributed the letter to an early follower instead, but others still defend it as authentic. If Paul was the author, he probably used an amanuensis, or secretary, in writing the letter (Col 4:18), possibly Timothy.
During the first generation after Jesus, Paul's epistles to various churches helped establish early Christian theology. According to Bruce Metzger, it was written in the 60s while Paul was in prison. Colossians is similar to Ephesians, also written at this time. Some critical scholars have ascribed the epistle to an early follower of Paul, writing as Paul. The epistle's description of Christ as pre-eminent over creation marks it, for some scholars, as representing an advanced christology not present during Paul's lifetime. Defenders of Pauline authorship cite the work's similarities to the letter to Philemon, which is broadly accepted as authentic.
The letter's authors claim to be Paul and Timothy, but authorship began to be authoritatively questioned during the 19th century. Pauline authorship was held to by many of the early church's prominent theologians, such as Irenaeus, Clement of Alexandria, Tertullian, Origen of Alexandria and Eusebius.
However, as with several epistles attributed to Paul, critical scholarship disputes this claim. One ground is that the epistle's language doesn't seem to match Paul's, with 48 words appearing in Colossians that are found nowhere else in his writings, 33 of which occur nowhere else in the New Testament. A second ground is that the epistle features a strong use of liturgical-hymnic style which appears nowhere else in Paul's work to the same extent. A third is that the epistle's themes related to Christ, eschatology and the church seem to have no parallel in Paul's undisputed works.
Advocates of Pauline authorship defend the differences that there are between elements in this letter and those commonly considered the genuine work of Paul (e.g. 1 Thessalonians). It is argued that these differences can come by human variability, such as by growth in theological knowledge over time, different occasion for writing, as well as use of different secretaries (or amanuenses) in composition. As it is usually pointed out by the same authors who note the differences in language and style, the number of words foreign to the New Testament and Paul is no greater in Colossians than in the undisputed Pauline letters (Galatians, of similar length, has 35 hapax legomena). In regard to the style, as Norman Perrin, who argues for pseudonymity, notes, "The letter does employ a great deal of traditional material and it can be argued that this accounts for the non-Pauline language and style. If this is the case, the non-Pauline language and style are not indications of pseudonymity." Not only that, but it has been noted that Colossians has indisputably Pauline stylistic characteristics, found nowhere else in the New Testament. Advocates of Pauline authorship also argue that the differences between Colossians and the rest of the New Testament are not as great as they are purported to be.
The connection between Colossians and Philemon, an undisputed epistle, is used as evidence by those who advocate Pauline authorship: both letters mention Archippus (Philemon 2, Colossians 4:17), and the greetings of both epistles bear similar names (Philemon 23–24, Colossians 4:10–14).
As theologian Stephen D. Morrison points out in context, "Biblical scholars are divided over the authorship of Ephesians and Colossians." He provides as an example the reflection of theologian Karl Barth on the question. While acknowledging the validity of many questions regarding Pauline authorship, Barth was inclined to defend it. Nevertheless, he concluded that it didn't much matter one way or the other to him. It was more important to focus on "Quid scriptum est" (What is written) than "Quis scripseris" (Who wrote it). "It is enough to know that someone, at any rate, wrote Ephesians (why not Paul?), 30 to 60 years after Christ’s death (hardly any later than that, since it is attested by Ignatius, Polycarp, and Justin), someone who understood Paul well and developed the apostle’s ideas with conspicuous loyalty as well as originality.”
If the text was written by Paul, it could have been written at Rome during his first imprisonment. Paul would likely have composed it at roughly the same time that he wrote Philemon and Ephesians, as all three letters were sent with Tychicus and Onesimus. A date of 62 AD assumes that the imprisonment Paul speaks of is his Roman imprisonment that followed his voyage to Rome.
Other scholars have suggested that it was written from Caesarea or Ephesus.
If the letter is not considered to be an authentic part of the Pauline corpus, then it might be dated during the late 1st century, possibly as late as AD 90.
Colossae is in the same region as the seven churches of the Book of Revelation. In Colossians there is mention of local brethren in Colossae, Laodicea, and Hierapolis. Colossae was approximately 12 miles (19 km) from Laodicea and 14 miles (23 km) from Hierapolis.
References to "the elements" and the only mention of the word "philosophy" in the New Testament have led scholar Norman DeWitt to conclude that early Christians at Colossae must have been under the influence of Epicurean philosophy, which taught atomism. The Epistle to the Colossians proclaimed Christ to be the supreme power over the entire universe, and urged Christians to lead godly lives. The letter consists of two parts: first a doctrinal section, then a second regarding conduct. Those who believe that the motivation of the letter was a growing heresy in the church see both sections of the letter as opposing false teachers who have been spreading error in the congregation. Others see both sections of the letter as primarily encouragement and edification for a developing church.
I. Introduction (1:1–14)
II. The Supremacy of Christ (1:15–23)
III. Paul's Labor for the Church (1:24–2:7)
IV. Freedom from Human Regulations through Life with Christ (2:8–23)
V. Rules for Holy Living (3:1–4:6)
VI. Final Greetings (4:7–18)
The doctrinal part of the letter is found in the first two chapters. The main theme is developed in chapter 2, with a warning against being drawn away from him in whom dwelt all the fullness of the deity, and who was the head of all spiritual powers. Colossians 2:8–15 offers firstly a "general warning" against accepting a purely human philosophy, and then Colossians 2:16–23 a "more specific warning against false teachers".
In these doctrinal sections, the letter proclaims that Christ is supreme over all that has been created. All things were created through him and for him, and the universe is sustained by him. God had chosen for his complete being to dwell in Christ. The "cosmic powers" revered by the false teachers had been "discarded" and "led captive" at Christ's death. Christ is the master of all angelic forces and the head of the church. Christ is the only mediator between God and humanity, the unique agent of cosmic reconciliation. It is the Father in Colossians who is said to have delivered us from the domain of darkness and transferred us to the kingdom of His beloved Son. The Son is the agent of reconciliation and salvation not merely of the church, but in some sense redeems the rest of creation as well ("all things, whether things on earth or things in heaven").
Colossians praises the spiritual growth of its recipients because of their love for all the set-apart ones in Christ. It calls them to grow in wisdom and knowledge that their love might be principled love and not sentimentality. "Christ in you is your hope of glory!".
"Christ in you, the hope of Glory"
One of the themes of the doctrinal section of Colossians is promise of union with Christ through the indwelling life of God the Holy Spirit. For example, Colossians 1:27, "To them God has chosen to make known among the Gentiles the glorious riches of this mystery, which is Christ in you, the hope of glory." The Apostle Paul wrote to remind them of this promise and guard them against moving their ongoing trust from Christ to other philosophies and traditions which did not depend on Christ.
The practical part of the Epistle enforces various duties naturally flowing from the doctrines expounded. The readers are exhorted to mind things that are above (Colossians 3:1–4), to mortify every evil principle of their nature, and to put on the new man. Many special duties of the Christian life are also insisted upon as the fitting evidence of the Christian character. The letter ends with customary prayer, instruction, and greetings.
Colossians is often categorized as one of the "prison epistles", along with Ephesians, Philippians, and Philemon. Colossians has some close parallels with the letter to Philemon: names of some of the same people (e.g., Timothy, Aristarchus, Archippus, Mark, Epaphras, Luke, Onesimus, and Demas) appear in both epistles, and both are claimed to be written by Paul.
Online translations of the Epistle to the Colossians: | [
{
"paragraph_id": 0,
"text": "The Epistle to the Colossians is the twelfth book of the New Testament. It was written, according to the text, by Paul the Apostle and Timothy, and addressed to the church in Colossae, a small Phrygian city near Laodicea and approximately 100 miles (160 km) from Ephesus in Asia Minor.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Some scholars have increasingly questioned Paul's authorship and attributed the letter to an early follower instead, but others still defend it as authentic. If Paul was the author, he probably used an amanuensis, or secretary, in writing the letter (Col 4:18), possibly Timothy.",
"title": ""
},
{
"paragraph_id": 2,
"text": "During the first generation after Jesus, Paul's epistles to various churches helped establish early Christian theology. According to Bruce Metzger, it was written in the 60s while Paul was in prison. Colossians is similar to Ephesians, also written at this time. Some critical scholars have ascribed the epistle to an early follower of Paul, writing as Paul. The epistle's description of Christ as pre-eminent over creation marks it, for some scholars, as representing an advanced christology not present during Paul's lifetime. Defenders of Pauline authorship cite the work's similarities to the letter to Philemon, which is broadly accepted as authentic.",
"title": "Composition"
},
{
"paragraph_id": 3,
"text": "The letter's authors claim to be Paul and Timothy, but authorship began to be authoritatively questioned during the 19th century. Pauline authorship was held to by many of the early church's prominent theologians, such as Irenaeus, Clement of Alexandria, Tertullian, Origen of Alexandria and Eusebius.",
"title": "Composition"
},
{
"paragraph_id": 4,
"text": "However, as with several epistles attributed to Paul, critical scholarship disputes this claim. One ground is that the epistle's language doesn't seem to match Paul's, with 48 words appearing in Colossians that are found nowhere else in his writings and 33 of which occur nowhere else in the New Testament. A second ground is that the epistle features a strong use of liturgical-hymnic style which appears nowhere else in Paul's work to the same extent. A third is that the epistle's themes related to Christ, eschatology and the church seem to have no parallel in Paul's undisputed works.",
"title": "Composition"
},
{
"paragraph_id": 5,
"text": "Advocates of Pauline authorship defend the differences that there are between elements in this letter and those commonly considered the genuine work of Paul (e.g. 1 Thessalonians). It is argued that these differences can come by human variability, such as by growth in theological knowledge over time, different occasion for writing, as well as use of different secretaries (or amanuenses) in composition. As it is usually pointed out by the same authors who note the differences in language and style, the number of words foreign to the New Testament and Paul is no greater in Colossians than in the undisputed Pauline letters (Galatians, of similar length, has 35 hapax legomena). In regard to the style, as Norman Perrin, who argues for pseudonymity, notes, \"The letter does employ a great deal of traditional material and it can be argued that this accounts for the non-Pauline language and style. If this is the case, the non-Pauline language and style are not indications of pseudonymity.\" Not only that, but it has been noted that Colossians has indisputably Pauline stylistic characteristics, found nowhere else in the New Testament. Advocates of Pauline authorship also argue that the differences between Colossians and the rest of the New Testament are not as great as they are purported to be.",
"title": "Composition"
},
{
"paragraph_id": 6,
"text": "The connection between Colossians and to Philemon, an undisputed epistle, (Philemon 2, Colossians 4:17), the greetings of both epistles bear similar names (Philemon 23–24, Colossians 4:10–14) is used as evidence by those who advocate Pauline authorship.",
"title": "Composition"
},
{
"paragraph_id": 7,
"text": "As theologian Stephen D. Morrison points out in context, \"Biblical scholars are divided over the authorship of Ephesians and Colossians.\" He provides as an example the reflection of theologian Karl Barth on the question. While acknowledging the validity of many questions regarding Pauline authorship, Barth was inclined to defend it. Nevertheless, he concluded that it didn't much matter one way or the other to him. It was more important to focus on \"Quid scriptum est\" (What is written) than \"Quis scripseris\" (Who wrote it). \"It is enough to know that someone, at any rate, wrote Ephesians (why not Paul?), 30 to 60 years after Christ’s death (hardly any later than that, since it is attested by Ignatius, Polycarp, and Justin), someone who understood Paul well and developed the apostle’s ideas with conspicuous loyalty as well as originality.”",
"title": "Composition"
},
{
"paragraph_id": 8,
"text": "If the text was written by Paul, it could have been written at Rome during his first imprisonment. Paul would likely have composed it at roughly the same time that he wrote Philemon and Ephesians, as all three letters were sent with Tychicus and Onesimus. A date of 62 AD assumes that the imprisonment Paul speaks of is his Roman imprisonment that followed his voyage to Rome.",
"title": "Composition"
},
{
"paragraph_id": 9,
"text": "Other scholars have suggested that it was written from Caesarea or Ephesus.",
"title": "Composition"
},
{
"paragraph_id": 10,
"text": "If the letter is not considered to be an authentic part of the Pauline corpus, then it might be dated during the late 1st century, possibly as late as AD 90.",
"title": "Composition"
},
{
"paragraph_id": 11,
"text": "Colossae is in the same region as the seven churches of the Book of Revelation. In Colossians there is mention of local brethren in Colossae, Laodicea, and Hierapolis. Colossae was approximately 12 miles (19 km) from Laodicea and 14 miles (23 km) from Hierapolis.",
"title": "Content"
},
{
"paragraph_id": 12,
"text": "References to \"the elements\" and the only mention of the word \"philosophy\" in the New Testament have led scholar Norman DeWitt to conclude that early Christians at Colossae must have been under the influence of Epicurean philosophy, which taught atomism. The Epistle to the Colossians proclaimed Christ to be the supreme power over the entire universe, and urged Christians to lead godly lives. The letter consists of two parts: first a doctrinal section, then a second regarding conduct. Those who believe that the motivation of the letter was a growing heresy in the church see both sections of the letter as opposing false teachers who have been spreading error in the congregation. Others see both sections of the letter as primarily encouragement and edification for a developing church.",
"title": "Content"
},
{
"paragraph_id": 13,
"text": "I. Introduction (1:1–14)",
"title": "Content"
},
{
"paragraph_id": 14,
"text": "II. The Supremacy of Christ (1:15–23)",
"title": "Content"
},
{
"paragraph_id": 15,
"text": "III. Paul's Labor for the Church (1:24–2:7)",
"title": "Content"
},
{
"paragraph_id": 16,
"text": "IV. Freedom from Human Regulations through Life with Christ (2:8–23)",
"title": "Content"
},
{
"paragraph_id": 17,
"text": "V. Rules for Holy Living (3:1–4:6)",
"title": "Content"
},
{
"paragraph_id": 18,
"text": "VI. Final Greetings (4:7–18)",
"title": "Content"
},
{
"paragraph_id": 19,
"text": "The doctrinal part of the letter is found in the first two chapters. The main theme is developed in chapter 2, with a warning against being drawn away from him in whom dwelt all the fullness of the deity, and who was the head of all spiritual powers. Colossians 2:8–15 offers firstly a \"general warning\" against accepting a purely human philosophy, and then Colossians 2:16–23 a \"more specific warning against false teachers\".",
"title": "Content"
},
{
"paragraph_id": 20,
"text": "In these doctrinal sections, the letter proclaims that Christ is supreme over all that has been created. All things were created through him and for him, and the universe is sustained by him. God had chosen for his complete being to dwell in Christ. The \"cosmic powers\" revered by the false teachers had been \"discarded\" and \"led captive\" at Christ's death. Christ is the master of all angelic forces and the head of the church. Christ is the only mediator between God and humanity, the unique agent of cosmic reconciliation. It is the Father in Colossians who is said to have delivered us from the domain of darkness and transferred us to the kingdom of His beloved Son. The Son is the agent of reconciliation and salvation not merely of the church, but in some sense redeems the rest of creation as well (\"all things, whether things on earth or things in heaven\").",
"title": "Content"
},
{
"paragraph_id": 21,
"text": "Colossians praises the spiritual growth of its recipients because of their love for all the set-apart ones in Christ. It calls them to grow in wisdom and knowledge that their love might be principled love and not sentimentality. \"Christ in you is your hope of glory!\".",
"title": "Content"
},
{
"paragraph_id": 22,
"text": "\"Christ in you, the hope of Glory\"",
"title": "Content"
},
{
"paragraph_id": 23,
"text": "One of the themes of the doctrinal section of Colossians is promise of union with Christ through the indwelling life of God the Holy Spirit. For example, Colossians 1:27, \"To them God has chosen to make known among the Gentiles the glorious riches of this mystery, which is Christ in you, the hope of glory.\" The Apostle Paul wrote to remind them of this promise and guard them against moving their ongoing trust from Christ to other philosophies and traditions which did not depend on Christ.",
"title": "Content"
},
{
"paragraph_id": 24,
"text": "The practical part of the Epistle, enforces various duties naturally flowing from the doctrines expounded. They are exhorted to mind things that are above Colossians 3:1–4, to mortify every evil principle of their nature, and to put on the new man. Many special duties of the Christian life are also insisted upon as the fitting evidence of the Christian character. The letter ends with customary prayer, instruction, and greetings.",
"title": "Content"
},
{
"paragraph_id": 25,
"text": "Colossians is often categorized as one of the \"prison epistles\", along with Ephesians, Philippians, and Philemon. Colossians has some close parallels with the letter to Philemon: names of some of the same people (e.g., Timothy, Aristarchus, Archippus, Mark, Epaphras, Luke, Onesimus, and Demas) appear in both epistles, and both are claimed to be written by Paul.",
"title": "Content"
},
{
"paragraph_id": 26,
"text": "Online translations of the Epistle to the Colossians:",
"title": "External links"
}
]
| The Epistle to the Colossians is the twelfth book of the New Testament. It was written, according to the text, by Paul the Apostle and Timothy, and addressed to the church in Colossae, a small Phrygian city near Laodicea and approximately 100 miles (160 km) from Ephesus in Asia Minor. Some scholars have increasingly questioned Paul's authorship and attributed the letter to an early follower instead, but others still defend it as authentic. If Paul was the author, he probably used an amanuensis, or secretary, in writing the letter, possibly Timothy. | 2001-10-19T05:17:19Z | 2023-10-14T19:10:41Z | [
"Template:Cite web",
"Template:Bibleref2-nb",
"Template:Bibleverse",
"Template:S-start",
"Template:S-hou",
"Template:Main",
"Template:Citation needed",
"Template:Cite book",
"Template:Wikiquote",
"Template:S-ttl",
"Template:Books of the Bible",
"Template:Convert",
"Template:Like whom?",
"Template:Citation",
"Template:Cite journal",
"Template:S-aft",
"Template:S-bef",
"Template:Epistle to the Colossians",
"Template:Short description",
"Template:Books of the New Testament",
"Template:Bibleref2",
"Template:Tone inline",
"Template:Reflist",
"Template:Efn",
"Template:Wikisource",
"Template:Explain",
"Template:Librivox book",
"Template:Authority control",
"Template:Paul",
"Template:Notelist",
"Template:S-end"
]
| https://en.wikipedia.org/wiki/Epistle_to_the_Colossians |
9,952 | First Epistle to the Thessalonians | The First Epistle to the Thessalonians is a Pauline epistle of the New Testament of the Christian Bible. The epistle is attributed to Paul the Apostle, and is addressed to the church in Thessalonica, in modern-day Greece. It is likely among the first of Paul's letters, probably written by the end of AD 52, in the reign of Claudius although some scholars believe the Epistle to the Galatians may have been written by AD 48.
Thessalonica is a city on the Thermaic Gulf, which at the time of Paul was within the Roman Empire. Paul visited Thessalonica and preached to the local population, winning converts who became a Christian community. There is debate as to whether or not Paul's converts were originally Jewish. The Acts of the Apostles describes Paul preaching in a Jewish synagogue and persuading people who were already Jewish that Jesus was the Messiah, but in 1 Thessalonians itself Paul says that the converts had turned from idols, suggesting that they were not Jewish before Paul arrived.
Most New Testament scholars believe Paul wrote this letter from Corinth only months after he left Thessalonica, although information appended to this work in many early manuscripts (e.g., Codices Alexandrinus, Mosquensis, and Angelicus) states that Paul wrote it in Athens after Timothy had returned from Macedonia with news of the state of the church in Thessalonica.
It is widely agreed that 1 Thessalonians is the first book of the New Testament to be written, and the earliest extant Christian text. A majority of modern New Testament scholars date 1 Thessalonians to 49–51 AD, during Paul's 18-month stay in Corinth coinciding with his second missionary journey. A minority of scholars who do not recognize the historicity of Acts date it in the early 40s AD. The Delphi Inscription dates Gallio's proconsulship of Achaia to 51-52 AD, and Acts 18:12-17 mentions Gallio, toward the end of Paul's stay in Corinth.
1 Thessalonians does not focus on justification by faith or questions of Jewish–Gentile relations, themes that are covered in all other letters. Because of this, some scholars see this as an indication that this letter was written before the Epistle to the Galatians, where Paul's positions on these matters were formed and elucidated.
The majority of New Testament scholars hold 1 Thessalonians to be authentic, although a number of scholars in the mid-19th century contested its authenticity, most notably Clement Schrader and F.C. Baur. 1 Thessalonians matches other accepted Pauline letters, both in style and in content, and its authorship is also affirmed by 2 Thessalonians.
The authenticity of 1 Thessalonians 2:13–16 is disputed by some because its content appears at odds with the surrounding passages and Paul's theology in other epistles. However, the authenticity of the passage has continued to find defenders over the last two centuries, and in the last thirty years the common opinion has swung decisively in favor of authenticity. It is also sometimes suggested that 1 Thessalonians 5:1–11 is a post-Pauline insertion that has many features of Lukan language and theology that serves as an apologetic correction to Paul's imminent expectation of the Second Coming in 1 Thessalonians 4:13–18. Some scholars, such as Schmithals, Eckhart, Demke and Munro, have developed complicated theories involving redaction and interpolation in 1 and 2 Thessalonians.
Paul, speaking for himself, Silas, and Timothy, gives thanks for the news about their faith and love; he reminds them of the kind of life he had lived while he was with them. Paul stresses how honorably he conducted himself, reminding them that he had worked to earn his keep, taking great pains not to burden anyone. He did this, he says, even though he could have used his status as an apostle to impose upon them.
Paul goes on to explain that the dead will be resurrected prior to those still living, and both groups will greet the Lord in the air. | [
{
"paragraph_id": 0,
"text": "The First Epistle to the Thessalonians is a Pauline epistle of the New Testament of the Christian Bible. The epistle is attributed to Paul the Apostle, and is addressed to the church in Thessalonica, in modern-day Greece. It is likely among the first of Paul's letters, probably written by the end of AD 52, in the reign of Claudius although some scholars believe the Epistle to the Galatians may have been written by AD 48.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Thessalonica is a city on the Thermaic Gulf, which at the time of Paul was within the Roman Empire. Paul visited Thessalonica and preached to the local population, winning converts who became a Christian community. There is debate as to whether or not Paul's converts were originally Jewish. The Acts of the Apostles describes Paul preaching in a Jewish synagogue and persuading people who were already Jewish that Jesus was the Messiah, but in 1 Thessalonians itself Paul says that the converts had turned from idols, suggesting that they were not Jewish before Paul arrived.",
"title": "Background and Audience"
},
{
"paragraph_id": 2,
"text": "Most New Testament scholars believe Paul wrote this letter from Corinth only months after he left Thessalonica, although information appended to this work in many early manuscripts (e.g., Codices Alexandrinus, Mosquensis, and Angelicus) state that Paul wrote it in Athens after Timothy had returned from Macedonia with news of the state of the church in Thessalonica.",
"title": "Background and Audience"
},
{
"paragraph_id": 3,
"text": "It is widely agreed that 1 Thessalonians is the first book of the New Testament to be written, and the earliest extant Christian text. A majority of modern New Testament scholars date 1 Thessalonians to 49–51 AD, during Paul's 18-month stay in Corinth coinciding with his second missionary journey. A minority of scholars who do not recognize the historicity of Acts date it in the early 40s AD. The Delphi Inscription dates Gallio's proconsulship of Achaia to 51-52 AD, and Acts 18:12-17 mentions Gallio, toward the end of Paul's stay in Corinth.",
"title": "Composition"
},
{
"paragraph_id": 4,
"text": "1 Thessalonians does not focus on justification by faith or questions of Jewish–Gentile relations, themes that are covered in all other letters. Because of this, some scholars see this as an indication that this letter was written before the Epistle to the Galatians, where Paul's positions on these matters were formed and elucidated.",
"title": "Composition"
},
{
"paragraph_id": 5,
"text": "The majority of New Testament scholars hold 1 Thessalonians to be authentic, although a number of scholars in the mid-19th century contested its authenticity, most notably Clement Schrader and F.C. Baur. 1 Thessalonians matches other accepted Pauline letters, both in style and in content, and its authorship is also affirmed by 2 Thessalonians.",
"title": "Composition"
},
{
"paragraph_id": 6,
"text": "The authenticity of 1 Thessalonians 2:13–16 is disputed by some because its content appears at odds with the surrounding passages and Paul's theology in other epistles. However, the authenticity of the passage has continued to find defenders over the last two centuries, and in the last thirty years the common opinion has swung decisively in favor of authenticity. It is also sometimes suggested that 1 Thessalonians 5:1–11 is a post-Pauline insertion that has many features of Lukan language and theology that serves as an apologetic correction to Paul's imminent expectation of the Second Coming in 1 Thessalonians 4:13–18. Some scholars, such as Schmithals, Eckhart, Demke and Munro, have developed complicated theories involving redaction and interpolation in 1 and 2 Thessalonians.",
"title": "Composition"
},
{
"paragraph_id": 7,
"text": "Paul, speaking for himself, Silas, and Timothy, gives thanks for the news about their faith and love; he reminds them of the kind of life he had lived while he was with them. Paul stresses how honorably he conducted himself, reminding them that he had worked to earn his keep, taking great pains not to burden anyone. He did this, he says, even though he could have used his status as an apostle to impose upon them.",
"title": "Contents"
},
{
"paragraph_id": 8,
"text": "Paul goes on to explain that the dead will be resurrected prior to those still living, and both groups will greet the Lord in the air.",
"title": "Contents"
}
]
| The First Epistle to the Thessalonians is a Pauline epistle of the New Testament of the Christian Bible. The epistle is attributed to Paul the Apostle, and is addressed to the church in Thessalonica, in modern-day Greece. It is likely among the first of Paul's letters, probably written by the end of AD 52, in the reign of Claudius although some scholars believe the Epistle to the Galatians may have been written by AD 48. | 2001-10-19T05:18:01Z | 2023-12-11T13:56:03Z | [
"Template:S-ttl",
"Template:S-end",
"Template:Books of the Bible",
"Template:Efn",
"Template:Cite book",
"Template:EBD",
"Template:S-bef",
"Template:Paul",
"Template:S-start",
"Template:First Epistle to the Thessalonians",
"Template:Bibleverse",
"Template:Bibleref2",
"Template:Wikisource",
"Template:Wikiquote",
"Template:Short description",
"Template:Books of the New Testament",
"Template:Notelist",
"Template:Reflist",
"Template:Librivox book",
"Template:S-hou",
"Template:S-aft",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/First_Epistle_to_the_Thessalonians |
9,953 | Epistle to Titus | The Epistle to Titus is one of the three pastoral epistles (along with 1 Timothy and 2 Timothy) in the New Testament, historically attributed to Paul the Apostle. It is addressed to Saint Titus and describes the requirements and duties of presbyters/bishops.
The epistle is divided into three chapters, 46 verses in total.
Not mentioned in the Acts of the Apostles, Saint Titus was noted in Galatians (cf. Galatians 2:1, 3) where Paul wrote of journeying to Jerusalem with Barnabas, accompanied by Titus. He was then dispatched to Corinth, Greece, where he successfully reconciled the Christian community there with Paul, its founder. Titus was later left on the island of Crete to help organize the Church there, and later met back with the Apostle Paul in Nicopolis. He soon went to Dalmatia (now Croatia). According to Eusebius of Caesarea in the Ecclesiastical History, he served as the first bishop of Crete. He was buried in Cortyna (Gortyna), Crete; his head was later removed to Venice during the invasion of Crete by the Saracens in 832 and was enshrined in St Mark's Basilica, Venice, Italy.
According to Clare Drury, the claim that Paul himself wrote this letter and those to Timothy "seems at first sight obvious and incontrovertible. All three begin with a greeting from the apostle and contain personal notes and asides", but in reality "things are not so straightforward: signs of the late date of the letters proliferate". There has therefore been some debate regarding the authenticity of the letter.
Titus, along with the two other pastoral epistles (1 Timothy and 2 Timothy), is regarded by some scholars as being pseudepigraphical. On the basis of the language and content of the pastoral epistles, these scholars reject that they were written by Paul and believe that they were written by an anonymous forger after his death. Critics claim the vocabulary and style of the Pauline letters could not have been written by Paul according to available biographical information and reflect the views of the emerging Church rather than the apostle's. These scholars date the epistle from the 80s CE up to the end of the 2nd century, though most would place it sometime between 80 and 100 CE. The Church of England's Common Worship Lectionary Scripture Commentary concurs with this view: "the proportioning of the theological and practical themes is one factor that leads us to think of these writings as coming from the post-Pauline church world of the late first or early second century".
Titus has a very close affinity with 1 Timothy, sharing similar phrases and expressions and similar subject matter. This has led many scholars to believe that it was written by the same author who wrote 1 and 2 Timothy: their author is sometimes referred to as "the Pastor".
The gnostic writer Basilides rejected the epistle.
Other scholars who do believe that Paul wrote Titus date its composition from the circumstance that it was written after Paul's visit to Crete (Titus 1:5). This visit could not be the one referred to in the Acts of the Apostles 27:7, when Paul was on his voyage to Rome as a prisoner, and where he continued a prisoner for two years. Thus traditional exegesis supposes that after his release Paul sailed from Rome into Asia, passing Crete by the way, and that there he left Titus "to set in order the things that were wanting". Thence he would have gone to Ephesus, where he left Timothy, and from Ephesus to Macedonia, where he wrote the First Epistle to Timothy, and thence, according to the subscription of this epistle, to "Nicopolis of Macedonia", from which place he wrote to Titus, about 66 or 67.
Recent scholarship has revived the theory that Paul used an amanuensis, or secretary, in writing his letters (e.g. Romans 16:22), possibly Luke in the case of the pastorals. This was a common practice in ancient letter writing, even for the biblical writers.
One of the secular peculiarities of the Epistle to Titus is the reference to the Epimenides paradox: "One of the Cretans, a prophet of their own, said, 'Cretans are always liars'."
This article incorporates text from a publication now in the public domain: Easton, Matthew George (1897). "Titus, Epistle to". Easton's Bible Dictionary (New and revised ed.). T. Nelson and Sons.
Online translations of the Epistle to Titus:
Exegetical papers on Titus:
| The Epistle to Titus is one of the three pastoral epistles in the New Testament, historically attributed to Paul the Apostle. It is addressed to Saint Titus and describes the requirements and duties of presbyters/bishops. | 2001-10-19T05:23:00Z | 2023-11-04T03:14:13Z | [
"Template:Librivox book",
"Template:S-ttl",
"Template:S-end",
"Template:Books of the Bible",
"Template:Books of the New Testament",
"Template:C.",
"Template:Lang",
"Template:Notelist",
"Template:Bibleverse",
"Template:Cite book",
"Template:Cite wikisource",
"Template:S-aft",
"Template:Short description",
"Template:Efn",
"Template:Citation needed",
"Template:Further",
"Template:Epistle to Titus",
"Template:S-hou",
"Template:S-bef",
"Template:Authority control",
"Template:Reflist",
"Template:Cite EB1911",
"Template:EBD",
"Template:S-start",
"Template:Paul"
]
| https://en.wikipedia.org/wiki/Epistle_to_Titus |
9,954 | Eurovision Song Contest | The Eurovision Song Contest (French: Concours Eurovision de la chanson), often known simply as Eurovision or by its initialism ESC, is an international song competition organised annually by the European Broadcasting Union. Each participating country submits an original song to be performed live and transmitted to national broadcasters via the Eurovision and Euroradio networks, with competing countries then casting votes for the other countries' songs to determine a winner.
Based on the Sanremo Music Festival held in Italy since 1951, Eurovision has been held annually since 1956 (apart from 2020), making it the longest-running annual international televised music competition and one of the world's longest-running television programmes. Active members of the EBU and invited associate members are eligible to compete; as of 2023, 52 countries have participated at least once. Each participating broadcaster sends one original song of three minutes duration or less to be performed live by a singer or group of up to six people aged 16 or older. Each country awards 1–8, 10 and 12 points to their ten favourite songs, based on the views of an assembled group of music professionals and the country's viewing public, with the song receiving the most points declared the winner. Other performances feature alongside the competition, including a specially-commissioned opening and interval act and guest performances by musicians and other personalities, with past acts including Cirque du Soleil, Madonna, Justin Timberlake, Mika, Rita Ora and the first performance of Riverdance. Originally consisting of a single evening event, the contest has expanded as new countries joined (including countries outside of Europe, such as Israel and Australia), leading to the introduction of relegation procedures in the 1990s, before the creation of semi-finals in the 2000s. As of 2023, Germany has competed more times than any other country, having participated in all but one edition, while Ireland and Sweden both hold the record for the most victories, with seven wins each in total.
Traditionally held in the country which won the preceding year's event, the contest provides an opportunity to promote the host country and city as a tourist destination. Thousands of spectators attend each year, along with journalists who cover all aspects of the contest, including rehearsals in venue, press conferences with the competing acts, in addition to other related events and performances in the host city. Alongside the generic Eurovision logo, a unique theme is typically developed for each event. The contest has aired in countries across all continents; it has been available online via the official Eurovision website since 2001. Eurovision ranks among the world's most watched non-sporting events every year, with hundreds of millions of viewers globally. Performing at the contest has often provided artists with a local career boost and in some cases long-lasting international success. Several of the best-selling music artists in the world have competed in past editions, including ABBA, Celine Dion, Julio Iglesias, Cliff Richard and Olivia Newton-John; some of the world's best-selling singles have received their first international performance on the Eurovision stage.
While having gained popularity with the viewing public in both participating and non-participating countries, the contest has also been the subject of criticism for its artistic quality as well as a perceived political aspect to the event. Concerns have been raised regarding political friendships and rivalries between countries potentially having an impact on the results. Controversial moments have included participating countries withdrawing at a late stage, censorship of broadcast segments by broadcasters, as well as political events impacting participation. Likewise, the contest has also been criticised for an over-abundance of elaborate stage shows at the cost of artistic merit. Eurovision has, however, gained popularity for its kitsch appeal, its musical span of ethnic and international styles, as well as emergence as part of LGBT culture, resulting in a large, active fanbase and an influence on popular culture. The popularity of the contest has led to the creation of several similar events, either organised by the EBU or created by external organisations; several special events have been organised by the EBU to celebrate select anniversaries or as a replacement due to cancellation.
The Eurovision Song Contest was developed by the European Broadcasting Union (EBU) as an experiment in live television broadcasting and a way to produce cheaper programming for national broadcasting organisations. The word "Eurovision" was first used by British journalist George Campey in the London Evening Standard in 1951, when he referred to a BBC programme being relayed by Dutch television. Following several events broadcast internationally via the Eurovision transmission network in the early 1950s, including the coronation of Elizabeth II in 1953, an EBU committee, headed by Marcel Bezençon, was formed in January 1955 to investigate new initiatives for cooperation between broadcasters, which approved for further study a European song competition from an idea initially proposed by RAI manager Sergio Pugliese. The EBU's general assembly agreed to the organising of the song contest in October 1955, under the initial title of the European Grand Prix, and accepted a proposal by the Swiss delegation to host the event in Lugano in the spring of 1956. The Italian Sanremo Music Festival, held since 1951, was used as a basis for the initial planning of the contest, with several amendments and additions given its international nature.
Seven countries participated in the first contest, with each country represented by two songs, the only time in which multiple entries per country have been permitted. The winning song was "Refrain", representing the host country Switzerland and performed by Lys Assia. Voting during the first contest was held behind closed doors, with only the winner being announced on stage; a scoreboard and public announcement of the voting, inspired by the BBC's Festival of British Popular Songs, have been used since 1957. The tradition of the winning country hosting the following year's contest, which has since become a standard feature of the event, began in 1958. Technological developments have transformed the contest: colour broadcasts began in 1968; satellite broadcasts in 1985; and streaming in 2000. Broadcasts in widescreen began in 2005 and in high definition in 2007, with ultra-high-definition tested for the first time in 2022.
By the 1960s, between 16 and 18 countries were regularly competing each year. Countries from outside the traditional boundaries of Europe began entering the contest, and countries in Western Asia and North Africa started competing in the 1970s and 1980s. Changes in Europe following the end of the Cold War saw an influx of new countries from Central and Eastern Europe applying for the first time. The 1993 contest included a separate pre-qualifying round for seven of these new countries, and from 1994 relegation systems were introduced to manage the number of competing entries, with the poorest performing countries barred from entering the following year's contest. From 2004 the contest expanded to become a multi-programme event, with a semi-final at the 49th contest allowing all interested countries to compete each year; a second semi-final was added to each edition from 2008.
There have been 67 contests as of 2023, making Eurovision the longest-running annual international televised music competition as determined by Guinness World Records. The contest has been listed as one of the longest-running television programmes in the world and among the world's most watched non-sporting events. A total of 52 countries have taken part in at least one edition, with a record 43 countries participating in a single contest, first in 2008 and subsequently in 2011 and 2018. Australia became the first country from outside the European Broadcasting Area to compete, following an invitation by the EBU ahead of the contest's 60th edition in 2015; initially announced as a "one-off" for the anniversary edition, the country was invited back the following year and has subsequently participated every year since.
Eurovision had been held every year until 2020, when that year's contest was cancelled in response to the COVID-19 pandemic. No competitive event was able to take place due to uncertainty caused by the spread of the virus in Europe and the various restrictions imposed by the governments of the participating countries. In its place a special broadcast, Eurovision: Europe Shine a Light, was produced by the organisers, which honoured the songs and artists that would have competed in 2020 in a non-competitive format.
Over the years the name used to describe the contest, and used on the official logo for each edition, has evolved. The first contests were produced under the name of Grand Prix Eurovision de la Chanson Européenne in French and as the Eurovision Song Contest Grand Prix in English, with similar variations used in the languages of each of the broadcasting countries. From 1968, the English name dropped the 'Grand Prix' from the name, with the French name being aligned as the Concours Eurovision de la Chanson, first used in 1973. The contest's official brand guidance specifies that translations of the name may be used depending on national tradition and brand recognition in the competing countries, but that the official name Eurovision Song Contest is always preferred; the contest is commonly referred to in English by the abbreviation "Eurovision", and in internal documents by the acronym "ESC".
On only four occasions has the name used for the official logo of the contest not been in English or French: the Italian names Gran Premio Eurovisione della Canzone and Concorso Eurovisione della Canzone were used when Italy hosted the 1965 and 1991 contests respectively; and the Dutch name Eurovisiesongfestival was used when the Netherlands hosted in 1976 and 1980.
Original songs representing participating countries are performed in a live television programme broadcast via the Eurovision and Euroradio networks simultaneously to all countries. A "country" as a participant is represented by one television broadcaster from that country, a member of the European Broadcasting Union, and is typically that country's national public broadcasting organisation. The programme is staged by one of the participant countries and is broadcast from an auditorium in the selected host city. Since 2008, each contest is typically formed of three live television shows held over one week: two semi-finals are held on the Tuesday and Thursday, followed by a final on the Saturday. All participating countries compete in one of the two semi-finals, except for the host country of that year's contest and the contest's biggest financial contributors known as the "Big Five"—France, Germany, Italy, Spain and the United Kingdom. The remaining countries are split between the two semi-finals, and the 10 highest-scoring entries in each qualify to produce 26 countries competing in the final.
Each show typically begins with an opening act consisting of music and/or dance performances by invited artists, which contributes to a unique theme and identity created for that year's event; since 2013, the opening of the contest's final has included a "Flag Parade", with competing artists entering the stage behind their country's flag in a similar manner to the procession of competing athletes at the Olympic Games opening ceremony. Viewers are welcomed by one or more presenters who provide key updates during the show, conduct interviews with competing acts from the green room, and guide the voting procedure in English and French. Competing acts perform sequentially, and after all songs have been performed, viewers are invited to vote for their favourite performances—except for the performance of their own country—via telephone, SMS and the official Eurovision app. The public vote comprises 50% of the final result alongside the views of a jury of music industry professionals from each country. An interval act is invariably featured during this voting period, which on several occasions has included a well-known personality from the host country or an internationally recognised figure. The results of the voting are subsequently announced; in the semi-finals, the 10 highest-ranked countries are announced in a random order, with the full results undisclosed until after the final. In the final, the presenters call upon a representative spokesperson for each country in turn who announces their jury's points, while the results of the public vote are subsequently announced by the presenters. In recent years, it has been tradition that the first country to announce its jury points is the previous host, whereas the last country is the current host (with the exception of 2023, when the United Kingdom hosted the contest on behalf of Ukraine, who went first). The winning delegation is invited back on stage, where a trophy is awarded to the winning performers and songwriters by the previous year's winner, followed by a reprise of the winning song. The full results of the competition, including detailed results of the jury and public vote, are released online shortly after the final, and the participating broadcaster of the winning entry is traditionally given the honour of organising the following year's event.
Each participating broadcaster has sole discretion over the process it may employ to select its entry for the contest. Typical methods in which participants are selected include a televised national final using a public vote; an internal selection by a committee appointed by the broadcaster; and through a mixed format where some decisions are made internally and the public are engaged in others. Among the most successful televised selection shows is Sweden's Melodifestivalen, first established in 1959 and now one of Sweden's most watched television shows each year.
Active members (as opposed to associate members) of the European Broadcasting Union are eligible to participate; active members are those who are located in states that fall within the European Broadcasting Area, or are member states of the Council of Europe. Active members include media organisations whose broadcasts are often made available to at least 98% of households in their own country which are equipped to receive such transmissions. Associate member broadcasters may be eligible to compete, dependent on approval by the contest's Reference Group.
The European Broadcasting Area is defined by the International Telecommunication Union as encompassing the geographical area between the boundary of ITU Region 1 in the west, the meridian 40° East of Greenwich in the east, and parallel 30° North in the south. Armenia, Azerbaijan and Georgia, and those parts of Iraq, Jordan, Syria and Ukraine lying outside these limits, are also included in the European Broadcasting Area.
Eligibility to participate in the contest is therefore not limited to countries in Europe, as several states geographically outside the boundaries of the continent or which span more than one continent are included in the Broadcasting Area. Countries from these groups have taken part in past editions, including countries in Western Asia such as Israel and Cyprus, countries which span Europe and Asia like Russia and Turkey, and North African countries such as Morocco. Australia became the first country to participate from outside the European Broadcasting Area in 2015, following an invitation by the contest's Reference Group.
EBU members who wish to participate must fulfil conditions as laid down in the rules of the contest, a separate copy of which is drafted annually. A maximum of 44 countries can take part in any one contest. Broadcasters must have paid the EBU a participation fee in advance to the deadline specified in the rules for the year in which they wish to participate; this fee is different for each country based on its size and viewership.
Fifty-two countries have participated at least once. These are listed here alongside the year in which they made their debut:
The winning country traditionally hosts the following year's event, with some exceptions since 1958. Hosting the contest can be seen as a unique opportunity for promoting the host country as a tourist destination and can provide benefits to the local economy and tourism sectors of the host city. Preparations for each year's contest typically begin at the conclusion of the previous year's contest, with the winning country's head of delegation receiving a welcome package of information related to hosting the contest at the winner's press conference. Eurovision is a non-profit event, and financing is typically achieved through a fee from each participating broadcaster, contributions from the host broadcaster and the host city, and commercial revenues from sponsorships, ticket sales, televoting and merchandise.
The host broadcaster will subsequently select a host city, typically a national or regional capital city, which must meet certain criteria set out in the contest's rules. The host venue must be able to accommodate at least 10,000 spectators and a press centre for 1,500 journalists, and should be within easy reach of an international airport, with hotel accommodation available for at least 2,000 delegates, journalists and spectators. A variety of different venues have been used for past editions, from small theatres and television studios to large arenas and stadiums. The largest host venue is Parken Stadium in Copenhagen, which was attended by almost 38,000 spectators in 2001. With a population of 1,500 at the time of the 1993 contest, Millstreet, Ireland remains the smallest hosting settlement, although its Green Glens Arena is capable of hosting up to 8,000 spectators.
Until 2004, each edition of the contest used its own logo and visual identity as determined by the respective host broadcaster. To create a consistent visual identity, a generic logo was introduced ahead of the 2004 contest. This is typically accompanied by a unique theme artwork designed for each individual contest by the host broadcaster, with the flag of the host country placed prominently in the centre of the Eurovision heart. The original logo was designed by the London-based agency JM International, and received a revamp in 2014 by the Amsterdam-based Cityzen Agency for the contest's 60th edition.
An individual theme is utilised by contest producers when constructing the visual identity of each edition of the contest, including the stage design, the opening and interval acts, and the "postcards". The short video postcards are interspersed between the entries and were first introduced in 1970, initially as an attempt to "bulk up" the contest after a number of countries decided not to compete, but has since become a regular part of the show and usually highlight the host country and introduce the competing acts. A unique slogan for each edition, first introduced in 2002, was also an integral part of each contest's visual identity, which was replaced by a permanent slogan from 2024 onwards. The permanent slogan, "United by Music", had previously served as the slogan for the 2023 contest before being retained for all future editions as part of the contest's global brand strategy.
Preparations in the host venue typically begin approximately six weeks before the final, to accommodate building works and technical rehearsals before the arrival of the competing artists. Delegations will typically arrive in the host city two to three weeks before the live show, and each participating broadcaster nominates a head of delegation, responsible for coordinating the movements of their delegation and being that country's representative to the EBU. Members of each country's delegation include performers, composers, lyricists, members of the press, and—in the years where a live orchestra was present—a conductor. Present if desired is a commentator, who provides commentary of the event for their country's radio and/or television feed in their country's own language in dedicated booths situated around the back of the arena behind the audience.
Each country conducts two individual rehearsals behind closed doors, the first for 30 minutes and the second for 20 minutes. Individual rehearsals for the semi-finalists commence the week before the live shows, with countries typically rehearsing in the order in which they will perform during the contest; rehearsals for the host country and the "Big Five" automatic finalists are held towards the end of the week. Following rehearsals, delegations meet with the show's production team to review footage of the rehearsal and raise any special requirements or changes. "Meet and greet" sessions with accredited fans and press are held during these rehearsal weeks. Each live show is preceded by three dress rehearsals, where the whole show is run in the same way as it will be presented on TV. The second dress rehearsal, alternatively called the "jury show" or "evening preview show" and held the night before the broadcast, is used as a recorded back-up in case of technological failure, and performances during this show are used by each country's professional jury to determine their votes. The delegations from the qualifying countries in each semi-final attend a qualifiers' press conference after their respective semi-final, and the winning delegation attends a winners' press conference following the final.
A welcome reception is typically held at a venue in the host city on the Sunday preceding the live shows, which includes a red carpet ceremony for all the participating countries and is usually broadcast online. Accredited delegates, press and fans have access to an official nightclub, the "EuroClub", and some delegations will hold their own parties. The "Eurovision Village" is an official fan zone open to the public free of charge, with live performances by the contest's artists and screenings of the live shows on big screens.
The contest is organised annually by the European Broadcasting Union (EBU), together with the participating broadcaster of the host country. The event is monitored by an Executive Supervisor appointed by the EBU, and by the Reference Group which represents all participating broadcasters, who are each represented by a nominated Head of Delegation. The current Executive Supervisor as of 2023 is Martin Österdahl, who took over the role from Jon Ola Sand in May 2020. A detailed set of rules is written by the EBU for each contest and approved by the Reference Group. These rules have changed over time, and typically outline, among other points, the eligibility of the competing songs, the format of the contest, and the voting system to be used to determine the winner and how the results will be presented.
All competing songs must have a duration of three minutes or less. This rule applies only to the version performed during the live shows. In order to be considered eligible, competing songs in a given year's contest must not have been released commercially before the first day of September of the previous year. All competing entries must include vocals and lyrics of some kind, purely instrumental pieces are not allowed. Competing entries may be performed in any language, be that natural or constructed, and participating broadcasters are free to decide the language in which their entry may be performed.
Rules specifying in which language a song may be performed have changed over time. No restrictions were originally enacted when the contest was first founded, however following criticism over the 1965 Swedish entry being performed in English, a new rule was introduced for the 1966 contest restricting songs to be performed only in an official language of the country it represented. This rule was first abolished in 1973, and subsequently reinstated for most countries in 1977, with only Belgium and Germany permitted freedom of language as their selection processes for that year's contest had already commenced. The language rule was once again abolished ahead of the 1999 contest.
The rules for the first contest specified that only solo performers were permitted to enter; this criterion was changed the following year to permit duos to compete, and groups were subsequently permitted for the first time in 1971. Currently the number of people permitted on stage during competing performances is limited to a maximum of six, and no live animals are allowed. Since 1990, all contestants must be aged 16 or over on the day of the live show in which they perform. Sandra Kim, the winner in 1986 at the age of 13, shall remain the contest's youngest winner while this rule remains in place. There is no limit on the nationality or country of birth of the competing artists, and participating broadcasters are free to select an artist from any country; several winning artists have subsequently held a different nationality or were born in a different country to that which they represented. No performer may compete for more than one country in a given year.
The orchestra was a prominent aspect of the contest from 1956 to 1998. Pre-recorded backing tracks were first allowed for competing acts in 1973, but any pre-recorded instruments were required to be seen being "performed" on stage; in 1997, all instrumental music was allowed to be pre-recorded, however the host country was still required to provide an orchestra. In 1999, the rules were changed again, making the orchestra an optional requirement; the host broadcaster of that year's contest, Israel's IBA, subsequently decided not to provide an orchestra, resulting in all entries using backing tracks for the first time. Currently all instrumental music for competing entries must now be pre-recorded, and no live instrumentation is allowed during performances.
The main vocals of competing songs must be performed live during the contest. Previously, live backing vocals were also required; since 2021 these may optionally be pre-recorded, a change implemented to introduce flexibility following the cancellation of the 2020 edition and to facilitate modernisation.
Since 2013, the order in which the competing countries perform has been determined by the contest's producers, and submitted to the EBU Executive Supervisor and Reference Group for approval before public announcement. This was changed from a random draw used in previous years in order to provide a better experience for television viewers and ensure all countries stand out by avoiding instances where songs of a similar style or tempo are performed in sequence.
Since the creation of a second semi-final in 2008, a semi-final allocation draw is held each year. Countries are placed into pots based on their geographical location and voting history in recent contests, and are assigned to compete in one of the two semi-finals through a random draw. Countries are then randomly assigned to compete in either the first or second half of their respective semi-final, and once all competing songs have been selected the producers then determine the running order for the semi-finals. The automatic qualifiers are assigned at random to a semi-final for the purposes of voting rights.
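To illustrate the allocation procedure described above, a minimal Python sketch follows; the function name, the pot contents and the even split of each pot between the two shows are illustrative assumptions rather than the EBU's actual implementation.

```python
import random

def allocate_semi_finals(pots, automatic_qualifiers, seed=None):
    """Illustrative sketch of a semi-final allocation draw.

    `pots` maps a pot label to the countries grouped by geography and
    voting history (pot composition here is assumed, not official).
    Each pot is split as evenly as possible between the two semi-finals,
    every allocated country is assigned to the first or second half of
    its show, and each automatic qualifier is drawn into a semi-final
    only to determine where it votes.
    """
    rng = random.Random(seed)
    semis = {1: [], 2: []}
    for countries in pots.values():
        shuffled = countries[:]
        rng.shuffle(shuffled)
        half = (len(shuffled) + 1) // 2
        semis[1].extend(shuffled[:half])
        semis[2].extend(shuffled[half:])
    halves = {country: rng.choice(["first half", "second half"])
              for semi in semis.values() for country in semi}
    voting_semi = {country: rng.choice([1, 2]) for country in automatic_qualifiers}
    return semis, halves, voting_semi
```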
Semi-final qualifiers make a draw at random during the winners' press conference to determine whether they will perform during the first or second half of the final; the automatic finalists then randomly draw their competing half in the run-up to the final, except for the host country, whose exact performance position is determined in a separate draw. The running order for the final is then decided following the second semi-final by the producers. The running orders are decided with the competing songs' musical qualities, stage performance, prop and lighting set-up, and other production considerations taken into account.
Since 2023, the voting system used to determine the results of the contest works on the basis of positional voting. Each country awards 1–8, 10 and 12 points to the ten favourite songs as voted for by that country's general public or assembled jury, with the most preferred song receiving 12 points. In the semi-finals, each country awards one set of points based primarily on the votes cast by that country's viewing public via telephone, SMS or the official Eurovision app, while in the final, each country awards two sets of points, with one set awarded by the viewers and another awarded by a jury panel comprising five music professionals from that country. Since 2023, viewers in non-participating countries are also able to vote during the contest, with those viewers able to cast votes via an online platform, which are then aggregated and awarded as one set of points from an "extra country" for the overall public vote. This system is a modification of that used since 1975, when the "12 points" system was first introduced but with one set of points per country, and a similar system used since 2016 where two sets of points were awarded in both the semi-finals and final. National juries and the public in each country are not allowed to vote for their own country, a rule first introduced in 1957.
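As a worked example of the points scale described above, the sketch below converts one country's ranked preferences into points; the function name and the sample countries are purely illustrative.

```python
# Points awarded to a country's ten favourite songs, most preferred first.
POINTS = [12, 10, 8, 7, 6, 5, 4, 3, 2, 1]

def award_points(ranking, own_country):
    """Turn a ranked preference list (most to least preferred) into points.

    Votes for one's own country are not allowed, so it is excluded before
    the ten point values are handed out to the remaining top ten entries.
    """
    eligible = [country for country in ranking if country != own_country]
    return dict(zip(eligible, POINTS))

# The most preferred eligible song receives 12 points, the tenth receives 1.
example = award_points(
    ["Sweden", "Italy", "Ukraine", "Norway", "Israel", "Finland",
     "Spain", "France", "Cyprus", "Austria", "Serbia"],
    own_country="Norway",
)
```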
Historically, each country's points were determined by a jury, consisting at various times of members of the public, music professionals, or both in combination. With advances in telecommunication technology, televoting was first introduced to the contest in 1997 on a trial basis, with broadcasters in five countries allowing the viewing public to determine their votes for the first time. From 1998, televoting was extended to almost all competing countries, and subsequently became mandatory from 2004. A jury was reintroduced for the final in 2009, with each country's points comprising both the votes of the jury and public in an equal split; this mix of jury and public voting was expanded into the semi-finals from 2010, and was used until 2023, when full public voting was reintroduced to determine the results of the semi-finals. The mix of jury and public voting continues to be used in the final.
Should two or more countries finish with the same number of points, a tie-break procedure is employed to determine the final placings. As of 2016, a combined national televoting and jury result is calculated for each country, and the country which has obtained more points from the public voting following this calculation is deemed to have placed higher.
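A minimal sketch of that tie-break follows, assuming each finalist's result is given as a (jury, televote) pair of point totals; the country names and figures are invented, and any further tie-break steps in the official rules are not modelled.

```python
def rank_with_tiebreak(results):
    """Order finalists by combined points, breaking ties on the public vote.

    `results` maps each country to a (jury_points, televote_points) pair;
    when combined totals are equal, the country with the larger televote
    total is deemed to have placed higher.
    """
    return sorted(results,
                  key=lambda c: (sum(results[c]), results[c][1]),
                  reverse=True)

# Both countries total 450 points; the larger televote decides the tie.
print(rank_with_tiebreak({"Country A": (250, 200), "Country B": (180, 270)}))
# ['Country B', 'Country A']
```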
Since 1957, each country's votes have been announced during a special voting segment as part of the contest's broadcast, with a selected spokesperson assigned to announce the results of their country's vote. This spokesperson is typically well known in their country; previous spokespersons have included former Eurovision artists and presenters. Historically, the announcements were made through telephone lines from the countries of origin, with satellite links employed for the first time in 1994, allowing the spokespersons to be seen visually by the audience and TV spectators.
Scoring is done by both a national jury and a national televote. Each country's jury votes are consecutively added to the totals scoreboard as they are called upon by the contest presenter(s). The scoreboard was historically placed at the side of the stage and updated manually as each country gave their votes; in 1988 a computer graphics scoreboard was introduced. The jury points from 1–8 and 10 are displayed on screen and added automatically to the scoreboard, then the country's spokesperson announces which country will receive the 12 points. Once jury points from all countries have been announced, the presenter(s) announce the total public points received for each finalist, with the votes for each country being consolidated and announced as a single value. Since 2019, the public points have been revealed in ascending order based on the jury vote, with the country that received the fewest points from the jury being the first to receive their public points. A full breakdown of the results across all shows is published on the official Eurovision website after the final, including each country's televoting ranking and the votes of its jury and individual jury members. Each country's individual televoting points in the final are typically displayed on-screen by that country's broadcaster following the announcement of the winner.
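The reveal order used since 2019 amounts to a single sort, sketched below with invented jury totals; only the ordering rule from the paragraph above is modelled.

```python
def televote_reveal_order(jury_totals):
    """Return finalists in the order their public points are announced.

    `jury_totals` maps each finalist to its total jury points; the country
    with the fewest jury points has its televote result revealed first.
    """
    return sorted(jury_totals, key=jury_totals.get)

order = televote_reveal_order({"Country A": 312, "Country B": 45, "Country C": 128})
# ['Country B', 'Country C', 'Country A']
```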
Participating broadcasters are required to air live the semi-final in which they compete, or in the case of the automatic finalists the semi-final in which they are required to vote, and the final, in its entirety; this includes all competing songs, the voting recap containing short clips of the performances, the voting procedure or semi-final qualification reveal, and the reprise of the winning song in the final. Since 1999, broadcasters who wished to do so were given the opportunity to provide advertising during short, non-essential hiatuses in the show's schedule. In exceptional circumstances, such as due to developing emergency situations, participating broadcasters may delay or postpone broadcast of the event. Should a broadcaster fail to air a show as expected in any other scenario they may be subject to sanctions by the EBU. Several broadcasters in countries that are unable to compete have previously aired the contest in their markets.
As national broadcasters join and leave the Eurovision feed transmitted by the EBU, the EBU/Eurovision network logo ident (not to be confused with the logo of the song contest itself) is displayed. The accompanying music (used on other Eurovision broadcasts) is the Prelude (Marche en rondeau) to Marc-Antoine Charpentier's Te Deum. Originally, the same logo was used for both the Eurovision network and the European Broadcasting Union, however, they now have two different logos; the latest Eurovision network logo was introduced in 2012, and when the ident is transmitted at the start and end of programmes it is this Eurovision network logo that appears.
The EBU now holds the recordings of all but two editions of the contest in its archives, following a project initiated in 2011 to collate footage and related materials of all editions ahead of the event's 60th edition in 2015. Although cameras were present at the first contest in 1956, as an exercise in pan-European broadcasting to the few Europeans who had television sets, its audience primarily followed the event over the radio. The only footage available is a Kinescope recording of Lys Assia's reprise of her winning song. No full recording of the 1964 contest exists, with conflicting reports of the fate of any copies that may have survived. Audio recordings of both contests do, however, exist, and some short pieces of footage from both events have survived.
From the original seven countries which entered the first contest in 1956, the number of competing countries has steadily grown over time. Eighteen countries participated in the contest's tenth edition in 1965, and by 1990, 22 countries were regularly competing each year.
Besides slight modifications to the voting system and other contest rules, no fundamental changes to the contest's format were introduced until the early 1990s, when events in Europe in the late 1980s and early 1990s resulted in a growing interest from new countries in the former Eastern Bloc, particularly following the merger of the Eastern European rival OIRT network with the EBU in 1993.
29 countries registered to take part in the 1993 contest, a figure the EBU considered unable to fit reasonably into a single TV show. A pre-selection method was subsequently introduced for the first time in order to reduce the number of competing entries, with seven countries in Central and Eastern Europe participating in Kvalifikacija za Millstreet, held in Ljubljana, Slovenia one month before the event. Following a vote among the seven competing countries, Bosnia and Herzegovina, Croatia and Slovenia were chosen to head to the contest in Millstreet, Ireland, and Estonia, Hungary, Romania and Slovakia were forced to wait another year before being allowed to compete. A new relegation system was introduced for entry into the 1994 contest, with the lowest-placed countries being forced to sit out the following year's event to be replaced by countries which had not competed in the previous contest. The bottom seven countries in 1993 were required to miss the following year's contest, and were replaced by the four unsuccessful countries in Kvalifikacija za Millstreet and new entries from Lithuania, Poland and Russia.
This system was used again in 1994 for qualification for the 1995 contest, but a new system was introduced for the 1996 contest, when an audio-only qualification round was held in the months before the contest in Oslo, Norway; this system was primarily introduced in an attempt to appease Germany, one of Eurovision's biggest markets and financial contributors, which would have otherwise been relegated under the previous system. Twenty-nine countries competed for 22 places in the main contest alongside the automatically qualified Norwegian hosts; however, Germany would ultimately still miss out, and joined Hungary, Romania, Russia, Denmark, Israel, and Macedonia as one of the seven countries to be absent from the Oslo contest. For the 1997 contest, a similar relegation system to that used between 1993 and 1995 was introduced, with each country's average scores in the preceding five contests being used as a measure to determine which countries would be relegated. This was subsequently changed again in 2001, back to the same system used between 1993 and 1995 where only the results from that year's contest would count towards relegation.
In 1999, an exemption from relegation was introduced for France, Germany, Spain and the United Kingdom, giving them an automatic right to compete in the 2000 contest and in all subsequent editions. This group, as the highest-paying EBU members which significantly fund the contest each year, subsequently became known as the "Big Four" countries. This group was expanded in 2011 when Italy began competing again, becoming the "Big Five". Originally brought in to ensure that the financial contributions of the contest's biggest financial backers would not be missed, since the introduction of the semi-finals in 2004, the "Big Five" now instead automatically qualify for the final along with the host country.
There remains debate on whether this status prejudices the countries' results, based on reported antipathy over their automatic qualification and the potential disadvantage of having spent less time on stage through not competing in the semi-finals; however, the picture appears more complex, given that the results of the "Big Five" countries vary widely. The status has caused consternation among other competing countries, and was cited, among other factors, as a reason why Turkey ceased participating after 2012.
An influx of new countries applying for the 2003 contest resulted in the introduction of a semi-final from 2004, with the contest becoming a two-day event. The top 10 countries in each year's final would qualify automatically to the following year's final, alongside the "Big Four", meaning all other countries would compete in the semi-final to compete for 10 qualification spots. The 2004 contest in Istanbul, Turkey saw a record 36 countries competing, with new entries from Albania, Andorra, Belarus and Serbia and Montenegro and the return of previously relegated countries. The format of this semi-final remained similar to the final proper, taking place a few days before the final; following the performances and the voting window, the names of the 10 countries with the highest number of points, which would therefore qualify for the final, were announced at the end of the show, revealed in a random order by the contest's presenters.
The single semi-final continued to be held between 2005 and 2007; however, with 42 countries competing in the 2007 contest in Helsinki, Finland, the semi-final had 28 entries competing for 10 spots in the final. Following criticism over the mainly Central and Eastern European qualifiers at the 2007 event and the poor performance of entries from Western European countries, a second semi-final was subsequently introduced for the 2008 contest in Belgrade, Serbia, with all countries now competing in one of the two semi-finals, with only the host country and the "Big Four", and subsequently the "Big Five" from 2011, qualifying automatically. 10 qualification spots would be available in each of the semi-finals, and a new system to split the competing countries between the two semi-finals was introduced based on their geographic location and previous voting patterns, in an attempt to reduce the impact of bloc voting and to make the outcome less predictable.
The contest has been used as a launching point for artists who went on to achieve worldwide fame, and several of the world's best-selling artists are counted among past Eurovision Song Contest participants and winning artists. ABBA, the 1974 winners for Sweden, have sold an estimated 380 million albums and singles since their contest win brought them to worldwide attention, with their winning song "Waterloo" selling over five million records. Celine Dion's win for Switzerland in 1988 helped launch her international career, particularly in the anglophone market, and she would go on to sell an estimated 200 million records worldwide. Julio Iglesias was relatively unknown when he represented Spain in 1970 and placed fourth, but worldwide success followed his Eurovision appearance, with an estimated 100 million records sold during his career. Australian-British singer Olivia Newton-John represented the United Kingdom in 1974, placing fourth behind ABBA, but went on to sell an estimated 100 million records, win four Grammy Awards, and star in the critically and commercially successful musical film Grease.
A number of performers have competed in the contest after having already achieved considerable success. These include winning artists Lulu, Toto Cutugno, and Katrina and the Waves, and acts that failed to win such as Nana Mouskouri, Cliff Richard, Baccara, Umberto Tozzi, Plastic Bertrand, t.A.T.u., Las Ketchup, Patricia Kaas, Engelbert Humperdinck, Bonnie Tyler, and Flo Rida. Many well-known composers and lyricists have penned entries of varying success over the years, including Serge Gainsbourg, Goran Bregović, Diane Warren, Andrew Lloyd Webber, Pete Waterman, and Tony Iommi, as well as producers Timbaland and Guy-Manuel de Homem-Christo.
Past participants have contributed to other fields in addition to their music careers. The Netherlands' Annie Schmidt, lyricist of the first entry performed at Eurovision, has gained a worldwide reputation for her stories and earned the Hans Christian Andersen Award for children's literature. French "yé-yé girls" Françoise Hardy and contest winner France Gall are household names of 1960s pop culture, with Hardy also being a pioneer of street style fashion trends and an inspiration for the global youthquake movement. Figures who carved a career in politics and gained international acclaim for humanitarian achievements include contest winner Dana as a two-time Irish presidential candidate and Member of the European Parliament (MEP); Nana Mouskouri as Greek MEP and a UNICEF international goodwill ambassador; contest winner Ruslana as member of Verkhovna Rada, Ukraine's parliament and a figure of the Orange Revolution and Euromaidan protests, who gained global honours for leadership and courage; and North Macedonia's Esma Redžepova as member of political parties and a two-time Nobel Peace Prize nominee.
Competing songs have occasionally gone on to become successes for their original performers and other artists, and some of the best-selling singles globally received their first international performances at Eurovision. "Save Your Kisses for Me", the winning song in 1976 for the United Kingdom's Brotherhood of Man, went on to sell over six million singles, more than any other winning song. "Nel blu, dipinto di blu", also known as "Volare", Italy's third-placed song in 1958 performed by Domenico Modugno, is the only Eurovision entry to win a Grammy Award. It was the first Grammy winner for both Record of the Year and Song of the Year and it has since been recorded by various artists, topped the Billboard Hot 100 in the United States and achieved combined sales of over 22 million copies worldwide. "Eres tú", performed by Spain's Mocedades and runner-up in 1973, became the first Spanish-language song to reach the top 10 of the Billboard Hot 100, and the Grammy-nominated "Ooh Aah... Just a Little Bit", which came eighth in 1996 for the United Kingdom's Gina G, sold 790,000 records and achieved success across Europe and the US, reaching #1 on the UK Singles Chart and peaking at #12 on the Billboard Hot 100.
The turn of the century has also seen numerous competing songs becoming successes. "Euphoria", Loreen's winning song for Sweden in 2012, achieved Europe-wide success, reaching number one in several countries and by 2014 had become the most downloaded Eurovision song to date. The video for "Occidentali's Karma" by Francesco Gabbani, which placed sixth for Italy in 2017, became the first Eurovision song to reach more than 200 million views on YouTube, while "Soldi" by Mahmood, the Italian runner-up in 2019, was the most-streamed Eurovision song on Spotify until it was overtaken by that year's winner for the Netherlands, "Arcade" by Duncan Laurence, following viral success on TikTok in late 2020 and early 2021; "Arcade" later became the first Eurovision song since "Ooh Aah... Just a Little Bit" and the first Eurovision winning song since "Save Your Kisses for Me" to chart on the Billboard Hot 100, eventually peaking at #30. The 2021 contest saw the next major breakthrough success from Eurovision, with Måneskin, that year's winners for Italy with "Zitti e buoni", attracting worldwide attention across their repertoire immediately following their victory.
Johnny Logan was the first artist to have won multiple contests as a performer, winning for Ireland in 1980 with "What's Another Year", written by Shay Healy, and in 1987 with the self-penned "Hold Me Now". Logan was also the winning songwriter in 1992 for the Irish winner, "Why Me?" performed by Linda Martin, and has therefore achieved three contest victories as either a performer or writer. Four further songwriters have each written two contest-winning songs: Willy van Hemert, Yves Dessca, Rolf Løvland, and Brendan Graham. Following the introduction of semi-finals in 2004, Alexander Rybak became the first artist to win multiple Eurovision semi-finals, finishing first in the second semi-final in both 2009 and 2018; he remains the only entrant to have done so to date.
70 songs from 27 countries have won the Eurovision Song Contest as of 2023. Ireland and Sweden have recorded the most wins with seven each, followed by France, Luxembourg, the United Kingdom and the Netherlands with five each. Of the 52 countries to have taken part, 25 have yet to win. On only one occasion have multiple winners been declared in a single contest: in 1969, four countries finished the contest with an equal number of votes and due to the lack of a tie-break rule at the time, all four countries were declared winners. A majority of winning songs have been performed in English, particularly since the language rule was abolished in 1999. Since that contest, seven winning songs have been performed either fully or partially in a language other than English.
Two countries have won the contest on their first appearance: Switzerland, by virtue of being declared the winner of the first contest in 1956; and Serbia, which won in 2007 in its first participation as an independent country, following entries in previous editions as part of the now-defunct Yugoslavia and then Serbia and Montenegro. Other countries have had relatively short waits before winning their first contest, with Ukraine victorious on its second contest appearance in 2004 and Latvia winning with its third entry in 2002. Conversely, some countries have competed for many years before recording their first win: Greece recorded its first win in 2005, 31 years after its first appearance, while Finland ended a 45-year losing streak in 2006. Portugal waited the longest, recording its first win in 2017, 53 years after its first participation. Countries have in the past had to wait many years to win again: Switzerland went 32 years between winning in 1956 and 1988; Denmark held a 37-year gap between wins in 1963 and 2000; the Netherlands waited 44 years to win again in 2019, its most recent win having been in 1975; and Austria won its second contest in 2014, 48 years after its first win in 1966.
The United Kingdom holds the record for the highest number of second-place finishes, having finished runner-up sixteen times. Meanwhile, Norway has come last more than any other country, appearing at the bottom of the scoreboard on eleven occasions, including scoring nul points four times. A country has recorded back-to-back wins on four occasions: Spain won consecutively in 1968 and 1969; Luxembourg did likewise in 1972 and 1973; Israel won consecutively in 1978 and 1979, two of its four victories alongside later wins in 1998 and 2018; and Ireland became the first country to win three consecutive titles, in 1992, 1993 and 1994. Ireland's winning streak in the 1990s also included the 1996 contest, giving it a record four wins in five years.
The winning artists and songwriters receive a trophy, which since 2008 has followed a standard design: a handmade piece of sandblasted glass with painted details in the shape of a 1950s-style microphone, designed by Kjell Engman of the Swedish glassworks Kosta Boda. The trophy is typically presented by the previous year's winner; others who have handed out the award in the past include representatives of the host broadcaster or the EBU, and politicians; in 2007, the fictional character Joulupukki, the Finnish Santa Claus, presented the award to the winner Marija Šerifović.
Alongside the song contest and appearances from local and international personalities, performances from non-competing artists and musicians have been included since the first edition, and have become a staple of the live show. These have varied widely, encompassing music, art, dance and circus acts, and past participants are regularly invited to perform, with the reigning champion traditionally returning each year to perform the previous year's winning song.
The contest's opening performances and the main interval act, the latter held after the final competing song and before the announcement of the voting results, have become memorable parts of the contest and have included both internationally known artists and local stars. Contest organisers have previously used these performances as a way to explore their country's culture and history, such as in "4,000 Years of Greek Song" at the 2006 contest held in Greece; other performances have been more comedic in nature, featuring parody and humour, as was the case with "Love Love Peace Peace" in 2016, a humorous ode to the history and spectacle of the contest itself. Riverdance, which later became one of the most successful dance productions in the world, began as the interval performance at the 1994 contest in Ireland; the seven-minute performance of traditional Irish music and dance was later expanded into a full stage show that has been seen by over 25 million people worldwide and provided a launchpad for its lead dancers Michael Flatley and Jean Butler.
Among other artists who have performed in a non-competitive capacity are Danish Europop group Aqua in 2001, Russian pop duo t.A.T.u. in 2009, and American entertainers Justin Timberlake and Madonna in 2016 and 2019 respectively. Other notable artists, including Cirque du Soleil (2009), the Alexandrov Ensemble (2009), the Vienna Boys' Choir (1967 and 2015) and Fire of Anatolia (2004), have also performed on the Eurovision stage, and there have been guest appearances from well-known faces from outside the world of music, including actors, athletes, and serving astronauts and cosmonauts. Guest performances have also been used to respond to global events occurring at the time of the contest: the 1999 contest in Israel closed with all competing acts performing a rendition of Israel's 1979 winning song "Hallelujah" as a tribute to the victims of the war in the Balkans; a dance performance entitled "The Grey People" in 2016's first semi-final was devoted to the European migrant crisis; the 2022 contest featured the anti-war songs "Fragile", "People Have the Power" and "Give Peace a Chance" in response to the Russian invasion of Ukraine that year; and an interval act in 2023's first semi-final alluded to the refugee crisis caused by that invasion.
The contest has been the subject of considerable criticism regarding both its musical content and what has been reported to be a political element to the event, and several controversial moments have been witnessed over the course of its history.
Criticism has been levied against the musical quality of past competing entries, with a perception that certain music styles are presented more often than others in an attempt to appeal to as many potential voters as possible among the international audience. Power ballads, folk rhythms and bubblegum pop have been considered staples of the contest in recent years, leading to allegations that the event has become formulaic. Other traits in past competing entries which have regularly been mocked by media and viewers include an abundance of key changes, lyrics about love and/or peace, and the pronunciation of English by non-native speakers. Given that Eurovision is principally a television show, competing performances have over the years attempted to attract viewers' attention through means other than music, with elaborate lighting displays, pyrotechnics, and extravagant on-stage theatrics and costumes becoming a common sight at recent contests; these tactics have been criticised as a method of distracting the viewer from the weak musical quality of some of the competing entries.
While many of these traits are ridiculed in the media and elsewhere, for others they are celebrated and considered an integral part of what makes the contest appealing. Although many of the competing acts each year fall into some of the categories above, the contest has seen a diverse range of musical styles in its history, including rock, heavy metal, jazz, country, electronic, R&B, hip hop and avant-garde.
As artists and songs ultimately represent a country, the contest has seen several controversial moments in which political tensions between competing countries, arising from frozen conflicts and in some cases open warfare, are reflected in the performances and voting.
The continuing conflict between Armenia and Azerbaijan has affected the contest on numerous occasions. Tensions between the two countries at Eurovision escalated quickly after both began competing in the late 2000s, resulting in fines and disciplinary action for both countries' broadcasters over political stunts, and a forced change of title for one competing song due to allegations of political subtext. Interactions between Russia and Ukraine in the contest had originally been positive; however, as political relations between the two countries soured, their relations at Eurovision also became more complex. Complaints were raised against Ukraine's winning song in 2016, "1944", whose lyrics referenced the deportation of the Crimean Tatars and which the Russian delegation claimed carried a greater political meaning in light of Russia's annexation of Crimea. As Ukraine prepared to host the following year's contest, Russia's selected representative, Yuliya Samoylova, was barred from entering the country for having previously entered Crimea illegally under Ukrainian law. Russia eventually pulled out of the contest after offers for Samoylova to perform remotely were refused by Russia's broadcaster, Channel One Russia, and the EBU subsequently reprimanded the Ukrainian broadcaster, UA:PBC. In the wake of the Russian invasion of Ukraine and protests from other participating countries, Russia was barred from competing in the 2022 contest, which Ukraine went on to win. Georgia's planned entry for the 2009 contest in Moscow, "We Don't Wanna Put In", caused controversy as the lyrics appeared to criticise Vladimir Putin, in a move seen as opposition to the then-Russian prime minister in the aftermath of the Russo-Georgian War; after requests by the EBU for changes to the lyrics were refused, Georgia's broadcaster GPB withdrew from the event. Belarus' planned entry in 2021, "Ya nauchu tebya (I'll Teach You)", also caused controversy in the wake of demonstrations against disputed election results, resulting in the country's disqualification when both that song and a proposed replacement were deemed to breach the contest's rules on neutrality and politicisation.
Israel's participation in the contest has produced several controversial moments; the country's first appearance in 1973, less than a year after the Munich massacre, resulted in an increased security presence at the venue in Luxembourg City. Israel's first win in 1978 proved controversial for Arab states broadcasting the contest, which would typically cut to advertisements when Israel performed due to their non-recognition of the country; when it became apparent that Israel would win, many of these broadcasters cut the feed before the end of the voting. Arab states which are eligible to compete have declined to participate due to Israel's presence, with Morocco the only Arab state to have entered Eurovision, competing for the first, and as of 2023 only, time in 1980, when Israel was absent. Israeli participation has been criticised by those who oppose current government policies in the state, with calls raised by various political groups for a boycott ahead of the 2019 contest in Tel Aviv, including proponents of the Boycott, Divestment and Sanctions (BDS) movement in response to the country's policies towards Palestinians in the West Bank and Gaza, as well as groups who take issue with perceived pinkwashing in Israel. Others campaigned against a boycott, asserting that any cultural boycott would be antithetical to advancing peace in the region.
The contest has been described as containing political elements in its voting process, with a perception that countries give votes more frequently, and in higher quantities, to other countries on the basis of political relationships rather than the musical merits of the songs themselves. Numerous studies and academic papers have been written on this subject and have corroborated that certain countries form "clusters" or "cliques" by frequently voting in the same way; one study concludes that voting blocs can play a crucial role in deciding the winner of the contest, with evidence that on at least two occasions bloc voting was a pivotal factor in the vote for the winning song. Other views on these "blocs" argue that certain countries allocate high points to others based on similar musical tastes, shared cultural links and a high degree of similarity and mutual intelligibility between languages, and are therefore more likely to appreciate and vote for the competing songs from these countries on those grounds rather than because of political relationships specifically. Analyses of other voting patterns have revealed examples which indicate voting preferences among countries based on shared religion, as well as "patriotic voting", particularly since the introduction of televoting in 1997, whereby foreign nationals vote for their country of origin.
Voting patterns in the contest have been reported by news publishers, including The Economist and BBC News. Criticism of the voting system was at its highest in the mid-2000s, resulting in a number of calls for countries to boycott the contest over reported voting biases, particularly following the 2007 contest, where Eastern European countries occupied the top 15 places in the final and dominated the qualifying spaces. The poor performance of entries from more traditional Eurovision countries was subsequently discussed in European national parliaments, and the developments in the voting were cited among the reasons for the resignation of Terry Wogan as commentator for the UK, a role he had performed at every contest since 1980. In response to this criticism, the EBU introduced a second semi-final in 2008, with countries split based on geographic proximity and voting history, and juries of music professionals were reintroduced in 2009, in an effort to reduce the impact of bloc voting.
Eurovision has long had a fan base in the LGBT community, and contest organisers have actively worked to include these fans in the event since the 1990s. Paul Oscar became the contest's first openly gay artist to compete when he represented Iceland in 1997. Israel's Dana International, the contest's first trans performer, became the first LGBT artist to win in 1998. In 2021, Nikkie de Jager became the first trans person to host the contest.
Several openly LGBT artists have since gone on to compete and win: Conchita Wurst, the drag persona of openly gay Thomas Neuwirth, won the 2014 contest for Austria; openly bisexual performer Duncan Laurence won the 2019 contest for the Netherlands; and rock band Måneskin, winners of the 2021 contest for Italy, features openly bisexual Victoria De Angelis as its bassist. Marija Šerifović, who won the 2007 contest for Serbia, subsequently came out publicly as a lesbian in 2013. Past competing songs and performances have included references and allusions to same-sex relationships: "Nous les amoureux", the 1961 winning song, contained references to the difficulties faced by a homosexual relationship; Krista Siegfrids' performance of "Marry Me" at the 2013 contest included a same-sex kiss with one of her female backing dancers; and the stage show for Ireland's "Together", performed by Ryan O'Shaughnessy in 2018, had two male dancers portraying a same-sex relationship. Drag performers such as Ukraine's Verka Serduchka, Denmark's DQ and Slovenia's Sestre have also appeared, in addition to Wurst's winning performance in 2014.
In recent years, various political ideologies across Europe have clashed in the Eurovision setting, particularly on LGBT rights. Dana International's selection for the 1998 contest in Birmingham was marked by objections and death threats from orthodox religious sections of Israeli society, and at the contest her accommodation was reportedly in the only hotel in Birmingham with bulletproof windows. Turkey, once a regular participant and a one-time winner, first pulled out of the contest in 2013, citing dissatisfaction with the voting rules; more recently the Turkish broadcaster TRT has cited LGBT performances as another reason for its continued boycott, having refused to broadcast the 2013 event over Finland's same-sex kiss. LGBT visibility in the contest has also been cited as a deciding factor in Hungary's non-participation since 2020, although no official reason was given by the Hungarian broadcaster MTVA. The rise of anti-LGBT sentiment in Europe has led to a marked increase in booing from contest audiences, particularly since the introduction of a "gay propaganda" law in Russia in 2013. Conchita Wurst's win was met with criticism on the Russian political stage, with several conservative politicians voicing displeasure at the result. Clashes over LGBT visibility in the contest have also occurred in countries which do not compete, such as China, where broadcasting rights were terminated during the 2018 contest due to censorship of "abnormal sexual relationships and behaviours" that went against Chinese broadcasting guidelines.
The Eurovision Song Contest has amassed a global following and sees annual audience figures of between 100 million and 600 million. The contest has become a cultural influence worldwide since its first years, is regularly described as having kitsch appeal, and has been a topic of parody in television sketches and in stage performances at the Edinburgh Fringe and Melbourne Comedy festivals, amongst others. Several films celebrating the contest have been made, including Eytan Fox's 2013 Israeli comedy Cupcakes and the 2020 Netflix musical comedy Eurovision Song Contest: The Story of Fire Saga, produced with backing from the EBU and starring Will Ferrell and Rachel McAdams.
Eurovision has a large online following, and multiple independent websites, news blogs and fan clubs are dedicated to the event. One of the oldest and largest Eurovision fan clubs is OGAE, founded in 1984 in Finland and currently a network of over 40 national branches across the world. National branches regularly host events to promote and celebrate Eurovision, and several participating broadcasters work closely with these branches when preparing their entries.
In the run-up to each year's contest, several countries regularly host smaller events, known as "pre-parties", between the conclusion of the national selection shows in March and the contest proper in May. These events typically feature the artists who will go on to compete at that year's contest, and consist of performances at a venue and meet-and-greets with fans and the press. Eurovision in Concert, held annually in Amsterdam, was one of the first of these events, holding its first edition in 2008. Other events held regularly include the London Eurovision Party, PrePartyES in Madrid, and Israel Calling in Tel Aviv. Several community events have also been held virtually, particularly since the outbreak of the COVID-19 pandemic in Europe in 2020, among them EurovisionAgain, an initiative in which fans watched and discussed past contests in sync on YouTube and other social media platforms. Launched during the first COVID-19 lockdowns, the event subsequently became a top trend on Twitter across Europe and facilitated over £20,000 in donations to UK-based LGBTQ+ charities.
Several anniversary events, and related contests under the "Eurovision Live Events" brand, have been organised by the EBU with its member broadcasters. In addition, participating broadcasters have occasionally commissioned special Eurovision programmes for their home audiences, and a number of other imitator contests have been developed outside of the EBU framework, on both a national and international level.
The EBU has held several events to mark selected anniversaries in the contest's history: Songs of Europe, held in 1981 to celebrate the contest's twenty-fifth anniversary, featured live performances and video recordings of all Eurovision Song Contest winners up to that year; Congratulations: 50 Years of the Eurovision Song Contest was organised in 2005 to celebrate the event's fiftieth anniversary, and featured a contest to determine the most popular song from among 14 selected entries from the contest's first 50 years; and in 2015 the event's sixtieth anniversary was marked by Eurovision Song Contest's Greatest Hits, a concert of performances by past Eurovision artists and video montages of performances and footage from previous contests. Following the cancellation of the 2020 contest, the EBU organised a special non-competitive broadcast, Eurovision: Europe Shine a Light, which provided a showcase for the songs that would have taken part in the competition.
Other contests organised by the EBU include Eurovision Young Musicians, a classical music competition for European musicians between the ages of 12 and 21; Eurovision Young Dancers, a dance competition for non-professional performers between the ages of 16 and 21; Eurovision Choir, a choral competition for non-professional European choirs produced in partnership with Interkultur and modelled after the World Choir Games; and the Junior Eurovision Song Contest, a similar song contest for singers aged between 9 and 14 representing primarily European countries. The Eurovision Dance Contest, an event featuring pairs of dancers performing ballroom and Latin dancing, took place for two editions, in 2007 and 2008.
Similar international music competitions have been organised outside the EBU. The Sopot International Song Festival has been held annually since 1961; between 1977 and 1980, under the patronage of the International Radio and Television Organisation (OIRT), an Eastern European broadcasting network similar to the EBU, it was rebranded as the Intervision Song Contest. An Ibero-American contest, the OTI Festival, was previously held among hispanophone and lusophone countries in Europe, North America and South America; and a contest for countries and autonomous regions with Turkic links, the Turkvision Song Contest, has been organised since 2013. Similarly, an adaptation of the contest for artists in the United States, the American Song Contest, was held in 2022 and featured songs representing U.S. states and territories. Adaptations of the contest for artists in Canada and Latin America are in development, although work on the former has been halted.
"title": "Expansion of the contest"
},
{
"paragraph_id": 47,
"text": "In 1999, an exemption from relegation was introduced for France, Germany, Spain and the United Kingdom, giving them an automatic right to compete in the 2000 contest and in all subsequent editions. This group, as the highest-paying EBU members which significantly fund the contest each year, subsequently became known as the \"Big Four\" countries. This group was expanded in 2011 when Italy began competing again, becoming the \"Big Five\". Originally brought in to ensure that the financial contributions of the contest's biggest financial backers would not be missed, since the introduction of the semi-finals in 2004, the \"Big Five\" now instead automatically qualify for the final along with the host country.",
"title": "Expansion of the contest"
},
{
"paragraph_id": 48,
"text": "There remains debate on whether this status prejudices the countries' results, based on reported antipathy over their automatic qualification and the potential disadvantage of having spent less time on stage through not competing in the semi-finals, however this status appears to be more complex given that the results of the \"Big Five\" countries can vary widely. This status has caused consternation from other competing countries, and was cited, among other aspects, as a reason why Turkey had ceased participating after 2012.",
"title": "Expansion of the contest"
},
{
"paragraph_id": 49,
"text": "An influx of new countries applying for the 2003 contest resulted in the introduction of a semi-final from 2004, with the contest becoming a two-day event. The top 10 countries in each year's final would qualify automatically to the following year's final, alongside the \"Big Four\", meaning all other countries would compete in the semi-final to compete for 10 qualification spots. The 2004 contest in Istanbul, Turkey saw a record 36 countries competing, with new entries from Albania, Andorra, Belarus and Serbia and Montenegro and the return of previously relegated countries. The format of this semi-final remained similar to the final proper, taking place a few days before the final; following the performances and the voting window, the names of the 10 countries with the highest number of points, which would therefore qualify for the final, were announced at the end of the show, revealed in a random order by the contest's presenters.",
"title": "Expansion of the contest"
},
{
"paragraph_id": 50,
"text": "The single semi-final continued to be held between 2005 and 2007; however, with 42 countries competing in the 2007 contest in Helsinki, Finland, the semi-final had 28 entries competing for 10 spots in the final. Following criticism over the mainly Central and Eastern European qualifiers at the 2007 event and the poor performance of entries from Western European countries, a second semi-final was subsequently introduced for the 2008 contest in Belgrade, Serbia, with all countries now competing in one of the two semi-finals, with only the host country and the \"Big Four\", and subsequently the \"Big Five\" from 2011, qualifying automatically. 10 qualification spots would be available in each of the semi-finals, and a new system to split the competing countries between the two semi-finals was introduced based on their geographic location and previous voting patterns, in an attempt to reduce the impact of bloc voting and to make the outcome less predictable.",
"title": "Expansion of the contest"
},
{
"paragraph_id": 51,
"text": "The contest has been used as a launching point for artists who went on to achieve worldwide fame, and several of the world's best-selling artists are counted among past Eurovision Song Contest participants and winning artists. ABBA, the 1974 winners for Sweden, have sold an estimated 380 million albums and singles since their contest win brought them to worldwide attention, with their winning song \"Waterloo\" selling over five million records. Celine Dion's win for Switzerland in 1988 helped launch her international career, particularly in the anglophone market, and she would go on to sell an estimated 200 million records worldwide. Julio Iglesias was relatively unknown when he represented Spain in 1970 and placed fourth, but worldwide success followed his Eurovision appearance, with an estimated 100 million records sold during his career. Australian-British singer Olivia Newton-John represented the United Kingdom in 1974, placing fourth behind ABBA, but went on to sell an estimated 100 million records, win four Grammy Awards, and star in the critically and commercially successful musical film Grease.",
"title": "Entries and participants"
},
{
"paragraph_id": 52,
"text": "A number of performers have competed in the contest after having already achieved considerable success. These include winning artists Lulu, Toto Cutugno, and Katrina and the Waves, and acts that failed to win such as Nana Mouskouri, Cliff Richard, Baccara, Umberto Tozzi, Plastic Bertrand, t.A.T.u., Las Ketchup, Patricia Kaas, Engelbert Humperdinck, Bonnie Tyler, and Flo Rida. Many well-known composers and lyricists have penned entries of varying success over the years, including Serge Gainsbourg, Goran Bregović, Diane Warren, Andrew Lloyd Webber, Pete Waterman, and Tony Iommi, as well as producers Timbaland and Guy-Manuel de Homem-Christo.",
"title": "Entries and participants"
},
{
"paragraph_id": 53,
"text": "Past participants have contributed to other fields in addition to their music careers. The Netherlands' Annie Schmidt, lyricist of the first entry performed at Eurovision, has gained a worldwide reputation for her stories and earned the Hans Christian Andersen Award for children's literature. French \"yé-yé girls\" Françoise Hardy and contest winner France Gall are household names of 1960s pop culture, with Hardy also being a pioneer of street style fashion trends and an inspiration for the global youthquake movement. Figures who carved a career in politics and gained international acclaim for humanitarian achievements include contest winner Dana as a two-time Irish presidential candidate and Member of the European Parliament (MEP); Nana Mouskouri as Greek MEP and a UNICEF international goodwill ambassador; contest winner Ruslana as member of Verkhovna Rada, Ukraine's parliament and a figure of the Orange Revolution and Euromaidan protests, who gained global honours for leadership and courage; and North Macedonia's Esma Redžepova as member of political parties and a two-time Nobel Peace Prize nominee.",
"title": "Entries and participants"
},
{
"paragraph_id": 54,
"text": "Competing songs have occasionally gone on to become successes for their original performers and other artists, and some of the best-selling singles globally received their first international performances at Eurovision. \"Save Your Kisses for Me\", the winning song in 1976 for the United Kingdom's Brotherhood of Man, went on to sell over six million singles, more than any other winning song. \"Nel blu, dipinto di blu\", also known as \"Volare\", Italy's third-placed song in 1958 performed by Domenico Modugno, is the only Eurovision entry to win a Grammy Award. It was the first Grammy winner for both Record of the Year and Song of the Year and it has since been recorded by various artists, topped the Billboard Hot 100 in the United States and achieved combined sales of over 22 million copies worldwide. \"Eres tú\", performed by Spain's Mocedades and runner-up in 1973, became the first Spanish-language song to reach the top 10 of the Billboard Hot 100, and the Grammy-nominated \"Ooh Aah... Just a Little Bit\", which came eighth in 1996 for the United Kingdom's Gina G, sold 790,000 records and achieved success across Europe and the US, reaching #1 on the UK Singles Chart and peaking at #12 on the Billboard Hot 100.",
"title": "Entries and participants"
},
{
"paragraph_id": 55,
"text": "The turn of the century has also seen numerous competing songs becoming successes. \"Euphoria\", Loreen's winning song for Sweden in 2012, achieved Europe-wide success, reaching number one in several countries and by 2014 had become the most downloaded Eurovision song to date. The video for \"Occidentali's Karma\" by Francesco Gabbani, which placed sixth for Italy in 2017, became the first Eurovision song to reach more than 200 million views on YouTube, while \"Soldi\" by Mahmood, the Italian runner-up in 2019, was the most-streamed Eurovision song on Spotify until it was overtaken by that year's winner for the Netherlands, \"Arcade\" by Duncan Laurence, following viral success on TikTok in late 2020 and early 2021; \"Arcade\" later became the first Eurovision song since \"Ooh Aah... Just a Little Bit\" and the first Eurovision winning song since \"Save Your Kisses for Me\" to chart on the Billboard Hot 100, eventually peaking at #30. The 2021 contest saw the next major breakthrough success from Eurovision, with Måneskin, that year's winners for Italy with \"Zitti e buoni\", attracting worldwide attention across their repertoire immediately following their victory.",
"title": "Entries and participants"
},
{
"paragraph_id": 56,
"text": "Johnny Logan was the first artist to have won multiple contests as a performer, winning for Ireland in 1980 with \"What's Another Year\", written by Shay Healy, and in 1987 with the self-penned \"Hold Me Now\". Logan was also the winning songwriter in 1992 for the Irish winner, \"Why Me?\" performed by Linda Martin, and has therefore achieved three contest victories as either a performer or writer. Four further songwriters have each written two contest-winning songs: Willy van Hemert, Yves Dessca, Rolf Løvland, and Brendan Graham. Following their introduction in 2004, Alexander Rybak became the first artist to win multiple Eurovision semi-finals, finishing in first at the second semi-finals in 2009 and 2018; he remains the only entrant to have done so to date.",
"title": "Entries and participants"
},
{
"paragraph_id": 57,
"text": "70 songs from 27 countries have won the Eurovision Song Contest as of 2023. Ireland and Sweden have recorded the most wins with seven each, followed by France, Luxembourg, the United Kingdom and the Netherlands with five each. Of the 52 countries to have taken part, 25 have yet to win. On only one occasion have multiple winners been declared in a single contest: in 1969, four countries finished the contest with an equal number of votes and due to the lack of a tie-break rule at the time, all four countries were declared winners. A majority of winning songs have been performed in English, particularly since the language rule was abolished in 1999. Since that contest, seven winning songs have been performed either fully or partially in a language other than English.",
"title": "Entries and participants"
},
{
"paragraph_id": 58,
"text": "Two countries have won the contest on their first appearance: Switzerland, by virtue of being declared the winner of the first contest in 1956; and Serbia, which won in 2007 in its first participation as an independent country, following entries in previous editions as part of the now-defunct Yugoslavia and then Serbia and Montenegro. Other countries have had relatively short waits before winning their first contest, with Ukraine victorious on its second contest appearance in 2004 and Latvia winning with its third entry in 2002. Conversely, some countries have competed for many years before recording their first win: Greece recorded its first win in 2005, 31 years after its first appearance, while Finland ended a 45-year losing streak in 2006. Portugal waited the longest, recording its first win in 2017, 53 years after its first participation. Countries have in the past had to wait many years to win again: Switzerland went 32 years between winning in 1956 and 1988; Denmark held a 37-year gap between wins in 1963 and 2000; the Netherlands waited 44 years to win again in 2019, its most recent win having been in 1975; and Austria won its second contest in 2014, 48 years after its first win in 1966.",
"title": "Entries and participants"
},
{
"paragraph_id": 59,
"text": "The United Kingdom holds the record for the highest number of second-place finishes, having finished runner-up sixteen times. Meanwhile, Norway has come last more than any other country, appearing at the bottom of the scoreboard on eleven occasions, including scoring nul points four times. A country has recorded back-to-back wins on four occasions: Spain recorded consecutive wins in 1968 and 1969; Luxembourg did likewise in 1972 and 1973; Israel won the contest four times in 1978, 1979, 1998 and 2018; and Ireland became the first country to win three consecutive titles, winning in 1992, 1993 and 1994. Ireland's winning streak in the 1990s includes the 1996 contest, giving it a record four wins in five years.",
"title": "Entries and participants"
},
{
"paragraph_id": 60,
"text": "The winning artists and songwriters receive a trophy, which since 2008 has followed a standard design: a handmade piece of sandblasted glass with painted details in the shape of a 1950s-style microphone, designed by Kjell Engman of the Swedish-based glassworks Kosta Boda. The trophy is typically presented by the previous year's winner; others who have handed out the award in the past include representatives from the host broadcaster or the EBU, and politicians; in 2007, the fictional character Joulupukki (original Santa Claus from Finland) presented the award to the winner Marija Šerifović.",
"title": "Entries and participants"
},
{
"paragraph_id": 61,
"text": "Alongside the song contest and appearances from local and international personalities, performances from non-competing artists and musicians have been included since the first edition, and have become a staple of the live show. These performances have varied widely, previously featuring music, art, dance and circus performances, and past participants are regularly invited to perform, with the reigning champion traditionally returning each year to perform the previous year's winning song.",
"title": "Interval acts and guest appearances"
},
{
"paragraph_id": 62,
"text": "The contest's opening performance and the main interval act, held following the final competing song and before the announcement of the voting results, has become a memorable part of the contest and has included both internationally known artists and local stars. Contest organisers have previously used these performances as a way to explore their country's culture and history, such as in \"4,000 Years of Greek Song\" at the 2006 contest held in Greece; other performances have been more comedic in nature, featuring parody and humour, as was the case with \"Love Love Peace Peace\" in 2016, a humorous ode to the history and spectacle of the contest itself. Riverdance, which later became one of the most successful dance productions in the world, first began as the interval performance at the 1994 contest in Ireland; the seven-minute performance of traditional Irish music and dance was later expanded into a full stage show that has been seen by over 25 million people worldwide and provided a launchpad for its lead dancers Michael Flatley and Jean Butler.",
"title": "Interval acts and guest appearances"
},
{
"paragraph_id": 63,
"text": "Among other artists who have performed in a non-competitive manner are Danish Europop group Aqua in 2001, Russian pop duo t.A.T.u. in 2009, and American entertainers Justin Timberlake and Madonna in 2016 and 2019 respectively. Other notable artists, including Cirque du Soleil (2009), Alexandrov Ensemble (2009), Vienna Boys' Choir (1967 and 2015) and Fire of Anatolia (2004), also performed on the Eurovision stage, and there have been guest appearances from well-known faces from outside the world of music, including actors, athletes, and serving astronauts and cosmonauts. Guest performances have been used as a channel in response to global events happening concurrently with the contest. The 1999 contest in Israel closed with all competing acts performing a rendition of Israel's 1979 winning song \"Hallelujah\" as a tribute to the victims of the war in the Balkans, a dance performance entitled \"The Grey People\" in 2016's first semi-final was devoted to the European migrant crisis, the 2022 contest featured known anti-war songs \"Fragile\", \"People Have the Power\" and \"Give Peace a Chance\" in response to the Russian invasion of Ukraine that same year, and an interval act in 2023's first semi-final alluded to the refugee crisis caused by the aforementioned invasion.",
"title": "Interval acts and guest appearances"
},
{
"paragraph_id": 64,
"text": "The contest has been the subject of considerable criticism regarding both its musical content and what has been reported to be a political element to the event, and several controversial moments have been witnessed over the course of its history.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 65,
"text": "Criticism has been levied against the musical quality of past competing entries, with a perception that certain music styles seen as being presented more often than others in an attempt to appeal to as many potential voters as possible among the international audience. Power ballads, folk rhythms and bubblegum pop have been considered staples of the contest in recent years, leading to allegations that the event has become formulaic. Other traits in past competing entries which have regularly been mocked by media and viewers include an abundance of key changes and lyrics about love and/or peace, as well as the pronunciation of English by non-native users of the language. Given Eurovision is principally a television show, over the years competing performances have attempted to attract the viewers' attention through means other than music, and elaborate lighting displays, pyrotechnics, and extravagant on-stage theatrics and costumes having become a common sight at recent contests; criticism of these tactics have been levied as being a method of distracting the viewer from the weak musical quality of some of the competing entries.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 66,
"text": "While many of these traits are ridiculed in the media and elsewhere, for others these traits are celebrated and considered an integral part of what makes the contest appealing. Although many of the competing acts each year will fall into some of the categories above, the contest has seen a diverse range of musical styles in its history, including rock, heavy metal, jazz, country, electronic, R&B, hip hop and avant-garde.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 67,
"text": "As artists and songs ultimately represent a country, the contest has seen several controversial moments where political tensions between competing countries as a result of frozen conflicts, and in some cases open warfare, are reflected in the performances and voting.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 68,
"text": "The continuing conflict between Armenia and Azerbaijan has affected the contest on numerous occasions. Conflicts between the two countries at Eurovision escalated quickly since both countries began competing in the late 2000s, resulting in fines and disciplinary action for both countries' broadcasters over political stunts, and a forced change of title for one competing song due to allegations of political subtext. Interactions between Russia and Ukraine in the contest had originally been positive, however as political relations soured between the two countries so too have relations at Eurovision become more complex. Complaints were levied against Ukraine's winning song in 2016, \"1944\", whose lyrics referenced the deportation of the Crimean Tatars, but which the Russian delegation claimed had a greater political meaning in light of Russia's annexation of Crimea. As Ukraine prepared to host the following year's contest, Russia's selected representative, Yuliya Samoylova, was barred from entering the country due to having previously entered Crimea illegally according to Ukrainian law. Russia eventually pulled out of the contest after offers for Samoylova to perform remotely were refused by Russia's broadcaster, Channel One Russia, resulting in the EBU reprimanding the Ukrainian broadcaster, UA:PBC. In the wake of the Russian invasion of Ukraine and subsequent protests from other participating countries, Russia was barred from competing in the 2022 contest, where Ukraine went on to win. Georgia's planned entry for the 2009 contest in Moscow, Russia, \"We Don't Wanna Put In\", caused controversy as the lyrics appeared to criticise Vladimir Putin, in a move seen as opposition to the then-Russian prime minister in the aftermath of the Russo-Georgian War. After requests by the EBU for changes to the lyrics were refused, Georgia's broadcaster GPB subsequently withdrew from the event. Belarus' planned entry in 2021, \"Ya nauchu tebya (I'll Teach You)\", also caused controversy in the wake of demonstrations against disputed election results, resulting in the country's disqualification when the aforementioned song and another potential song were deemed to breach the contest's rules on neutrality and politicisation.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 69,
"text": "Israel's participation in the contest has resulted in several controversial moments in the past, with the country's first appearance in 1973, less than a year after the Munich massacre, resulting in an increased security presence at the venue in Luxembourg City. Israel's first win in 1978 proved controversial for Arab states broadcasting the contest which would typically cut to advertisements when Israel performed due to a lack of recognition of the country, and when it became apparent Israel would win many of these broadcasters cut the feed before the end of the voting. Arab states which are eligible to compete have declined to participate due to Israel's presence, with Morocco the only Arab state to have entered Eurovision, competing for the first, and as of 2023 the only time, in 1980 when Israel was absent. Israeli participation has been criticised by those who oppose current government policies in the state, with calls raised by various political groups for a boycott ahead of the 2019 contest in Tel Aviv, including proponents of the Boycott, Divestment and Sanctions (BDS) movement in response to the country's policies towards Palestinians in the West Bank and Gaza, as well as groups who take issue with perceived pinkwashing in Israel. Others campaigned against a boycott, asserting that any cultural boycott would be antithetical to advancing peace in the region.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 70,
"text": "The contest has been described as containing political elements in its voting process, a perception that countries will give votes more frequently and in higher quantities to other countries based on political relationships, rather than the musical merits of the songs themselves. Numerous studies and academic papers have been written on this subject, which have corroborated that certain countries form \"clusters\" or \"cliques\" by frequently voting in the same way; one study concludes that voting blocs can play a crucial role in deciding the winner of the contest, with evidence that on at least two occasions bloc voting was a pivotal factor in the vote for the winning song. Other views on these \"blocs\" argue that certain countries will allocate high points to others based on similar musical tastes, shared cultural links and a high degree of similarity and mutual intelligibility between languages, and are therefore more likely to appreciate and vote for the competing songs from these countries based on these factors, rather than political relationships specifically. Analysis on other voting patterns have revealed examples which indicate voting preferences among countries based on shared religion, as well as \"patriotic voting\", particularly since the introduction of televoting in 1997, where foreign nationals vote for their country of origin.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 71,
"text": "Voting patterns in the contest have been reported by news publishers, including The Economist and BBC News. Criticism of the voting system was at its highest in the mid-2000s, resulting in a number of calls for countries to boycott the contest over reported voting biases, particularly following the 2007 contest where Eastern European countries occupied the top 15 places in the final and dominated the qualifying spaces. The poor performance of the entries from more traditional Eurovision countries had subsequently been discussed in European national parliaments, and the developments in the voting was cited as among the reasons for the resignation of Terry Wogan as commentator for the UK, a role he had performed at every contest from 1980. In response to this criticism, the EBU introduced a second semi-final in 2008, with countries split based on geographic proximity and voting history, and juries of music professionals were reintroduced in 2009, in an effort to reduce the impacts of bloc voting.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 72,
"text": "Eurovision has had a long-held fan base in the LGBT community, and contest organisers have actively worked to include these fans in the event since the 1990s. Paul Oscar became the contest's first openly gay artist to compete when he represented Iceland in 1997. Israel's Dana International, the contest's first trans performer, became the first LGBT artist to win in 1998. In 2021, Nikkie de Jager became the first trans person to host the contest.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 73,
"text": "Several open members of the LGBT community have since gone on to compete and win: Conchita Wurst, the drag persona of openly gay Thomas Neuwirth, won the 2014 contest for Austria; openly bisexual performer Duncan Laurence was the winner of the 2019 contest for the Netherlands; and rock band Måneskin, winners of the 2021 contest for Italy, features openly bisexual Victoria De Angelis as its bassist. Marija Šerifović, who won the 2007 contest for Serbia, subsequently came out publicly as a lesbian in 2013. Past competing songs and performances have included references and allusions to same-sex relationships; \"Nous les amoureux\", the 1961 winning song, contained references to the difficulties faced by a homosexual relationship; Krista Siegfrids' performance of \"Marry Me\" at the 2013 contest included a same-sex kiss with one of her female backing dancers; and the stage show of Ireland's Ryan O'Shaughnessy's \"Together\" in 2018 had two male dancers portraying a same-sex relationship. Drag performers, such as Ukraine's Verka Serduchka, Denmark's DQ and Slovenia's Sestre, have appeared, including Wurst winning in 2014.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 74,
"text": "In recent years, various political ideologies across Europe have clashed in the Eurovision setting, particularly on LGBT rights. Dana International's selection for the 1998 contest in Birmingham was marked by objections and death threats from orthodox religious sections of Israeli society, and at the contest her accommodation was reportedly in the only hotel in Birmingham with bulletproof windows. Turkey, once a regular participant and a one-time winner, first pulled out of the contest in 2013, citing dissatisfaction in the voting rules and more recently Turkish broadcaster TRT have cited LGBT performances as another reason for their continued boycott, refusing to broadcast the 2013 event over Finland's same sex kiss. LGBT visibility in the contest has been cited as a deciding factor for Hungary's non-participation since 2020, although no official reason was given by the Hungarian broadcaster MTVA. The rise of anti-LGBT sentiment in Europe has led to a marked increase in booing from contest audiences, particularly since the introduction of a \"gay propaganda\" law in Russia in 2013. Conchita Wurst's win was met with criticism on the Russian political stage, with several conservative politicians voicing displeasure in the result. Clashes on LGBT visibility in the contest have occurred in countries which do not compete, such as in China, where broadcasting rights were terminated during the 2018 contest due to censorship of \"abnormal sexual relationships and behaviours\" that went against Chinese broadcasting guidelines.",
"title": "Criticism and controversy"
},
{
"paragraph_id": 75,
"text": "The Eurovision Song Contest has amassed a global following and sees annual audience figures of between 100 and 600 million. The contest has become a cultural influence worldwide since its first years, is regularly described as having kitsch appeal, and is included as a topic of parody in television sketches and in stage performances at the Edinburgh Fringe and Melbourne Comedy festivals amongst others. Several films have been created which celebrate the contest, including Eytan Fox's 2013 Israeli comedy Cupcakes, and the Netflix 2020 musical comedy, Eurovision Song Contest: The Story of Fire Saga, produced with backing from the EBU and starring Will Ferrell and Rachel McAdams.",
"title": "Cultural influence"
},
{
"paragraph_id": 76,
"text": "Eurovision has a large online following and multiple independent websites, news blogs and fan clubs are dedicated to the event. One of the oldest and largest Eurovision fan clubs is OGAE, founded in 1984 in Finland and currently a network of over 40 national branches across the world. National branches regularly host events to promote and celebrate Eurovision, and several participating broadcasters work closely with these branches when preparing their entries.",
"title": "Cultural influence"
},
{
"paragraph_id": 77,
"text": "In the run-up to each year's contest, several countries regularly host smaller events between the conclusion of the national selection shows in March and the contest proper in May, known as the \"pre-parties\". These events typically feature the artists which will go on to compete at that year's contest, and consist of performances at a venue and meet-and-greets with fans and the press. Eurovision in Concert, held annually in Amsterdam, was one of the first of these events to be created, holding its first edition in 2008. Other events held regularly include the London Eurovision Party, PrePartyES in Madrid, and Israel Calling in Tel Aviv. Several community events have been held virtually, particularly since the outbreak of the COVID-19 pandemic in Europe in 2020, among these EurovisionAgain, an initiative where fans watched and discussed past contests in sync on YouTube and other social media platforms. Launched during the first COVID-19 lockdowns, the event subsequently became a top trend on Twitter across Europe and facilitated over £20,000 in donations for UK-based LGBTQ+ charities.",
"title": "Cultural influence"
},
{
"paragraph_id": 78,
"text": "Several anniversary events, and related contests under the \"Eurovision Live Events\" brand, have been organised by the EBU with its member broadcasters. In addition, participating broadcasters have occasionally commissioned special Eurovision programmes for their home audiences, and a number of other imitator contests have been developed outside of the EBU framework, on both a national and international level.",
"title": "Special events and related competitions"
},
{
"paragraph_id": 79,
"text": "The EBU has held several events to mark selected anniversaries in the contest's history: Songs of Europe, held in 1981 to celebrate its twenty-fifth anniversary, had live performances and video recordings of all Eurovision Song Contest winners up to 1981; Congratulations: 50 Years of the Eurovision Song Contest was organised in 2005 to celebrate the event's fiftieth anniversary, and featured a contest to determine the most popular song from among 14 selected entries from the contest's first 50 years; and in 2015 the event's sixtieth anniversary was marked by Eurovision Song Contest's Greatest Hits, a concert of performances by past Eurovision artists and video montages of performances and footage from previous contests. Following the cancellation of the 2020 contest, the EBU subsequently organised a special non-competitive broadcast, Eurovision: Europe Shine a Light, which provided a showcase for the songs that would have taken part in the competition.",
"title": "Special events and related competitions"
},
{
"paragraph_id": 80,
"text": "Other contests organised by the EBU include Eurovision Young Musicians, a classical music competition for European musicians between the ages of 12 and 21; Eurovision Young Dancers, a dance competition for non-professional performers between the ages of 16 and 21; Eurovision Choir, a choral competition for non-professional European choirs produced in partnership with the Interkultur [de] and modelled after the World Choir Games; and the Junior Eurovision Song Contest, a similar song contest for singers aged between 9 and 14 representing primarily European countries. The Eurovision Dance Contest was an event featuring pairs of dancers performing ballroom and Latin dancing, which took place for two editions, in 2007 and 2008.",
"title": "Special events and related competitions"
},
{
"paragraph_id": 81,
"text": "Similar international music competitions have been organised externally to the EBU. The Sopot International Song Festival has been held annually since 1961; between 1977 and 1980, under the patronage of the International Radio and Television Organisation (OIRT), an Eastern European broadcasting network similar to the EBU, it was rebranded as the Intervision Song Contest. An Ibero-American contest, the OTI Festival, was previously held among hispanophone and lusophone countries in Europe, North America and South America; and a contest for countries and autonomous regions with Turkic links, the Turkvision Song Contest, has been organised since 2013. Similarly, an adaption of the contest for artists in the United States, the American Song Contest, was held in 2022 and featured songs representing U.S. states and territories. Adaptions of the contest for artists in Canada and Latin America are in development, though development on the former has been halted.",
"title": "Special events and related competitions"
},
{
"paragraph_id": 82,
"text": "Sources:",
"title": "References"
}
]
| The Eurovision Song Contest, often known simply as Eurovision or by its initialism ESC, is an international song competition organised annually by the European Broadcasting Union. Each participating country submits an original song to be performed live and transmitted to national broadcasters via the Eurovision and Euroradio networks, with competing countries then casting votes for the other countries' songs to determine a winner. Based on the Sanremo Music Festival held in Italy since 1951, Eurovision has been held annually since 1956, making it the longest-running annual international televised music competition and one of the world's longest-running television programmes. Active members of the EBU and invited associate members are eligible to compete; as of 2023, 52 countries have participated at least once. Each participating broadcaster sends one original song of three minutes duration or less to be performed live by a singer or group of up to six people aged 16 or older. Each country awards 1–8, 10 and 12 points to their ten favourite songs, based on the views of an assembled group of music professionals and the country's viewing public, with the song receiving the most points declared the winner. Other performances feature alongside the competition, including a specially-commissioned opening and interval act and guest performances by musicians and other personalities, with past acts including Cirque du Soleil, Madonna, Justin Timberlake, Mika, Rita Ora and the first performance of Riverdance. Originally consisting of a single evening event, the contest has expanded as new countries joined, leading to the introduction of relegation procedures in the 1990s, before the creation of semi-finals in the 2000s. As of 2023, Germany has competed more times than any other country, having participated in all but one edition, while Ireland and Sweden both hold the record for the most victories, with seven wins each in total. Traditionally held in the country which won the preceding year's event, the contest provides an opportunity to promote the host country and city as a tourist destination. Thousands of spectators attend each year, along with journalists who cover all aspects of the contest, including rehearsals in venue, press conferences with the competing acts, in addition to other related events and performances in the host city. Alongside the generic Eurovision logo, a unique theme is typically developed for each event. The contest has aired in countries across all continents; it has been available online via the official Eurovision website since 2001. Eurovision ranks among the world's most watched non-sporting events every year, with hundreds of millions of viewers globally. Performing at the contest has often provided artists with a local career boost and in some cases long-lasting international success. Several of the best-selling music artists in the world have competed in past editions, including ABBA, Celine Dion, Julio Iglesias, Cliff Richard and Olivia Newton-John; some of the world's best-selling singles have received their first international performance on the Eurovision stage. While having gained popularity with the viewing public in both participating and non-participating countries, the contest has also been the subject of criticism for its artistic quality as well as a perceived political aspect to the event. Concerns have been raised regarding political friendships and rivalries between countries potentially having an impact on the results. 
Controversial moments have included participating countries withdrawing at a late stage, censorship of broadcast segments by broadcasters, as well as political events impacting participation. Likewise, the contest has also been criticised for an over-abundance of elaborate stage shows at the cost of artistic merit. Eurovision has, however, gained popularity for its kitsch appeal, its musical span of ethnic and international styles, as well as emergence as part of LGBT culture, resulting in a large, active fanbase and an influence on popular culture. The popularity of the contest has led to the creation of several similar events, either organised by the EBU or created by external organisations; several special events have been organised by the EBU to celebrate select anniversaries or as a replacement due to cancellation. | 2001-10-20T01:50:40Z | 2023-12-27T20:35:06Z | [
"Template:Refbegin",
"Template:Lang fr",
"Template:Sfn",
"Template:Lang",
"Template:Main",
"Template:ESCYr",
"Template:Efn",
"Template:Infobox television",
"Template:Commons category-inline",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Short description",
"Template:Redirect",
"Template:Legend",
"Template:Wide image",
"Template:Reflist",
"Template:Cite web",
"Template:European Broadcasting Union",
"Template:Use dmy dates",
"Template:As of",
"Template:Further",
"Template:Snd",
"Template:Esccnty",
"Template:Notelist",
"Template:Multiple image",
"Template:Refend",
"Template:YouTube",
"Template:Authority control",
"Template:Good article",
"Template:Legend inline",
"Template:Cite journal",
"Template:Cite news",
"Template:Eurovision Song Contest",
"Template:Currency",
"Template:Ill",
"Template:Music industry",
"Template:Escyr",
"Template:Esc",
"Template:Cite book",
"Template:Wikiquote-inline",
"Template:Official website"
]
| https://en.wikipedia.org/wiki/Eurovision_Song_Contest |
9,955 | Nitrox | Nitrox refers to any gas mixture composed (excepting trace gases) of nitrogen and oxygen. This includes atmospheric air, which is approximately 78% nitrogen, 21% oxygen, and 1% other gases, primarily argon. In the usual application, underwater diving, nitrox is normally distinguished from air and handled differently. The most common use of nitrox mixtures containing oxygen in higher proportions than atmospheric air is in scuba diving, where the reduced partial pressure of nitrogen is advantageous in reducing nitrogen uptake in the body's tissues, thereby extending the practicable underwater dive time by reducing the decompression requirement, or reducing the risk of decompression sickness (also known as the bends).
Nitrox is used to a lesser extent in surface-supplied diving, as these advantages are reduced by the more complex logistical requirements for nitrox compared to the use of simple low-pressure compressors for breathing gas supply. Nitrox can also be used in hyperbaric treatment of decompression illness, usually at pressures where pure oxygen would be hazardous. Nitrox is not a safer gas than compressed air in all respects; although its use can reduce the risk of decompression sickness, it increases the risks of oxygen toxicity and fire.
Though not generally referred to as nitrox, an oxygen-enriched air mixture is routinely provided at normal surface ambient pressure as oxygen therapy to patients with compromised respiration and circulation.
Reducing the proportion of nitrogen by increasing the proportion of oxygen reduces the risk of decompression sickness for the same dive profile, or allows extended dive times without increasing the need for decompression stops for the same risk. The significant aspect of extended no-stop time when using nitrox mixtures is reduced risk in a situation where breathing gas supply is compromised, as the diver can make a direct ascent to the surface with an acceptably low risk of decompression sickness. The exact values of the extended no-stop times vary depending on the decompression model used to derive the tables, but as an approximation, it is based on the partial pressure of nitrogen at the dive depth. This principle can be used to calculate an equivalent air depth (EAD) with the same partial pressure of nitrogen as the mix to be used, and this depth is less than the actual dive depth for oxygen enriched mixtures. The equivalent air depth is used with air decompression tables to calculate decompression obligation and no-stop times. The Goldman decompression model predicts a significant risk reduction by using nitrox (more so than the PADI tables suggest).
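As a rough illustration of the equivalent air depth idea described above, the commonly quoted metric formula scales absolute pressure by the ratio of nitrogen fractions. The sketch below is a minimal illustration only, assuming a simple 10 msw per bar conversion and a two-gas (nitrogen/oxygen) mix; it is not a substitute for decompression tables or training.

```python
def equivalent_air_depth(depth_m: float, o2_fraction: float) -> float:
    """Equivalent air depth in metres of seawater, assuming 10 msw per bar.

    depth_m: actual dive depth in metres
    o2_fraction: oxygen fraction of the nitrox mix, e.g. 0.36 for EAN36
    """
    n2_fraction = 1.0 - o2_fraction                  # mix assumed to be only N2 and O2
    return (depth_m + 10.0) * (n2_fraction / 0.79) - 10.0

# Example: for nitrogen uptake, EAN36 at 27 m behaves roughly like air at 20 m.
print(round(equivalent_air_depth(27, 0.36), 1))
```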
Controlled tests have not shown breathing nitrox to reduce the effects of nitrogen narcosis, as oxygen under pressure seems to have narcotic properties similar to those of nitrogen; thus one should not expect a reduction in narcotic effects due only to the use of nitrox. Nonetheless, there are people in the diving community who insist that they feel reduced narcotic effects at depth when breathing nitrox. This may be due to a dissociation of the subjective and behavioural effects of narcosis. Although oxygen appears chemically more narcotic at the surface, relative narcotic effects at depth have never been studied in detail, but it is known that different gases produce different narcotic effects as depth increases. Helium has no narcotic effect, but results in high-pressure nervous syndrome (HPNS) when breathed at high pressures, which does not happen with gases that have greater narcotic potency. However, because of risks associated with oxygen toxicity, divers do not usually use nitrox at greater depths where more pronounced narcosis symptoms are more likely to occur. For deep diving, trimix or heliox gases are typically used; these gases contain helium to reduce the amount of narcotic gases in the mixture.
Diving with and handling nitrox raise a number of potentially fatal dangers due to the high partial pressure of oxygen (ppO2). Nitrox is not a deep-diving gas mixture owing to the increased proportion of oxygen, which becomes toxic when breathed at high pressure. For example, the maximum operating depth of nitrox with 36% oxygen, a popular recreational diving mix, is 29 metres (95 ft) to ensure a maximum ppO2 of no more than 1.4 bar (140 kPa). The exact value of the maximum allowed ppO2 and maximum operating depth varies depending on factors such as the training agency, the type of dive, the breathing equipment and the level of surface support, with professional divers sometimes being allowed to breathe higher ppO2 than those recommended to recreational divers.
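The 29 m figure quoted above for a 36% mix follows directly from the relationship between the oxygen fraction, the chosen partial-pressure limit, and ambient pressure. The sketch below is a simplified illustration assuming 10 msw per bar; actual limits should always come from training-agency tables.

```python
def maximum_operating_depth(o2_fraction: float, max_ppo2_bar: float = 1.4) -> float:
    """Maximum operating depth in metres of seawater for a given oxygen fraction."""
    ambient_pressure_bar = max_ppo2_bar / o2_fraction   # absolute pressure at the limit
    return 10.0 * (ambient_pressure_bar - 1.0)          # 10 msw per bar of gauge pressure

print(round(maximum_operating_depth(0.36)))   # ~29 m for EAN36 at a 1.4 bar limit
print(round(maximum_operating_depth(0.32)))   # ~34 m for EAN32 at a 1.4 bar limit
```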
To dive safely with nitrox, the diver must learn good buoyancy control, a vital part of scuba diving in its own right, and a disciplined approach to preparing, planning and executing a dive to ensure that the ppO2 is known, and the maximum operating depth is not exceeded. Many dive shops, dive operators, and gas blenders (individuals trained to blend gases) require the diver to present a nitrox certification card before selling nitrox to divers.
Some training agencies, such as PADI and Technical Diving International, teach the use of two depth limits to protect against oxygen toxicity. The shallower depth is called the "maximum operating depth" and is reached when the partial pressure of oxygen in the breathing gas reaches 1.4 bar (140 kPa). The deeper depth, called the "contingency depth", is reached when the partial pressure reaches 1.6 bar (160 kPa). Diving at or beyond this level exposes the diver to a greater risk of central nervous system (CNS) oxygen toxicity. This can be extremely dangerous since its onset is often without warning and can lead to drowning, as the regulator may be spat out during convulsions, which occur in conjunction with sudden unconsciousness (general seizure induced by oxygen toxicity).
Divers trained to use nitrox may memorise the acronym VENTID-C, or sometimes ConVENTID (which stands for Vision (blurriness), Ears (ringing sound), Nausea, Twitching, Irritability, Dizziness, and Convulsions). However, evidence from non-fatal oxygen convulsions indicates that most convulsions are not preceded by any warning symptoms at all. Further, many of the suggested warning signs are also symptoms of nitrogen narcosis, and so may lead to misdiagnosis by a diver. A solution to either is to ascend to a shallower depth.
Use of nitrox may cause a reduced ventilatory response, and when breathing dense gas at the deeper limits of the usable range, this may result in carbon dioxide retention when exercise levels are high, with an increased risk of loss of consciousness.
There is anecdotal evidence that the use of nitrox reduces post-dive fatigue, particularly in older and/or obese divers; however, a double-blind study to test this found no statistically significant reduction in reported fatigue. There was, however, some suggestion that post-dive fatigue is due to sub-clinical decompression sickness (DCS) (i.e. microbubbles in the blood insufficient to cause symptoms of DCS); the fact that the study mentioned was conducted in a dry chamber with an ideal decompression profile may have been sufficient to reduce sub-clinical DCS and prevent fatigue in both nitrox and air divers. In 2008, a study using wet divers at the same depth was published; again, no statistically significant reduction in reported fatigue was seen.
Further studies with a number of different dive profiles, and also different levels of exertion, would be necessary to fully investigate this issue. For example, there is much better scientific evidence that breathing high-oxygen gases increases exercise tolerance, during aerobic exertion. Though even moderate exertion while breathing from the regulator is a relatively uncommon occurrence in recreational scuba, as divers usually try to minimize it in order to conserve gas, episodes of exertion while regulator-breathing do occasionally occur in recreational diving. Examples are surface-swimming a distance to a boat or beach after surfacing, where residual "safety" cylinder gas is often used freely, since the remainder will be wasted anyway when the dive is completed, and unplanned contingencies due to currents or buoyancy problems. It is possible that these so-far un-studied situations have contributed to some of the positive reputation of nitrox.
A 2010 study using critical flicker fusion frequency and perceived fatigue criteria found that diver alertness after a dive on nitrox was significantly better than after an air dive.
Enriched Air Nitrox, nitrox with an oxygen content above 21%, is mainly used in scuba diving to reduce the proportion of nitrogen in the breathing gas mixture. The main benefit is reduced decompression risk. To a considerably lesser extent it is also used in surface supplied diving, where the logistics are relatively complex, similar to the use of other diving gas mixtures like heliox and trimix.
Nitrox50 is used as one of the options in the first stages of therapeutic recompression using the Comex CX 30 table for treatment of vestibular or general decompression sickness. Nitrox is breathed at 30 msw and 24 msw, and during the ascents from these depths to the next stop. At 18 m the gas is switched to oxygen for the rest of the treatment.
The use of oxygen at high altitudes or as oxygen therapy may be as supplementary oxygen, added to the inspired air, which would technically be a use of nitrox, blended on site, but this is not normally referred to as such, as the gas provided for the purpose is oxygen.
Nitrox is known by many names: Enriched Air Nitrox, Oxygen Enriched Air, Nitrox, EANx or Safe Air. Since the word is a compound contraction or coined word and not an acronym, it should not be written in all upper case characters as "NITROX", but may be initially capitalized when referring to specific mixtures such as Nitrox32, which contains 68% nitrogen and 32% oxygen. When one figure is stated, it refers to the oxygen percentage, not the nitrogen percentage. The original convention, Nitrox68/32 became shortened as the first figure is redundant.
The term "nitrox" was originally used to refer to the breathing gas in a seafloor habitat where the oxygen has to be kept to a lower fraction than in air to avoid long term oxygen toxicity problems. It was later used by Dr Morgan Wells of NOAA for mixtures with an oxygen fraction higher than air, and has become a generic term for binary mixtures of nitrogen and oxygen with any oxygen fraction, and in the context of recreational and technical diving, now usually refers to a mixture of nitrogen and oxygen with more than 21% oxygen. "Enriched Air Nitrox" or "EAN", and "Oxygen Enriched Air" are used to emphasize richer than air mixtures. In "EANx", the "x" was originally the x of nitrox, but has come to indicate the percentage of oxygen in the mix and is replaced by a number when the percentage is known; for example, a 40% oxygen mix is called EAN40. The two most popular blends are EAN32 and EAN36, developed by NOAA for scientific diving, and also named Nitrox I and Nitrox II, respectively, or Nitrox68/32 and Nitrox64/36. These two mixtures were first utilized to the depth and oxygen limits for scientific diving designated by NOAA at the time.
The term Oxygen Enriched Air (OEN) was accepted by the (American) scientific diving community, but although it is probably the most unambiguous and simply descriptive term yet proposed, it was resisted by the recreational diving community, sometimes in favour of less appropriate terminology.
In its early days of introduction to non-technical divers, nitrox has occasionally also been known by detractors by less complimentary terms, such as "devil gas" or "voodoo gas" (a term now sometimes used with pride).
American Nitrox Divers International (ANDI) uses the term "SafeAir", which they define as any oxygen-enriched air mixture with O2 concentrations between 22% and 50% that meets their gas quality and handling specifications, and specifically claim that these mixtures are safer than normally produced breathing air for end users who are not involved in producing the mix. Considering the complexities and hazards of mixing, handling, analyzing, and using oxygen-enriched air, this name is considered inappropriate by those who consider that it is not inherently "safe", but merely has decompression advantages.
The constituent gas percentages are what the gas blender aims for, but the final actual mix may vary from the specification, and so a small flow of gas from the cylinder must be measured with an oxygen analyzer, before the cylinder is used underwater.
Maximum Operating Depth (MOD) is the maximum safe depth at which a given nitrox mixture can be used. MOD depends on the allowed partial pressure of oxygen, which is related to exposure time and the acceptable risk assumed for central nervous system oxygen toxicity. Acceptable maximum ppO2 varies depending on the application:
Higher values are used by commercial and military divers in special circumstances, often when the diver uses surface supplied breathing apparatus, or for treatment in a chamber, where the airway is relatively secure.
The two most common recreational diving nitrox mixes contain 32% and 36% oxygen, which have maximum operating depths (MODs) of 34 metres (112 ft) and 29 metres (95 ft) respectively when limited to a maximum partial pressure of oxygen of 1.4 bar (140 kPa). Divers may calculate an equivalent air depth to determine their decompression requirements or may use nitrox tables or a nitrox-capable dive computer.
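As a rough illustration of where these figures come from, the sketch below assumes ambient pressure increases by about 1 bar per 10 m of seawater and treats the mix as a simple nitrogen-oxygen blend; the function names and the 1.4 bar default are illustrative choices, not values taken from any particular agency's tables.

```python
def mod_metres(fo2, max_ppo2=1.4):
    """Maximum operating depth (msw) at which ppO2 = fo2 * (depth / 10 + 1) stays within the limit."""
    return 10.0 * (max_ppo2 / fo2 - 1.0)

def equivalent_air_depth(depth_m, fo2):
    """Depth on air with the same nitrogen partial pressure as the mix at depth_m."""
    fn2 = 1.0 - fo2                      # binary nitrogen/oxygen mix assumed
    return (depth_m + 10.0) * fn2 / 0.79 - 10.0

for fo2 in (0.32, 0.36):
    print(f"EAN{round(fo2 * 100)}: MOD ≈ {mod_metres(fo2):.0f} m, "
          f"EAD at 30 m ≈ {equivalent_air_depth(30, fo2):.1f} m")
# EAN32: MOD ≈ 34 m; EAN36: MOD ≈ 29 m, matching the depths quoted above.
```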
Nitrox with more than 40% oxygen is uncommon within recreational diving. There are two main reasons for this: the first is that all pieces of diving equipment that come into contact with mixes containing higher proportions of oxygen, particularly at high pressure, need special cleaning and servicing to reduce the risk of fire. The second reason is that richer mixes extend the time the diver can stay underwater without needing decompression stops far further than the duration permitted by the capacity of typical diving cylinders. For example, based on the PADI nitrox recommendations, the maximum operating depth for EAN45 would be 21 metres (69 ft) and the maximum dive time available at this depth even with EAN36 is nearly 1 hour 15 minutes: a diver with a breathing rate of 20 litres per minute using twin 10-litre, 230-bar (about double 85 cu. ft.) cylinders would have completely emptied the cylinders after 1 hour 14 minutes at this depth.
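The duration figure quoted above follows from simple gas-consumption arithmetic; the sketch below spells it out using the same assumed numbers (20 litres per minute surface breathing rate, twin 10-litre 230 bar cylinders, 21 m depth) and ignores any reserve, which a real dive plan would not.

```python
surface_rate = 20                        # litres per minute at the surface
available_gas = 2 * 10 * 230             # twin 10 l cylinders at 230 bar ≈ 4600 surface litres
ambient_bar = 21 / 10 + 1                # ≈ 3.1 bar at 21 m
rate_at_depth = surface_rate * ambient_bar   # ≈ 62 l/min consumed at depth
print(available_gas / rate_at_depth)     # ≈ 74 minutes, i.e. about 1 h 14 min
```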
Use of nitrox mixtures containing 50% to 80% oxygen is common in technical diving as decompression gas, which by virtue of its lower partial pressure of inert gases such as nitrogen and helium, allows for more efficient (faster) elimination of these gases from the tissues than leaner oxygen mixtures.
In deep open circuit technical diving, where hypoxic gases are breathed during the bottom portion of the dive, a Nitrox mix with 50% or less oxygen called a "travel mix" is sometimes breathed during the beginning of the descent in order to avoid hypoxia. Normally, however, the most oxygen-lean of the diver's decompression gases would be used for this purpose, since descent time spent reaching a depth where bottom mix is no longer hypoxic is normally small, and the distance between this depth and the MOD of any nitrox decompression gas is likely to be very short, if it occurs at all.
The composition of a nitrox mix can be optimized for a given planned dive profile. This is termed "Best mix", for the dive, and provides the maximum no-decompression time compatible with acceptable oxygen exposure. An acceptable maximum partial pressure of oxygen is selected based on depth and planned bottom time, and this value is used to calculate the oxygen content of the best mix for the dive:
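A minimal sketch of that calculation, under the same 1 bar per 10 m assumption used earlier; the 1.4 bar limit and the 30 m example are illustrative only.

```python
def best_mix_fo2(depth_m, max_ppo2=1.4):
    """Highest oxygen fraction whose MOD is no shallower than the planned depth."""
    return max_ppo2 / (depth_m / 10 + 1)

print(best_mix_fo2(30))   # ≈ 0.35, i.e. roughly EAN35 for a 30 m dive at a 1.4 bar limit
```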
There are several methods of production:
Any diving cylinder containing a blend of gases other than standard air is required by most diver training organizations, and some national governments, to be clearly marked to indicate the current gas mixture. In practice it is common to use a printed adhesive label to indicate the type of gas (in this case nitrox), and to add a temporary label to specify the analysis of the current mix.
Training standards for nitrox certification suggest the composition must be verified by the diver by using an oxygen analyzer before use.
Within the EU, valves with M26x2 outlet thread are recommended for cylinders with increased oxygen content. Regulators for use with these cylinders require compatible connectors, and are not directly connectable with cylinders for compressed air.
A German standard specifies that any mixture with an oxygen content greater than atmospheric air must be treated as pure oxygen. A nitrox cylinder is specially cleaned and identified. The cylinder colour is overall white with the letter N on opposite sides of the cylinder. The fraction of oxygen in the bottle is checked after filling and marked on the cylinder.
South African National Standard 10019:2008 specifies the colour of all scuba cylinders as Golden yellow with French gray shoulder. This applies to all underwater breathing gases except medical oxygen, which must be carried in cylinders that are Black with a White shoulder. Nitrox cylinders must be identified by a transparent, self-adhesive label with green lettering, fitted below the shoulder. In effect this is green lettering on a yellow cylinder, with a gray shoulder. The composition of the gas must also be specified on the label. In practice this is done by a small additional self-adhesive label marked with the measured oxygen fraction, which is changed when a new mix is filled.
The 2021 revision of SANS 10019 changed the colour specification to Light navy grey for the shoulder, and a different label specification which includes hazard symbols for high pressure and oxidising materials.
Every nitrox cylinder should also have a sticker stating whether or not the cylinder is oxygen clean and suitable for partial pressure blending. Any oxygen-clean cylinder may have any mix up to 100% oxygen inside. If by some accident an oxygen-clean cylinder is filled at a station that does not supply gas to oxygen-clean standards it is then considered contaminated and must be re-cleaned before a gas containing more than 40% oxygen may again be added. Cylinders marked as 'not oxygen clean' may only be filled with oxygen-enriched air mixtures from membrane or stick blending systems where the gas is mixed before being added to the cylinder, and to an oxygen fraction not exceeding 40% by volume.
Nitrox can be a hazard to the blender and to the user, for different reasons.
Partial pressure blending using pure oxygen decanted into the cylinder before topping up with air may involve very high oxygen fractions and oxygen partial pressures during the decanting process, which constitute a relatively high fire hazard. This procedure requires care and precautions by the operator, and decanting equipment and cylinders which are clean for oxygen service, but the equipment is relatively simple and inexpensive. Partial pressure blending using pure oxygen is often used to provide nitrox on live-aboard dive boats, but it is also used in some dive shops and clubs.
Any gas which contains a significantly larger percentage of oxygen than air is a fire hazard, and such gases can react with hydrocarbons or lubricants and sealing materials inside the filling system to produce toxic gases, even if a fire is not apparent. Some organisations exempt equipment from oxygen-clean standards if the oxygen fraction is limited to 40% or less.
Among recreational training agencies, only ANDI subscribes to the guideline of requiring oxygen cleaning for equipment used with more than a 23% oxygen fraction. The USCG, NOAA, the U.S. Navy, OSHA, and the other recreational training agencies accept 40% as the limit, since no accident or incident is known to have occurred when this guideline has been properly applied. Tens of thousands of recreational divers are trained each year and the overwhelming majority of these divers are taught the "over 40% rule". Most nitrox fill stations which supply pre-mixed nitrox will fill cylinders with mixtures below 40% without certification of cleanliness for oxygen service. Luxfer specifies oxygen cleaning of its cylinders for all mixtures exceeding 23.5% oxygen.
The following references for oxygen cleaning specifically cite the "over 40%" guideline that has been in widespread use since the 1960s, and consensus at the 1992 Enriched Air Workshop was to accept that guideline and continue the status quo.
Much of the confusion appears to be a result of misapplying PVHO (pressure vessel for human occupancy) guidelines which prescribe a maximum ambient oxygen content of 25% when a human is sealed into a pressure vessel (chamber). The concern here is for a fire hazard to a living person who could be trapped in an oxygen-rich burning environment.
Of the three commonly applied methods of producing enriched air mixes - continuous blending, partial pressure blending, and membrane separation systems - only partial pressure blending would require the valve and cylinder components to be oxygen cleaned for mixtures with less than 40% oxygen. The other two methods ensure that the equipment is never subjected to greater than 40% oxygen content.
In a fire, the pressure in a gas cylinder rises in direct proportion to its absolute temperature. If the internal pressure exceeds the mechanical limitations of the cylinder and there are no means to safely vent the pressurized gas to the atmosphere, the vessel will fail mechanically. If the vessel contents are ignitable or a contaminant is present this event may result in a "fireball".
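A brief numerical illustration of that proportionality, assuming ideal-gas behaviour at constant volume; the temperatures and fill pressure are arbitrary examples, not data from any incident.

```python
initial_pressure = 230        # bar, a typical fill pressure
initial_temp = 293            # K, about 20 °C
fire_temp = 573               # K, about 300 °C
print(initial_pressure * fire_temp / initial_temp)   # ≈ 450 bar, far above the working pressure
```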
Use of a gas mix that differs from the planned mix introduces an increased risk of decompression sickness or an increased risk of oxygen toxicity, depending on the error. It may be possible to simply recalculate the dive plan or set the dive computer accordingly, but in some cases the planned dive may not be practicable.
Many training agencies such as PADI, CMAS, SSI and NAUI train their divers to personally check the oxygen percentage content of each nitrox cylinder before every dive. If the oxygen percentage deviates by more than 1% from the planned mix, the diver must either recalculate the dive plan with the actual mix, or else abort the dive to avoid increased risk of oxygen toxicity or decompression sickness. Under IANTD and ANDI rules for use of nitrox, which are followed by dive resorts around the world, filled nitrox cylinders are signed out personally in a blended gas records book, which contains, for each cylinder and fill, the cylinder number, the measured oxygen fraction by percentage, the calculated maximum operating depth for that mix, and the signature of the receiving diver, who should have personally measured the oxygen fraction before taking delivery. All of these steps reduce risk but increase complexity of operations as each diver must use the specific cylinder they have checked out. In South Africa, the national standard for handling and filling portable cylinders with pressurised gases (SANS 10019) requires that the cylinder be labelled with a sticker identifying the contents as nitrox, and specifying the oxygen fraction. Similar requirements may apply in other countries.
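A toy illustration of the pre-dive check described above; the 1% tolerance mirrors the figure given in the text, but the function is a sketch, not any agency's published procedure.

```python
def mix_matches_plan(measured_fo2, planned_fo2, tolerance=0.01):
    """True if the analysed oxygen fraction is within the planning tolerance."""
    return abs(measured_fo2 - planned_fo2) <= tolerance

# A cylinder analysed at 33.5% against a planned EAN32 fails the check,
# so the dive would have to be re-planned (new MOD and tables) or aborted.
print(mix_matches_plan(0.335, 0.32))   # False
```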
In 1874, Henry Fleuss made what was possibly the first Nitrox dive using a rebreather.
In 1911 Draeger of Germany tested an injector operated rebreather backpack for a standard diving suit. This concept was produced and marketed as the DM20 oxygen rebreather system and the DM40 nitrox rebreather system, in which air from one cylinder and oxygen from a second cylinder were mixed during injection through a nozzle which circulated the breathing gas through the scrubber and the rest of the loop. The DM40 was rated for depths up to 40m.
Christian J. Lambertsen proposed calculations for nitrogen addition to prevent oxygen toxicity in divers utilizing nitrogen-oxygen rebreather diving.
In World War II or soon after, British commando frogmen and clearance divers started occasionally diving with oxygen rebreathers adapted for semi-closed-circuit nitrox (which they called "mixture") diving by fitting larger cylinders and carefully setting the gas flow rate using a flow meter. These developments were kept secret until independently duplicated by civilians in the 1960s.
Lambertsen published a paper on nitrox in 1947.
In the 1950s the United States Navy (USN) documented enriched oxygen gas procedures for military use of what we today call nitrox, in the US Navy Diving Manual.
In 1955, E. Lanphier described the use of nitrogen-oxygen diving mixtures, and the equivalent air depth method for calculating decompression from air tables.
In the 1960s, A. Galerne used on-line blending for commercial diving.
In 1970, Morgan Wells, who was the first director of the National Oceanic and Atmospheric Administration (NOAA) Diving Center, began instituting diving procedures for oxygen-enriched air. He introduced the concept of Equivalent Air Depth (EAD). He also developed a process for mixing oxygen and air which he called a continuous blending system. For many years Wells' invention was the only practical alternative to partial pressure blending. In 1979 NOAA published Wells' procedures for the scientific use of nitrox in the NOAA Diving Manual.
In 1985 Dick Rutkowski, a former NOAA diving safety officer, formed IAND (International Association of Nitrox Divers) and began teaching nitrox use for recreational diving. This was considered dangerous by some, and met with heavy skepticism by the diving community.
In 1989, the Harbor Branch Oceanographic institution workshop addressed blending, oxygen limits and decompression issues.
In 1991, Bove, Bennett and Skin Diver magazine took a stand against nitrox use for recreational diving. Skin Diver editor Bill Gleason dubbed nitrox the "Voodoo Gas". The annual DEMA show (held in Houston, Texas that year) banned nitrox training providers from the show. This caused a backlash, and when DEMA relented, a number of organizations took the opportunity to present nitrox workshops outside the show.
In 1992, the Scuba Diving Resources Group organised a workshop where some guidelines were established, and some misconceptions addressed.
In 1992 BSAC banned its members from using nitrox during BSAC activities. IAND's name was changed to the International Association of Nitrox and Technical Divers (IANTD), the T being added when the European Association of Technical Divers (EATD) merged with IAND. In the early 1990s, these agencies were teaching nitrox, but the main scuba agencies were not. Additional new organizations, including the American Nitrox Divers International (ANDI) - which invented the term "Safe Air" for marketing purposes - and Technical Diving International (TDI) were begun. NAUI became the first existing major recreational diver training agency to sanction nitrox.
In 1993 the Sub-Aqua Association was the first UK recreational diving training agency to acknowledge and endorse the Nitrox training their members had undertaken with one of the tech agencies. The SAA's first recreational Nitrox qualification was issued in April 1993. The SAA's first Nitrox instructor was Vic Bonfante and he was certified in September 1993.
Meanwhile, diving stores were finding a purely economic reason to offer nitrox: not only was an entire new course and certification needed to use it, but instead of cheap or free tank fills with compressed air, dive shops found they could charge premium amounts of money for custom-gas blending of nitrox to their ordinary, moderately experienced divers. With the new dive computers which could be programmed to allow for the longer bottom-times and shorter residual nitrogen times that nitrox gave, the incentive for the sport diver to use the gas increased.
In 1993 Skin Diver magazine, the leading recreational diving publication at the time, published a three-part series arguing that nitrox was unsafe for sport divers. DiveRite manufactured the first nitrox-compatible dive computer, called the Bridge, the aquaCorps TEK93 conference was held in San Francisco, and a practicable oil limit of 0.1 mg/m³ for oxygen compatible air was set. The Canadian armed forces issued EAD tables with an upper ppO2 of 1.5 ATA.
In 1994 John Lamb and Vandagraph launched the first oxygen analyser built specifically for Nitrox and mixed-gas divers, at the Birmingham Dive Show.
In 1994 BSAC reversed its policy on nitrox and announced that BSAC nitrox training would start in 1995.
In 1996, the Professional Association of Diving Instructors (PADI) announced full educational support for nitrox. While other mainline scuba organizations had announced their support of nitrox earlier, it was PADI's endorsement that established nitrox as a standard recreational diving option.
In 1997 ProTec started with Nitrox 1 (recreational) and Nitrox 2 (technical). A German ProTec nitrox manual (cited here in its 6th edition) has been published.
In 1999 a survey by R.W. Hamilton showed that, over hundreds of thousands of nitrox dives, the DCS record was good. Nitrox had become popular with recreational divers, but was not used much by commercial divers, who tend to use surface-supplied breathing apparatus. OSHA accepted a petition for a variance from the commercial diving regulations for recreational scuba instructors.
The 2001 edition of the NOAA Diving Manual included a chapter intended for Nitrox training.
At times in the geological past, the Earth's atmosphere contained much more than 20% oxygen: e.g. up to 35% in the Upper Carboniferous period. This let animals absorb oxygen more easily and influenced their evolutionary patterns.
Erik Satie

Eric Alfred Leslie Satie (UK: /ˈsæti, ˈsɑːti/, US: /sæˈtiː, sɑːˈtiː/; French: [eʁik sati]; 17 May 1866 – 1 July 1925), who signed his name Erik Satie after 1884, was a French composer and pianist. He was the son of a French father and a British mother. He studied at the Paris Conservatoire, but was an undistinguished student and obtained no diploma. In the 1880s he worked as a pianist in café-cabaret in Montmartre, Paris, and began composing works, mostly for solo piano, such as his Gymnopédies and Gnossiennes. He also wrote music for a Rosicrucian sect to which he was briefly attached.
After a spell in which he composed little, Satie entered Paris's second music academy, the Schola Cantorum, as a mature student. His studies there were more successful than those at the Conservatoire. From about 1910 he became the focus of successive groups of young composers attracted by his unconventionality and originality. Among them were the group known as Les Six. A meeting with Jean Cocteau in 1915 led to the creation of the ballet Parade (1917) for Serge Diaghilev, with music by Satie, sets and costumes by Pablo Picasso, and choreography by Léonide Massine.
Satie's example guided a new generation of French composers away from post-Wagnerian impressionism towards a sparer, terser style. Among those influenced by him during his lifetime were Maurice Ravel, Claude Debussy, and Francis Poulenc, and he is seen as an influence on more recent, minimalist composers such as John Cage and John Adams. His harmony is often characterised by unresolved chords, he sometimes dispensed with bar-lines, as in his Gnossiennes, and his melodies are generally simple and often reflect his love of old church music. He gave some of his later works absurd titles, such as Véritables Préludes flasques (pour un chien) ("True Flabby Preludes (for a Dog)", 1912), Croquis et agaceries d'un gros bonhomme en bois ("Sketches and Exasperations of a Big Wooden Man", 1913) and Sonatine bureaucratique ("Bureaucratic Sonatina", 1917). Most of his works are brief, and the majority are for solo piano. Exceptions include his "symphonic drama" Socrate (1919) and two late ballets Mercure and Relâche (1924).
Satie never married, and his home for most of his adult life was a single small room, first in Montmartre and, from 1898 to his death, in Arcueil, a suburb of Paris. He adopted various images over the years, including a period in quasi-priestly dress, another in which he always wore identically coloured velvet suits, and is known for his last persona, in neat bourgeois costume, with bowler hat, wing collar, and umbrella. He was a lifelong heavy drinker, and died of cirrhosis of the liver at the age of 59.
Satie was born on 17 May 1866 in Honfleur, Normandy, the first child of Alfred Satie and his wife Jane Leslie (née Anton). Jane Satie was an English Protestant of Scottish descent; Alfred Satie, a shipping broker, was a Roman Catholic anglophobe. A year later, the Saties had a daughter, Olga, and in 1869 a second son, Conrad. The children were baptised in the Anglican church.
After the Franco-Prussian War, Alfred Satie sold his business and the family moved to Paris, where he eventually set up as a music publisher. In 1872 Jane Satie died and Eric and his brother were sent back to Honfleur to be brought up by Alfred's parents. The boys were rebaptised as Roman Catholics and educated at a local boarding school, where Satie excelled in history and Latin but nothing else. In 1874 he began taking music lessons with a local organist, Gustave Vinot, a former pupil of Louis Niedermeyer. Vinot stimulated Satie's love of old church music, and in particular Gregorian chant.
In 1878 Satie's grandmother died, and the two boys returned to Paris to be informally educated by their father. Satie did not attend a school, but his father took him to lectures at the Collège de France and engaged a tutor to teach Eric Latin and Greek. Before the boys returned to Paris from Honfleur, Alfred had met a piano teacher and salon composer, Eugénie Barnetche, whom he married in January 1879, to the dismay of the twelve-year-old Satie, who did not like her.
Eugénie Satie resolved that her elder stepson should become a professional musician, and in November 1879 enrolled him in the preparatory piano class at the Paris Conservatoire. Satie strongly disliked the Conservatoire, which he described as "a vast, very uncomfortable, and rather ugly building; a sort of district prison with no beauty on the inside – nor on the outside, for that matter". He studied solfeggio with Albert Lavignac and piano with Émile Decombes, who had been a pupil of Frédéric Chopin. In 1880 Satie took his first examinations as a pianist: he was described as "gifted but indolent". The following year Decombes called him "the laziest student in the Conservatoire". In 1882 he was expelled from the Conservatoire for his unsatisfactory performance.
In 1884 Satie wrote his first known composition, a short Allegro for piano, written while on holiday in Honfleur. He signed himself "Erik" on this and subsequent compositions, though continuing to use "Eric" on other documents until 1906. In 1885, he was readmitted to the Conservatoire, in the intermediate piano class of his stepmother's former teacher, Georges Mathias. He made little progress: Mathias described his playing as "Insignificant and laborious" and Satie himself "Worthless. Three months just to learn the piece. Cannot sight-read properly". Satie became fascinated by aspects of religion. He spent much time in Notre-Dame de Paris contemplating the stained glass windows and in the National Library examining obscure medieval manuscripts. His friend Alphonse Allais later dubbed him "Esotérik Satie". From this period comes Ogives, a set of four piano pieces inspired by Gregorian chant and Gothic church architecture.
Keen to leave the Conservatoire, Satie volunteered for military service, and joined the 33rd Infantry Regiment in November 1886. He quickly found army life no more to his liking than the Conservatoire, and deliberately contracted acute bronchitis by standing in the open, bare-chested, on a winter night. After three months' convalescence he was invalided out of the army.
In 1887, at the age of 21, Satie moved from his father's residence to lodgings in the 9th arrondissement. By this time he had started what was to be an enduring friendship with the romantic poet Contamine de Latour, whose verse he set in some of his early compositions, which Satie senior published. His lodgings were close to the popular Chat Noir cabaret on the southern edge of Montmartre where he became an habitué and then a resident pianist. The Chat Noir was known as the "temple de la 'convention farfelue'" – the temple of zany convention, and as the biographer Robert Orledge puts it, Satie, "free from his restrictive upbringing … enthusiastically embraced the reckless bohemian lifestyle and created for himself a new persona as a long-haired man-about-town in frock coat and top hat". This was the first of several personas that Satie invented for himself over the years.
In the late 1880s Satie styled himself on at least one occasion "Erik Satie – gymnopédiste", and his works from this period include the three Gymnopédies (1888) and the first Gnossiennes (1889 and 1890). He earned a modest living as pianist and conductor at the Chat Noir, before falling out with the proprietor and moving to become second pianist at the nearby Auberge du Clou. There he became a close friend of Claude Debussy, who proved a kindred spirit in his experimental approach to composition. Both were bohemians, enjoying the same café society and struggling to survive financially. At the Auberge du Clou Satie first encountered the flamboyant, self-styled "Sâr" Joséphin Péladan, for whose mystic sect, the Ordre de la Rose-Croix Catholique du Temple et du Graal, he was appointed composer. This gave him scope for experiment, and Péladan's salons at the fashionable Galerie Durand-Ruel gained Satie his first public hearings. Frequently short of money, Satie moved from his lodgings in the 9th arrondissement to a small room in the rue Cortot not far from Sacre-Coeur, so high up the Butte Montmartre that he said he could see from his window all the way to the Belgian border.
By mid-1892, Satie had composed the first pieces in a compositional system of his own making (Fête donnée par des Chevaliers Normands en l'honneur d'une jeune demoiselle), provided incidental music to a chivalric esoteric play (two Préludes du Nazaréen), had a hoax published (announcing the premiere of his non-existent Le bâtard de Tristan, an anti-Wagnerian opera), and broken away from Péladan, starting that autumn with the "Uspud" project, a "Christian Ballet", in collaboration with Latour. He challenged the musical establishment by proposing himself – unsuccessfully – for the seat in the Académie des Beaux-Arts made vacant by the death of Ernest Guiraud. Between 1893 and 1895, Satie, affecting a quasi-priestly dress, was the founder and only member of the Eglise Métropolitaine d'Art de Jésus Conducteur. From his "Abbatiale" in the rue Cortot, he published scathing attacks on his artistic enemies.
In 1893 Satie had what is believed to be his only love affair, a five-month liaison with the painter Suzanne Valadon. After their first night together, he proposed marriage. The two did not marry, but Valadon moved to a room next to Satie's at the rue Cortot. Satie became obsessed with her, calling her his Biqui and writing impassioned notes about "her whole being, lovely eyes, gentle hands, and tiny feet". During their relationship Satie composed the Danses gothiques as a means of calming his mind, and Valadon painted his portrait, which she gave him. After five months she moved away, leaving him devastated. He said later that he was left with "nothing but an icy loneliness that fills the head with emptiness and the heart with sadness".
In 1895 Satie attempted to change his image once again: this time to that of "the Velvet Gentleman". From the proceeds of a small legacy he bought seven identical dun-coloured suits. Orledge comments that this change "marked the end of his Rose+Croix period and the start of a long search for a new artistic direction".
In 1898, in search of somewhere cheaper and quieter than Montmartre, Satie moved to a room in the southern suburbs, in the commune of Arcueil-Cachan, eight kilometres (five miles) from the centre of Paris. This remained his home for the rest of his life. No visitors were ever admitted. He joined a radical socialist party (he later switched his membership to the Communist Party), but adopted a thoroughly bourgeois image: the biographer Pierre-Daniel Templier writes, "With his umbrella and bowler hat, he resembled a quiet school teacher. Although a Bohemian, he looked very dignified, almost ceremonious".
Satie earned a living as a cabaret pianist, adapting more than a hundred compositions of popular music for piano or piano and voice, adding some of his own. The most popular of these were Je te veux, text by Henry Pacory; Tendrement, text by Vincent Hyspa; Poudre d'or, a waltz; La Diva de l'Empire, text by Dominique Bonnaud/Numa Blès; Le Picadilly, a march; Légende californienne, text by Contamine de Latour (lost, but the music later reappears in La belle excentrique); and others. In his later years Satie rejected all his cabaret music as vile and against his nature. Only a few compositions that he took seriously remain from this period: Jack in the Box, music to a pantomime by Jules Depaquit (called a "clownerie" by Satie); Geneviève de Brabant, a short comic opera to a text by "Lord Cheminot" (Latour); Le poisson rêveur (The Dreamy Fish), piano music to accompany a lost tale by Cheminot, and a few others that were mostly incomplete. Few were presented, and none published at the time.
A decisive change in Satie's musical outlook came after he heard the premiere of Debussy's opera Pelléas et Mélisande in 1902. He found it "absolutely astounding", and he re-evaluated his own music. In a determined attempt to improve his technique, and against Debussy's advice, he enrolled as a mature student at Paris's second main music academy, the Schola Cantorum in October 1905, continuing his studies there until 1912. The institution was run by Vincent d'Indy, who emphasised orthodox technique rather than creative originality. Satie studied counterpoint with Albert Roussel and composition with d'Indy, and was a much more conscientious and successful student than he had been at the Conservatoire in his youth.
It was not until 1911, when he was in his mid-forties, that Satie came to the notice of the musical public in general. In January of that year Maurice Ravel played some early Satie works at a concert by the Société musicale indépendante, a forward-looking group set up by Ravel and others as a rival to the conservative Société nationale de musique. Satie was suddenly seen as "the precursor and apostle of the musical revolution now taking place"; he became a focus for young composers. Debussy, having orchestrated the first and third Gymnopédies, conducted them in concert. The publisher Demets asked for new works from Satie, who was finally able to give up his cabaret work and devote himself to composition. Works such as the cycle Sports et divertissements (1914) were published in de luxe editions. The press began to write about Satie's music, and a leading pianist, Ricardo Viñes, took him up, giving celebrated first performances of some Satie pieces.
Satie became the focus of successive groups of young composers, whom he first encouraged and then distanced himself from, sometimes rancorously, when their popularity threatened to eclipse his or they otherwise displeased him. First were the "jeunes" – those associated with Ravel – and then a group known at first as the "nouveaux jeunes", later called Les Six, including Georges Auric, Louis Durey, Arthur Honegger, and Germaine Tailleferre, joined later by Francis Poulenc and Darius Milhaud. Satie dissociated himself from the second group in 1918, and in the 1920s he became the focal point of another set of young composers including Henri Cliquet-Pleyel, Roger Désormière, Maxime Jacob and Henri Sauguet, who became known as the "Arcueil School". In addition to turning against Ravel, Auric and Poulenc in particular, Satie quarrelled with his old friend Debussy in 1917, resentful of the latter's failure to appreciate the more recent Satie compositions. The rupture lasted for the remaining months of Debussy's life, and when he died the following year, Satie refused to attend the funeral. A few of his protégés escaped his displeasure, and Milhaud and Désormière were among those who remained friends with him to the last.
The First World War restricted concert-giving to some extent, but Orledge comments that the war years brought "Satie's second lucky break", when Jean Cocteau heard Viñes and Satie perform the Trois morceaux in 1916. This led to the commissioning of the ballet Parade, premiered in 1917 by Sergei Diaghilev's Ballets Russes, with music by Satie, sets and costumes by Pablo Picasso, and choreography by Léonide Massine. This was a succès de scandale, with jazz rhythms and instrumentation including parts for typewriter, steamship whistle and siren. It firmly established Satie's name before the public, and thereafter his career centred on the theatre, writing mainly to commission.
In October 1916 Satie received a commission from the Princesse de Polignac that resulted in what Orledge rates as the composer's masterpiece, Socrate, two years later. Satie set translations from Plato's Dialogues as a "symphonic drama". Its composition was interrupted in 1917 by a libel suit brought against him by a music critic, Jean Poueigh, which nearly resulted in a jail sentence for Satie. When Socrate was premiered, Satie called it "a return to classical simplicity with a modern sensibility", and among those who admired the work was Igor Stravinsky, a composer whom Satie regarded with awe.
In his later years Satie became known for his prose. He was in demand as a journalist, making contributions to the Revue musicale, Action, L'Esprit nouveau, the Paris-Journal and other publications from the Dadaist 391 to the English-language magazines Vanity Fair and The Transatlantic Review. As he contributed anonymously or under pen names to some publications it is not certain how many titles he wrote for, but Grove's Dictionary of Music and Musicians lists 25. Satie's habit of embellishing the scores of his compositions with all kinds of written remarks became so established that he had to insist that they must not be read out during performances.
In 1920 there was a festival of Satie's music at the Salle Erard in Paris. In 1924 the ballets Mercure (with choreography by Massine and décor by Picasso) and Relâche ("Cancelled") (in collaboration with Francis Picabia and René Clair) both provoked headlines with their first night scandals.
Despite being a musical iconoclast and an encourager of modernism, Satie was uninterested to the point of antipathy in innovations such as the telephone, the gramophone and the radio. He made no recordings, and as far as is known heard only a single radio broadcast (of Milhaud's music) and made only one telephone call. Although his personal appearance was customarily immaculate, his room at Arcueil was in Orledge's word "squalid", and after his death the scores of several important works believed lost were found among the accumulated rubbish. He was incompetent with money. Having depended to a considerable extent on the generosity of friends in his early years, he was little better off when he began to earn a good income from his compositions, as he spent or gave away money as soon as he received it. He liked children, and they liked him, but his relations with adults were seldom straightforward. One of his last collaborators, Picabia, said of him:
Satie's case is extraordinary. He's a mischievous and cunning old artist. At least, that's how he thinks of himself. Myself, I think the opposite! He's a very susceptible man, arrogant, a real sad child, but one who is sometimes made optimistic by alcohol. But he's a good friend, and I like him a lot.
Throughout his adult life Satie was a heavy drinker, and in 1925 his health collapsed. He was taken to the Hôpital Saint-Joseph in Paris, diagnosed with cirrhosis of the liver. He died there at 8.00 p.m. on 1 July, at the age of 59. He was buried in the cemetery at Arcueil.
In the view of the Oxford Dictionary of Music, Satie's importance lay in "directing a new generation of French composers away from Wagner‐influenced impressionism towards a leaner, more epigrammatic style". Debussy christened him "the precursor" because of his early harmonic innovations. Satie summed up his musical philosophy in 1917:
To have a feeling for harmony is to have a feeling for tonality… the melody is the Idea, the outline; as much as it is the form and the subject matter of a work. The harmony is an illumination, an exhibition of the object, its reflection.
Among his earliest compositions were sets of three Gymnopédies (1888) and his Gnossiennes (1889 onwards) for piano. They evoke the ancient world by what the critics Roger Nichols and Paul Griffiths describe as "pure simplicity, monotonous repetition, and highly original modal harmonies". It is possible that their simplicity and originality were influenced by Debussy; it is also possible that it was Satie who influenced Debussy. During the brief spell when Satie was composer to Péladan's sect he adopted a similarly austere manner.
While Satie was earning his living as a café pianist in Montmartre he contributed songs and little waltzes. After moving to Arcueil he began to write works with quirky titles, such as the seven-movement suite Trois morceaux en forme de poire ("Three Pear-shaped Pieces") for piano four-hands (1903), simply-phrased music that Nichols and Griffiths describe as "a résumé of his music since 1890" – reusing some of his earlier work as well as popular songs of the time. He struggled to find his own musical voice. Orledge writes that this was partly because of his "trying to ape his illustrious peers … we find bits of Ravel in his miniature opera Geneviève de Brabant and echoes of both Fauré and Debussy in the Nouvelles pièces froides of 1907".
After concluding his studies at the Schola Cantorum in 1912 Satie composed with greater confidence and more prolifically. Orchestration, despite his studies with d'Indy, was never his strongest suit, but his grasp of counterpoint is evident in the opening bars of Parade, and from the outset of his composing career he had original and distinctive ideas about harmony. In his later years he composed sets of short instrumental works with absurd titles, including Veritables Preludes flasques (pour un chien) ("True Flabby Preludes (for a Dog)", 1912), Croquis et agaceries d'un gros bonhomme en bois ("Sketches and Exasperations of a Big Wooden Man", 1913) and Sonatine bureaucratique ("Bureaucratic Sonatina", 1917).
In his neat, calligraphic hand, Satie would write extensive instructions for his performers, and although his words appear at first sight to be humorous and deliberately nonsensical, Nichols and Griffiths comment, "a sensitive pianist can make much of injunctions such as 'arm yourself with clairvoyance' and 'with the end of your thought'". His Sonatine bureaucratique anticipates the neoclassicism soon adopted by Stravinsky. Despite his rancorous falling out with Debussy, Satie commemorated his long-time friend in 1920, two years after Debussy's death, in the anguished "Elégie", the first of the miniature song cycle Quatre petites mélodies. Orledge rates the cycle as the finest, though least known, of the four sets of short songs of Satie's last decade.
Satie invented what he called Musique d'ameublement – "furniture music" – a kind of background not to be listened to consciously. Cinéma, composed for the René Clair film Entr'acte, shown between the acts of Relâche (1924), is an example of early film music designed to be unconsciously absorbed rather than carefully listened to.
Satie is regarded by some writers as an influence on minimalism, which developed in the 1960s and later. The musicologist Mark Bennett and the composer Humphrey Searle have said that John Cage's music shows Satie's influence, and Searle and the writer Edward Strickland have used the term "minimalism" in connection with Satie's Vexations, which the composer implied in his manuscript should be played over and over again 840 times. John Adams included a specific homage to Satie's music in his 1996 Century Rolls.
Satie wrote extensively for the press, but unlike his professional colleagues such as Debussy and Dukas he did not write primarily as a music critic. Much of his writing is connected to music tangentially if at all. His biographer Caroline Potter describes him as "an experimental creative writer, a blagueur who provoked, mystified and amused his readers". He wrote jeux d'esprit claiming to eat dinner in four minutes with a diet of exclusively white food (including bones and fruit mould), or to drink boiled wine mixed with fuchsia juice, or to be woken by a servant hourly throughout the night to have his temperature taken; he wrote in praise of Beethoven's non-existent but "sumptuous" Tenth Symphony, and the family of instruments known as the cephalophones, "which have a compass of thirty octaves and are absolutely unplayable".
Satie grouped some of these writings under the general headings Cahiers d'un mammifère (A Mammal's Notebook) and Mémoires d'un amnésique (Memoirs of an Amnesiac), indicating, as Potter comments, that "these are not autobiographical writings in the conventional manner". He claimed the major influence on his humour was Oliver Cromwell, adding "I also owe much to Christopher Columbus, because the American spirit has occasionally tapped me on the shoulder and I have been delighted to feel its ironically glacial bite".
His published writings include: | [
{
"paragraph_id": 0,
"text": "Eric Alfred Leslie Satie (UK: /ˈsæti, ˈsɑːti/, US: /sæˈtiː, sɑːˈtiː/; French: [eʁik sati]; 17 May 1866 – 1 July 1925), who signed his name Erik Satie after 1884, was a French composer and pianist. He was the son of a French father and a British mother. He studied at the Paris Conservatoire, but was an undistinguished student and obtained no diploma. In the 1880s he worked as a pianist in café-cabaret in Montmartre, Paris, and began composing works, mostly for solo piano, such as his Gymnopédies and Gnossiennes. He also wrote music for a Rosicrucian sect to which he was briefly attached.",
"title": ""
},
{
"paragraph_id": 1,
"text": "After a spell in which he composed little, Satie entered Paris's second music academy, the Schola Cantorum, as a mature student. His studies there were more successful than those at the Conservatoire. From about 1910 he became the focus of successive groups of young composers attracted by his unconventionality and originality. Among them were the group known as Les Six. A meeting with Jean Cocteau in 1915 led to the creation of the ballet Parade (1917) for Serge Diaghilev, with music by Satie, sets and costumes by Pablo Picasso, and choreography by Léonide Massine.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Satie's example guided a new generation of French composers away from post-Wagnerian impressionism towards a sparer, terser style. Among those influenced by him during his lifetime were Maurice Ravel, Claude Debussy, and Francis Poulenc, and he is seen as an influence on more recent, minimalist composers such as John Cage and John Adams. His harmony is often characterised by unresolved chords, he sometimes dispensed with bar-lines, as in his Gnossiennes, and his melodies are generally simple and often reflect his love of old church music. He gave some of his later works absurd titles, such as Veritables Preludes flasques (pour un chien) (\"True Flabby Preludes (for a Dog)\", 1912), Croquis et agaceries d'un gros bonhomme en bois (\"Sketches and Exasperations of a Big Wooden Man\", 1913) and Sonatine bureaucratique (\"Bureaucratic Sonatina\", 1917). Most of his works are brief, and the majority are for solo piano. Exceptions include his \"symphonic drama\" Socrate (1919) and two late ballets Mercure and Relâche (1924).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Satie never married, and his home for most of his adult life was a single small room, first in Montmartre and, from 1898 to his death, in Arcueil, a suburb of Paris. He adopted various images over the years, including a period in quasi-priestly dress, another in which he always wore identically coloured velvet suits, and is known for his last persona, in neat bourgeois costume, with bowler hat, wing collar, and umbrella. He was a lifelong heavy drinker, and died of cirrhosis of the liver at the age of 59.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Satie was born on 17 May 1866 in Honfleur, Normandy, the first child of Alfred Satie and his wife Jane Leslie (née Anton). Jane Satie was an English Protestant of Scottish descent; Alfred Satie, a shipping broker, was a Roman Catholic anglophobe. A year later, the Saties had a daughter, Olga, and in 1869 a second son, Conrad. The children were baptised in the Anglican church.",
"title": "Life and career"
},
{
"paragraph_id": 5,
"text": "After the Franco-Prussian War, Alfred Satie sold his business and the family moved to Paris, where he eventually set up as a music publisher. In 1872 Jane Satie died and Eric and his brother were sent back to Honfleur to be brought up by Alfred's parents. The boys were rebaptised as Roman Catholics and educated at a local boarding school, where Satie excelled in history and Latin but nothing else. In 1874 he began taking music lessons with a local organist, Gustave Vinot, a former pupil of Louis Niedermeyer. Vinot stimulated Satie's love of old church music, and in particular Gregorian chant.",
"title": "Life and career"
},
{
"paragraph_id": 6,
"text": "In 1878 Satie's grandmother died, and the two boys returned to Paris to be informally educated by their father. Satie did not attend a school, but his father took him to lectures at the Collège de France and engaged a tutor to teach Eric Latin and Greek. Before the boys returned to Paris from Honfleur, Alfred had met a piano teacher and salon composer, Eugénie Barnetche, whom he married in January 1879, to the dismay of the twelve-year-old Satie, who did not like her.",
"title": "Life and career"
},
{
"paragraph_id": 7,
"text": "Eugénie Satie resolved that her elder stepson should become a professional musician, and in November 1879 enrolled him in the preparatory piano class at the Paris Conservatoire. Satie strongly disliked the Conservatoire, which he described as \"a vast, very uncomfortable, and rather ugly building; a sort of district prison with no beauty on the inside – nor on the outside, for that matter\". He studied solfeggio with Albert Lavignac and piano with Émile Decombes, who had been a pupil of Frédéric Chopin. In 1880 Satie took his first examinations as a pianist: he was described as \"gifted but indolent\". The following year Decombes called him \"the laziest student in the Conservatoire\". In 1882 he was expelled from the Conservatoire for his unsatisfactory performance.",
"title": "Life and career"
},
{
"paragraph_id": 8,
"text": "In 1884 Satie wrote his first known composition, a short Allegro for piano, written while on holiday in Honfleur. He signed himself \"Erik\" on this and subsequent compositions, though continuing to use \"Eric\" on other documents until 1906. In 1885, he was readmitted to the Conservatoire, in the intermediate piano class of his stepmother's former teacher, Georges Mathias. He made little progress: Mathias described his playing as \"Insignificant and laborious\" and Satie himself \"Worthless. Three months just to learn the piece. Cannot sight-read properly\". Satie became fascinated by aspects of religion. He spent much time in Notre-Dame de Paris contemplating the stained glass windows and in the National Library examining obscure medieval manuscripts. His friend Alphonse Allais later dubbed him \"Esotérik Satie\". From this period comes Ogives, a set of four piano pieces inspired by Gregorian chant and Gothic church architecture.",
"title": "Life and career"
},
{
"paragraph_id": 9,
"text": "Keen to leave the Conservatoire, Satie volunteered for military service, and joined the 33rd Infantry Regiment in November 1886. He quickly found army life no more to his liking than the Conservatoire, and deliberately contracted acute bronchitis by standing in the open, bare-chested, on a winter night. After three months' convalescence he was invalided out of the army.",
"title": "Life and career"
},
{
"paragraph_id": 10,
"text": "In 1887, at the age of 21, Satie moved from his father's residence to lodgings in the 9th arrondissement. By this time he had started what was to be an enduring friendship with the romantic poet Contamine de Latour, whose verse he set in some of his early compositions, which Satie senior published. His lodgings were close to the popular Chat Noir cabaret on the southern edge of Montmartre where he became an habitué and then a resident pianist. The Chat Noir was known as the \"temple de la 'convention farfelue'\" – the temple of zany convention, and as the biographer Robert Orledge puts it, Satie, \"free from his restrictive upbringing … enthusiastically embraced the reckless bohemian lifestyle and created for himself a new persona as a long-haired man-about-town in frock coat and top hat\". This was the first of several personas that Satie invented for himself over the years.",
"title": "Life and career"
},
{
"paragraph_id": 11,
"text": "In the late 1880s Satie styled himself on at least one occasion \"Erik Satie – gymnopédiste\", and his works from this period include the three Gymnopédies (1888) and the first Gnossiennes (1889 and 1890). He earned a modest living as pianist and conductor at the Chat Noir, before falling out with the proprietor and moving to become second pianist at the nearby Auberge du Clou. There he became a close friend of Claude Debussy, who proved a kindred spirit in his experimental approach to composition. Both were bohemians, enjoying the same café society and struggling to survive financially. At the Auberge du Clou Satie first encountered the flamboyant, self-styled \"Sâr\" Joséphin Péladan, for whose mystic sect, the Ordre de la Rose-Croix Catholique du Temple et du Graal, he was appointed composer. This gave him scope for experiment, and Péladan's salons at the fashionable Galerie Durand-Ruel gained Satie his first public hearings. Frequently short of money, Satie moved from his lodgings in the 9th arrondissement to a small room in the rue Cortot not far from Sacre-Coeur, so high up the Butte Montmartre that he said he could see from his window all the way to the Belgian border.",
"title": "Life and career"
},
{
"paragraph_id": 12,
"text": "By mid-1892, Satie had composed the first pieces in a compositional system of his own making (Fête donnée par des Chevaliers Normands en l'honneur d'une jeune demoiselle), provided incidental music to a chivalric esoteric play (two Préludes du Nazaréen), had a hoax published (announcing the premiere of his non-existent Le bâtard de Tristan, an anti-Wagnerian opera), and broken away from Péladan, starting that autumn with the \"Uspud\" project, a \"Christian Ballet\", in collaboration with Latour. He challenged the musical establishment by proposing himself – unsuccessfully – for the seat in the Académie des Beaux-Arts made vacant by the death of Ernest Guiraud. Between 1893 and 1895, Satie, affecting a quasi-priestly dress, was the founder and only member of the Eglise Métropolitaine d'Art de Jésus Conducteur. From his \"Abbatiale\" in the rue Cortot, he published scathing attacks on his artistic enemies.",
"title": "Life and career"
},
{
"paragraph_id": 13,
"text": "In 1893 Satie had what is believed to be his only love affair, a five-month liaison with the painter Suzanne Valadon. After their first night together, he proposed marriage. The two did not marry, but Valadon moved to a room next to Satie's at the rue Cortot. Satie became obsessed with her, calling her his Biqui and writing impassioned notes about \"her whole being, lovely eyes, gentle hands, and tiny feet\". During their relationship Satie composed the Danses gothiques as a means of calming his mind, and Valadon painted his portrait, which she gave him. After five months she moved away, leaving him devastated. He said later that he was left with \"nothing but an icy loneliness that fills the head with emptiness and the heart with sadness\".",
"title": "Life and career"
},
{
"paragraph_id": 14,
"text": "In 1895 Satie attempted to change his image once again: this time to that of \"the Velvet Gentleman\". From the proceeds of a small legacy he bought seven identical dun-coloured suits. Orledge comments that this change \"marked the end of his Rose+Croix period and the start of a long search for a new artistic direction\".",
"title": "Life and career"
},
{
"paragraph_id": 15,
"text": "In 1898, in search of somewhere cheaper and quieter than Montmartre, Satie moved to a room in the southern suburbs, in the commune of Arcueil-Cachan, eight kilometres (five miles) from the centre of Paris. This remained his home for the rest of his life. No visitors were ever admitted. He joined a radical socialist party (he later switched his membership to the Communist Party), but adopted a thoroughly bourgeois image: the biographer Pierre-Daniel Templier, writes, \"With his umbrella and bowler hat, he resembled a quiet school teacher. Although a Bohemian, he looked very dignified, almost ceremonious\".",
"title": "Life and career"
},
{
"paragraph_id": 16,
"text": "Satie earned a living as a cabaret pianist, adapting more than a hundred compositions of popular music for piano or piano and voice, adding some of his own. The most popular of these were Je te veux, text by Henry Pacory; Tendrement, text by Vincent Hyspa; Poudre d'or, a waltz; La Diva de l'Empire, text by Dominique Bonnaud/Numa Blès; Le Picadilly, a march; Légende californienne, text by Contamine de Latour (lost, but the music later reappears in La belle excentrique); and others. In his later years Satie rejected all his cabaret music as vile and against his nature. Only a few compositions that he took seriously remain from this period: Jack in the Box, music to a pantomime by Jules Depaquit (called a \"clownerie\" by Satie); Geneviève de Brabant, a short comic opera to a text by \"Lord Cheminot\" (Latour); Le poisson rêveur (The Dreamy Fish), piano music to accompany a lost tale by Cheminot, and a few others that were mostly incomplete. Few were presented, and none published at the time.",
"title": "Life and career"
},
{
"paragraph_id": 17,
"text": "A decisive change in Satie's musical outlook came after he heard the premiere of Debussy's opera Pelléas et Mélisande in 1902. He found it \"absolutely astounding\", and he re-evaluated his own music. In a determined attempt to improve his technique, and against Debussy's advice, he enrolled as a mature student at Paris's second main music academy, the Schola Cantorum in October 1905, continuing his studies there until 1912. The institution was run by Vincent d'Indy, who emphasised orthodox technique rather than creative originality. Satie studied counterpoint with Albert Roussel and composition with d'Indy, and was a much more conscientious and successful student than he had been at the Conservatoire in his youth.",
"title": "Life and career"
},
{
"paragraph_id": 18,
"text": "It was not until 1911, when he was in his mid-forties, that Satie came to the notice of the musical public in general. In January of that year Maurice Ravel played some early Satie works at a concert by the Société musicale indépendante, a forward-looking group set up by Ravel and others as a rival to the conservative Société nationale de musique. Satie was suddenly seen as \"the precursor and apostle of the musical revolution now taking place\"; he became a focus for young composers. Debussy, having orchestrated the first and third Gymnopédies, conducted them in concert. The publisher Demets asked for new works from Satie, who was finally able to give up his cabaret work and devote himself to composition. Works such as the cycle Sports et divertissements (1914) were published in de luxe editions. The press began to write about Satie's music, and a leading pianist, Ricardo Viñes, took him up, giving celebrated first performances of some Satie pieces.",
"title": "Life and career"
},
{
"paragraph_id": 19,
"text": "Satie became the focus of successive groups of young composers, whom he first encouraged and then distanced himself from, sometimes rancorously, when their popularity threatened to eclipse his or they otherwise displeased him. First were the \"jeunes\" – those associated with Ravel – and then a group known at first as the \"nouveaux jeunes\", later called Les Six, including Georges Auric, Louis Durey, Arthur Honegger, and Germaine Tailleferre, joined later by Francis Poulenc and Darius Milhaud. Satie dissociated himself from the second group in 1918, and in the 1920s he became the focal point of another set of young composers including Henri Cliquet-Pleyel, Roger Désormière, Maxime Jacob and Henri Sauguet, who became known as the \"Arcueil School\". In addition to turning against Ravel, Auric and Poulenc in particular, Satie quarrelled with his old friend Debussy in 1917, resentful of the latter's failure to appreciate the more recent Satie compositions. The rupture lasted for the remaining months of Debussy's life, and when he died the following year, Satie refused to attend the funeral. A few of his protégés escaped his displeasure, and Milhaud and Désormière were among those who remained friends with him to the last.",
"title": "Life and career"
},
{
"paragraph_id": 20,
"text": "The First World War restricted concert-giving to some extent, but Orledge comments that the war years brought \"Satie's second lucky break\", when Jean Cocteau heard Viñes and Satie perform the Trois morceaux in 1916. This led to the commissioning of the ballet Parade, premiered in 1917 by Sergei Diaghilev's Ballets Russes, with music by Satie, sets and costumes by Pablo Picasso, and choreography by Léonide Massine. This was a succès de scandale, with jazz rhythms and instrumentation including parts for typewriter, steamship whistle and siren. It firmly established Satie's name before the public, and thereafter his career centred on the theatre, writing mainly to commission.",
"title": "Life and career"
},
{
"paragraph_id": 21,
"text": "In October 1916 Satie received a commission from the Princesse de Polignac that resulted in what Orledge rates as the composer's masterpiece, Socrate, two years later. Satie set translations from Plato's Dialogues as a \"symphonic drama\". Its composition was interrupted in 1917 by a libel suit brought against him by a music critic, Jean Poueigh, which nearly resulted in a jail sentence for Satie. When Socrate was premiered, Satie called it \"a return to classical simplicity with a modern sensibility\", and among those who admired the work was Igor Stravinsky, a composer whom Satie regarded with awe.",
"title": "Life and career"
},
{
"paragraph_id": 22,
"text": "In his later years Satie became known for his prose. He was in demand as a journalist, making contributions to the Revue musicale, Action, L'Esprit nouveau, the Paris-Journal and other publications from the Dadaist 391 to the English-language magazines Vanity Fair and The Transatlantic Review. As he contributed anonymously or under pen names to some publications it is not certain how many titles he wrote for, but Grove's Dictionary of Music and Musicians lists 25. Satie's habit of embellishing the scores of his compositions with all kinds of written remarks became so established that he had to insist that they must not be read out during performances.",
"title": "Life and career"
},
{
"paragraph_id": 23,
"text": "In 1920 there was a festival of Satie's music at the Salle Erard in Paris. In 1924 the ballets Mercure (with choreography by Massine and décor by Picasso) and Relâche (\"Cancelled\") (in collaboration with Francis Picabia and René Clair), both provoked headlines with their first night scandals.",
"title": "Life and career"
},
{
"paragraph_id": 24,
"text": "Despite being a musical iconoclast, and encourager of modernism, Satie was uninterested to the point of antipathy about innovations such as the telephone, the gramophone and the radio. He made no recordings, and as far as is known heard only a single radio broadcast (of Milhaud's music) and made only one telephone call. Although his personal appearance was customarily immaculate, his room at Arcueil was in Orledge's word \"squalid\", and after his death the scores of several important works believed lost were found among the accumulated rubbish. He was incompetent with money. Having depended to a considerable extent on the generosity of friends in his early years, he was little better off when he began to earn a good income from his compositions, as he spent or gave away money as soon as he received it. He liked children, and they liked him, but his relations with adults were seldom straightforward. One of his last collaborators, Picabia, said of him:",
"title": "Life and career"
},
{
"paragraph_id": 25,
"text": "Satie's case is extraordinary. He's a mischievous and cunning old artist. At least, that's how he thinks of himself. Myself, I think the opposite! He's a very susceptible man, arrogant, a real sad child, but one who is sometimes made optimistic by alcohol. But he's a good friend, and I like him a lot.",
"title": "Life and career"
},
{
"paragraph_id": 26,
"text": "Throughout his adult life Satie was a heavy drinker, and in 1925 his health collapsed. He was taken to the Hôpital Saint-Joseph in Paris, diagnosed with cirrhosis of the liver. He died there at 8.00 p.m. on 1 July, at the age of 59. He was buried in the cemetery at Arcueil.",
"title": "Life and career"
},
{
"paragraph_id": 27,
"text": "In the view of the Oxford Dictionary of Music, Satie's importance lay in \"directing a new generation of French composers away from Wagner‐influenced impressionism towards a leaner, more epigrammatic style\". Debussy christened him \"the precursor\" because of his early harmonic innovations. Satie summed up his musical philosophy in 1917:",
"title": "Works"
},
{
"paragraph_id": 28,
"text": "To have a feeling for harmony is to have a feeling for tonality… the melody is the Idea, the outline; as much as it is the form and the subject matter of a work. The harmony is an illumination, an exhibition of the object, its reflection.",
"title": "Works"
},
{
"paragraph_id": 29,
"text": "Among his earliest compositions were sets of three Gymnopédies (1888) and his Gnossiennes (1889 onwards) for piano. They evoke the ancient world by what the critics Roger Nichols and Paul Griffiths describe as \"pure simplicity, monotonous repetition, and highly original modal harmonies\". It is possible that their simplicity and originality were influenced by Debussy; it is also possible that it was Satie who influenced Debussy. During the brief spell when Satie was composer to Péladan's sect he adopted a similarly austere manner.",
"title": "Works"
},
{
"paragraph_id": 30,
"text": "While Satie was earning his living as a café pianist in Montmartre he contributed songs and little waltzes. After moving to Arcueil he began to write works with quirky titles, such as the seven-movement suite Trois morceaux en forme de poire (\"Three Pear-shaped Pieces\") for piano four-hands (1903), simply-phrased music that Nichols and Griffiths describe as \"a résumé of his music since 1890\" – reusing some of his earlier work as well as popular songs of the time. He struggled to find his own musical voice. Orledge writes that this was partly because of his \"trying to ape his illustrious peers … we find bits of Ravel in his miniature opera Geneviève de Brabant and echoes of both Fauré and Debussy in the Nouvelles pièces froides of 1907\".",
"title": "Works"
},
{
"paragraph_id": 31,
"text": "After concluding his studies at the Schola Cantorum in 1912 Satie composed with greater confidence and more prolifically. Orchestration, despite his studies with d'Indy, was never his strongest suit, but his grasp of counterpoint is evident in the opening bars of Parade, and from the outset of his composing career he had original and distinctive ideas about harmony. In his later years he composed sets of short instrumental works with absurd titles, including Veritables Preludes flasques (pour un chien) (\"True Flabby Preludes (for a Dog)\", 1912), Croquis et agaceries d'un gros bonhomme en bois (\"Sketches and Exasperations of a Big Wooden Man\", 1913) and Sonatine bureaucratique (\"Bureaucratic Sonata\", 1917).",
"title": "Works"
},
{
"paragraph_id": 32,
"text": "In his neat, calligraphic hand, Satie would write extensive instructions for his performers, and although his words appear at first sight to be humorous and deliberately nonsensical, Nichols and Griffiths comment, \"a sensitive pianist can make much of injunctions such as 'arm yourself with clairvoyance' and 'with the end of your thought'\". His Sonatine bureaucratique anticipates the neoclassicism soon adopted by Stravinsky. Despite his rancorous falling out with Debussy, Satie commemorated his long-time friend in 1920, two years after Debussy's death, in the anguished \"Elégie\", the first of the miniature song cycle Quatre petites mélodies. Orledge rates the cycle as the finest, though least known, of the four sets of short songs of Satie's last decade.",
"title": "Works"
},
{
"paragraph_id": 33,
"text": "Satie invented what he called Musique d'ameublement – \"furniture music\" – a kind of background not to be listened to consciously. Cinéma, composed for the René Clair film Entr'acte, shown between the acts of Relâche (1924), is an example of early film music designed to be unconsciously absorbed rather than carefully listened to.",
"title": "Works"
},
{
"paragraph_id": 34,
"text": "Satie is regarded by some writers as an influence on minimalism, which developed in the 1960s and later. The musicologist Mark Bennett and the composer Humphrey Searle have said that John Cage's music shows Satie's influence, and Searle and the writer Edward Strickland have used the term \"minimalism\" in connection with Satie's Vexations, which the composer implied in his manuscript should be played over and over again 840 times. John Adams included a specific homage to Satie's music in his 1996 Century Rolls.",
"title": "Works"
},
{
"paragraph_id": 35,
"text": "Satie wrote extensively for the press, but unlike his professional colleagues such as Debussy and Dukas he did not write primarily as a music critic. Much of his writing is connected to music tangentially if at all. His biographer Caroline Potter describes him as \"an experimental creative writer, a blagueur who provoked, mystified and amused his readers\". He wrote jeux d'esprit claiming to eat dinner in four minutes with a diet of exclusively white food (including bones and fruit mould), or to drink boiled wine mixed with fuchsia juice, or to be woken by a servant hourly throughout the night to have his temperature taken; he wrote in praise of Beethoven's non-existent but \"sumptuous\" Tenth Symphony, and the family of instruments known as the cephalophones, \"which have a compass of thirty octaves and are absolutely unplayable\".",
"title": "Works"
},
{
"paragraph_id": 36,
"text": "Satie grouped some of these writings under the general headings Cahiers d'un mammifère (A Mammal's Notebook) and Mémoires d'un amnésique (Memoirs of an Amnesiac), indicating, as Potter comments, that \"these are not autobiographical writings in the conventional manner\". He claimed the major influence on his humour was Oliver Cromwell, adding \"I also owe much to Christopher Columbus, because the American spirit has occasionally tapped me on the shoulder and I have been delighted to feel its ironically glacial bite\".",
"title": "Works"
},
{
"paragraph_id": 37,
"text": "His published writings include:",
"title": "Works"
}
]
| Eric Alfred Leslie Satie, who signed his name Erik Satie after 1884, was a French composer and pianist. He was the son of a French father and a British mother. He studied at the Paris Conservatoire, but was an undistinguished student and obtained no diploma. In the 1880s he worked as a pianist in café-cabaret in Montmartre, Paris, and began composing works, mostly for solo piano, such as his Gymnopédies and Gnossiennes. He also wrote music for a Rosicrucian sect to which he was briefly attached. After a spell in which he composed little, Satie entered Paris's second music academy, the Schola Cantorum, as a mature student. His studies there were more successful than those at the Conservatoire. From about 1910 he became the focus of successive groups of young composers attracted by his unconventionality and originality. Among them were the group known as Les Six. A meeting with Jean Cocteau in 1915 led to the creation of the ballet Parade (1917) for Serge Diaghilev, with music by Satie, sets and costumes by Pablo Picasso, and choreography by Léonide Massine. Satie's example guided a new generation of French composers away from post-Wagnerian impressionism towards a sparer, terser style. Among those influenced by him during his lifetime were Maurice Ravel, Claude Debussy, and Francis Poulenc, and he is seen as an influence on more recent, minimalist composers such as John Cage and John Adams. His harmony is often characterised by unresolved chords, he sometimes dispensed with bar-lines, as in his Gnossiennes, and his melodies are generally simple and often reflect his love of old church music. He gave some of his later works absurd titles, such as Veritables Preludes flasques, Croquis et agaceries d'un gros bonhomme en bois and Sonatine bureaucratique. Most of his works are brief, and the majority are for solo piano. Exceptions include his "symphonic drama" Socrate (1919) and two late ballets Mercure and Relâche (1924). Satie never married, and his home for most of his adult life was a single small room, first in Montmartre and, from 1898 to his death, in Arcueil, a suburb of Paris. He adopted various images over the years, including a period in quasi-priestly dress, another in which he always wore identically coloured velvet suits, and is known for his last persona, in neat bourgeois costume, with bowler hat, wing collar, and umbrella. He was a lifelong heavy drinker, and died of cirrhosis of the liver at the age of 59. | 2001-10-21T18:35:52Z | 2023-12-06T00:09:49Z | [
"Template:Short description",
"Template:Spaced ndash",
"Template:Blockquote",
"Template:Wikisource author",
"Template:Portal bar",
"Template:IPA-fr",
"Template:Reflist",
"Template:Cite EPD",
"Template:Subscription",
"Template:ChoralWiki",
"Template:Authority control",
"Template:Lang",
"Template:See also",
"Template:Commons category",
"Template:Erik Satie",
"Template:Cite book",
"Template:Use British English",
"Template:Use dmy dates",
"Template:IPAc-en",
"Template:Refn",
"Template:ISBN",
"Template:Cite LPD",
"Template:Webarchive",
"Template:Wikiquote",
"Template:IMSLP",
"Template:Modernism"
]
| https://en.wikipedia.org/wiki/Erik_Satie |
9,960 | Elliptic integral | In integral calculus, an elliptic integral is one of a number of related functions defined as the value of certain integrals, which were first studied by Giulio Fagnano and Leonhard Euler (c. 1750). Their name originates from their originally arising in connection with the problem of finding the arc length of an ellipse.
Modern mathematics defines an "elliptic integral" as any function f which can be expressed in the form
where R is a rational function of its two arguments, P is a polynomial of degree 3 or 4 with no repeated roots, and c is a constant.
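Written out, the form described here is conventionally given (in LaTeX notation) as

    f(x) = \int_{c}^{x} R\left(t, \sqrt{P(t)}\right)\, dt

with R, P and c as just defined.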
In general, integrals in this form cannot be expressed in terms of elementary functions. Exceptions to this general rule are when P has repeated roots, when R(x, y) contains no odd powers of y, or when the integral is pseudo-elliptic. However, with the appropriate reduction formula, every elliptic integral can be brought into a form that involves integrals over rational functions and the three Legendre canonical forms (i.e. the elliptic integrals of the first, second and third kind).
Besides the Legendre form given below, the elliptic integrals may also be expressed in Carlson symmetric form. Additional insight into the theory of the elliptic integral may be gained through the study of the Schwarz–Christoffel mapping. Historically, elliptic functions were discovered as inverse functions of elliptic integrals.
Incomplete elliptic integrals are functions of two arguments; complete elliptic integrals are functions of a single argument. These arguments are expressed in a variety of different but equivalent ways (they give the same elliptic integral). Most texts adhere to a canonical naming scheme, using the following naming conventions.
For expressing one argument:
Each of the above three quantities is completely determined by any of the others (given that they are non-negative). Thus, they can be used interchangeably.
The other argument can likewise be expressed as φ, the amplitude, or as x or u, where x = sin φ = sn u and sn is one of the Jacobian elliptic functions.
Specifying the value of any one of these quantities determines the others. Note that u also depends on m. Some additional relationships involving u include
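The standard relations in question, stated here for reference in the notation just introduced, are

    \cos\varphi = \operatorname{cn} u, \qquad \sqrt{1 - m\sin^{2}\varphi} = \operatorname{dn} u.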
The latter is sometimes called the delta amplitude and written as Δ(φ) = dn u. Sometimes the literature also refers to the complementary parameter, the complementary modulus, or the complementary modular angle. These are further defined in the article on quarter periods.
In this notation, the use of a vertical bar as delimiter indicates that the argument following it is the "parameter" (as defined above), while the backslash indicates that it is the modular angle. The use of a semicolon implies that the argument preceding it is the sine of the amplitude:
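As an illustration of these conventions (following the Abramowitz and Stegun style discussed below), the incomplete integral of the first kind can be written equivalently as

    F(\varphi, k) = F(\varphi \mid k^{2}) = F(\varphi \setminus \alpha) = F(\sin\varphi ; k), \qquad m = k^{2} = \sin^{2}\alpha.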
This potentially confusing use of different argument delimiters is traditional in elliptic integrals and much of the notation is compatible with that used in the reference book by Abramowitz and Stegun and that used in the integral tables by Gradshteyn and Ryzhik.
There are still other conventions for the notation of elliptic integrals employed in the literature. The notation with interchanged arguments, F(k, φ), is often encountered; and similarly E(k, φ) for the integral of the second kind. Abramowitz and Stegun substitute the integral of the first kind, F(φ, k), for the argument φ in their definition of the integrals of the second and third kinds, unless this argument is followed by a vertical bar: i.e. E(F(φ, k) | k²) for E(φ | k²). Moreover, their complete integrals employ the parameter k² as argument in place of the modulus k, i.e. K(k²) rather than K(k). And the integral of the third kind defined by Gradshteyn and Ryzhik, Π(φ, n, k), puts the amplitude φ first and not the "characteristic" n.
Thus one must be careful with the notation when using these functions, because various reputable references and software packages use different conventions in the definitions of the elliptic functions. For example, Wolfram's Mathematica software and Wolfram Alpha define the complete elliptic integral of the first kind in terms of the parameter m, instead of the elliptic modulus k.
The incomplete elliptic integral of the first kind F is defined as
This is the trigonometric form of the integral; substituting t = sin θ and x = sin φ, one obtains the Legendre normal form:
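For reference, the trigonometric form and the Legendre normal form referred to in the two preceding sentences are

    F(\varphi, k) = \int_{0}^{\varphi} \frac{d\theta}{\sqrt{1 - k^{2}\sin^{2}\theta}} = \int_{0}^{x} \frac{dt}{\sqrt{(1 - t^{2})(1 - k^{2}t^{2})}}, \qquad x = \sin\varphi.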
Equivalently, in terms of the amplitude and modular angle one has:
With x = sn(u, k) one has:
demonstrating that this Jacobian elliptic function is a simple inverse of the incomplete elliptic integral of the first kind.
The incomplete elliptic integral of the first kind has the following addition theorem:
The elliptic modulus can be transformed as follows:
The incomplete elliptic integral of the second kind E in trigonometric form is
Substituting t = sin θ and x = sin φ, one obtains the Legendre normal form:
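For reference, these two forms are

    E(\varphi, k) = \int_{0}^{\varphi} \sqrt{1 - k^{2}\sin^{2}\theta}\, d\theta = \int_{0}^{x} \frac{\sqrt{1 - k^{2}t^{2}}}{\sqrt{1 - t^{2}}}\, dt, \qquad x = \sin\varphi.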
Equivalently, in terms of the amplitude and modular angle:
Relations with the Jacobi elliptic functions include
The meridian arc length from the equator to latitude φ is written in terms of E:
where a is the semi-major axis, and e is the eccentricity.
The incomplete elliptic integral of the second kind has the following addition theorem:
The elliptic modulus can be transformed as follows:
The incomplete elliptic integral of the third kind Π is
or
The number n is called the characteristic and can take on any value, independently of the other arguments. Note though that the value Π(1; π/2 | m) is infinite, for any m.
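In the sign convention used here (conventions differ in the sign of n, as noted later for the complete integral), the two standard forms are

    \Pi(n; \varphi \mid m) = \int_{0}^{\varphi} \frac{d\theta}{(1 - n\sin^{2}\theta)\sqrt{1 - m\sin^{2}\theta}} = \int_{0}^{\sin\varphi} \frac{dt}{(1 - nt^{2})\sqrt{(1 - t^{2})(1 - mt^{2})}}.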
A relation with the Jacobian elliptic functions is
The meridian arc length from the equator to latitude φ is also related to a special case of Π:
Elliptic integrals are said to be 'complete' when the amplitude φ = π/2 and therefore x = 1. The complete elliptic integral of the first kind K may thus be defined as
or more compactly in terms of the incomplete integral of the first kind as
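For reference, with φ = π/2 and x = 1 these definitions read

    K(k) = \int_{0}^{\pi/2} \frac{d\theta}{\sqrt{1 - k^{2}\sin^{2}\theta}} = \int_{0}^{1} \frac{dt}{\sqrt{(1 - t^{2})(1 - k^{2}t^{2})}} = F\left(\tfrac{\pi}{2}, k\right).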
It can be expressed as a power series
where Pₙ is the nth Legendre polynomial, which is equivalent to
where n!! denotes the double factorial. In terms of the Gauss hypergeometric function, the complete elliptic integral of the first kind can be expressed as
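The series and hypergeometric expressions referred to here are conventionally written as

    K(k) = \frac{\pi}{2}\sum_{n=0}^{\infty}\left[\frac{(2n-1)!!}{(2n)!!}\right]^{2} k^{2n} = \frac{\pi}{2}\, {}_{2}F_{1}\!\left(\tfrac{1}{2}, \tfrac{1}{2}; 1; k^{2}\right).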
The complete elliptic integral of the first kind is sometimes called the quarter period. It can be computed very efficiently in terms of the arithmetic–geometric mean:
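A minimal Python sketch of this computation, assuming the standard relation K(k) = π / (2 · agm(1, √(1 − k²))); the function name elliptic_K_agm is illustrative only:

    import math

    def elliptic_K_agm(k, tol=1e-15):
        # Complete elliptic integral of the first kind via the
        # arithmetic-geometric mean: K(k) = pi / (2 * agm(1, k')).
        a, g = 1.0, math.sqrt(1.0 - k * k)   # a0 = 1, g0 = k' (complementary modulus)
        while abs(a - g) > tol:
            a, g = (a + g) / 2.0, math.sqrt(a * g)   # AGM iteration
        return math.pi / (2.0 * a)

    # Example: K(0) = pi/2 ≈ 1.5708 and K(0.5) ≈ 1.6858
    print(elliptic_K_agm(0.0), elliptic_K_agm(0.5))

Because the AGM iteration converges quadratically, a handful of iterations suffices for double precision.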
Therefore the modulus can be transformed as:
This expression is valid for all n ∈ ℕ and 0 ≤ k ≤ 1:
If k = λ(i√r) and r ∈ ℚ⁺ (where λ is the modular lambda function), then K(k) is expressible in closed form in terms of the gamma function. For example, r = 2, r = 3 and r = 7 give, respectively,
and
and
More generally, the condition that
be in an imaginary quadratic field is sufficient. For instance, if k = e, then iK′/K = e and
The relation to Jacobi's theta function is given by
where the nome q is
This approximation has a relative precision better than 3×10 for k < 1/2. Keeping only the first two terms is correct to 0.01 precision for k < 1/2.
The differential equation for the elliptic integral of the first kind is
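In one common form, this differential equation reads

    \frac{d}{dk}\left[k\left(1 - k^{2}\right)\frac{dK(k)}{dk}\right] = k\, K(k).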
A second solution to this equation is K(√(1 − k²)). This solution satisfies the relation
A continued fraction expansion is:
where the nome q in its definition is given by q = q(k) = exp[−π K′(k)/K(k)].
The complete elliptic integral of the second kind E is defined as
or more compactly in terms of the incomplete integral of the second kind E(φ,k) as
For an ellipse with semi-major axis a and semi-minor axis b and eccentricity e = √(1 − b²/a²), the complete elliptic integral of the second kind E(e) is equal to one quarter of the circumference C of the ellipse measured in units of the semi-major axis a. In other words:
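A short numeric illustration of the circumference formula C = 4aE(e), added here and using SciPy's ellipe (which takes the parameter m = e²); the axis lengths are arbitrary.

```python
import math
from scipy.special import ellipe

a, b = 5.0, 3.0
e = math.sqrt(1.0 - (b / a) ** 2)   # eccentricity of the ellipse
C = 4.0 * a * ellipe(e ** 2)        # C = 4 a E(e)
print(C)                            # approximately 25.53 for this ellipse
```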
The complete elliptic integral of the second kind can be expressed as a power series
which is equivalent to
In terms of the Gauss hypergeometric function, the complete elliptic integral of the second kind can be expressed as
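As for the first kind, the standard hypergeometric identity E(k) = (π/2)·₂F₁(−1/2, 1/2; 1; k²) can be checked numerically (added sketch, using mpmath):

```python
import mpmath as mp

k = mp.mpf('0.7')
print(mp.ellipe(k ** 2))                                # reference value; mpmath's ellipe takes m = k^2
print(mp.pi / 2 * mp.hyp2f1(-0.5, 0.5, 1, k ** 2))      # (pi/2) * 2F1(-1/2, 1/2; 1; k^2)
```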
The modulus can be transformed as follows:
Like the integral of the first kind, the complete elliptic integral of the second kind can be computed very efficiently using the arithmetic–geometric mean.
Define sequences aₙ and gₙ, where a₀ = 1, g₀ = √(1 − k²) = k′ and the recurrence relations aₙ₊₁ = (aₙ + gₙ)/2, gₙ₊₁ = √(aₙ gₙ) hold. Furthermore, define
By definition,
Also
Then
In practice, the arithmetic-geometric mean would simply be computed up to some limit. This formula converges quadratically for all |k| ≤ 1. To speed up computation further, the relation cₙ₊₁ = cₙ²/(4aₙ₊₁) can be used.
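A compact sketch of this scheme is given below as an added illustration; it assumes the usual initialisation c₀ = k (consistent with cₙ = √(aₙ² − gₙ²)) and the classical formula E(k) = K(k)·(1 − Σₙ≥0 2^(n−1) cₙ²).

```python
import math

def ellipk_ellipe_agm(k, tol=1e-15):
    """K(k) and E(k) via the arithmetic-geometric mean, using the c_n correction terms."""
    a, g, c = 1.0, math.sqrt(1.0 - k * k), k   # a0 = 1, g0 = k', c0 = k
    total, n = 0.5 * c * c, 0                  # running sum of 2**(n-1) * c_n**2
    while abs(a - g) > tol:
        a, g, c = (a + g) / 2.0, math.sqrt(a * g), (a - g) / 2.0   # c_{n+1} = (a_n - g_n)/2
        n += 1
        total += 2.0 ** (n - 1) * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - total)

print(ellipk_ellipe_agm(0.8))   # compare with scipy.special.ellipk(0.64) and ellipe(0.64)
```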
Furthermore, if k = λ(i√r) and r ∈ ℚ⁺ (where λ is the modular lambda function), then E(k) is expressible in closed form in terms of
and hence can be computed without the need for the infinite summation term. For example, r = 1, r = 3 and r = 7 give, respectively,
and
and
A second solution to this equation is E(√(1 − k²)) − K(√(1 − k²)).
The complete elliptic integral of the third kind Π can be defined as
Note that sometimes the elliptic integral of the third kind is defined with an inverse sign for the characteristic n,
Just like the complete elliptic integrals of the first and second kind, the complete elliptic integral of the third kind can be computed very efficiently using the arithmetic-geometric mean.
In 1829, Jacobi defined the Jacobi zeta function:
It is periodic in φ with minimal period π. It is related to the Jacobi zn function by Z(φ, k) = zn(F(φ, k), k). In the literature (e.g. Whittaker and Watson (1927)), Z sometimes denotes the function written here as zn. Some authors (e.g. King (1924)) use Z for both of these functions.
Legendre's relation (or the Legendre identity) relates the integrals K and E of an elliptic modulus and of its complementary counterpart through an equation of second degree in these integrals:
For two moduli that are Pythagorean counterparts of each other, this relation is valid:
For example:
And for two moduli that are tangential counterparts of each other, the following relationship is valid:
For example:
Legendre's relation for tangential modular counterparts follows directly from the identity for Pythagorean modular counterparts by applying the Landen modular transformation to the Pythagorean counterpart modulus.
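A numeric check of the Pythagorean form of the identity, K(ε)E(ε′) + E(ε)K(ε′) − K(ε)K(ε′) = π/2 with ε′ = √(1 − ε²), is added below as an illustration; SciPy's ellipk and ellipe take the parameter m, and the modulus 0.3 is arbitrary.

```python
import math
from scipy.special import ellipk, ellipe   # both take the parameter m

eps = 0.3
epsp = math.sqrt(1.0 - eps ** 2)           # Pythagorean counterpart (complementary) modulus
lhs = (ellipk(eps ** 2) * ellipe(epsp ** 2)
       + ellipe(eps ** 2) * ellipk(epsp ** 2)
       - ellipk(eps ** 2) * ellipk(epsp ** 2))
print(lhs, math.pi / 2)                    # both values should agree
```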
For the lemniscatic case, the elliptic modulus or specific eccentricity ε is equal to half the square root of two. Legendre's identity for the lemniscatic case can be proved as follows:
According to the chain rule, these derivatives hold:
By using the fundamental theorem of calculus, these formulas can be generated:
A linear combination of the two integrals just mentioned leads to the following formula:
Forming the antiderivative with respect to x of the function just shown, using the product rule, yields this formula:
If the value x = 1 is inserted in this integral identity, then the following identity emerges:
The lemniscatic special case of Legendre's identity therefore reads:
Now the general modular case is worked out. For this purpose, the complete elliptic integrals are differentiated with respect to the modulus ε and then combined, and the balance of Legendre's identity is determined.
The derivative of the circle function √(1 − ε²) is the negative product of the identity function and the reciprocal of the circle function:
These are the derivatives of K and E shown in this article in the sections above:
In combination with the derivative of the circle function, the following derivatives then hold:
Legendre's identity involves products of pairs of complete elliptic integrals. To derive the function side of Legendre's identity, the product rule is now applied as follows:
Of these three equations, adding the top two equations and subtracting the bottom equation gives this result:
In relation to ε, the equation balance constantly gives the value zero.
The previously determined result is now combined with Legendre's equation for the modulus ε = 1/√2 that was worked out in the section before:
The combination of the last two formulas gives the following result:
This is because if the derivative of a continuous function constantly takes the value zero, then that function is a constant function. The function in question therefore takes the same value for every abscissa ε, and the associated function graph is a horizontal straight line.
https://en.wikipedia.org/wiki/Elliptic_integral
9,961 | Epistle to the Romans | The Epistle to the Romans is the sixth book in the New Testament, and the longest of the thirteen Pauline epistles. Biblical scholars agree that it was composed by Paul the Apostle to explain that salvation is offered through the gospel of Jesus Christ.
Romans was likely written while Paul was staying in the house of Gaius in Corinth. The epistle was probably transcribed by Paul's amanuensis Tertius and is dated to late AD 55 to early AD 57. Consisting of 16 chapters, versions with only the first 14 or 15 chapters circulated early. Some of these recensions lacked all reference to the original audience of Christians in Rome, making them very general in nature. Other textual variants include subscripts explicitly mentioning Corinth as the place of composition and naming Phoebe, a deacon of the church in Cenchreae, as the messenger who took the epistle to Rome.
Prior to composing the epistle, Paul had evangelized the areas surrounding the Aegean Sea and was eager to take the gospel farther to Spain, a journey that would allow him to visit Rome on the way. The epistle can consequently be understood as a document outlining his reasons for the trip and preparing the church in Rome for his visit. Christians in Rome would have been of both Jewish and Gentile background, and it is possible that the church suffered from internal strife between these two groups. Paul – a Hellenistic Jew and former Pharisee – shifts his argument to cater to both audiences and to the church as a whole. Because the work contains material intended both for specific recipients and for the general Christian public in Rome, scholars have had difficulty categorizing it as either a private letter or a public epistle.
Although sometimes considered a treatise of (systematic) theology, Romans remains silent on many issues that Paul addresses elsewhere, but is nonetheless generally considered substantial, especially on justification and salvation. Proponents of both sola fide and the Roman Catholic position of the necessity of both faith and works find support in Romans. Martin Luther in his translation of the Bible controversially added the word "alone" (allein in German) to Romans 3:28 so that it read: "thus, we hold, then, that man is justified without doing the works of the law, alone through faith".
In the opinion of Jesuit biblical scholar Joseph Fitzmyer, the book "overwhelms the reader by the density and sublimity of the topic with which it deals, the gospel of the justification and salvation of Jew and Greek alike by the grace of God through faith in Jesus Christ, revealing the uprightness and love of God the Father."
Anglican bishop N. T. Wright notes that Romans is:
...neither a systematic theology nor a summary of Paul's lifework, but it is by common consent his masterpiece. It dwarfs most of his other writings, an Alpine peak towering over hills and villages. Not all onlookers have viewed it in the same light or from the same angle, and their snapshots and paintings of it are sometimes remarkably unalike. Not all climbers have taken the same route up its sheer sides, and there is frequent disagreement on the best approach. What nobody doubts is that we are here dealing with a work of massive substance, presenting a formidable intellectual challenge while offering a breathtaking theological and spiritual vision.
The scholarly consensus is that Paul wrote the Epistle to the Romans. C. E. B. Cranfield, in the introduction to his commentary on Romans, says:
The denial of Paul's authorship of Romans by such critics [...] is now rightly relegated to a place among the curiosities of NT scholarship. Today no responsible criticism disputes its Pauline origin. The evidence of its use in the Apostolic Fathers is clear, and before the end of the second century it is listed and cited as Paul's. Every extant early list of NT books includes it among his letters. The external evidence of authenticity could indeed hardly be stronger; and it is altogether borne out by the internal evidence, linguistic, stylistic, literary, historical and theological.
The letter was most probably written while Paul was in Corinth, probably while he was staying in the house of Gaius, and transcribed by Tertius, his amanuensis. There are a number of reasons why Corinth is considered most plausible. Paul was about to travel to Jerusalem on writing the letter, which matches Acts where it is reported that Paul stayed for three months in Greece. This probably implies Corinth as it was the location of Paul's greatest missionary success in Greece. Additionally, Phoebe was a deacon of the church in Cenchreae, a port to the east of Corinth, and would have been able to convey the letter to Rome after passing through Corinth and taking a ship from Corinth's west port. Erastus, mentioned in Romans 16:23, also lived in Corinth, being the city's commissioner for public works and city treasurer at various times, again indicating that the letter was written in Corinth.
The precise time at which it was written is not mentioned in the epistle, but it was obviously written when the collection for Jerusalem had been assembled and Paul was about to "go unto Jerusalem to minister unto the saints", that is, at the close of his second visit to Greece, during the winter preceding his last visit to that city. The majority of scholars writing on Romans propose the letter was written in late 55/early 56 or late 56/early 57. Early 55 and early 58 both have some support, while German New Testament scholar Gerd Lüdemann argues for a date as early as 51/52 (or 54/55), following on from Knox, who proposed 53/54. Lüdemann is the only serious challenge to the consensus of mid to late 50s.
There is strong, albeit indirect, evidence that a recension of Romans that lacked chapters 15 and 16 was widely used in the western half of the Roman Empire until the mid-4th century. This conclusion is partially based on the fact that a variety of Church Fathers, such as Origen and Tertullian, refer to a fourteen-chapter edition of Romans, either directly or indirectly. The fact that Paul's doxology is placed in various different places in different manuscripts of Romans only strengthens the case for an early fourteen-chapter recension. While there is some uncertainty, Harry Gamble concludes that the canonical sixteen-chapter recension is likely the earlier version of the text.
The Codex Boernerianus lacks the explicit references to the Roman church as the audience of the epistle found in Romans 1:7 and 1:15. There is evidence from patristic commentaries indicating that Boernarianus is not unique in this regard; many early, no longer extant manuscripts also lacked an explicit Roman addressee in chapter 1. It is notable that, when this textual variant is combined with the omission of chapters 15 and 16, there is no longer any clear reference to the Roman church throughout the entire epistle. Harry Gamble speculates that 1:7, 1:15, and chapters 15 and 16 may have been removed by a scribe in order to make the epistle more suitable for a "general" audience.
It is quite possible that a fifteen-chapter form of Romans, omitting chapter 16, may have existed at an early date. Several scholars have argued, largely on the basis of internal evidence, that Chapter 16 represents a separate letter of Paul – possibly addressed to Ephesus – that was later appended to Romans.
There are a few different arguments for this conclusion. First of all, there is a concluding peace benediction at 15:33, which reads like the other Pauline benedictions that conclude their respective letters. Secondly, Paul greets a large number of people and families in chapter 16, in a way that suggests he was already familiar with them, whereas the material of chapters 1–15 presupposes that Paul has never met anyone from the Roman church. The fact that Papyrus 46 places Paul's doxology at the end of chapter 15 can also be interpreted as evidence for the existence of a fifteen-chapter recension of the epistle.
Some manuscripts have a subscript at the end of the Epistle:
For ten years before writing the letter (c. 47–57 AD), Paul had traveled around the territories bordering the Aegean Sea evangelizing. Churches had been planted in the Roman provinces of Galatia, Macedonia, Achaia and Asia. Paul, considering his task complete, wanted to preach the gospel in Spain, where he would not "build upon another man's foundation". This allowed him to visit Rome on the way, a long-time ambition of his. The letter to the Romans, in part, prepares them and gives reasons for his visit.
In addition to Paul's geographic location, his religious views are important. First, Paul was a Hellenistic Jew with a Pharisaic background (see Gamaliel), integral to his identity (see Paul the Apostle and Judaism). His concern for his people is one part of the dialogue and runs throughout the letter. Second, the other side of the dialogue is Paul's conversion and calling to follow Christ in the early 30s.
The most probable ancient account of the beginning of Christianity in Rome is given by a 4th-century writer known as Ambrosiaster:
It is established that there were Jews living in Rome in the times of the Apostles, and that those Jews who had believed [in Christ] passed on to the Romans the tradition that they ought to profess Christ but keep the law [Torah] [...] One ought not to condemn the Romans, but to praise their faith, because without seeing any signs or miracles and without seeing any of the apostles, they nevertheless accepted faith in Christ, although according to a Jewish rite.
From Adam Clarke:
The occasion of writing the epistle: [...] Paul had made acquaintance with all circumstances of the Christians at Rome [...] and finding that it was [...] partly of heathens converted to Christianity, and partly of Jews, who had, with many remaining prejudices, believed in Jesus as the true Messiah, and that many contentions arose from the claims of the Gentiles to equal privileges with the Jews, and from absolute refusal of the Jews to admit these claims, unless the Gentile converts become circumcised; he wrote this epistle to adjust and settle these differences.
At this time, the Jews made up a substantial number in Rome, and their synagogues, frequented by many, enabled the Gentiles to become acquainted with the story of Jesus of Nazareth. Consequently, churches composed of both Jews and Gentiles were formed at Rome. According to Irenaeus, a 2nd-century Church Father, the church at Rome was founded directly by the apostles Peter and Paul. However, many modern scholars disagree with Irenaeus, holding that while little is known of the circumstances of the church's founding, it was not founded by Paul:
Many of the brethren went out to meet Paul on his approach to Rome. There is evidence that Christians were then in Rome in considerable numbers and probably had more than one place of meeting.
The large number of names in Romans 16:3–15 of those then in Rome, and verses 5, 15 and 16, indicate there was more than one church assembly or company of believers in Rome. Verse 5 mentions a church that met in the house of Aquila and Priscilla. Verses 14 and 15 each mention groupings of believers and saints.
Jews were expelled from Rome because of disturbances around AD 49 by the edict of Claudius. Fitzmyer claims that both Jews and Jewish Christians were expelled as a result of their infighting. Claudius died around the year AD 54, and his successor, Emperor Nero, allowed the Jews back into Rome, but then, after the Great Fire of Rome of 64, Christians were persecuted. Fitzmyer argues that with the return of the Jews to Rome in 54 new conflict arose between the Gentile Christians and the Jewish Christians who had formerly been expelled. He also argues that this may be what Paul is referring to when he talks about the "strong" and the "weak" in Romans 15; this theory was originally put forth by W. Marxsen in Introduction to the New Testament: An Approach to its problems (1968) but is critiqued and modified by Fitzmyer. Fitzmyer's main contention is that Paul seems to be purposefully vague. Paul could have been more specific if he wanted to address this problem specifically. Keck thinks Gentile Christians may have developed a dislike of or looked down on Jews (see also Antisemitism and Responsibility for the death of Jesus), because they theologically rationalized that Jews were no longer God's people.
Scholars often have difficulty assessing whether Romans is a letter or an epistle, a relevant distinction in form-critical analysis:
A letter is something non-literary, a means of communication between persons who are separated from each other. Confidential and personal in nature, it is intended only for the person or persons to whom it is addressed, and not at all for the public or any kind of publicity...An Epistle is an artistic literary form, just like the dialogue, the oration, or the drama. It has nothing in common with the letter except its form: apart from that one might venture the paradox that the epistle is the opposite of a real letter. The contents of the epistle are intended for publicity—they aim at interesting "the public."
Joseph Fitzmyer argues, from evidence put forth by Stirewalt, that the style of Romans is an "essay-letter." Philip Melanchthon, a writer during the Reformation, suggested that Romans was caput et summa universae doctrinae christianae ("a summary of all Christian doctrine"). While some scholars suggest, like Melanchthon, that it is a type of theological treatise, this view largely ignores chapters 14 and 15 of Romans. There are also many "noteworthy elements" missing from Romans that are included in other areas of the Pauline corpus. The breakdown of Romans as a treatise began with F.C. Baur in 1836 when he suggested "this letter had to be interpreted according to the historical circumstances in which Paul wrote it."
Paul sometimes uses a style of writing common in his time called a diatribe. He appears to be responding to a critic (probably an imaginary one based on Paul's encounters with real objections in his previous preaching), and the letter is structured as a series of arguments. In the flow of the letter, Paul shifts his arguments, sometimes addressing the Jewish members of the church, sometimes the Gentile membership and sometimes the church as a whole.
To review the current scholarly viewpoints on the purpose of Romans, along with a bibliography, see Dictionary of Paul and His Letters. For a 16th-century "Lollard" reformer view, see the work of William Tyndale. In his prologue to his translation of Romans, which was largely taken from the prologue of German Reformer Martin Luther, Tyndale writes that:
.. this epistle is the principal and most excellent part of the new testament, and most pure evangelion, that is to say glad tidings and what we call the gospel, and also a light and a way in unto the whole scripture ... The sum and whole cause of the writings of this epistle, is, to prove that a man is justified by faith only: which proposition whoso denieth, to him is not only this epistle and all that Paul writeth, but also the whole scripture, so locked up that he shall never understand it to his soul's health. And to bring a man to the understanding and feeling that faith only justifieth, Paul proveth that the whole nature of man is so poisoned and so corrupt, yea and so dead concerning godly living or godly thinking, that it is impossible for her to keep the law in the sight of God.
The introduction provides some general notes about Paul. He introduces his apostleship here and introductory notes about the gospel he wishes to preach to the church at Rome. Jesus' human line stems from David. Paul, however, does not limit his ministry to Jews. Paul's goal is that the Gentiles would also hear the gospel.
He commends the Romans for faith. Paul also speaks of the past obstacles that have blocked his coming to Rome earlier.
Paul announces that he is not "ashamed" (epaiscúnomai) of his gospel because it holds power (dúnamis). These two verses form a backdrop of themes for the rest of the book; first, that Paul is unashamed of his love for this gospel that he preaches about Jesus Christ. He also notes that he is speaking to the "Jew first." There is significance to this, but much of it is scholarly conjecture as the relationship between Paul and Judaism is still debated, and scholars are hard-pressed to find an answer to such a question without knowing more about the audience in question. Wayne Brindle argues, based on Paul's former writings against the Judaizers in Galatians and 2 Corinthians, that rumors had probably spread about Paul totally negating the Jewish existence in a Christian world (see also Antinomianism in the New Testament and Supersessionism). Paul may have used the "Jew first" approach to counter such a view.
Paul begins with a summary of Hellenistic Jewish apologist discourse. His summary begins by suggesting that humans have taken up ungodliness and wickedness for which there already is wrath from God. People have taken God's invisible image and made him into an idol. Paul draws heavily here from the Wisdom of Solomon. This summary condemns "unnatural sexual behavior" and warns that such behavior has already resulted in a depraved body and mind ("reprobate mind" in the King James Version) and says that people who do such things (including murder and wickedness) are worthy of death. Paul stands firmly against the idol worship system which was common in Rome. Several scholars believe the passage is a non-Pauline interpolation.
On the traditional Protestant interpretation, Paul here calls out Jews who are condemning others for not following the law when they themselves are also not following the law. Stanley Stowers, however, has argued on rhetorical grounds that Paul is in these verses not addressing a Jew at all but rather an easily recognizable caricature of the typical boastful person (ὁ ἀλαζων). Stowers writes, "There is absolutely no justification for reading 2:1–5 as Paul's attack on 'the hypocrisy of the Jew.' No one in the first century would have identified ho alazon with Judaism. That popular interpretation depends upon anachronistically reading later Christian characterizations of Jews as 'hypocritical Pharisees'". (See also Anti-Judaism).
Paul says that a righteousness from God has made itself known apart from the law, to which the law and prophets testify, and this righteousness from God comes through faith in Jesus to all who believe. He describes justification – legally clearing the believer of the guilt and penalty of sin – as a gift of God, and not the work of man (lest he might boast), but by faith.
In chapters five through eight, Paul argues that believers can be assured of their hope in salvation, having been freed from the bondage of sin. Paul teaches that through faith, the faithful have been joined with Jesus and freed from sin. Believers should celebrate in the assurance of salvation and be certain that no external force or party can take their salvation away from them. This promise is open to everyone since everyone has sinned, save the one who paid for all of them.
In Romans 7:1, Paul says that humans are under the law while they live: "Know ye not [...] that the law hath dominion over a man as long as he liveth?" However, Jesus' death on the cross makes believers dead to the law (7:4, "Wherefore, my brethren, ye are also become dead to the law by the body of Christ"), according to an antinomistic interpretation.
In chapters 9–11 Paul addresses the faithfulness of God to the Israelites, where he says that God has been faithful to his promise. Paul hopes that all Israelites will come to realize the truth, stating that "Not as though the word of God hath taken none effect. For they are not all Israel, which are of Israel: Neither, because they are the seed of Abraham, are they all children: but, In Isaac shall thy seed be called." Paul affirms that he himself is also an Israelite, and had in the past been a persecutor of Early Christians. In Romans 9–11 Paul talks about how the nation of Israel has not been cast away, and the conditions under which Israel will be God's chosen nation again: when Israel returns to its faith, sets aside its unbelief.
From chapter 12 through the first part of chapter 15, Paul outlines how the gospel transforms believers and the behaviour that results from such a transformation. This transformation is described as a "renewing of your mind" (12:2), a transformation that Douglas J. Moo characterizes as "the heart of the matter." It is a transformation so radical that it amounts to "a transfiguration of your brain," a "metanoia", a "mental revolution."
Paul goes on to describe how believers should live. Christians are no longer under the law, that is, no longer bound by the law of Moses, but under the grace of God (see Law and grace). Christians do not need to live under the law because to the extent that their minds have been renewed, they will know "almost instinctively" what God wants of them. The law then provides an "objective standard" for judging progress in the "lifelong process" of their mind's renewal.
To the extent they have been set free from sin by renewed minds (Romans 6:18), believers are no longer bound to sin. Believers are free to live in obedience to God and love everybody. As Paul says in Romans 13:10, "love (ἀγάπη) worketh no ill to his neighbor: therefore love is the fulfilling of law".
The fragment in Romans 13:1–7 dealing with obedience to earthly powers is considered by some, for example James Kallas, to be an interpolation. (See also the Great Commandment and Christianity and politics). Paul Tillich accepts the historical authenticity of Romans 13:1–7, but claims it has been misinterpreted by churches with an anti-revolutionary bias:
One of the many politico-theological abuses of biblical statements is the understanding of Paul's words [Romans 13:1–7] as justifying the anti-revolutionary bias of some churches, particularly the Lutheran. But neither these words nor any other New Testament statement deals with the methods of gaining political power. In Romans, Paul is addressing eschatological enthusiasts, not a revolutionary political movement.
The concluding verses contain a description of his travel plans, personal greetings and salutations. One-third of the twenty-one Christians identified in the greetings are women. Additionally, none of these Christians answer to the name Peter, although according to the Catholic tradition, he had been ruling as Pope in Rome for about 25 years. Possibly related was the Incident at Antioch between Paul and Cephas.
Roman Catholics accept the necessity of faith for salvation but point to Romans 2:5–11 for the necessity of living a virtuous life as well:
But by your hard and impenitent heart you are storing up wrath for yourself on the day of wrath when God's righteous judgment will be revealed. For he will render to every man according to his works: to those who by patience in well-doing seek for glory and honor and immortality, he will give eternal life; but for those who are factious and do not obey the truth, but obey wickedness, there will be wrath and fury. There will be tribulation and distress for every human being who does evil, the Jew first and also the Greek, but glory and honor and peace for every one who does good, the Jew first and also the Greek. For God shows no partiality.
Catholics would also look to the passage in Romans 8:13 for evidence that justification by faith is only valid so long as it is combined with obedient cooperation with The Holy Spirit, and the passage in Romans 11:22 to show that the Christian can lose their justification if they turn away from cooperating with The Holy Spirit and reject Christ through mortal sin.
In the Protestant interpretation, the New Testament epistles (including Romans) describe salvation as coming from faith and not from righteous actions. For example, Romans 4:2–5:
2 For if Abraham were justified by works, he hath whereof to glory; but not before God. 3 For what saith the scripture? Abraham believed God, and it was counted unto him for righteousness. 4 Now to him that worketh is the reward not reckoned of grace, but of debt. 5 But to him that worketh not, but believeth on him that justifieth the ungodly, his faith is counted unto him for righteousness.
In the Protestant interpretation it is considered significant that in Romans 2:9, Paul says that God will reward those who follow the law and then goes on to say that no one follows the law perfectly (see also Sermon on the Mount: Interpretation), as in Romans 2:21–29:
21 Thou therefore which teachest another, teachest thou not thyself? thou that preachest a man should not steal, dost thou steal? 22 Thou that sayest a man should not commit adultery, dost thou commit adultery? thou that abhorrest idols, dost thou commit sacrilege? 23 Thou that makest thy boast of the law, through breaking the law dishonourest thou God? 24 For the name of God is blasphemed among the Gentiles through you, as it is written. 25 For circumcision verily profiteth, if thou keep the law: but if thou be a breaker of the law, thy circumcision is made uncircumcision. 26 Therefore if the uncircumcision keep the righteousness of the law, shall not his uncircumcision be counted for circumcision? 27 And shall not uncircumcision which is by nature, if it fulfil the law, judge thee, who by the letter and circumcision dost transgress the law? 28 For he is not a Jew, which is one outwardly; neither is that circumcision, which is outward in the flesh: 29 But he is a Jew, which is one inwardly; and circumcision is that of the heart, in the spirit, and not in the letter; whose praise is not of men, but of God.
Romans has been at the forefront of several major movements in Protestantism. Martin Luther's lectures on Romans in 1515–1516 probably coincided with the development of his criticism of Roman Catholicism which led to the 95 Theses of 1517. In the preface to his German translation of Romans, Luther described Paul's letter to the Romans as "the most important piece in the New Testament. It is purest Gospel. It is well worth a Christian's while not only to memorize it word for word but also to occupy himself with it daily, as though it were the daily bread of the soul". In 1738, while hearing Luther's Preface to the Epistle to the Romans read at St. Botolph Church on Aldersgate Street in London, John Wesley famously felt his heart "strangely warmed", a conversion experience which is often seen as the beginning of Methodism.
Luther controversially added the word "alone" (allein in German) to Romans 3:28 so that it read: "thus, we hold, then, that man is justified without doing the works of the law, alone through faith". The word "alone" does not appear in the original Greek text, but Luther defended his translation by maintaining that the adverb "alone" was required both by idiomatic German and Paul's intended meaning. This is a "literalist view" rather than a literal view of the Bible.
The Romans Road (or Roman Road) refers to a set of scriptures from Romans that Christian evangelists use to present a clear and simple case for personal salvation to each person, as all the verses are contained in one single book, making it easier for evangelism without going back and forth through the entire New Testament. The core verses used by nearly all groups using Romans Road are: Romans 3:23, 6:23, 5:8, 10:9, and 10:13.
Translations
Other | [
{
"paragraph_id": 0,
"text": "The Epistle to the Romans is the sixth book in the New Testament, and the longest of the thirteen Pauline epistles. Biblical scholars agree that it was composed by Paul the Apostle to explain that salvation is offered through the gospel of Jesus Christ.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Romans was likely written while Paul was staying in the house of Gaius in Corinth. The epistle was probably transcribed by Paul's amanuensis Tertius and is dated AD late 55 to early 57. Consisting of 16 chapters, versions with only the first 14 or 15 chapters circulated early. Some of these recensions lacked all reference to the original audience of Christians in Rome making it very general in nature. Other textual variants include subscripts explicitly mentioning Corinth as the place of composition and name Phoebe, a deacon of the church in Cenchreae, as the messenger who took the epistle to Rome.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Prior to composing the epistle, Paul had evangelized the areas surrounding the Aegean Sea and was eager to take the gospel farther to Spain, a journey that would allow him to visit Rome on the way. The epistle can consequentially be understood as a document outlining his reasons for the trip and preparing the church in Rome for his visit. Christians in Rome would have been of both Jewish and Gentile background and it is possible that the church suffered from internal strife between these two groups. Paul – a Hellenistic Jew and former Pharisee – shifts his argument to cater to both audiences and the church as a whole. Because the work contains material intended both for specific recipients as well as the general Christian public in Rome, scholars have had difficulty categorizing it as either a private letter or a public epistle.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although sometimes considered a treatise of (systematic) theology, Romans remains silent on many issues that Paul addresses elsewhere, but is nonetheless generally considered substantial, especially on justification and salvation. Proponents of both sola fide and the Roman Catholic position of the necessity of both faith and works find support in Romans. Martin Luther in his translation of the Bible controversially added the word \"alone\" (allein in German) to Romans 3:28 so that it read: \"thus, we hold, then, that man is justified without doing the works of the law, alone through faith\".",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the opinion of Jesuit biblical scholar Joseph Fitzmyer, the book \"overwhelms the reader by the density and sublimity of the topic with which it deals, the gospel of the justification and salvation of Jew and Greek alike by the grace of God through faith in Jesus Christ, revealing the uprightness and love of God the Father.\"",
"title": "General presentation"
},
{
"paragraph_id": 5,
"text": "Anglican bishop N. T. Wright notes that Romans is:",
"title": "General presentation"
},
{
"paragraph_id": 6,
"text": "...neither a systematic theology nor a summary of Paul's lifework, but it is by common consent his masterpiece. It dwarfs most of his other writings, an Alpine peak towering over hills and villages. Not all onlookers have viewed it in the same light or from the same angle, and their snapshots and paintings of it are sometimes remarkably unalike. Not all climbers have taken the same route up its sheer sides, and there is frequent disagreement on the best approach. What nobody doubts is that we are here dealing with a work of massive substance, presenting a formidable intellectual challenge while offering a breathtaking theological and spiritual vision.",
"title": "General presentation"
},
{
"paragraph_id": 7,
"text": "The scholarly consensus is that Paul wrote the Epistle to the Romans. C. E. B. Cranfield, in the introduction to his commentary on Romans, says:",
"title": "Authorship and dating"
},
{
"paragraph_id": 8,
"text": "The denial of Paul's authorship of Romans by such critics [...] is now rightly relegated to a place among the curiosities of NT scholarship. Today no responsible criticism disputes its Pauline origin. The evidence of its use in the Apostolic Fathers is clear, and before the end of the second century it is listed and cited as Paul's. Every extant early list of NT books includes it among his letters. The external evidence of authenticity could indeed hardly be stronger; and it is altogether borne out by the internal evidence, linguistic, stylistic, literary, historical and theological.",
"title": "Authorship and dating"
},
{
"paragraph_id": 9,
"text": "The letter was most probably written while Paul was in Corinth, probably while he was staying in the house of Gaius, and transcribed by Tertius, his amanuensis. There are a number of reasons why Corinth is considered most plausible. Paul was about to travel to Jerusalem on writing the letter, which matches Acts where it is reported that Paul stayed for three months in Greece. This probably implies Corinth as it was the location of Paul's greatest missionary success in Greece. Additionally, Phoebe was a deacon of the church in Cenchreae, a port to the east of Corinth, and would have been able to convey the letter to Rome after passing through Corinth and taking a ship from Corinth's west port. Erastus, mentioned in Romans 16:23, also lived in Corinth, being the city's commissioner for public works and city treasurer at various times, again indicating that the letter was written in Corinth.",
"title": "Authorship and dating"
},
{
"paragraph_id": 10,
"text": "The precise time at which it was written is not mentioned in the epistle, but it was obviously written when the collection for Jerusalem had been assembled and Paul was about to \"go unto Jerusalem to minister unto the saints\", that is, at the close of his second visit to Greece, during the winter preceding his last visit to that city. The majority of scholars writing on Romans propose the letter was written in late 55/early 56 or late 56/early 57. Early 55 and early 58 both have some support, while German New Testament scholar Gerd Lüdemann argues for a date as early as 51/52 (or 54/55), following on from Knox, who proposed 53/54. Lüdemann is the only serious challenge to the consensus of mid to late 50s.",
"title": "Authorship and dating"
},
{
"paragraph_id": 11,
"text": "There is strong, albeit indirect, evidence that a recension of Romans that lacked chapters 15 and 16 was widely used in the western half of the Roman Empire until the mid-4th century. This conclusion is partially based on the fact that a variety of Church Fathers, such as Origen and Tertullian, refer to a fourteen-chapter edition of Romans, either directly or indirectly. The fact that Paul's doxology is placed in various different places in different manuscripts of Romans only strengthens the case for an early fourteen-chapter recension. While there is some uncertainty, Harry Gamble concludes that the canonical sixteen-chapter recension is likely the earlier version of the text.",
"title": "Textual variants"
},
{
"paragraph_id": 12,
"text": "The Codex Boernerianus lacks the explicit references to the Roman church as the audience of the epistle found in Romans 1:7 and 1:15. There is evidence from patristic commentaries indicating that Boernarianus is not unique in this regard; many early, no longer extant manuscripts also lacked an explicit Roman addressee in chapter 1. It is notable that, when this textual variant is combined with the omission of chapters 15 and 16, there is no longer any clear reference to the Roman church throughout the entire epistle. Harry Gamble speculates that 1:7, 1:15, and chapters 15 and 16 may have been removed by a scribe in order to make the epistle more suitable for a \"general\" audience.",
"title": "Textual variants"
},
{
"paragraph_id": 13,
"text": "It is quite possible that a fifteen-chapter form of Romans, omitting chapter 16, may have existed at an early date. Several scholars have argued, largely on the basis of internal evidence, that Chapter 16 represents a separate letter of Paul – possibly addressed to Ephesus – that was later appended to Romans.",
"title": "Textual variants"
},
{
"paragraph_id": 14,
"text": "There are a few different arguments for this conclusion. First of all, there is a concluding peace benediction at 15:33, which reads like the other Pauline benedictions that conclude their respective letters. Secondly, Paul greets a large number of people and families in chapter 16, in a way that suggests he was already familiar with them, whereas the material of chapters 1–15 presupposes that Paul has never met anyone from the Roman church. The fact that Papyrus 46 places Paul's doxology at the end of chapter 15 can also be interpreted as evidence for the existence of a fifteen-chapter recension of the epistle.",
"title": "Textual variants"
},
{
"paragraph_id": 15,
"text": "Some manuscripts have a subscript at the end of the Epistle:",
"title": "Textual variants"
},
{
"paragraph_id": 16,
"text": "For ten years before writing the letter (c. 47–57 AD), Paul had traveled around the territories bordering the Aegean Sea evangelizing. Churches had been planted in the Roman provinces of Galatia, Macedonia, Achaia and Asia. Paul, considering his task complete, wanted to preach the gospel in Spain, where he would not \"build upon another man's foundation\". This allowed him to visit Rome on the way, a long-time ambition of his. The letter to the Romans, in part, prepares them and gives reasons for his visit.",
"title": "Paul's life in relation to his epistle"
},
{
"paragraph_id": 17,
"text": "In addition to Paul's geographic location, his religious views are important. First, Paul was a Hellenistic Jew with a Pharisaic background (see Gamaliel), integral to his identity (see Paul the Apostle and Judaism). His concern for his people is one part of the dialogue and runs throughout the letter. Second, the other side of the dialogue is Paul's conversion and calling to follow Christ in the early 30s.",
"title": "Paul's life in relation to his epistle"
},
{
"paragraph_id": 18,
"text": "The most probable ancient account of the beginning of Christianity in Rome is given by a 4th-century writer known as Ambrosiaster:",
"title": "The churches in Rome"
},
{
"paragraph_id": 19,
"text": "It is established that there were Jews living in Rome in the times of the Apostles, and that those Jews who had believed [in Christ] passed on to the Romans the tradition that they ought to profess Christ but keep the law [Torah] [...] One ought not to condemn the Romans, but to praise their faith, because without seeing any signs or miracles and without seeing any of the apostles, they nevertheless accepted faith in Christ, although according to a Jewish rite.",
"title": "The churches in Rome"
},
{
"paragraph_id": 20,
"text": "From Adam Clarke:",
"title": "The churches in Rome"
},
{
"paragraph_id": 21,
"text": "The occasion of writing the epistle: [...] Paul had made acquaintance with all circumstances of the Christians at Rome [...] and finding that it was [...] partly of heathens converted to Christianity, and partly of Jews, who had, with many remaining prejudices, believed in Jesus as the true Messiah, and that many contentions arose from the claims of the Gentiles to equal privileges with the Jews, and from absolute refusal of the Jews to admit these claims, unless the Gentile converts become circumcised; he wrote this epistle to adjust and settle these differences.",
"title": "The churches in Rome"
},
{
"paragraph_id": 22,
"text": "At this time, the Jews made up a substantial number in Rome, and their synagogues, frequented by many, enabled the Gentiles to become acquainted with the story of Jesus of Nazareth. Consequently, churches composed of both Jews and Gentiles were formed at Rome. According to Irenaeus, a 2nd-century Church Father, the church at Rome was founded directly by the apostles Peter and Paul. However, many modern scholars disagree with Irenaeus, holding that while little is known of the circumstances of the church's founding, it was not founded by Paul:",
"title": "The churches in Rome"
},
{
"paragraph_id": 23,
"text": "Many of the brethren went out to meet Paul on his approach to Rome. There is evidence that Christians were then in Rome in considerable numbers and probably had more than one place of meeting.",
"title": "The churches in Rome"
},
{
"paragraph_id": 24,
"text": "The large number of names in Romans 16:3–15 of those then in Rome, and verses 5, 15 and 16, indicate there was more than one church assembly or company of believers in Rome. Verse 5 mentions a church that met in the house of Aquila and Priscilla. Verses 14 and 15 each mention groupings of believers and saints.",
"title": "The churches in Rome"
},
{
"paragraph_id": 25,
"text": "Jews were expelled from Rome because of disturbances around AD 49 by the edict of Claudius. Fitzmyer claims that both Jews and Jewish Christians were expelled as a result of their infighting. Claudius died around the year AD 54, and his successor, Emperor Nero, allowed the Jews back into Rome, but then, after the Great Fire of Rome of 64, Christians were persecuted. Fitzmyer argues that with the return of the Jews to Rome in 54 new conflict arose between the Gentile Christians and the Jewish Christians who had formerly been expelled. He also argues that this may be what Paul is referring to when he talks about the \"strong\" and the \"weak\" in Romans 15; this theory was originally put forth by W. Marxsen in Introduction to the New Testament: An Approach to its problems (1968) but is critiqued and modified by Fitzmyer. Fitzmyer's main contention is that Paul seems to be purposefully vague. Paul could have been more specific if he wanted to address this problem specifically. Keck thinks Gentile Christians may have developed a dislike of or looked down on Jews (see also Antisemitism and Responsibility for the death of Jesus), because they theologically rationalized that Jews were no longer God's people.",
"title": "The churches in Rome"
},
{
"paragraph_id": 26,
"text": "Scholars often have difficulty assessing whether Romans is a letter or an epistle, a relevant distinction in form-critical analysis:",
"title": "Style"
},
{
"paragraph_id": 27,
"text": "A letter is something non-literary, a means of communication between persons who are separated from each other. Confidential and personal in nature, it is intended only for the person or persons to whom it is addressed, and not at all for the public or any kind of publicity...An Epistle is an artistic literary form, just like the dialogue, the oration, or the drama. It has nothing in common with the letter except its form: apart from that one might venture the paradox that the epistle is the opposite of a real letter. The contents of the epistle are intended for publicity—they aim at interesting \"the public.\"",
"title": "Style"
},
{
"paragraph_id": 28,
"text": "Joseph Fitzmyer argues, from evidence put forth by Stirewalt, that the style of Romans is an \"essay-letter.\" Philip Melanchthon, a writer during the Reformation, suggested that Romans was caput et summa universae doctrinae christianae (\"a summary of all Christian doctrine\"). While some scholars suggest, like Melanchthon, that it is a type of theological treatise, this view largely ignores chapters 14 and 15 of Romans. There are also many \"noteworthy elements\" missing from Romans that are included in other areas of the Pauline corpus. The breakdown of Romans as a treatise began with F.C. Baur in 1836 when he suggested \"this letter had to be interpreted according to the historical circumstances in which Paul wrote it.\"",
"title": "Style"
},
{
"paragraph_id": 29,
"text": "Paul sometimes uses a style of writing common in his time called a diatribe. He appears to be responding to a critic (probably an imaginary one based on Paul's encounters with real objections in his previous preaching), and the letter is structured as a series of arguments. In the flow of the letter, Paul shifts his arguments, sometimes addressing the Jewish members of the church, sometimes the Gentile membership and sometimes the church as a whole.",
"title": "Style"
},
{
"paragraph_id": 30,
"text": "To review the current scholarly viewpoints on the purpose of Romans, along with a bibliography, see Dictionary of Paul and His Letters. For a 16th-century \"Lollard\" reformer view, see the work of William Tyndale. In his prologue to his translation of Romans, which was largely taken from the prologue of German Reformer Martin Luther, Tyndale writes that:",
"title": "Purposes of writing"
},
{
"paragraph_id": 31,
"text": ".. this epistle is the principal and most excellent part of the new testament, and most pure evangelion, that is to say glad tidings and what we call the gospel, and also a light and a way in unto the whole scripture ... The sum and whole cause of the writings of this epistle, is, to prove that a man is justified by faith only: which proposition whoso denieth, to him is not only this epistle and all that Paul writeth, but also the whole scripture, so locked up that he shall never understand it to his soul's health. And to bring a man to the understanding and feeling that faith only justifieth, Paul proveth that the whole nature of man is so poisoned and so corrupt, yea and so dead concerning godly living or godly thinking, that it is impossible for her to keep the law in the sight of God.",
"title": "Purposes of writing"
},
{
"paragraph_id": 32,
"text": "The introduction provides some general notes about Paul. He introduces his apostleship here and introductory notes about the gospel he wishes to preach to the church at Rome. Jesus' human line stems from David. Paul, however, does not limit his ministry to Jews. Paul's goal is that the Gentiles would also hear the gospel.",
"title": "Contents"
},
{
"paragraph_id": 33,
"text": "He commends the Romans for faith. Paul also speaks of the past obstacles that have blocked his coming to Rome earlier.",
"title": "Contents"
},
{
"paragraph_id": 34,
"text": "Paul announces that he is not \"ashamed\" (epaiscúnomai) of his gospel because it holds power (dúnamis). These two verses form a backdrop of themes for the rest of the book; first, that Paul is unashamed of his love for this gospel that he preaches about Jesus Christ. He also notes that he is speaking to the \"Jew first.\" There is significance to this, but much of it is scholarly conjecture as the relationship between Paul and Judaism is still debated, and scholars are hard-pressed to find an answer to such a question without knowing more about the audience in question. Wayne Brindle argues, based on Paul's former writings against the Judaizers in Galatians and 2 Corinthians, that rumors had probably spread about Paul totally negating the Jewish existence in a Christian world (see also Antinomianism in the New Testament and Supersessionism). Paul may have used the \"Jew first\" approach to counter such a view.",
"title": "Contents"
},
{
"paragraph_id": 35,
"text": "Paul begins with a summary of Hellenistic Jewish apologist discourse. His summary begins by suggesting that humans have taken up ungodliness and wickedness for which there already is wrath from God. People have taken God's invisible image and made him into an idol. Paul draws heavily here from the Wisdom of Solomon. This summary condemns \"unnatural sexual behavior\" and warns that such behavior has already resulted in a depraved body and mind (\"reprobate mind\" in the King James Version) and says that people who do such things (including murder and wickedness ) are worthy of death. Paul stands firmly against the idol worship system which was common in Rome. Several scholars believe the passage is a non-Pauline interpolation.",
"title": "Contents"
},
{
"paragraph_id": 36,
"text": "On the traditional Protestant interpretation, Paul here calls out Jews who are condemning others for not following the law when they themselves are also not following the law. Stanley Stowers, however, has argued on rhetorical grounds that Paul is in these verses not addressing a Jew at all but rather an easily recognizable caricature of the typical boastful person (ὁ ἀλαζων). Stowers writes, \"There is absolutely no justification for reading 2:1–5 as Paul's attack on 'the hypocrisy of the Jew.' No one in the first century would have identified ho alazon with Judaism. That popular interpretation depends upon anachronistically reading later Christian characterizations of Jews as 'hypocritical Pharisees'\". (See also Anti-Judaism).",
"title": "Contents"
},
{
"paragraph_id": 37,
"text": "Paul says that a righteousness from God has made itself known apart from the law, to which the law and prophets testify, and this righteousness from God comes through faith in Jesus to all who believe. He describes justification – legally clearing the believer of the guilt and penalty of sin – as a gift of God, and not the work of man (lest he might boast), but by faith.",
"title": "Contents"
},
{
"paragraph_id": 38,
"text": "In chapters five through eight, Paul argues that believers can be assured of their hope in salvation, having been freed from the bondage of sin. Paul teaches that through faith, the faithful have been joined with Jesus and freed from sin. Believers should celebrate in the assurance of salvation and be certain that no external force or party can take their salvation away from them. This promise is open to everyone since everyone has sinned, save the one who paid for all of them.",
"title": "Contents"
},
{
"paragraph_id": 39,
"text": "In Romans 7:1, Paul says that humans are under the law while they live: \"Know ye not [...] that the law hath dominion over a man as long as he liveth?\" However, Jesus' death on the cross makes believers dead to the law (7:4, \"Wherefore, my brethren, ye are also become dead to the law by the body of Christ\"), according to an antinomistic interpretation.",
"title": "Contents"
},
{
"paragraph_id": 40,
"text": "In chapters 9–11 Paul addresses the faithfulness of God to the Israelites, where he says that God has been faithful to his promise. Paul hopes that all Israelites will come to realize the truth, stating that \"Not as though the word of God hath taken none effect. For they are not all Israel, which are of Israel: Neither, because they are the seed of Abraham, are they all children: but, In Isaac shall thy seed be called.\" Paul affirms that he himself is also an Israelite, and had in the past been a persecutor of Early Christians. In Romans 9–11 Paul talks about how the nation of Israel has not been cast away, and the conditions under which Israel will be God's chosen nation again: when Israel returns to its faith, sets aside its unbelief.",
"title": "Contents"
},
{
"paragraph_id": 41,
"text": "From chapter 12 through the first part of chapter 15, Paul outlines how the gospel transforms believers and the behaviour that results from such a transformation. This transformation is described as a \"renewing of your mind\" (12:2), a transformation that Douglas J. Moo characterizes as \"the heart of the matter.\" It is a transformation so radical that it amounts to \"a transfiguration of your brain,\" a \"metanoia\", a \"mental revolution.\"",
"title": "Contents"
},
{
"paragraph_id": 42,
"text": "Paul goes on to describe how believers should live. Christians are no longer under the law, that is, no longer bound by the law of Moses, but under the grace of God (see Law and grace). Christians do not need to live under the law because to the extent that their minds have been renewed, they will know \"almost instinctively\" what God wants of them. The law then provides an \"objective standard\" for judging progress in the \"lifelong process\" of their mind's renewal.",
"title": "Contents"
},
{
"paragraph_id": 43,
"text": "To the extent they have been set free from sin by renewed minds (Romans 6:18), believers are no longer bound to sin. Believers are free to live in obedience to God and love everybody. As Paul says in Romans 13:10, \"love (ἀγάπη) worketh no ill to his neighbor: therefore love is the fulfilling of law\".",
"title": "Contents"
},
{
"paragraph_id": 44,
"text": "The fragment in Romans 13:1–7 dealing with obedience to earthly powers is considered by some, for example James Kallas, to be an interpolation. (See also the Great Commandment and Christianity and politics). Paul Tillich accepts the historical authenticity of Romans 13:1–7, but claims it has been misinterpreted by churches with an anti-revolutionary bias:",
"title": "Contents"
},
{
"paragraph_id": 45,
"text": "One of the many politico-theological abuses of biblical statements is the understanding of Paul's words [Romans 13:1–7] as justifying the anti-revolutionary bias of some churches, particularly the Lutheran. But neither these words nor any other New Testament statement deals with the methods of gaining political power. In Romans, Paul is addressing eschatological enthusiasts, not a revolutionary political movement.",
"title": "Contents"
},
{
"paragraph_id": 46,
"text": "The concluding verses contain a description of his travel plans, personal greetings and salutations. One-third of the twenty-one Christians identified in the greetings are women. Additionally, none of these Christians answer to the name Peter, although according to the Catholic tradition, he had been ruling as Pope in Rome for about 25 years. Possibly related was the Incident at Antioch between Paul and Cephas.",
"title": "Contents"
},
{
"paragraph_id": 47,
"text": "Roman Catholics accept the necessity of faith for salvation but point to Romans 2:5–11 for the necessity of living a virtuous life as well:",
"title": "Hermeneutics"
},
{
"paragraph_id": 48,
"text": "But by your hard and impenitent heart you are storing up wrath for yourself on the day of wrath when God's righteous judgment will be revealed. For he will render to every man according to his works: to those who by patience in well-doing seek for glory and honor and immortality, he will give eternal life; but for those who are factious and do not obey the truth, but obey wickedness, there will be wrath and fury. There will be tribulation and distress for every human being who does evil, the Jew first and also the Greek, but glory and honor and peace for every one who does good, the Jew first and also the Greek. For God shows no partiality.",
"title": "Hermeneutics"
},
{
"paragraph_id": 49,
"text": "Catholics would also look to the passage in Romans 8:13 for evidence that justification by faith is only valid so long as it is combined with obedient cooperation with The Holy Spirit, and the passage in Romans 11:22 to show that the Christian can lose their justification if they turn away from cooperating with The Holy Spirit and reject Christ through mortal sin.",
"title": "Hermeneutics"
},
{
"paragraph_id": 50,
"text": "In the Protestant interpretation, the New Testament epistles (including Romans) describe salvation as coming from faith and not from righteous actions. For example, Romans 4:2–5 (underlining added):",
"title": "Hermeneutics"
},
{
"paragraph_id": 51,
"text": "2 For if Abraham were justified by works, he hath whereof to glory; but not before God. 3 For what saith the scripture? Abraham believed God, and it was counted unto him for righteousness. 4 Now to him that worketh is the reward not reckoned of grace, but of debt. 5 But to him that worketh not, but believeth on him that justifieth the ungodly, his faith is counted unto him for righteousness.",
"title": "Hermeneutics"
},
{
"paragraph_id": 52,
"text": "In the Protestant interpretation it is considered significant that in Romans chapter 2:9, Paul says that God will reward those who follow the law and then goes on to say that no one follows the law perfectly (see also Sermon on the Mount: Interpretation) Romans 2:21–29:",
"title": "Hermeneutics"
},
{
"paragraph_id": 53,
"text": "21 Thou therefore which teachest another, teachest thou not thyself? thou that preachest a man should not steal, dost thou steal? 22 Thou that sayest a man should not commit adultery, dost thou commit adultery? thou that abhorrest idols, dost thou commit sacrilege? 23 Thou that makest thy boast of the law, through breaking the law dishonourest thou God? 24 For the name of God is blasphemed among the Gentiles through you, as it is written. 25 For circumcision verily profiteth, if thou keep the law: but if thou be a breaker of the law, thy circumcision is made uncircumcision. 26 Therefore if the uncircumcision keep the righteousness of the law, shall not his uncircumcision be counted for circumcision? 27 And shall not uncircumcision which is by nature, if it fulfil the law, judge thee, who by the letter and circumcision dost transgress the law? 28 For he is not a Jew, which is one outwardly; neither is that circumcision, which is outward in the flesh: 29 But he is a Jew, which is one inwardly; and circumcision is that of the heart, in the spirit, and not in the letter; whose praise is not of men, but of God.",
"title": "Hermeneutics"
},
{
"paragraph_id": 54,
"text": "Romans has been at the forefront of several major movements in Protestantism. Martin Luther's lectures on Romans in 1515–1516 probably coincided with the development of his criticism of Roman Catholicism which led to the 95 Theses of 1517. In the preface to his German translation of Romans, Luther described Paul's letter to the Romans as \"the most important piece in the New Testament. It is purest Gospel. It is well worth a Christian's while not only to memorize it word for word but also to occupy himself with it daily, as though it were the daily bread of the soul\". In 1738, while hearing Luther's Preface to the Epistle to the Romans read at St. Botolph Church on Aldersgate Street in London, John Wesley famously felt his heart \"strangely warmed\", a conversion experience which is often seen as the beginning of Methodism.",
"title": "Hermeneutics"
},
{
"paragraph_id": 55,
"text": "Luther controversially added the word \"alone\" (allein in German) to Romans 3:28 so that it read: \"thus, we hold, then, that man is justified without doing the works of the law, alone through faith\". The word \"alone\" does not appear in the original Greek text, but Luther defended his translation by maintaining that the adverb \"alone\" was required both by idiomatic German and Paul's intended meaning. This is a \"literalist view\" rather than a literal view of the Bible.",
"title": "Hermeneutics"
},
{
"paragraph_id": 56,
"text": "The Romans Road (or Roman Road) refers to a set of scriptures from Romans that Christian evangelists use to present a clear and simple case for personal salvation to each person, as all the verses are contained in one single book, making it easier for evangelism without going back and forth through the entire New Testament. The core verses used by nearly all groups using Romans Road are: Romans 3:23, 6:23, 5:8, 10:9, and 10:13.",
"title": "Hermeneutics"
},
{
"paragraph_id": 57,
"text": "Translations",
"title": "External links"
},
{
"paragraph_id": 58,
"text": "Other",
"title": "External links"
}
]
| The Epistle to the Romans is the sixth book in the New Testament, and the longest of the thirteen Pauline epistles. Biblical scholars agree that it was composed by Paul the Apostle to explain that salvation is offered through the gospel of Jesus Christ. Romans was likely written while Paul was staying in the house of Gaius in Corinth. The epistle was probably transcribed by Paul's amanuensis Tertius and is dated to between late AD 55 and early AD 57. Consisting of 16 chapters, versions with only the first 14 or 15 chapters circulated early. Some of these recensions lacked all reference to the original audience of Christians in Rome, making it very general in nature. Other textual variants include subscripts explicitly mentioning Corinth as the place of composition and naming Phoebe, a deacon of the church in Cenchreae, as the messenger who took the epistle to Rome. Prior to composing the epistle, Paul had evangelized the areas surrounding the Aegean Sea and was eager to take the gospel farther to Spain, a journey that would allow him to visit Rome on the way. The epistle can consequently be understood as a document outlining his reasons for the trip and preparing the church in Rome for his visit. Christians in Rome would have been of both Jewish and Gentile background and it is possible that the church suffered from internal strife between these two groups. Paul – a Hellenistic Jew and former Pharisee – shifts his argument to cater to both audiences and the church as a whole. Because the work contains material intended both for specific recipients and for the general Christian public in Rome, scholars have had difficulty categorizing it as either a private letter or a public epistle. Although sometimes considered a treatise of (systematic) theology, Romans remains silent on many issues that Paul addresses elsewhere, but is nonetheless generally considered substantial, especially on justification and salvation. Proponents of both sola fide and the Roman Catholic position of the necessity of both faith and works find support in Romans. Martin Luther in his translation of the Bible controversially added the word "alone" to Romans 3:28 so that it read: "thus, we hold, then, that man is justified without doing the works of the law, alone through faith". | 2001-10-22T04:22:58Z | 2023-11-22T21:48:28Z | [
"Template:Paul",
"Template:Efn",
"Template:ISBN",
"Template:Wikiquote",
"Template:Librivox book",
"Template:S-end",
"Template:Books of the Bible",
"Template:About",
"Template:Use mdy dates",
"Template:Bibleref2-nb",
"Template:Transliteration",
"Template:Quote without source",
"Template:Reflist",
"Template:Authority control",
"Template:Em",
"Template:Citation needed",
"Template:Wikisource",
"Template:Who",
"Template:Cite journal",
"Template:Wiktionary",
"Template:S-hou",
"Template:S-ttl",
"Template:Circa",
"Template:C.",
"Template:Harvnb",
"Template:S-bef",
"Template:Bibleverse",
"Template:Cite book",
"Template:Short description",
"Template:Sfn",
"Template:Blockquote",
"Template:Main",
"Template:Sup",
"Template:Bibleref2",
"Template:EBD",
"Template:S-start",
"Template:S-aft",
"Template:Books of the New Testament",
"Template:Lang",
"Template:Notelist",
"Template:See also",
"Template:'\"",
"Template:Cite web",
"Template:Webarchive",
"Template:External links",
"Template:Epistle to the Romans"
]
| https://en.wikipedia.org/wiki/Epistle_to_the_Romans |
9,962 | Eleanor of Aquitaine | Eleanor of Aquitaine (Occitan: Alienòr d'Aquitània, pronounced [aljɛnɔɾ dakiˈtanjɔ]; c. 1122 – 1 April 1204) was Duchess of Aquitaine in her own right from 1137 to 1204, Queen of France from 1137 to 1152 as the wife of King Louis VII, and Queen of England from 1154 to 1189 as the wife of King Henry II. As the heiress of the House of Poitiers, which controlled much of southwestern France, she was one of the wealthiest and most powerful women in Western Europe during the High Middle Ages. Militarily, she was a key leading figure in the Second Crusade, and in a revolt in favour of her son. Culturally, she was a patron of poets such as Wace, Benoît de Sainte-Maure, and Bernart de Ventadorn, and of the arts of the High Middle Ages.
Eleanor was the eldest child of William X, Duke of Aquitaine, and Aénor de Châtellerault. She became duchess upon her father's death in April 1137, and three months later she married Louis, son of her guardian King Louis VI of France. Shortly afterwards, Louis VI died and Eleanor's husband ascended the throne, making Eleanor queen consort. The couple had two daughters, Marie and Alix. Eleanor sought an annulment of her marriage, but her request was rejected by Pope Eugene III. Eventually, Louis agreed to an annulment, as fifteen years of marriage had not produced a son. The marriage was annulled on 21 March 1152 on the grounds of consanguinity within the fourth degree. Their daughters were declared legitimate, custody was awarded to Louis, and Eleanor's lands were restored to her.
As soon as the annulment was granted, Eleanor became engaged to her third cousin Henry, Duke of Normandy. The couple married on Whitsun, 18 May 1152 in Poitiers. Eleanor was crowned queen of England at Westminster Abbey in 1154, when Henry acceded to the throne. Henry and Eleanor had five sons and three daughters, but eventually became estranged. Henry imprisoned her in 1173 for supporting the revolt of their eldest son, Henry the Young King, against him. She was not released until 6 July 1189, when her husband died and their third son, Richard I, ascended the throne. As queen dowager, Eleanor acted as regent while Richard went on the Third Crusade. She lived well into the reign of her youngest son, John.
There is a paucity of primary sources on Eleanor's life, and in their absence myth, legend and speculation have frequently been used to fill the gaps.
Eleanor's year of birth is not known precisely: a late 13th-century genealogy of her family listing her as 13 years old in the spring of 1137 provides the best evidence that Eleanor was perhaps born as late as 1124. On the other hand, some chronicles mention a fidelity oath of some lords of Aquitaine on the occasion of Eleanor's fourteenth birthday in 1136. This, and her known age of 82 at her death make 1122 the most likely year of her birth. Her parents almost certainly married in 1121. Her birthplace may have been Poitiers, Bordeaux, or Nieul-sur-l'Autise, where her mother and brother died when Eleanor was 6 or 8.
Eleanor (or Aliénor) was the oldest of three children of William X, Duke of Aquitaine, whose glittering ducal court was renowned in early 12th-century Europe, and his wife, Aenor de Châtellerault, the daughter of Aimery I, Viscount of Châtellerault, and Dangereuse de l'Isle Bouchard, who was William IX's longtime mistress as well as Eleanor's maternal grandmother. Her parents' marriage had been arranged by Dangereuse with her paternal grandfather William IX.
Eleanor is said to have been named for her mother Aenor and called Aliénor from the Latin Alia Aenor, which means the other Aenor. It became Eléanor in the langues d'oïl of northern France and Eleanor in English. There was, however, another prominent Eleanor before her—Eleanor of Normandy, an aunt of William the Conqueror, who lived a century earlier than Eleanor of Aquitaine. In Paris as the queen of France, she was called Helienordis, her honorific name as written in the Latin epistles.
Eleanor's father ensured that she had the best possible education. Eleanor came to learn arithmetic, the constellations, and history. She also learned domestic skills such as household management and the needle arts of embroidery, needlepoint, sewing, spinning, and weaving. Eleanor developed skills in conversation, dancing, games such as backgammon, checkers, and chess, playing the harp, and singing. Although her native tongue was Poitevin, she was taught to read and speak Latin, was well versed in music and literature, and schooled in riding, hawking, and hunting. Eleanor was extroverted, lively, intelligent, and strong-willed. Her four-year-old brother William Aigret and their mother died at the castle of Talmont on Aquitaine's Atlantic coast in the spring of 1130. Eleanor became the heir presumptive to her father's domains. The Duchy of Aquitaine was the largest and richest province of France. Poitou, where Eleanor spent most of her childhood, and Aquitaine together were almost one-third the size of modern France. Eleanor had only one other legitimate sibling, a younger sister named Aelith (also called Petronilla). Her half-brother Joscelin was acknowledged by William X as a son, but not as his heir. The notion that she had another half-brother, William, has been discredited. Later, during the first four years of Henry II's reign, her siblings joined Eleanor's royal household.
In 1137 Duke William X left Poitiers for Bordeaux and took his daughters with him. Upon reaching Bordeaux, he left them in the charge of the archbishop of Bordeaux, one of his few loyal vassals. The duke then set out for the Shrine of Saint James of Compostela in the company of other pilgrims. However, he died on Good Friday of that year (9 April).
Eleanor, aged 12 to 15, then became the duchess of Aquitaine, and thus the most eligible heiress in Europe. As these were the days when kidnapping an heiress was seen as a viable option for obtaining a title, William dictated a will on the very day he died that bequeathed his domains to Eleanor and appointed King Louis VI of France as her guardian. William requested of the king that he take care of both the lands and the duchess, and find her a suitable husband. However, until a husband was found, the king had the legal right to Eleanor's lands. The duke also insisted to his companions that his death be kept a secret until Louis was informed; the men were to journey from Saint James of Compostela across the Pyrenees as quickly as possible to call at Bordeaux to notify the archbishop, then to make all speed to Paris to inform the king.
The king of France, known as Louis the Fat, was also gravely ill at that time, suffering from a bout of dysentery from which he appeared unlikely to recover. Yet despite his impending death, Louis's mind remained clear. His eldest surviving son, Louis, had originally been destined for monastic life, but had become the heir apparent when the firstborn, Philip, died in a riding accident in 1131.
The death of William, one of the king's most powerful vassals, made available the most desirable duchy in France. While presenting a solemn and dignified face to the grieving Aquitainian messengers, Louis exulted when they departed. Rather than act as guardian to the duchess and duchy, he decided to marry the duchess to his 17-year-old heir and bring Aquitaine under the control of the French crown, thereby greatly increasing the power and prominence of France and its ruling family, the House of Capet. Within hours, the king had arranged for his son Louis to be married to Eleanor, with Abbot Suger in charge of the wedding arrangements. Louis was sent to Bordeaux with an escort of 500 knights, along with Abbot Suger, Theobald II, Count of Champagne, and Raoul I, Count of Vermandois.
On 25 July 1137, Eleanor and Louis were married in the Cathedral of Saint-André in Bordeaux by the archbishop of Bordeaux. Immediately after the wedding, the couple were enthroned as duke and duchess of Aquitaine. It was agreed that the duchy would remain independent of France until Eleanor's oldest son became both king of France and duke of Aquitaine. Thus, her holdings would not be merged with France until the next generation. As a wedding present she gave Louis a rock crystal vase, currently on display at the Louvre. Louis donated the vase to the Basilica of St Denis. This vase is the only object connected with Eleanor of Aquitaine that still survives.
Louis's tenure as count of Poitou and duke of Aquitaine and Gascony lasted only a few days. Although he had been invested as such on 8 August 1137, a messenger gave him the news that Louis VI had died of dysentery on 1 August while he and Eleanor were making a tour of the provinces. He and Eleanor were anointed and crowned king and queen of France on Christmas Day of the same year.
Possessing a high-spirited nature, Eleanor was not popular with the staid northerners; according to sources, Louis's mother Adelaide of Maurienne thought her flighty and a bad influence. She was not aided by memories of Constance of Arles, the Provençal wife of Robert II, tales of whose immodest dress and language were still told with horror. Eleanor's conduct was repeatedly criticised by church elders, particularly Bernard of Clairvaux and Abbot Suger, as indecorous. The king was madly in love with his beautiful and worldly bride, however, and granted her every whim, even though her behaviour baffled and vexed him. Much money went into making the austere Cité Palace in Paris more comfortable for Eleanor's sake.
Louis soon came into violent conflict with Pope Innocent II. In 1141, the Archbishopric of Bourges became vacant, and the king put forward as a candidate one of his chancellors, Cadurc, while vetoing the one suitable candidate, Pierre de la Chatre, who was promptly elected by the canons of Bourges and consecrated by the Pope. Louis accordingly bolted the gates of Bourges against the new bishop. The Pope, recalling similar attempts by William X to exile supporters of Innocent from Poitou and replace them with priests loyal to himself, blamed Eleanor, saying that Louis was only a child and should be taught manners. Outraged, Louis swore upon relics that so long as he lived Pierre should never enter Bourges. An interdict was thereupon imposed upon the king's lands, and Pierre was given refuge by Theobald II, Count of Champagne.
Louis became involved in a war with Count Theobald by permitting Raoul I, Count of Vermandois and seneschal of France, to repudiate his wife Eleanor of Champagne, Theobald's sister, and to marry Petronilla of Aquitaine, the Queen's sister. Eleanor urged Louis to support her sister's marriage to Count Raoul. Theobald had also offended Louis by siding with the Pope in the dispute over Bourges. The war lasted two years (1142–44) and ended with the occupation of Champagne by the royal army. Louis was personally involved in the assault and burning of the town of Vitry. More than a thousand people sought refuge in the town church, but the church caught fire and everyone inside was burned alive. Horrified, and desiring an end to the war, Louis attempted to make peace with Theobald in exchange for his support in lifting the interdict on Raoul and Petronilla. This was duly lifted for long enough to allow Theobald's lands to be restored; it was then lowered once more when Raoul refused to repudiate Petronilla, prompting Louis to return to Champagne and ravage it once more.
In June 1144, the king and queen visited the newly built monastic church at Saint-Denis. While there, the queen met with Bernard of Clairvaux, asking him to use his influence with the Pope to have the excommunication of Petronilla and Raoul lifted, in exchange for which King Louis would make concessions in Champagne and recognise Pierre de la Chatre as archbishop of Bourges. Dismayed at her attitude, Bernard scolded Eleanor for her lack of penitence and interference in matters of state. In response, Eleanor broke down and meekly excused her behaviour, claiming to be bitter because of her lack of children (her only recorded pregnancy at that time was in about 1138, but she miscarried). In response, Bernard became more kindly towards her: "My child, seek those things which make for peace. Cease to stir up the king against the Church, and urge upon him a better course of action. If you will promise to do this, I in return promise to entreat the merciful Lord to grant you offspring." In a matter of weeks, peace had returned to France: Theobald's provinces were returned and Pierre de la Chatre was installed as archbishop of Bourges. In April 1145, Eleanor gave birth to a daughter, Marie.
Louis, however, still burned with guilt over the massacre at Vitry and wished to make a pilgrimage to the Holy Land to atone for his sins. In autumn 1145, Pope Eugene III requested that Louis lead a Crusade to the Middle East to rescue the Frankish states there from disaster. Accordingly, Louis declared on Christmas Day 1145 at Bourges his intention of going on a crusade.
Eleanor of Aquitaine also formally took up the cross symbolic of the Second Crusade during a sermon preached by Bernard of Clairvaux. In addition, she had been corresponding with her uncle Raymond, Prince of Antioch, who was seeking further protection from the French crown against the Saracens. Eleanor recruited some of her royal ladies-in-waiting for the campaign as well as 300 non-noble Aquitainian vassals. She insisted on taking part in the Crusades as the feudal leader of the soldiers from her duchy. She left for the Second Crusade from Vézelay, the rumoured location of Mary Magdalene's grave, in June 1147.
The Crusade itself achieved little. Louis was a weak and ineffectual military leader with no skill for maintaining troop discipline or morale, or of making informed and logical tactical decisions. In eastern Europe, the French army was at times hindered by Manuel I Comnenus, the Byzantine Emperor, who feared that the Crusade would jeopardise the tenuous safety of his empire. Notwithstanding, during their three-week stay at Constantinople, Louis was fêted and Eleanor was much admired. She was compared with Penthesilea, mythical queen of the Amazons, by the Greek historian Nicetas Choniates. He added that she gained the epithet chrysopous (golden-foot) from the cloth of gold that decorated and fringed her robe. Louis and Eleanor stayed in the Philopation palace just outside the city walls.
From the moment the Crusaders entered Asia Minor, things began to go badly. The king and queen were still optimistic—the Byzantine Emperor had told them that King Conrad III of Germany had won a great victory against a Turkish army when in fact the German army had been almost completely destroyed at Dorylaeum. However, while camping near Nicea, the remnants of the German army, including a dazed and sick Conrad III, staggered past the French camp, bringing news of their disaster. The French, with what remained of the Germans, then began to march in increasingly disorganised fashion towards Antioch. They were in high spirits on Christmas Eve, when they chose to camp in a lush valley near Ephesus. Here they were ambushed by a Turkish detachment, but the French proceeded to slaughter this detachment and appropriate their camp.
Louis then decided to cross the Phrygian mountains directly in the hope of reaching Raymond of Poitiers in Antioch more quickly. As they ascended the mountains, however, the army and the king and queen were horrified to discover the unburied corpses of the Germans killed earlier.
On the day set for the crossing of Mount Cadmus, Louis chose to take charge of the rear of the column, where the unarmed pilgrims and the baggage trains marched. The vanguard, with which Queen Eleanor marched, was commanded by her Aquitainian vassal, Geoffrey de Rancon. Unencumbered by baggage, they reached the summit of Cadmus, where Rancon had been ordered to make camp for the night. Rancon, however, chose to continue on, deciding in concert with Amadeus III, Count of Savoy, Louis's uncle, that a nearby plateau would make a better campsite. Such disobedience was reportedly common.
Accordingly, by mid-afternoon, the rear of the column—believing the day's march to be nearly at an end—was dawdling. This resulted in the army becoming separated, with some having already crossed the summit and others still approaching it. In the ensuing Battle of Mount Cadmus, the Turks, who had been following and feinting for many days, seized their opportunity and attacked those who had not yet crossed the summit. The French, both soldiers and pilgrims, taken by surprise, were trapped. Those who tried to escape were caught and killed. Many men, horses, and much of the baggage were cast into the canyon below. The chronicler William of Tyre, writing between 1170 and 1184 and thus perhaps too long after the event to be considered historically accurate, placed the blame for this disaster firmly on the amount of baggage being carried, much of it reputedly belonging to Eleanor and her ladies, and the presence of non-combatants.
The king, having scorned royal apparel in favour of a simple pilgrim's tunic, escaped notice, unlike his bodyguards, whose skulls were brutally smashed and limbs severed. He reportedly "nimbly and bravely scaled a rock by making use of some tree roots which God had provided for his safety" and managed to survive the attack. Others were not so fortunate: "No aid came from Heaven, except that night fell."
Official blame for the disaster was placed on Geoffrey de Rancon, who had made the decision to continue, and it was suggested that he be hanged, a suggestion which the king ignored. Since Geoffrey was Eleanor's vassal, many believed that it was she who had been ultimately responsible for the change in plan, and thus the massacre. This suspicion of responsibility did nothing for her popularity in Christendom. She was also blamed for the size of the baggage train and the fact that her Aquitanian soldiers had marched at the front and thus were not involved in the fight. Continuing on, the army became split, with the commoners marching towards Antioch and the royalty travelling by sea. When most of the land army arrived, the king and queen had a dispute. Some, such as John of Salisbury and William of Tyre, say Eleanor's reputation was sullied by rumours of an affair with her uncle Raymond.
However, this rumour may have been a ruse, as Raymond, through Eleanor, had been trying to induce Louis to use his army to attack the actual Muslim encampment at nearby Beroea (modern Aleppo), gateway to retaking Edessa, which had all along, by papal decree, been the main objective of the Crusade. Although this was perhaps a better military plan, Louis was not keen to fight in northern Syria. One of Louis's avowed Crusade goals was to journey in pilgrimage to Jerusalem, and he stated his intention to continue. Reputedly Eleanor then requested to stay with Raymond and brought up the matter of consanguinity—the fact that she and her husband, King Louis, were perhaps too closely related. Consanguinity was grounds for annulment in the medieval period. But rather than allowing her to stay, Louis took Eleanor from Antioch against her will and continued on to Jerusalem with his dwindling army.
Louis's refusal and his forcing her to accompany him humiliated Eleanor, and she maintained a low profile for the rest of the crusade. Louis's subsequent siege of Damascus in 1148 with his remaining army, reinforced by Conrad and Baldwin III of Jerusalem, achieved little. Damascus was a major wealthy trading centre and was under normal circumstances a potential threat, but the rulers of Jerusalem had recently entered into a truce with the city, which they then forswore. It was a gamble that did not pay off, and whether through military error or betrayal, the Damascus campaign was a failure. Louis's long march to Jerusalem and back north, which Eleanor was forced to join, debilitated his army and disheartened her knights; the divided Crusade armies could not overcome the Muslim forces, and the royal couple had to return home. The French royal family retreated to Jerusalem and then sailed to Rome and made their way back to Paris.
While in the eastern Mediterranean, Eleanor learned about maritime conventions developing there, which were the beginnings of what would become admiralty law. She introduced those conventions in her own lands on the island of Oléron in 1160 (with the "Rolls of Oléron") and later in England as well. She was also instrumental in developing trade agreements with Constantinople and ports of trade in the Holy Lands.
Even before the Crusade, Eleanor and Louis were becoming estranged, and their differences were only exacerbated while they were abroad. Eleanor's purported relationship with her uncle Raymond, the ruler of Antioch, was a major source of discord. Eleanor supported her uncle's desire to re-capture the nearby County of Edessa, the objective of the Crusade. In addition, having been close to him in their youth, she now showed what was considered to be "excessive affection" towards her uncle.
Home, however, was not easily reached. Louis and Eleanor, on separate ships due to their disagreements, were first attacked in May 1149 by Byzantine ships. Although they escaped this attempt unharmed, stormy weather drove Eleanor's ship far to the south to the Barbary Coast and caused her to lose track of her husband. Neither was heard of for over two months. In mid-July, Eleanor's ship finally reached Palermo in Sicily, where she discovered that she and her husband had both been given up for dead. She was given shelter and food by servants of King Roger II of Sicily, until the king eventually reached Calabria, and she set out to meet him there. Later, at King Roger's court in Potenza, she learned of the death of her uncle Raymond, who had been beheaded by Muslim forces in the Holy Land. This news appears to have forced a change of plans, for instead of returning to France from Marseilles, they went to see Pope Eugene III in Tusculum, where he had been driven five months before by a revolt of the Commune of Rome.
Eugene did not, as Eleanor had hoped, grant an annulment. Instead, he attempted to reconcile Eleanor and Louis, confirming the legality of their marriage. He proclaimed that no word could be spoken against it, and that it might not be dissolved under any pretext. He even arranged for Eleanor and Louis to sleep in the same bed. Thus was conceived their second child—not a son, but another daughter, Alix of France.
The marriage was now doomed. Still without a son and in danger of being left with no male heir, as well as facing substantial opposition to Eleanor from many of his barons and her own desire for annulment, Louis bowed to the inevitable. On 11 March 1152, they met at the royal castle of Beaugency to dissolve the marriage. Hugues de Toucy, archbishop of Sens, presided, and Louis and Eleanor were both present, as were the archbishop of Bordeaux and Rouen. Archbishop Samson of Reims acted for Eleanor.
On 21 March, the four archbishops, with the approval of Pope Eugene, granted an annulment on grounds of consanguinity within the fourth degree; Eleanor was Louis' third cousin once removed, and shared common ancestry with Robert II of France. Their two daughters were, however, declared legitimate. Children born to a marriage that was later annulled were not at risk of being "bastardised," because "Where the parties married in good faith, without knowledge of an impediment, the canonists held that the children of the marriage were legitimate and that the marriage itself was valid up to the day it was declared null". Custody of the daughters was awarded to King Louis. Archbishop Samson received assurances from Louis that Eleanor's lands would be restored to her.
As Eleanor travelled to Poitiers, two lords—Theobald V, Count of Blois, and Geoffrey, Count of Nantes, brother of Henry II, Duke of Normandy—tried to kidnap and marry her to claim her lands. As soon as she arrived in Poitiers, Eleanor sent envoys to Henry, asking him to come at once to marry her. On 18 May 1152 (Whit Sunday), eight weeks after her annulment, Eleanor married Henry "without the pomp and ceremony that befitted their rank."
Eleanor was related to Henry even more closely than she had been to Louis: they were cousins to the third degree through their common ancestor Ermengarde of Anjou, wife of Robert I, Duke of Burgundy and Geoffrey, Count of Gâtinais, and they were also descended from King Robert II of France. A marriage between Henry and Eleanor's daughter Marie had earlier been declared impossible due to their status as third cousins once removed. It was rumoured by some that Eleanor had had an affair with Henry's own father, Geoffrey V, Count of Anjou, who had advised his son to avoid any involvement with her.
On 25 October 1154, Henry became king of England. A now heavily pregnant Eleanor was crowned queen of England by Theobald of Bec, the Archbishop of Canterbury, on 19 December 1154. She may not have been anointed on this occasion, however, because she had already been anointed in 1137. Over the next 13 years, she bore Henry five sons and three daughters: William, Henry, Richard, Geoffrey, John, Matilda, Eleanor, and Joan. Historian John Speed, in his 1611 work History of Great Britain, mentions the possibility that Eleanor had a son named Philip, who died young. His sources no longer exist, and he alone mentions this birth.
Eleanor's marriage to Henry was reputed to be tumultuous and argumentative, although sufficiently cooperative to produce at least eight pregnancies. Henry was by no means faithful to his wife and had a reputation for philandering; he fathered other, illegitimate children throughout the marriage. Eleanor appears to have taken an ambivalent attitude towards these affairs. Geoffrey of York, for example, was an illegitimate son of Henry, but acknowledged by Henry as his child and raised at Westminster in the care of the queen.
During the period from Henry's accession to the birth of Eleanor's youngest son John, affairs in the kingdom were turbulent: Aquitaine, as was the norm, defied the authority of Henry as Eleanor's husband and answered only to their duchess. Attempts were made to claim Toulouse, the rightful inheritance of Eleanor's grandmother Philippa of Toulouse, but they ended in failure. A bitter feud arose between the king and Thomas Becket, initially his chancellor and closest adviser and later the archbishop of Canterbury. Louis of France had remarried and been widowed; he married for the third time and finally fathered a long-hoped-for son, Philip Augustus, also known as Dieudonné—God-given. "Young Henry", son of Henry and Eleanor, wed Margaret, daughter of Louis from his second marriage. Little is known of Eleanor's involvement in these events. By late 1166, Henry's affair with Rosamund Clifford had become known, and although much speculation has arisen that this precipitated a break in their relationship, the evidence does not support this.
In 1167, Eleanor's third daughter, Matilda, married Henry the Lion of Saxony. Eleanor remained in England with her daughter for the year prior to Matilda's departure for Normandy in September. In December, Eleanor gathered her movable possessions in England and transported them on several ships to Argentan. Christmas was celebrated at the royal court there, and she appears to have agreed to a separation from Henry. She certainly left for her own city of Poitiers immediately after Christmas. Henry did not stop her; on the contrary, he and his army personally escorted her there before attacking a castle belonging to the rebellious Lusignan family. Henry then went about his own business outside Aquitaine, leaving Earl Patrick, his regional military commander, as her protective custodian. When Patrick was killed in a skirmish, Eleanor, who proceeded to ransom his captured nephew, the young William Marshal, was left in control of her lands.
Of all Eleanor's influences on culture, her time in Poitiers between 1168 and 1173 was perhaps the most critical, yet very little is known about it. Henry II was elsewhere, attending to his own affairs after escorting Eleanor there. Some believe that Eleanor's court in Poitiers was the "Court of Love", where Eleanor and her daughter Marie wove together and encouraged the ideas of troubadours, chivalry, and courtly love in a single court. It may have existed largely to teach manners, something the French courts would be known for in later generations. Yet both the existence of this court and the reasons for it are debated.
In his 12th-century work The Art of Courtly Love, Andreas Capellanus (Andrew the Chaplain) refers to the court of Poitiers. He claims that Eleanor, her daughter Marie, Ermengarde, Viscountess of Narbonne, and Isabelle of Flanders would sit and listen to the quarrels of lovers and act as a jury on the questions of the court, which revolved around acts of romantic love. He records some twenty-one cases, the most famous of them being a problem posed to the women about whether true love can exist in marriage. According to Capellanus, the women decided that it was not at all likely.
Some scholars believe that the "court of love" probably never existed since the only evidence for it is Andreas Capellanus' book. To strengthen their argument, they state that there is no other evidence that Marie ever stayed with her mother in Poitiers. Andreas wrote for the court of the king of France, where Eleanor was not held in esteem. Polly Schoyer Brooks, the author of a popular biography of Eleanor, suggests that the court did exist, but that it was not taken very seriously, and that acts of courtly love were just a "parlour game" made up by Eleanor and Marie in order to place some order over the young courtiers living there.
There is no claim that Eleanor invented courtly love, for it was a concept that had begun to grow before Eleanor's court arose. All that can be said is that her court at Poitiers was most likely a catalyst for the increased popularity of courtly love literature in the Western European regions. Amy Kelly provides a plausible description of the origins of the rules of Eleanor's court: "In the Poitevin code, man is the property, the very thing of woman; whereas a precisely contrary state of things existed in the adjacent realms of the two kings from whom the reigning duchess of Aquitaine was estranged."
In March 1173, Henry's namesake son, the younger Henry, aggrieved at his lack of power and egged on by his father's enemies, launched the Revolt of 1173–1174. He fled to Paris. From there, "the younger Henry, devising evil against his father from every side by the advice of the French king, went secretly into Aquitaine where his two youthful brothers, Richard and Geoffrey, were living with their mother, and with her connivance, so it is said, he incited them to join him." One source claimed that the queen sent her younger sons to France "to join with him against their father the king." Once her sons had left for Paris, Eleanor may have encouraged the lords of the south to rise up and support them.
Sometime between the end of March and the beginning of May, Eleanor left Poitiers, but was arrested and sent to the king at Rouen. The king did not announce the arrest publicly and for the next year the queen's whereabouts were unknown. On 8 July 1174, Henry and Eleanor took ship for England from Barfleur. As soon as they disembarked at Southampton, Eleanor was taken either to Winchester Castle or Sarum Castle and held there.
Eleanor was imprisoned for the next 16 years, much of the time in various locations in England. During her imprisonment, Eleanor became more and more distant from her sons, especially from Richard, who had always been her favourite. She did not have the opportunity to see her sons very often during her imprisonment, though she was released for special occasions such as Christmas. About four miles from Shrewsbury and close by Haughmond Abbey is "Queen Eleanor's Bower", the remains of a possible triangular timber castle which is believed to have been one of her prisons.
Henry lost the woman reputed to be his great love, Rosamund Clifford, in 1176. He had met her in 1166 and had begun their liaison in 1173, supposedly contemplating divorce from Eleanor. This notorious affair caused a monkish scribe to transcribe Rosamund's name in Latin to "Rosa Immundi", or "Rose of Unchastity". The king had many mistresses, but although he treated earlier liaisons discreetly, he flaunted Rosamund. He may have done so to provoke Eleanor into seeking an annulment, but if so, the queen disappointed him. Nevertheless, rumours persisted, perhaps assisted by Henry's camp, that Eleanor had poisoned Rosamund. It is also speculated that Eleanor placed Rosamund in a bathtub and had an old woman cut Rosamund's arms. Henry donated much money to Godstow Nunnery in Oxfordshire, where Rosamund was buried.
In 1183, the young King Henry tried again to force his father to hand over some of his patrimony. In debt and refused control of Normandy, he tried to ambush his father at Limoges. He was joined by troops sent by his brother Geoffrey and Philip II of France. Henry II's troops besieged the town, forcing his son to flee. After wandering aimlessly through Aquitaine, Henry the Younger caught dysentery. On Saturday, 11 June 1183, the young king realized he was dying and was overcome with remorse for his sins. When his father's ring was sent to him, he begged that his father would show mercy to his mother, and that all his companions would plead with Henry to set her free. Henry II sent Thomas of Earley, Archdeacon of Wells, to break the news to Eleanor at Sarum. Eleanor reputedly had a dream in which she foresaw her son Henry's death. In 1193, she would tell Pope Celestine III that she was tortured by his memory.
King Philip II of France claimed that certain properties in Normandy belonged to his half-sister Margaret, widow of the young Henry, but Henry insisted that they had once belonged to Eleanor and would revert to her upon her son's death. For this reason Henry summoned Eleanor to Normandy in the late summer of 1183. She stayed in Normandy for six months. This was the beginning of a period of greater freedom for the still-supervised Eleanor. Eleanor went back to England probably early in 1184. Over the next few years Eleanor often travelled with her husband and was sometimes associated with him in the government of the realm, but still had a custodian so that she was not free.
Upon the death of her husband Henry II on 6 July 1189, Richard I was the undisputed heir. One of his first acts as king was to send William Marshal to England with orders to release Eleanor from prison; he found upon his arrival that her custodians had already released her. Eleanor rode to Westminster and received the oaths of fealty from many lords and prelates on behalf of the King. She ruled England in Richard's name, signing herself "Eleanor, by the grace of God, Queen of England". On 13 August 1189, Richard sailed from Barfleur to Portsmouth and was received with enthusiasm. Between 1190 and 1194, King Richard was absent from England, engaged in the Third Crusade from 1190 to 1192, and then held in captivity by Henry VI, Holy Roman Emperor. During Richard's absence, royal authority in England was represented by a Council of Regency in conjunction with a succession of chief justiciars—William de Longchamp (1190–1191), Walter de Coutances (1191–1193), and Hubert Walter. Although Eleanor held no formal office in England during this period, she arrived in England in the company of Coutances in June 1191, and for the remainder of Richard's absence she exercised a considerable degree of influence over the affairs of England as well as the conduct of Prince John. Eleanor played a key role in raising the ransom demanded from England by Henry VI and in the negotiations with the Holy Roman Emperor that eventually secured Richard's release. Evidence of the influence she wielded can also be found in the numerous letters she wrote to Pope Celestine III regarding Richard's captivity. Her letter of 1193 presents strong expressions of personal suffering as a result of Richard's captivity and informs the Pope that in her grief she is "wasted away by sorrow".
Eleanor survived Richard, who died in 1199, and lived well into the reign of her youngest son, King John. That year, under the terms of a truce between King Philip II and John, it was agreed that Philip's 12-year-old heir apparent, Louis, would be married to one of John's nieces, daughters of his sister Eleanor of England, queen of Castile. John instructed his mother to travel to Castile to select one of the princesses. Now 77, Eleanor set out from Poitiers. Just outside Poitiers she was ambushed and held captive by Hugh IX of Lusignan, whose lands had been sold to Henry II by his forebears. Eleanor secured her freedom by agreeing to his demands. She continued south, crossed the Pyrenees, and travelled through the kingdoms of Navarre and Castile, arriving in Castile before the end of January 1200.
Eleanor's daughter, Queen Eleanor of Castile, had two remaining unmarried daughters, Urraca and Blanche. Eleanor selected the younger daughter, Blanche. She stayed for two months at the Castilian court, then late in March journeyed with her granddaughter Blanche back across the Pyrenees. She celebrated Easter in Bordeaux, where the famous warrior Mercadier came to her court. It was decided that he would escort the queen and princess north. "On the second day in Easter week, he was slain in the city by a man-at-arms in the service of Brandin", a rival mercenary captain. This tragedy was too much for the elderly queen, who was fatigued and unable to continue to Normandy. She and Blanche rode in easy stages to the valley of the Loire, and she entrusted Blanche to the archbishop of Bordeaux, who took over as her escort. The exhausted Eleanor went to Fontevraud, where she remained. In early summer, Eleanor was ill, and John visited her at Fontevraud.
Eleanor was again unwell in early 1201. When war broke out between John and Philip, Eleanor declared her support for John and set out from Fontevraud to her capital Poitiers to prevent her grandson Arthur I, Duke of Brittany, posthumous son of Eleanor's son Geoffrey and John's rival for the English throne, from taking control. Arthur learned of her whereabouts and besieged her in the castle of Mirebeau. As soon as John heard of this, he marched south, overcame the besiegers, and captured the 15-year-old Arthur, and probably his sister Eleanor, Fair Maid of Brittany, whom Eleanor had raised with Richard. Eleanor then returned to Fontevraud where she took the veil as a nun.
Eleanor died in 1204 and was entombed in Fontevraud Abbey next to her husband Henry and her son Richard. Her tomb effigy shows her reading a Bible and is decorated with representations of magnificent jewellery; such effigies were rare, and Eleanor's is one of the finest of the few that survive from this period. However, during the French Revolution the abbey of Fontevraud was sacked and the tombs were disturbed and vandalised – consequently the bones of Eleanor, Henry, Richard, Joanna and Isabella of Angoulême were exhumed and scattered, never to be recovered. By the time of her death she had outlived all of her children except for King John of England, who died in 1216, and Queen Eleanor of Castile.
Contemporary sources praise Eleanor's beauty. Even in an era when ladies of the nobility were excessively praised, the praise she received was undoubtedly sincere. When she was young, she was described as perpulchra—more than beautiful. When she was around 30, Bernard de Ventadour, a noted troubadour, called her "gracious, lovely, the embodiment of charm", extolling her "lovely eyes and noble countenance" and declaring that she was "one meet to crown the state of any king". William of Newburgh emphasised the charms of her person, and even in her old age Richard of Devizes described her as beautiful, while Matthew Paris, writing in the 13th century, recalled her "admirable beauty".
In spite of all these words of praise, no one left a more detailed description of Eleanor; the colour of her hair and eyes, for example, is unknown. The effigy on her tomb shows a tall and large-boned woman with brown skin, though this may not be an accurate representation. Her seal of c. 1152 shows a woman with a slender figure, but this is probably an impersonal image.
Judy Chicago's artistic installation The Dinner Party features a place setting for Eleanor, and she was portrayed by Frederick Sandys in his 1858 painting, Queen Eleanor.
Henry and Eleanor are the main characters in James Goldman's 1966 play The Lion in Winter, which was made into a film in 1968 starring Peter O'Toole as Henry and Katharine Hepburn as Eleanor. For her portrayal of Eleanor, Hepburn won the Academy Award for Best Actress and the BAFTA Award for Best Actress in a Leading Role, and was nominated for the Golden Globe Award for Best Actress – Motion Picture Drama.
Jean Plaidy's novel The Courts of Love, fifth in the 'Queens of England' series, is a fictionalised autobiography of Eleanor of Aquitaine.
Norah Lofts wrote a fictionalised biography of her, entitled in various editions Queen in Waiting or Eleanor the Queen, which includes some romanticised episodes: it opens with the young Eleanor planning to elope with a young knight, who is killed out of hand by her guardian to clear the way for her marriage to the King's son.
The character Queen Elinor appears in William Shakespeare's The Life and Death of King John, with other members of the family. On television, she has been portrayed in this play by Una Venning in the BBC Sunday Night Theatre version (1952) and by Mary Morris in the BBC Shakespeare version (1984).
Eleanor features in the novel Via Crucis (1899) by F. Marion Crawford.
Eleanor serves as an important allegorical figure in Ezra Pound's early Cantos.
In Sharon Kay Penman's Plantagenet novels, she figures prominently in When Christ and His Saints Slept, Time and Chance, and Devil's Brood, and also appears in Lionheart and A King's Ransom, both of which focus on the reign of her son, Richard, as king of England. Eleanor also appears briefly in the first novel of Penman's Welsh trilogy, Here Be Dragons. In Penman's historical mysteries, Eleanor, as Richard's regent, sends the squire Justin de Quincy on various missions, often an investigation of a situation involving Prince John. The four published mysteries are The Queen's Man, Cruel as the Grave, Dragon's Lair, and Prince of Darkness.
Eleanor is the subject of A Proud Taste for Scarlet and Miniver, a children's novel by E. L. Konigsburg.
Historical fiction author Elizabeth Chadwick wrote a three-volume series about Eleanor: The Summer Queen (2013), The Winter Crown (2014), and The Autumn Throne (2016).
Historical fiction author Ariana Franklin features Eleanor prominently in her novel The Serpent's Tale (2008), and the queen appears again as a character in the subsequent novel A Murderous Procession (2010).
In The Merry Adventures of Robin Hood, Howard Pyle, retelling the ballad Robin Hood and Queen Katherine, made the queen Queen Eleanor to fit historically with the rest of the work.
She has also been introduced in The Royal Diaries series in the book Crown Jewel of Aquitaine by Kristiana Gregory.
She is a supporting character in Matrix by Lauren Groff.
Eleanor has featured in a number of screen versions of the Ivanhoe and Robin Hood stories. She has been played by Martita Hunt in The Story of Robin Hood and His Merrie Men (1952), Jill Esmond in the British TV adventure series The Adventures of Robin Hood (1955–1960), Phyllis Neilson-Terry in the British TV adventure series Ivanhoe (1958), Yvonne Mitchell in the BBC TV drama series The Legend of Robin Hood (1975), Siân Phillips in the TV series Ivanhoe (1997), and Tusse Silberg in the TV series The New Adventures of Robin Hood (1997). She was portrayed by Lynda Bellingham in the BBC series Robin Hood. Most recently, she was portrayed by Eileen Atkins in Robin Hood (2010).
In the 1964 film Becket, Eleanor is briefly played by Pamela Brown opposite Peter O'Toole, in his first performance as a young Henry II.
In the 1968 film The Lion in Winter, Eleanor is played by Katharine Hepburn, who won the third of her four Academy Awards for Best Actress for her portrayal, and Henry again is portrayed by O'Toole. The film is about the difficult relationship between them and the struggle of their three sons Richard, Geoffrey, and John for their father's favour and the succession. In the 2003 television film version, Eleanor was played by Glenn Close alongside Patrick Stewart as Henry.
She was portrayed by Mary Clare in the silent film Becket (1923), by Prudence Hyman in Richard the Lionheart (1962), and twice by Jane Lapotaire in the BBC TV drama series The Devil's Crown (1978) and again in Mike Walker's BBC Radio 4 series Plantagenet (2010). In the 2014 film Richard the Lionheart: Rebellion, Eleanor is played by Debbie Rochon.
Her life was portrayed on BBC Radio 4 in the drama series Eleanor Rising, with Rose Basista as Eleanor and Joel MacCormack as King Louis. The first series of five 15-minute episodes was broadcast in November 2020, a second series in April 2021, and a third series of two 60-minute episodes in September 2022.
Eleanor and Rosamund Clifford, as well as Henry II and Rosamund's father, appear in Gaetano Donizetti's opera Rosmonda d'Inghilterra (libretto by Felice Romani), which was premiered in Florence, at the Teatro Pergola, in 1834.
Eleanor of Aquitaine is thought to be the queen of England mentioned in the poem "Were diu werlt alle min," used as the tenth movement of Carl Orff's famous cantata, Carmina Burana.
Flower and Hawk is a monodrama for soprano and orchestra by the American composer Carlisle Floyd that premiered in 1972. In it, the soprano (Eleanor of Aquitaine) relives memories of her time as queen and, at the end of the monodrama, hears the bells that toll for Henry II's death and, in turn, her freedom.
Queen Elanor's Confession, or Queen Eleanor's Confession, is Child Ballad 156. Although the figures are intended as Eleanor of Aquitaine, Henry II of England, and William Marshal, the story is entirely an invention.
In the 2019 video game expansion Civilization VI: Gathering Storm, Eleanor is a playable leader for the English and French civilizations.
"title": "Queen of England (1154-1189)"
},
{
"paragraph_id": 50,
"text": "King Philip II of France claimed that certain properties in Normandy belonged to his half-sister Margaret, widow of the young Henry, but Henry insisted that they had once belonged to Eleanor and would revert to her upon her son's death. For this reason Henry summoned Eleanor to Normandy in the late summer of 1183. She stayed in Normandy for six months. This was the beginning of a period of greater freedom for the still-supervised Eleanor. Eleanor went back to England probably early in 1184. Over the next few years Eleanor often travelled with her husband and was sometimes associated with him in the government of the realm, but still had a custodian so that she was not free.",
"title": "Queen of England (1154-1189)"
},
{
"paragraph_id": 51,
"text": "Upon the death of her husband Henry II on 6 July 1189, Richard I was the undisputed heir. One of his first acts as king was to send William Marshal to England with orders to release Eleanor from prison; he found upon his arrival that her custodians had already released her. Eleanor rode to Westminster and received the oaths of fealty from many lords and prelates on behalf of the King. She ruled England in Richard's name, signing herself \"Eleanor, by the grace of God, Queen of England\". On 13 August 1189, Richard sailed from Barfleur to Portsmouth and was received with enthusiasm. Between 1190 and 1194, King Richard was absent from England, engaged in the Third Crusade from 1190 to 1192, and then held in captivity by Henry VI, Holy Roman Emperor. During Richard's absence, royal authority in England was represented by a Council of Regency in conjunction with a succession of chief justiciars—William de Longchamp (1190–1191), Walter de Coutances (1191–1193), and Hubert Walter. Although Eleanor held no formal office in England during this period, she arrived in England in the company of Coutances in June 1191, and for the remainder of Richard's absence, she exercised a considerable degree of influence over the affairs of England as well as the conduct of Prince John. Eleanor played a key role in raising the ransom demanded from England by Henry VI and in the negotiations with the Holy Roman Emperor that eventually secured Richard's release. Evidence of the influence she wielded can also be found within the numerous letters she wrote to Pope Celestine III regarding Richard's captivity. Her letter dated 1193, presents her strong expressions of personal suffering as a result of Richard's captivity and informs the Pope that in her grief she is \"wasted away by sorrow\".",
"title": "Widowhood (1189-1204)"
},
{
"paragraph_id": 52,
"text": "Eleanor survived Richard, who died in 1199 and lived well into the reign of her youngest son, King John. That year, under the terms of a truce between King Philip II and John, it was agreed that Philip's 12-year-old heir-apparent Louis would be married to one of John's nieces, daughters of his sister Eleanor of England, queen of Castile. John instructed his mother to travel to Castile to select one of the princesses. Now 77, Eleanor set out from Poitiers. Just outside Poitiers she was ambushed and held captive by Hugh IX of Lusignan, whose lands had been sold to Henry II by his forebears. Eleanor secured her freedom by agreeing to his demands. She continued south, crossed the Pyrenees, and travelled through the kingdoms of Navarre and Castile, arriving in Castile before the end of January 1200.",
"title": "Widowhood (1189-1204)"
},
{
"paragraph_id": 53,
"text": "Eleanor's daughter, Queen Eleanor of Castile, had two remaining unmarried daughters, Urraca and Blanche. Eleanor selected the younger daughter, Blanche. She stayed for two months at the Castilian court, then late in March journeyed with granddaughter Blanche back across the Pyrenees. She celebrated Easter in Bordeaux, where the famous warrior Mercadier came to her court. It was decided that he would escort the queen and princess north. \"On the second day in Easter week, he was slain in the city by a man-at-arms in the service of Brandin\", a rival mercenary captain. This tragedy was too much for the elderly queen, who was fatigued and unable to continue to Normandy. She and Blanche rode in easy stages to the valley of the Loire, and she entrusted Blanche to the archbishop of Bordeaux, who took over as her escort. The exhausted Eleanor went to Fontevraud, where she remained. In early summer, Eleanor was ill, and John visited her at Fontevraud.",
"title": "Widowhood (1189-1204)"
},
{
"paragraph_id": 54,
"text": "Eleanor was again unwell in early 1201. When war broke out between John and Philip, Eleanor declared her support for John and set out from Fontevraud to her capital Poitiers to prevent her grandson Arthur I, Duke of Brittany, posthumous son of Eleanor's son Geoffrey and John's rival for the English throne, from taking control. Arthur learned of her whereabouts and besieged her in the castle of Mirebeau. As soon as John heard of this, he marched south, overcame the besiegers, and captured the 15-year-old Arthur, and probably his sister Eleanor, Fair Maid of Brittany, whom Eleanor had raised with Richard. Eleanor then returned to Fontevraud where she took the veil as a nun.",
"title": "Widowhood (1189-1204)"
},
{
"paragraph_id": 55,
"text": "Eleanor died in 1204 and was entombed in Fontevraud Abbey next to her husband Henry and her son Richard. Her tomb effigy shows her reading a Bible and is decorated with representations of magnificent jewellery; such effigies were rare, and Eleanor's is one of the finest of the few that survive from this period. However, during the French Revolution the abbey of Fontevraud was sacked and the tombs were disturbed and vandalised – consequently the bones of Eleanor, Henry, Richard, Joanna and Isabella of Angoulême were exhumed and scattered, never to be recovered. By the time of her death she had outlived all of her children except for King John of England, who died in 1216, and Queen Eleanor of Castile.",
"title": "Widowhood (1189-1204)"
},
{
"paragraph_id": 56,
"text": "Contemporary sources praise Eleanor's beauty. Even in an era when ladies of the nobility were excessively praised, their praise of her was undoubtedly sincere. When she was young, she was described as perpulchra—more than beautiful. When she was around 30, Bernard de Ventadour, a noted troubadour, called her \"gracious, lovely, the embodiment of charm\", extolling her \"lovely eyes and noble countenance\" and declaring that she was \"one meet to crown the state of any king\". William of Newburgh emphasised the charms of her person, and even in her old age Richard of Devizes described her as beautiful, while Matthew Paris, writing in the 13th century, recalled her \"admirable beauty\".",
"title": "Appearance"
},
{
"paragraph_id": 57,
"text": "In spite of all these words of praise, no one left a more detailed description of Eleanor; the colour of her hair and eyes, for example, are unknown. The effigy on her tomb shows a tall and large-boned woman with brown skin, though this may not be an accurate representation. Her seal of c. 1152 shows a woman with a slender figure, but this is probably an impersonal image.",
"title": "Appearance"
},
{
"paragraph_id": 58,
"text": "",
"title": "Popular culture"
},
{
"paragraph_id": 59,
"text": "Judy Chicago's artistic installation The Dinner Party features a place setting for Eleanor, and she was portrayed by Frederick Sandys in his 1858 painting, Queen Eleanor.",
"title": "Popular culture"
},
{
"paragraph_id": 60,
"text": "Henry and Eleanor are the main characters in James Goldman's 1966 play The Lion in Winter, which was made into a film in 1968 starring Peter O'Toole as Henry and Katharine Hepburn in the role of Eleanor, for which she won the Academy Award for Best Actress and the BAFTA Award for Best Actress in a Leading Role and was nominated for the Golden Globe Award for Best Actress – Motion Picture Drama.",
"title": "Popular culture"
},
{
"paragraph_id": 61,
"text": "Jean Plaidy's novel The Courts of Love, fifth in the 'Queens of England' series, is a fictionalised autobiography of Eleanor of Aquitaine.",
"title": "Popular culture"
},
{
"paragraph_id": 62,
"text": "Norah Lofts wrote a fictionalized biography of her, entitled in various editions Queen in Waiting or Eleanor the Queen, and including some romanticized episodes—starting off with the young Eleanor planning to elope with a young knight, who is killed out of hand by her guardian, in order to facilitate her marriage to the King's son.",
"title": "Popular culture"
},
{
"paragraph_id": 63,
"text": "The character Queen Elinor appears in William Shakespeare's The Life and Death of King John, with other members of the family. On television, she has been portrayed in this play by Una Venning in the BBC Sunday Night Theatre version (1952) and by Mary Morris in the BBC Shakespeare version (1984).",
"title": "Popular culture"
},
{
"paragraph_id": 64,
"text": "Eleanor features in the novel Via Crucis (1899) by F. Marion Crawford.",
"title": "Popular culture"
},
{
"paragraph_id": 65,
"text": "Eleanor serves as an important allegorical figure in Ezra Pound's early Cantos.",
"title": "Popular culture"
},
{
"paragraph_id": 66,
"text": "In Sharon Kay Penman's Plantagenet novels, she figures prominently in When Christ and His Saints Slept, Time and Chance, and Devil's Brood, and also appears in Lionheart and A King's Ransom, both of which focus on the reign of her son, Richard, as king of England. Eleanor also appears briefly in the first novel of Penman's Welsh trilogy, Here Be Dragons. In Penman's historical mysteries, Eleanor, as Richard's regent, sends squire Justin de Quincy on various missions, often an investigation of a situation involving Prince John. The four published mysteries are the Queen's Man, Cruel as the Grave, Dragon's Lair, and Prince of Darkness.",
"title": "Popular culture"
},
{
"paragraph_id": 67,
"text": "Eleanor is the subject of A Proud Taste for Scarlet and Miniver, a children's novel by E. L. Konigsburg.",
"title": "Popular culture"
},
{
"paragraph_id": 68,
"text": "Historical fiction author Elizabeth Chadwick wrote a three-volume series about Eleanor: The Summer Queen (2013), The Winter Crown (2014), and The Autumn Throne (2016).",
"title": "Popular culture"
},
{
"paragraph_id": 69,
"text": "Historical fiction author Ariana Franklin features Eleanor prominently in her novel The Serpent's Tale (2008) and the queen appears again as a character in subsequent novel A Murderous Procession (2010).",
"title": "Popular culture"
},
{
"paragraph_id": 70,
"text": "In The Merry Adventures of Robin Hood, Howard Pyle, retelling the ballad Robin Hood and Queen Katherine, made the queen Queen Eleanor to fit historically with the rest of the work.",
"title": "Popular culture"
},
{
"paragraph_id": 71,
"text": "She has also been introduced in The Royal Diaries series in the book Crown Jewel of Aquitaine by Kristiana Gregory.",
"title": "Popular culture"
},
{
"paragraph_id": 72,
"text": "She is a supporting character in Matrix by Lauren Groff.",
"title": "Popular culture"
},
{
"paragraph_id": 73,
"text": "",
"title": "Popular culture"
},
{
"paragraph_id": 74,
"text": "Eleanor has featured in a number of screen versions of the Ivanhoe and Robin Hood stories. She has been played by Martita Hunt in The Story of Robin Hood and His Merrie Men (1952), Jill Esmond in the British TV adventure series The Adventures of Robin Hood (1955–1960), Phyllis Neilson-Terry in the British TV adventure series Ivanhoe (1958), Yvonne Mitchell in the BBC TV drama series The Legend of Robin Hood (1975), Siân Phillips in the TV series Ivanhoe (1997), and Tusse Silberg in the TV series The New Adventures of Robin Hood (1997). She was portrayed by Lynda Bellingham in the BBC series Robin Hood. Most recently, she was portrayed by Eileen Atkins in Robin Hood (2010).",
"title": "Popular culture"
},
{
"paragraph_id": 75,
"text": "In the 1964 film Becket, Eleanor is briefly played by Pamela Brown to Peter O'Toole's first performance as a young Henry II.",
"title": "Popular culture"
},
{
"paragraph_id": 76,
"text": "In the 1968 film The Lion in Winter, Eleanor is played by Katharine Hepburn, who won the third of her four Academy Awards for Best Actress for her portrayal, and Henry again is portrayed by O'Toole. The film is about the difficult relationship between them and the struggle of their three sons Richard, Geoffrey, and John for their father's favour and the succession. In the 2003 television film version, Eleanor was played by Glenn Close alongside Patrick Stewart as Henry.",
"title": "Popular culture"
},
{
"paragraph_id": 77,
"text": "She was portrayed by Mary Clare in the silent film Becket (1923), by Prudence Hyman in Richard the Lionheart (1962), and twice by Jane Lapotaire in the BBC TV drama series The Devil's Crown (1978) and again in Mike Walker's BBC Radio 4 series Plantagenet (2010). In the 2014 film Richard the Lionheart: Rebellion, Eleanor is played by Debbie Rochon.",
"title": "Popular culture"
},
{
"paragraph_id": 78,
"text": "Her life was portrayed on BBC Radio 4 in the drama series Eleanor Rising, with Rose Basista as Eleanor and Joel MacCormack as King Louis. The first series of five 15-minute episodes was broadcast in November 2020, a second series in April 2021, and a third series of two 60 minute episodes in September 2022.",
"title": "Popular culture"
},
{
"paragraph_id": 79,
"text": "Eleanor and Rosamund Clifford, as well as Henry II and Rosamund's father, appear in Gaetano Donizetti's opera Rosmonda d'Inghilterra (libretto by Felice Romani), which was premiered in Florence, at the Teatro Pergola, in 1834.",
"title": "Popular culture"
},
{
"paragraph_id": 80,
"text": "Eleanor of Aquitaine is thought to be the queen of England mentioned in the poem \"Were diu werlt alle min,\" used as the tenth movement of Carl Orff's famous cantata, Carmina Burana.",
"title": "Popular culture"
},
{
"paragraph_id": 81,
"text": "Flower and Hawk is a monodrama for soprano and orchestra, written by American composer, Carlisle Floyd that premiered in 1972, in which the soprano (Eleanor of Aquitaine) relives past memories of her time as queen, and at the end of the monodrama, hears the bells that toll for Henry II's death, and in turn, her freedom.",
"title": "Popular culture"
},
{
"paragraph_id": 82,
"text": "Queen Elanor's Confession, or Queen Eleanor's Confession, is Child Ballad 156. Although the figures are intended as Eleanor of Aquitaine, Henry II of England, and William Marshal, the story is an entire invention.",
"title": "Popular culture"
},
{
"paragraph_id": 83,
"text": "In the 2019 video game expansion Civilization VI: Gathering Storm, Eleanor is a playable leader for the English and French civilizations.",
"title": "Popular culture"
},
{
"paragraph_id": 84,
"text": "",
"title": "Citations"
},
{
"paragraph_id": 85,
"text": "",
"title": "Citations"
}
]
| Eleanor of Aquitaine was Duchess of Aquitaine in her own right from 1137 to 1204, Queen of France from 1137 to 1152 as the wife of King Louis VII, and Queen of England from 1154 to 1189 as the wife of King Henry II. As the heiress of the House of Poitiers, which controlled much of southwestern France, she was one of the wealthiest and most powerful women in Western Europe during the High Middle Ages. Militarily, she was a key leading figure in the Second Crusade, and in a revolt in favour of her son. Culturally, she was a patron of poets such as Wace, Benoît de Sainte-Maure, and Bernart de Ventadorn, and of the arts of the High Middle Ages. Eleanor was the eldest child of William X, Duke of Aquitaine, and Aénor de Châtellerault. She became duchess upon her father's death in April 1137, and three months later she married Louis, son of her guardian King Louis VI of France. Shortly afterwards, Louis VI died and Eleanor's husband ascended the throne, making Eleanor queen consort. The couple had two daughters, Marie and Alix. Eleanor sought an annulment of her marriage, but her request was rejected by Pope Eugene III. Eventually, Louis agreed to an annulment, as fifteen years of marriage had not produced a son. The marriage was annulled on 21 March 1152 on the grounds of consanguinity within the fourth degree. Their daughters were declared legitimate, custody was awarded to Louis, and Eleanor's lands were restored to her. As soon as the annulment was granted, Eleanor became engaged to her third cousin Henry, Duke of Normandy. The couple married on Whitsun, 18 May 1152 in Poitiers. Eleanor was crowned queen of England at Westminster Abbey in 1154, when Henry acceded to the throne. Henry and Eleanor had five sons and three daughters, but eventually became estranged. Henry imprisoned her in 1173 for supporting the revolt of their eldest son, Henry the Young King, against him. She was not released until 6 July 1189, when her husband died and their third son, Richard I, ascended the throne. As queen dowager, Eleanor acted as regent while Richard went on the Third Crusade. She lived well into the reign of her youngest son, John. | 2001-10-23T20:57:49Z | 2023-12-31T23:43:05Z | [
"Template:Harvp",
"Template:ODNBsub",
"Template:More citations needed",
"Template:Lang-oc",
"Template:IPA-oc",
"Template:Nbsp",
"Template:Multiple image",
"Template:JSTOR",
"Template:Authority control",
"Template:More citations needed section",
"Template:Cite AV media",
"Template:English consort",
"Template:S-aft",
"Template:S-end",
"Template:S-vac",
"Template:Use dmy dates",
"Template:Infobox royalty",
"Template:S-start",
"Template:S-hou",
"Template:S-reg",
"Template:S-ttl",
"Template:Poitou Counts",
"Template:Reflist",
"Template:Cbignore",
"Template:Link note",
"Template:Cite encyclopedia",
"Template:Refend",
"Template:NPG name",
"Template:S-roy",
"Template:Circa",
"Template:Efn",
"Template:Notelist",
"Template:Cite web",
"Template:Cite EB1911",
"Template:S-bef",
"Template:Short description",
"Template:Sfnp",
"Template:Sfn",
"Template:Full citation needed",
"Template:Cite journal",
"Template:French consort",
"Template:Commons category",
"Template:Use British English",
"Template:Story",
"Template:Clear",
"Template:Cite book",
"Template:Refbegin",
"Template:Google books"
]
| https://en.wikipedia.org/wiki/Eleanor_of_Aquitaine |
9,963 | Epistle to Philemon | The Epistle to Philemon is one of the books of the Christian New Testament. It is a prison letter, authored by Paul the Apostle (the opening verse also mentions Timothy), to Philemon, a leader in the Colossian church. It deals with the themes of forgiveness and reconciliation. Paul does not identify himself as an apostle with authority, but as "a prisoner of Jesus Christ", calling Timothy "our brother", and addressing Philemon as "fellow labourer" and "brother" (Philemon 1:1; 1:7; 1:20). Onesimus, a slave that had departed from his master Philemon, was returning with this epistle wherein Paul asked Philemon to receive him as a "brother beloved" (Philemon 1:9–17).
Philemon was a wealthy Christian, possibly a bishop of the house church that met in his home (Philemon 1:1–2) in Colossae. This letter is now generally regarded as one of the undisputed works of Paul. It is the shortest of Paul's extant letters, consisting of only 335 words in the Greek text.
The Epistle to Philemon was composed around AD 57–62 by Paul while in prison at Caesarea Maritima (early date) or more likely from Rome (later date) in conjunction with the composition of Colossians.
The Epistle to Philemon is attributed to the apostle Paul, and this attribution has rarely been questioned by scholars. Along with six others, it is numbered among the "undisputed letters", which are widely considered to be authentically Pauline. The main challenge to the letter's authenticity came from a group of German scholars in the nineteenth century known as the Tübingen School. Their leader, Ferdinand Christian Baur, only accepted four New Testament epistles as genuinely written by Paul: Romans, 1 and 2 Corinthians and Galatians. Commenting on Philemon, Baur described the subject matter as "so very singular as to arouse our suspicions," and concluded that it is perhaps a "Christian romance serving to convey a genuine Christian idea." This view is now largely considered to be outdated and finds no support in modern scholarship.
The opening verse of the salutation also names Timothy alongside Paul. This, however, does not mean that Timothy was the epistle's co-author. Rather, Paul regularly mentions others in the address if they have a particular connection with the recipient. In this case, Timothy may have encountered Philemon while accompanying Paul in his work in Ephesus.
According to the majority interpretation, Paul wrote this letter on behalf of Onesimus, a runaway slave who had wronged his owner Philemon. The details of the offence are unstated, although it is often assumed that Onesimus had fled after stealing money, as Paul states in verse 18 that if Onesimus owes anything, Philemon should charge this to Paul's account. Sometime after leaving, Onesimus came into contact with Paul, although again the details are unclear. He may have been arrested and imprisoned alongside Paul. Alternatively, he may have previously heard Paul's name (as his owner was a Christian) and so travelled to him for help. After meeting Paul, Onesimus became a Christian believer. An affection grew between them, and Paul would have been glad to keep Onesimus with him. However, he considered it better to send him back to Philemon with an accompanying letter, which aimed to effect reconciliation between them as Christian brothers. The preservation of the letter suggests that Paul's request was granted.
Onesimus' status as a runaway slave was challenged by Allen Dwight Callahan in an article published in the Harvard Theological Review and in a later commentary. Callahan argues that, beyond verse 16, "nothing in the text conclusively indicates that Onesimus was ever the chattel of the letter's chief addressee. Moreover, the expectations fostered by the traditional fugitive slave hypothesis go unrealized in the letter. Modern commentators, even those committed to the prevailing interpretation, have tacitly admitted as much." Callahan argues that the earliest commentators on this work – the homily of Origen and the Anti-Marcion Preface – are silent about Onesimus' possible servile status, and traces the origins of this interpretation to John Chrysostom, who proposed it in his Homiliae in epistolam ad Philemonem, during his ministry in Antioch, circa 386–398. In place of the traditional interpretation, Callahan suggests that Onesimus and Philemon are brothers both by blood and religion, but who have become estranged, and the intent of this letter was to reconcile the two men. Ben Witherington III has challenged Callahan's interpretation as a misreading of Paul's rhetoric. Further, Margaret M. Mitchell has demonstrated that a number of writers before Chrysostom either argue or assume that Onesimus was a runaway slave, including Athanasius, Basil of Caesarea and Ambrosiaster.
The only extant information about Onesimus apart from this letter is found in Paul's epistle to the Colossians 4:7–9, where Onesimus is called "a faithful and beloved brother":
All my state shall Tychicus declare unto you, who is a beloved brother, and a faithful minister and fellow servant in the Lord: Whom I have sent unto you for the same purpose, that he might know your estate, and comfort your hearts; With Onesimus, a faithful and beloved brother, who is one of you. They shall make known unto you all things which are done here.
The letter is addressed to Philemon, Apphia and Archippus, and the church in Philemon's house. Philemon is described as a "fellow worker" of Paul. It is generally assumed that he lived in Colossae; in the letter to the Colossians, Onesimus (the slave who fled from Philemon) and Archippus (whom Paul greets in the letter to Philemon) are described as members of the church there. Philemon may have converted to Christianity through Paul's ministry, possibly in Ephesus. Apphia in the salutation is probably Philemon's wife. Some have speculated that Archippus, described by Paul as a "fellow soldier", is the son of Philemon and Apphia.
The Scottish Pastor John Knox proposed that Onesimus' owner was in fact Archippus, and the letter was addressed to him rather than Philemon. In this reconstruction, Philemon would receive the letter first and then encourage Archippus to release Onesimus so that he could work alongside Paul. This view, however, has not found widespread support. In particular, Knox's view has been challenged on the basis of the opening verses. According to O'Brien, the fact that Philemon's name is mentioned first, together with the use of the phrase "in your house" in verse 2, makes it unlikely that Archippus was the primary addressee. Knox further argued that the letter was intended to be read aloud in the Colossian church in order to put pressure on Archippus. A number of commentators, however, see this view as contradicting the tone of the letter. J. B. Lightfoot, for example, wrote: "The tact and delicacy of the Apostle's pleading for Onesimus would be nullified at one stroke by the demand for publication."
The opening salutation follows a typical pattern found in other Pauline letters. Paul first introduces himself, with a self-designation as a "prisoner of Jesus Christ," which in this case refers to a physical imprisonment. He also mentions his associate Timothy, as a valued colleague who was presumably known to the recipient. As well as addressing the letter to Philemon, Paul sends greetings to Apphia, Archippus and the church that meets in Philemon's house. Apphia is often presumed to be Philemon's wife and Archippus, a "fellow labourer", is sometimes suggested to be their son. Paul concludes his salutation with a prayerful wish for grace and peace.
Before addressing the main topic of the letter, Paul continues with a paragraph of thanksgiving and intercession. This serves to prepare the ground for Paul's central request. He gives thanks to God for Philemon's love and faith and prays for his faith to be effective. He concludes this paragraph by describing the joy and comfort he has received from knowing how Philemon has shown love towards the Christians in Colossae.
As a background to his specific plea for Onesimus, Paul clarifies his intentions and circumstances. Although he has the boldness to command Philemon to do what would be right in the circumstances, he prefers to base his appeal on his knowledge of Philemon's love and generosity. He also describes the affection he has for Onesimus and the transformation that has taken place with Onesimus's conversion to the Christian faith. Where Onesimus was "useless", now he is "useful" – a wordplay, as Onesimus means "useful". Paul indicates that he would have been glad to keep Onesimus with him, but recognised that it was right to send him back. Paul's specific request is for Philemon to welcome Onesimus as he would welcome Paul, namely as a Christian brother. He offers to pay for any debt created by Onesimus' departure and expresses his desire that Philemon might refresh his heart in Christ.
In the final section of the letter, Paul describes his confidence that Philemon would do even more than he had requested, perhaps indicating his desire for Onesimus to return to work alongside him. He also mentions his wish to visit and asks Philemon to prepare a guest room. Paul sends greetings from five of his co-workers and concludes the letter with a benediction.
Paul most often uses the language of slavery versus freedom as a metaphor in his writings. Slavery was common at the time and appears as a theme in the book of Philemon; it was most commonly found in households. This letter seemingly alleviated the suffering of some slaves, because Paul placed a pastoral focus on the issue.
Although it is a main theme, Paul does not label slavery as negative or positive. Rather than deal with the morality of slavery directly, he undermines the foundation of slavery, which is the dehumanization of other human beings. Some scholars, though not Paul, hold that in that era it was unthinkable even to question ending slavery: slavery was so ingrained in society that the "abolitionist would have been at the same time an insurrectionist, and the political effects of such a movement would have been unthinkable." Paul viewed slavery as an example of a dehumanizing human institution and believed that all human institutions were about to fade away. This may be because Paul expected that Jesus would return soon; he viewed his present world as something that was swiftly passing away. This is a part of Pauline Christianity and theology.
When it comes to Onesimus and his circumstance as a slave, Paul felt that Onesimus should return to Philemon but not as a slave; rather, under a bond of familial love. Paul also was not suggesting that Onesimus be punished, in spite of the fact that Roman law allowed the owner of a runaway slave nearly unlimited privileges of punishment, even execution. This is a concern of Paul and a reason he is writing to Philemon, asking that Philemon accept Onesimus back in a bond of friendship, forgiveness, and reconciliation. Paul is undermining this example of a human institution which dehumanizes people. We see this in many of Paul's other epistles, including his letters to the Corinthians, delivering the message of unity with others and unity with Christ – a change of identity. As written in Sacra Pagina Philippians and Philemon, the move from slave to freedman has to do with a shift in “standing under the lordship of Jesus Christ”. So in short, Onesimus’ honor and obedience is not claimed by Philemon, but by Christ.
Verses 13–14 suggest that Paul wants Philemon to send Onesimus back to Paul (possibly freeing him for the purpose). Marshall, Travis and Paul write, "Paul hoped that it might be possible for [Onesimus] to spend some time with him as a missionary colleague... If that is not a request for Onesimus to join Paul’s circle, I do not know what more would need to be said".
Sarah Ruden, in her Paul Among the People (2010), argues that in the letter to Philemon, Paul created the Western conception of the individual human being, "unconditionally precious to God and therefore entitled to the consideration of other human beings." Before Paul, Ruden argues, a slave was considered subhuman, and entitled to no more consideration than an animal.
Diarmaid MacCulloch, in his A History of Christianity, described the epistle as "a Christian foundation document in the justification of slavery".
In order to better appreciate the Book of Philemon, it is necessary to be aware of the situation of the early Christian community in the Roman Empire and of the economic system of Classical Antiquity, which was based on slavery. According to the Epistle to Diognetus: "For the Christians are distinguished from other men neither by country, nor language, nor the customs which they observe... They are in the flesh, but they do not live after the flesh. They pass their days on earth, but they are citizens of heaven. They obey the prescribed laws, and at the same time surpass the laws by their lives."
Pope Benedict XVI refers to this letter in his encyclical letter, Spe salvi, highlighting the power of Christianity as power of the transformation of society:
Those who, as far as their civil status is concerned, stand in relation to one another as masters and slaves, inasmuch as they are members of the one Church have become brothers and sisters—this is how Christians addressed one another... Even if external structures remained unaltered, this changed society from within. When the Letter to the Hebrews says that Christians here on earth do not have a permanent homeland, but seek one which lies in the future (cf. Heb 11:13–16; Phil 3:20), this does not mean for one moment that they live only for the future: present society is recognized by Christians as an exile; they belong to a new society which is the goal of their common pilgrimage and which is anticipated in the course of that pilgrimage. | [
{
"paragraph_id": 0,
"text": "The Epistle to Philemon is one of the books of the Christian New Testament. It is a prison letter, authored by Paul the Apostle (the opening verse also mentions Timothy), to Philemon, a leader in the Colossian church. It deals with the themes of forgiveness and reconciliation. Paul does not identify himself as an apostle with authority, but as \"a prisoner of Jesus Christ\", calling Timothy \"our brother\", and addressing Philemon as \"fellow labourer\" and \"brother\" (Philemon 1:1; 1:7; 1:20). Onesimus, a slave that had departed from his master Philemon, was returning with this epistle wherein Paul asked Philemon to receive him as a \"brother beloved\" (Philemon 1:9–17).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Philemon was a wealthy Christian, possibly a bishop of the house church that met in his home (Philemon 1:1–2) in Colossae. This letter is now generally regarded as one of the undisputed works of Paul. It is the shortest of Paul's extant letters, consisting of only 335 words in the Greek text.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Epistle to Philemon was composed around AD 57–62 by Paul while in prison at Caesarea Maritima (early date) or more likely from Rome (later date) in conjunction with the composition of Colossians.",
"title": "Composition"
},
{
"paragraph_id": 3,
"text": "The Epistle to Philemon is attributed to the apostle Paul, and this attribution has rarely been questioned by scholars. Along with six others, it is numbered among the \"undisputed letters\", which are widely considered to be authentically Pauline. The main challenge to the letter's authenticity came from a group of German scholars in the nineteenth century known as the Tübingen School. Their leader, Ferdinand Christian Baur, only accepted four New Testament epistles as genuinely written by Paul: Romans, 1 and 2 Corinthians and Galatians. Commenting on Philemon, Baur described the subject matter as \"so very singular as to arouse our suspicions,\" and concluded that it is perhaps a \"Christian romance serving to convey a genuine Christian idea.\" This view is now largely considered to be outdated and finds no support in modern scholarship.",
"title": "Composition"
},
{
"paragraph_id": 4,
"text": "The opening verse of the salutation also names Timothy alongside Paul. This, however, does not mean that Timothy was the epistle's co-author. Rather, Paul regularly mentions others in the address if they have a particular connection with the recipient. In this case, Timothy may have encountered Philemon while accompanying Paul in his work in Ephesus.",
"title": "Composition"
},
{
"paragraph_id": 5,
"text": "According to the majority interpretation, Paul wrote this letter on behalf of Onesimus, a runaway slave who had wronged his owner Philemon. The details of the offence are unstated, although it is often assumed that Onesimus had fled after stealing money, as Paul states in verse 18 that if Onesimus owes anything, Philemon should charge this to Paul's account. Sometime after leaving, Onesimus came into contact with Paul, although again the details are unclear. He may have been arrested and imprisoned alongside Paul. Alternatively, he may have previously heard Paul's name (as his owner was a Christian) and so travelled to him for help. After meeting Paul, Onesimus became a Christian believer. An affection grew between them, and Paul would have been glad to keep Onesimus with him. However, he considered it better to send him back to Philemon with an accompanying letter, which aimed to effect reconciliation between them as Christian brothers. The preservation of the letter suggests that Paul's request was granted.",
"title": "Composition"
},
{
"paragraph_id": 6,
"text": "Onesimus' status as a runaway slave was challenged by Allen Dwight Callahan in an article published in the Harvard Theological Review and in a later commentary. Callahan argues that, beyond verse 16, \"nothing in the text conclusively indicates that Onesimus was ever the chattel of the letter's chief addressee. Moreover, the expectations fostered by the traditional fugitive slave hypothesis go unrealized in the letter. Modern commentators, even those committed to the prevailing interpretation, have tacitly admitted as much.\" Callahan argues that the earliest commentators on this work – the homily of Origen and the Anti-Marcion Preface – are silent about Onesimus' possible servile status, and traces the origins of this interpretation to John Chrysostom, who proposed it in his Homiliae in epistolam ad Philemonem, during his ministry in Antioch, circa 386–398. In place of the traditional interpretation, Callahan suggests that Onesimus and Philemon are brothers both by blood and religion, but who have become estranged, and the intent of this letter was to reconcile the two men. Ben Witherington III has challenged Callahan's interpretation as a misreading of Paul's rhetoric. Further, Margaret M. Mitchell has demonstrated that a number of writers before Chrysostom either argue or assume that Onesimus was a runaway slave, including Athanasius, Basil of Caesarea and Ambrosiaster.",
"title": "Composition"
},
{
"paragraph_id": 7,
"text": "The only extant information about Onesimus apart from this letter is found in Paul's epistle to the Colossians 4:7–9, where Onesimus is called \"a faithful and beloved brother\":",
"title": "Composition"
},
{
"paragraph_id": 8,
"text": "All my state shall Tychicus declare unto you, who is a beloved brother, and a faithful minister and fellow servant in the Lord: Whom I have sent unto you for the same purpose, that he might know your estate, and comfort your hearts; With Onesimus, a faithful and beloved brother, who is one of you. They shall make known unto you all things which are done here.",
"title": "Composition"
},
{
"paragraph_id": 9,
"text": "The letter is addressed to Philemon, Apphia and Archippus, and the church in Philemon's house. Philemon is described as a \"fellow worker\" of Paul. It is generally assumed that he lived in Colossae; in the letter to the Colossians, Onesimus (the slave who fled from Philemon) and Archippus (whom Paul greets in the letter to Philemon) are described as members of the church there. Philemon may have converted to Christianity through Paul's ministry, possibly in Ephesus. Apphia in the salutation is probably Philemon's wife. Some have speculated that Archippus, described by Paul as a \"fellow soldier\", is the son of Philemon and Apphia.",
"title": "Composition"
},
{
"paragraph_id": 10,
"text": "The Scottish Pastor John Knox proposed that Onesimus' owner was in fact Archippus, and the letter was addressed to him rather than Philemon. In this reconstruction, Philemon would receive the letter first and then encourage Archippus to release Onesimus so that he could work alongside Paul. This view, however, has not found widespread support. In particular, Knox's view has been challenged on the basis of the opening verses. According to O'Brien, the fact that Philemon's name is mentioned first, together with the use of the phrase \"in your house\" in verse 2, makes it unlikely that Archippus was the primary addressee. Knox further argued that the letter was intended to be read aloud in the Colossian church in order to put pressure on Archippus. A number of commentators, however, see this view as contradicting the tone of the letter. J. B. Lightfoot, for example, wrote: \"The tact and delicacy of the Apostle's pleading for Onesimus would be nullified at one stroke by the demand for publication.\"",
"title": "Composition"
},
{
"paragraph_id": 11,
"text": "The opening salutation follows a typical pattern found in other Pauline letters. Paul first introduces himself, with a self-designation as a \"prisoner of Jesus Christ,\" which in this case refers to a physical imprisonment. He also mentions his associate Timothy, as a valued colleague who was presumably known to the recipient. As well as addressing the letter to Philemon, Paul sends greetings to Apphia, Archippus and the church that meets in Philemon's house. Apphia is often presumed to be Philemon's wife and Archippus, a \"fellow labourer\", is sometimes suggested to be their son. Paul concludes his salutation with a prayerful wish for grace and peace.",
"title": "Content"
},
{
"paragraph_id": 12,
"text": "Before addressing the main topic of the letter, Paul continues with a paragraph of thanksgiving and intercession. This serves to prepare the ground for Paul's central request. He gives thanks to God for Philemon's love and faith and prays for his faith to be effective. He concludes this paragraph by describing the joy and comfort he has received from knowing how Philemon has shown love towards the Christians in Colossae.",
"title": "Content"
},
{
"paragraph_id": 13,
"text": "As a background to his specific plea for Onesimus, Paul clarifies his intentions and circumstances. Although he has the boldness to command Philemon to do what would be right in the circumstances, he prefers to base his appeal on his knowledge of Philemon's love and generosity. He also describes the affection he has for Onesimus and the transformation that has taken place with Onesimus's conversion to the Christian faith. Where Onesimus was \"useless\", now he is \"useful\" – a wordplay, as Onesimus means \"useful\". Paul indicates that he would have been glad to keep Onesimus with him, but recognised that it was right to send him back. Paul's specific request is for Philemon to welcome Onesimus as he would welcome Paul, namely as a Christian brother. He offers to pay for any debt created by Onesimus' departure and expresses his desire that Philemon might refresh his heart in Christ.",
"title": "Content"
},
{
"paragraph_id": 14,
"text": "In the final section of the letter, Paul describes his confidence that Philemon would do even more than he had requested, perhaps indicating his desire for Onesimus to return to work alongside him. He also mentions his wish to visit and asks Philemon to prepare a guest room. Paul sends greetings from five of his co-workers and concludes the letter with a benediction.",
"title": "Content"
},
{
"paragraph_id": 15,
"text": "Paul uses slavery vs. freedom language more often in his writings as a metaphor. At this time slavery was common, and can be seen as a theme in the book of Philemon. Slavery was most commonly found in households. This letter, seemingly, provided alleviation of suffering of some slaves due to the fact that Paul placed pastoral focus on the issue.",
"title": "Themes"
},
{
"paragraph_id": 16,
"text": "Although it is a main theme, Paul does not label slavery as negative or positive. Rather than deal with the morality of slavery directly, he undermines the foundation of slavery which is dehumanization of other human beings. Some scholars, but not Paul, see it as unthinkable in the times to even question ending slavery. Because slavery was so ingrained into society that the “abolitionist would have been at the same time an insurrectionist, and the political effects of such a movement would have been unthinkable.\" Paul viewed slavery as an example of a human institution of dehumanization, and believed that all human institutions were about to fade away. This may be because Paul had the perspective that Jesus would return soon. Paul viewed his present world as something that was swiftly passing away. This is a part of Pauline Christianity and theology.",
"title": "Themes"
},
{
"paragraph_id": 17,
"text": "When it comes to Onesimus and his circumstance as a slave, Paul felt that Onesimus should return to Philemon but not as a slave; rather, under a bond of familial love. Paul also was not suggesting that Onesimus be punished, in spite of the fact that Roman law allowed the owner of a runaway slave nearly unlimited privileges of punishment, even execution. This is a concern of Paul and a reason he is writing to Philemon, asking that Philemon accept Onesimus back in a bond of friendship, forgiveness, and reconciliation. Paul is undermining this example of a human institution which dehumanizes people. We see this in many of Paul's other epistles, including his letters to the Corinthians, delivering the message of unity with others and unity with Christ – a change of identity. As written in Sacra Pagina Philippians and Philemon, the move from slave to freedman has to do with a shift in “standing under the lordship of Jesus Christ”. So in short, Onesimus’ honor and obedience is not claimed by Philemon, but by Christ.",
"title": "Themes"
},
{
"paragraph_id": 18,
"text": "Verses 13–14 suggest that Paul wants Philemon to send Onesimus back to Paul (possibly freeing him for the purpose). Marshall, Travis and Paul write, \"Paul hoped that it might be possible for [Onesimus] to spend some time with him as a missionary colleague... If that is not a request for Onesimus to join Paul’s circle, I do not know what more would need to be said\".",
"title": "Themes"
},
{
"paragraph_id": 19,
"text": "Sarah Ruden, in her Paul Among the People (2010), argues that in the letter to Philemon, Paul created the Western conception of the individual human being, \"unconditionally precious to God and therefore entitled to the consideration of other human beings.\" Before Paul, Ruden argues, a slave was considered subhuman, and entitled to no more consideration than an animal.",
"title": "Significance"
},
{
"paragraph_id": 20,
"text": "Diarmaid MacCulloch, in his A History of Christianity, described the epistle as \"a Christian foundation document in the justification of slavery\".",
"title": "Significance"
},
{
"paragraph_id": 21,
"text": "In order to better appreciate the Book of Philemon, it is necessary to be aware of the situation of the early Christian community in the Roman Empire; and the economic system of Classical Antiquity based on slavery. According to the Epistle to Diognetus: For the Christians are distinguished from other men neither by country, nor language, nor the customs which they observe... They are in the flesh, but they do not live after the flesh. They pass their days on earth, but they are citizens of heaven. They obey the prescribed laws, and at the same time surpass the laws by their lives.",
"title": "Significance"
},
{
"paragraph_id": 22,
"text": "Pope Benedict XVI refers to this letter in his encyclical letter, Spe salvi, highlighting the power of Christianity as power of the transformation of society:",
"title": "Significance"
},
{
"paragraph_id": 23,
"text": "Those who, as far as their civil status is concerned, stand in relation to one an other as masters and slaves, inasmuch as they are members of the one Church have become brothers and sisters—this is how Christians addressed one another... Even if external structures remained unaltered, this changed society from within. When the Letter to the Hebrews says that Christians here on earth do not have a permanent homeland, but seek one which lies in the future (cf. Heb 11:13–16; Phil 3:20), this does not mean for one moment that they live only for the future: present society is recognized by Christians as an exile; they belong to a new society which is the goal of their common pilgrimage and which is anticipated in the course of that pilgrimage.",
"title": "Significance"
}
]
| The Epistle to Philemon is one of the books of the Christian New Testament. It is a prison letter, authored by Paul the Apostle, to Philemon, a leader in the Colossian church. It deals with the themes of forgiveness and reconciliation. Paul does not identify himself as an apostle with authority, but as "a prisoner of Jesus Christ", calling Timothy "our brother", and addressing Philemon as "fellow labourer" and "brother". Onesimus, a slave that had departed from his master Philemon, was returning with this epistle wherein Paul asked Philemon to receive him as a "brother beloved". Philemon was a wealthy Christian, possibly a bishop of the house church that met in his home in Colossae. This letter is now generally regarded as one of the undisputed works of Paul. It is the shortest of Paul's extant letters, consisting of only 335 words in the Greek text. | 2001-10-22T05:45:08Z | 2023-12-11T07:16:44Z | [
"Template:Efn",
"Template:Quote",
"Template:Cite book",
"Template:Cite web",
"Template:Wikisource-inline",
"Template:S-bef",
"Template:Citation needed",
"Template:ISBN",
"Template:S-hou",
"Template:Librivox book",
"Template:S-end",
"Template:Authority control",
"Template:Reflist",
"Template:Books of the Bible",
"Template:S-ttl",
"Template:Cn",
"Template:EBD",
"Template:Paul",
"Template:S-aft",
"Template:Books of the New Testament",
"Template:Short description",
"Template:Blockquote",
"Template:Notelist",
"Template:Cite journal",
"Template:S-start",
"Template:Bibleref2",
"Template:Sfn"
]
| https://en.wikipedia.org/wiki/Epistle_to_Philemon |
9,966 | Elliptic-curve cryptography | Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys compared to non-EC cryptography (based on plain Galois fields) to provide equivalent security.
Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization.
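As an illustration of the digital-signature use just mentioned, the following minimal sketch signs and verifies a message with ECDSA. It assumes the third-party pyca/cryptography package (a recent version) and an arbitrary choice of the NIST P-256 curve; it is a sketch, not a complete protocol. A corresponding key-agreement sketch appears further below, alongside the discussion of protocols adapted to elliptic curves.
```python
# Minimal ECDSA sketch using the third-party pyca/cryptography package.
# Assumptions: the package is installed (recent version), the NIST P-256
# curve is an acceptable choice, and the message is illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # signer's key pair
public_key = private_key.public_key()

message = b"message to be authenticated"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify() returns None on success and raises InvalidSignature otherwise.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
```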
The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms entered wide use from 2004 to 2005.
In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4 has ten recommended finite fields: five prime fields, for primes p of 192, 224, 256, 384, and 521 bits, and five binary fields GF(2^m), for m equal to 163, 233, 283, 409, and 571.
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.
At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information. National Institute of Standards and Technology (NIST) has endorsed elliptic curve cryptography in its Suite B set of recommended algorithms, specifically elliptic-curve Diffie–Hellman (ECDH) for key exchange and Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signature. The NSA allows their use for protecting information classified up to top secret with 384-bit keys.
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption.
Elliptic curve cryptography is used successfully in numerous popular protocols, such as Transport Layer Security and Bitcoin.
In 2013, The New York Times stated that Dual Elliptic Curve Deterministic Random Bit Generator (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups.
Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns about quantum computing attacks on ECC.
While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However, RSA Laboratories and Daniel J. Bernstein have argued that the US government elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents.
For the purposes of this article, an elliptic curve is a plane curve over a finite field (rather than the real numbers) which consists of the points satisfying the equation y^2 = x^3 + ax + b,
along with a distinguished point at infinity, denoted ∞. The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation would be somewhat more complicated.
This set of points, together with the group operation of elliptic curves, is an abelian group, with the point at infinity as an identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety.
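The group law can be made concrete with a small worked example. The following is a minimal Python sketch of the affine chord-and-tangent addition rule on a toy curve y^2 = x^3 + 2x + 3 over F_97; the prime and coefficients are assumed purely for illustration and are far too small for real use.

```python
# Affine group law on a toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p.
# The parameters below are illustrative only; real deployments use standardised curves.
p, a, b = 97, 2, 3           # tiny prime field and curve coefficients (assumed for illustration)
O = None                     # the point at infinity, used as the identity element

def is_on_curve(P):
    if P is O:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def point_add(P, Q):
    """Add two points using the chord-and-tangent rule."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                         # P + (-P) = identity
    if P == Q:                           # doubling: slope of the tangent
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                # addition: slope of the chord
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

G = (3, 6)                               # a point on the toy curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
assert is_on_curve(G)
assert is_on_curve(point_add(G, G))      # the group is closed under addition
assert point_add(G, O) == G              # the point at infinity is the identity
```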
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems based their security on the assumption that it is difficult to factor a large integer composed of two or more large prime factors. For later elliptic-curve-based protocols, the base assumption is that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible: this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original point and product point. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem.
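The asymmetry can be illustrated on the same toy curve: computing Q = kG by double-and-add takes only about log2(k) group operations, while recovering k from G and Q by exhaustive search takes on the order of the group size. The sketch below assumes the same illustrative toy parameters as above.

```python
# Double-and-add point multiplication k*P, plus a naive ECDLP search, on a toy curve.
# Toy parameters only (the same illustrative curve as in the previous sketch).
p, a, b, O = 97, 2, 3, None

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    s = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
         else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Compute k*P by scanning the bits of k (double-and-add)."""
    R = O
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

G = (3, 6)
Q = scalar_mult(29, G)        # easy direction: ~log2(k) doublings and additions

# Hard direction (ECDLP): given G and Q, recover k. Brute force works only
# because the toy group is tiny; for a curve with ~2^256 points it is infeasible.
k, R = 0, O
while R != Q:
    R = point_add(R, G)
    k += 1
print("recovered k =", k)     # matches 29 modulo the order of G
```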
The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smaller key size, reducing storage and transmission requirements. For example, a 256-bit elliptic curve public key should provide comparable security to a 3072-bit RSA public key.
Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the multiplicative group (Z_p)^× with an elliptic curve group; examples include elliptic-curve Diffie–Hellman (ECDH) key agreement, the Elliptic Curve Integrated Encryption Scheme (ECIES), the Elliptic Curve Digital Signature Algorithm (ECDSA), EdDSA and ECMQV. The translation for Diffie–Hellman is sketched below.
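As an illustration of this translation, the following sketch places classic Diffie–Hellman in (Z_p)^× next to its elliptic-curve analogue; all group parameters are toy values assumed for illustration, not standardised ones.

```python
# Classic Diffie-Hellman in (Z_p)^x versus the elliptic-curve analogue (ECDH).
# All parameters are toy values for illustration; real systems use standardised groups.
import secrets

# --- classic DH: the group is (Z_p)^x, the one-way map is modular exponentiation ---
p_dh, g = 2087, 5
a_priv = secrets.randbelow(p_dh - 2) + 1
b_priv = secrets.randbelow(p_dh - 2) + 1
A, B = pow(g, a_priv, p_dh), pow(g, b_priv, p_dh)
assert pow(B, a_priv, p_dh) == pow(A, b_priv, p_dh)     # shared secret agrees

# --- ECDH: the group is E(F_p), the one-way map is point multiplication ---
p, a, b, O = 97, 2, 3, None

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return O
    s = ((3*x1*x1 + a) * pow(2*y1, -1, p) if P == Q
         else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (s*s - x1 - x2) % p
    return (x3, (s*(x1 - x3) - y1) % p)

def scalar_mult(k, P):
    R = O
    while k:
        if k & 1: R = point_add(R, P)
        P = point_add(P, P); k >>= 1
    return R

G = (3, 6)                       # base point on the toy curve
d_a, d_b = 11, 23                # the two parties' private scalars (toy values)
Q_a, Q_b = scalar_mult(d_a, G), scalar_mult(d_b, G)
assert scalar_mult(d_a, Q_b) == scalar_mult(d_b, Q_a)   # shared point agrees
```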
Some common implementation considerations include:
To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The size of the field used is typically either prime (and denoted as p) or a power of two (2^m); the latter case is called the binary case, and it also requires the choice of an irreducible reduction polynomial f of degree m that fixes the field representation. Thus the field is defined by p in the prime case and by the pair m and f in the binary case. The elliptic curve is defined by the constants a and b used in its defining equation. Finally, the cyclic subgroup is defined by its generator (a.k.a. base point) G. For cryptographic application, the order of G, that is the smallest positive number n such that nG = O (the point at infinity of the curve, and the identity element), is normally prime. Since n is the size of a subgroup of E(F_p), it follows from Lagrange's theorem that the number h = |E(F_p)| / n is an integer. In cryptographic applications this number h, called the cofactor, must be small (h ≤ 4) and, preferably, h = 1. To summarize: in the prime case the domain parameters are (p, a, b, G, n, h); in the binary case they are (m, f, a, b, G, n, h).
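On a toy curve the quantities n and h can simply be computed by exhaustive enumeration, which makes the definitions above concrete; the sketch below assumes the same illustrative parameters as earlier and is only feasible because the field is tiny.

```python
# Deriving toy domain parameters (p, a, b, G, n, h) by brute force.
# Only feasible because the field is tiny; for real curves these values come from standards.
p, a, b, O = 97, 2, 3, None

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return O
    s = ((3*x1*x1 + a) * pow(2*y1, -1, p) if P == Q
         else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (s*s - x1 - x2) % p
    return (x3, (s*(x1 - x3) - y1) % p)

# |E(F_p)|: every (x, y) satisfying the curve equation, plus the point at infinity.
curve_order = 1 + sum(1 for x in range(p) for y in range(p)
                      if (y*y - (x*x*x + a*x + b)) % p == 0)

# n: the order of the base point G, i.e. the smallest n > 0 with n*G = O.
G = (3, 6)
n, R = 1, G
while R is not O:
    R = point_add(R, G)
    n += 1

h = curve_order // n            # the cofactor; standards require h to be small (ideally 1)
print(f"p={p} a={a} b={b} G={G} n={n} h={h}")
```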
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use.
The generation of domain parameters is not usually done by each participant because it involves computing the number of points on a curve, which is time-consuming and troublesome to implement. As a result, several standards bodies have published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the unique object identifier defined in the standard documents.
SECG test vectors are also available. NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name.
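In practice a named curve is usually selected through a cryptographic library rather than by supplying raw domain parameters. The following sketch assumes the third-party pyca/cryptography package (a recent version, where no backend argument is needed); the choice of NIST P-256 (secp256r1) is only an example.

```python
# Selecting a standard named curve by name, assuming the pyca/cryptography package
# (pip install cryptography). NIST P-256 / secp256r1 is picked purely as an example.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

private_key = ec.generate_private_key(ec.SECP256R1())   # domain parameters come from the named curve
public_key = private_key.public_key()

# ECDSA signature over the named curve
message = b"example message"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))  # raises InvalidSignature on failure
```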
If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with an appropriate (i.e., nearly prime) number of points.
Several classes of curves are weak and should be avoided:
Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.) need O(√n) steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over F_q, where q ≈ 2^256. This can be contrasted with finite-field cryptography (e.g., DSA), which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA), which requires a 3072-bit value of n, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited.
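The square-root behaviour can be demonstrated directly: the following Python sketch implements baby-step giant-step on the toy curve used earlier, recovering a discrete logarithm in roughly √n group operations; the parameters are illustrative only.

```python
# Baby-step giant-step discrete log on the toy curve: ~sqrt(n) work instead of ~n.
# Toy parameters for illustration; on a curve with ~2^256 points, sqrt(n) is ~2^128 steps.
from math import isqrt

p, a, b, O = 97, 2, 3, None

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return O
    s = ((3*x1*x1 + a) * pow(2*y1, -1, p) if P == Q
         else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (s*s - x1 - x2) % p
    return (x3, (s*(x1 - x3) - y1) % p)

def point_neg(P):
    return O if P is O else (P[0], (-P[1]) % p)

def bsgs(G, Q, n):
    """Find some k with k*G == Q, 0 <= k < n, in about sqrt(n) additions."""
    m = isqrt(n) + 1
    baby = {}                                   # j*G  ->  j   ("baby steps")
    R = O
    for j in range(m):
        baby[R] = j
        R = point_add(R, G)
    mG_neg = point_neg(R)                       # R is now m*G, so this is -m*G
    S = Q
    for i in range(m):                          # "giant steps": Q - i*m*G
        if S in baby:
            return i * m + baby[S]
        S = point_add(S, mG_neg)
    return None

G = (3, 6)
n_bound = 97 + 1 + 20                           # Hasse bound on the toy group order
Q = O
for _ in range(41):
    Q = point_add(Q, G)                         # Q = 41*G, computed naively to set up the instance
print("recovered k =", bsgs(G, Q, n_bound))     # prints a scalar k satisfying k*G == Q
```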
The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously. The binary field case was broken in April 2004 using 2600 computers over 17 months.
A current project aims to break the ECC2K-130 challenge by Certicom, using a wide range of different hardware: CPUs, GPUs, and FPGAs.
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in F_q but also an inversion operation. The inversion (for given x ∈ F_q, find y ∈ F_q such that xy = 1) is one to two orders of magnitude slower than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems were proposed: in the projective system each point is represented by three coordinates (X, Y, Z) using the relation x = X/Z, y = Y/Z; in the Jacobian system a point is also represented with three coordinates (X, Y, Z), but a different relation is used: x = X/Z^2, y = Y/Z^3; in the López–Dahab system the relation is x = X/Z, y = Y/Z^2; in the modified Jacobian system the same relations are used but four coordinates are stored and used for calculations (X, Y, Z, aZ^4); and in the Chudnovsky Jacobian system five coordinates are used (X, Y, Z, Z^2, Z^3). Note that there may be different naming conventions; for example, the IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.
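A minimal sketch of the idea, assuming the toy curve used earlier: doubling in Jacobian coordinates uses only field multiplications and additions, and the single inversion is postponed to the final conversion back to affine coordinates.

```python
# Doubling a point in Jacobian coordinates (x = X/Z^2, y = Y/Z^3): no field inversion
# is needed per doubling; one inversion converts back to affine at the very end.
# Toy curve parameters, for illustration only (b is part of the curve but unused here).
p, a, b = 97, 2, 3

def jacobian_double(X, Y, Z):
    S = (4 * X * Y * Y) % p
    M = (3 * X * X + a * Z ** 4) % p
    X3 = (M * M - 2 * S) % p
    Y3 = (M * (S - X3) - 8 * Y ** 4) % p
    Z3 = (2 * Y * Z) % p
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    z_inv = pow(Z, -1, p)                 # the single, deferred inversion
    return (X * z_inv ** 2) % p, (Y * z_inv ** 3) % p

# Double G = (3, 6): embed as (X, Y, Z) = (3, 6, 1), double, then convert back.
X3, Y3, Z3 = jacobian_double(3, 6, 1)
print(to_affine(X3, Y3, Z3))              # (80, 10), the same result as affine doubling
```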
Reduction modulo p (which is needed for addition and multiplication) can be executed much faster if the prime p is a pseudo-Mersenne prime, that is p ≈ 2^d; for example, p = 2^521 − 1 or p = 2^256 − 2^32 − 2^9 − 2^8 − 2^7 − 2^6 − 2^4 − 1. Compared to Barrett reduction, there can be an order of magnitude speed-up. The speed-up here is practical rather than asymptotic, and derives from the fact that reduction modulo a number close to a power of two can be performed efficiently by computers operating on binary numbers with bitwise operations.
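The folding trick can be sketched in a few lines: because 2^k ≡ c (mod p) for p = 2^k − c, the bits above position k can be folded back into the low bits with shifts and a multiplication by the small constant c. The sketch below uses the P-256 prime quoted above; the helper function name is illustrative.

```python
# Fast reduction modulo a pseudo-Mersenne prime p = 2^k - c, using 2^k ≡ c (mod p):
# split the value at bit k and fold the high part back in, repeating until it fits.
k = 256
c = 2**32 + 2**9 + 2**8 + 2**7 + 2**6 + 2**4 + 1     # so p is the P-256 prime quoted above
p = (1 << k) - c

def reduce_pseudo_mersenne(n):
    while n >> k:                 # while there are bits at position k or above
        hi, lo = n >> k, n & ((1 << k) - 1)
        n = lo + hi * c           # replace hi*2^k by hi*c, since 2^k ≡ c (mod p)
    return n if n < p else n - p  # at most one final conditional subtraction is needed

import random
x = random.getrandbits(510)       # e.g. the size of a product of two field elements
assert reduce_pseudo_mersenne(x) == x % p
```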
The curves over F_p with pseudo-Mersenne p are recommended by NIST. Yet another advantage of the NIST curves is that they use a = −3, which improves addition in Jacobian coordinates.
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast.
Unlike most other DLP systems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P = Q) and general addition (P ≠ Q) depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods (note that this does not increase computation time). Alternatively one can use an Edwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation. Another concern for ECC-systems is the danger of fault attacks, especially when running on smart cards.
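One widely used fixed-pattern approach is the Montgomery ladder, in which every scalar bit triggers exactly one addition and one doubling. The sketch below, on the toy curve used earlier, shows the structure only; a Python implementation of this kind is not actually constant-time and is not a production countermeasure.

```python
# Montgomery ladder: one point addition and one doubling per scalar bit, regardless
# of whether the bit is 0 or 1, so the sequence of group operations is key-independent.
# Toy curve parameters for illustration; this sketch is not constant time in practice.
p, a, b, O = 97, 2, 3, None

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return O
    s = ((3*x1*x1 + a) * pow(2*y1, -1, p) if P == Q
         else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (s*s - x1 - x2) % p
    return (x3, (s*(x1 - x3) - y1) % p)

def ladder_mult(k, P, bits=8):
    """Compute k*P with a fixed add+double pattern per bit (most significant bit first)."""
    R0, R1 = O, P
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            R0, R1 = point_add(R0, R1), point_add(R1, R1)
        else:
            R1, R0 = point_add(R0, R1), point_add(R0, R0)
    return R0

def naive_mult(k, P):
    R = O
    for _ in range(k):
        R = point_add(R, P)
    return R

G = (3, 6)
assert ladder_mult(29, G) == naive_mult(29, G)
```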
Cryptographic experts have expressed concerns that the National Security Agency has inserted a kleptographic backdoor into at least one elliptic curve-based pseudo random generator. Internal memos leaked by former NSA contractor Edward Snowden suggest that the NSA put a backdoor in the Dual EC DRBG standard. One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output.
The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor.
Shor's algorithm can be used to break elliptic curve cryptography by computing discrete logarithms on a hypothetical quantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates. For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security). In comparison, using Shor's algorithm to break the RSA algorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away.
Supersingular Isogeny Diffie–Hellman Key Exchange (SIDH) was claimed to provide a post-quantum secure form of elliptic curve cryptography by using isogenies to implement Diffie–Hellman key exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems. However, new classical attacks undermined the security of this protocol.
In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy."
When ECC is used in virtual machines, an attacker may use an invalid curve to obtain a complete ECDH private key.
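The standard defence is to validate a received public point before using it: check that it is not the point at infinity, that its coordinates are field elements, that it satisfies the agreed curve equation, and, for a full check, that n times the point is the identity. The sketch below assumes the toy curve used earlier; the function name is illustrative.

```python
# Minimal public-point validation against invalid-curve attacks: a received point that
# is not on the agreed curve must be rejected before any ECDH computation uses it.
# Toy curve parameters for illustration; the function name is hypothetical.
p, a, b = 97, 2, 3

def validate_public_point(Q, n=None, point_mult=None):
    """Reject the point at infinity, out-of-range coordinates, and off-curve points."""
    if Q is None:                                   # the point at infinity is not a valid public key
        return False
    x, y = Q
    if not (0 <= x < p and 0 <= y < p):             # coordinates must be field elements
        return False
    if (y * y - (x * x * x + a * x + b)) % p != 0:  # must satisfy the agreed curve equation
        return False
    if n is not None and point_mult is not None:    # optional full check: n*Q must be the identity
        return point_mult(n, Q) is None
    return True

print(validate_public_point((3, 6)))     # True: a genuine point on the toy curve
print(validate_public_point((3, 7)))     # False: off the curve, so it must be rejected
```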
Alternative representations of elliptic curves include Hessian curves, Edwards curves, twisted Edwards curves, Jacobi quartics, and Montgomery curves.
EDM

EDM or E-DM may refer to:
Eightfold path (policy analysis)

The eightfold path is a method of policy analysis assembled by Eugene Bardach, a professor at the Goldman School of Public Policy at the University of California, Berkeley. It is outlined in his book A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving, which is now in its sixth edition. The book is commonly referenced in public policy and public administration scholarship.
Bardach's procedure is as follows:
1. Define the problem
2. Assemble the evidence
3. Construct the alternatives
4. Select the criteria
5. Project the outcomes
6. Confront the trade-offs
7. Decide
8. Tell your story
A possible ninth step, based on Bardach's own writing, might be "repeat steps 1–8 as necessary."
The method is named after the Buddhist Noble Eightfold Path, but otherwise has no relation to it.
The New York taxi driver test is a technique for evaluating the effectiveness of communication between policy makers and analysts. Bardach contends that policy explanations must be clear and down-to-earth enough for a taxi driver to be able to understand the premise during a trip through city streets. The New York taxi driver is presumed to be both a non-specialist and a tough customer.
Eden Project

The Eden Project (Cornish: Edenva) is a visitor attraction in Cornwall, England. The project is located in a reclaimed china clay pit, 2 km (1.2 mi) from the town of St Blazey and 5 km (3 mi) from the larger town of St Austell.
The complex is dominated by two huge enclosures consisting of adjoining domes that house thousands of plant species, and each enclosure emulates a natural biome. The biomes consist of hundreds of hexagonal and pentagonal ethylene tetrafluoroethylene (ETFE) inflated cells supported by geodesic tubular steel domes. The larger of the two biomes simulates a rainforest environment (and is the largest indoor rainforest in the world) and the second, a Mediterranean environment. The attraction also has an outside botanical garden which is home to many plants and wildlife native to Cornwall and the UK in general; it also has many plants that provide an important and interesting backstory, for example, those with a prehistoric heritage.
There are plans to build an Eden Project North in the seaside town of Morecambe, Lancashire, with a focus on the marine environment.
The clay pit in which the project is sited was in use for over 160 years. In 1981, the pit was used by the BBC as the planet surface of Magrathea in the TV series the Hitchhiker's Guide to the Galaxy. By the mid-1990s the pit was all but exhausted.
The initial idea for the project dates back to 1996, with construction beginning in 1998. The work was hampered by torrential rain in the first few months of the project, and parts of the pit flooded as it sits 15 m (49 ft) below the water table.
The first part of the Eden Project, the visitor centre, opened to the public in May 2000. The first plants began arriving in September of that year, and the full site opened on 17 March 2001.
To counter criticism from environmental groups, the Eden Project committed to investigate a rail link to the site. The rail link was never built, and car parking on the site is still funded from revenue generated from general admission ticket sales.
The Eden Project was used as a filming location for the 2002 James Bond film Die Another Day. On 2 July 2005 The Eden Project hosted the "Africa Calling" concert of the Live 8 concert series. It has also provided some plants for the British Museum's Africa garden.
In 2005, the Project launched "A Time of Gifts" for the winter months, November to February. This features an ice rink covering the lake, with a small café-bar attached, as well as a Christmas market. Cornish choirs regularly perform in the biomes.
In 2007, the Eden Project campaigned unsuccessfully for £50 million in Big Lottery Fund money for a proposed desert biome. It received just 12.07% of the votes, the lowest for the four projects being considered. As part of the campaign, the Eden Project invited people all over Cornwall to try to break the world record for the biggest ever pub quiz as part of its campaign to bring £50 million of lottery funds to Cornwall.
In December 2009, much of the project, including both greenhouses, became available to navigate through Google Street View.
The Eden Trust revealed a trading loss of £1.3 million for 2012–13, on a turnover of £25.4 million. The Eden Project had posted a surplus of £136,000 for the previous year. In 2014 Eden accounts showed a surplus of £2 million.
The World Pasty Championships, an international competition to find the best Cornish pasties and other pasty-type savoury snacks, have been held at the Eden Project since 2012.
The Eden Project is said to have contributed over £1 billion to the Cornish economy. In 2016, Eden became home to Europe's second-largest redwood forest (after the Giants Grove at Birr Castle, Ireland) when forty saplings of coast redwoods, Sequoia sempervirens, which could live for 4,000 years and reach 115 metres in height, were planted there.
The Eden Project received 1,010,095 visitors in 2019.
In December 2020 the project was closed after heavy rain caused several landslips at the site. Managers at the site assessed the damage before announcing a reopening date on the company's website. The question of reopening was then overtaken by Covid lockdown measures in the UK, which kept the venue closed from early 2021, though it had reopened by May 2021 after remedial works had taken place. The site was used for an event during the 2021 G7 Summit, hosted by the United Kingdom.
The project was conceived by Tim Smit and Jonathan Ball, and designed by Grimshaw Architects and structural engineering firm Anthony Hunt Associates (now part of Sinclair Knight Merz). Davis Langdon carried out the project management, Sir Robert McAlpine and Alfred McAlpine did the construction, MERO jointly designed and built the biome steel structures, the ETFE pillows that build the façade were realized by Vector Foiltec, and Arup was the services engineer, economic consultant, environmental engineer and transportation engineer. Land Use Consultants led the masterplan and landscape design. The project took 2½ years to construct and opened to the public on 17 March 2001.
Once into the attraction, there is a meandering path with views of the two biomes, planted landscapes, including vegetable gardens, and sculptures that include a giant bee and previously The WEEE Man (removed in 2016), a towering figure made from old electrical appliances that was meant to represent the average electrical waste used by one person in a lifetime.
At the bottom of the pit are two covered biomes:
The Rainforest Biome covers 1.56 ha (3.9 acres) and measures 55 m (180 ft) high, 100 m (328 ft) wide, and 200 m (656 ft) long. It is used for tropical plants, such as fruiting banana plants, coffee, rubber, and giant bamboo, and is kept at a tropical temperature and moisture level.
The Mediterranean Biome covers 0.654 ha (1.6 acres) and measures 35 m (115 ft) high, 65 m (213 ft) wide, and 135 m (443 ft) long. It houses familiar warm temperate and arid plants such as olives and grape vines and various sculptures.
The Outdoor Gardens represent the temperate regions of the world with plants such as tea, lavender, hops, hemp, and sunflowers, as well as local plant species.
The covered biomes are constructed from a tubular steel (hex-tri-hex) with mostly hexagonal external cladding panels made from the thermoplastic ETFE. Glass was avoided due to its weight and potential dangers. The cladding panels themselves are created from several layers of thin UV-transparent ETFE film, which are sealed around their perimeter and inflated to create a large cushion. The resulting cushion acts as a thermal blanket to the structure. The ETFE material is resistant to most stains, which simply wash off in the rain. If required, cleaning can be performed by abseilers. Although the ETFE is susceptible to punctures, these can be easily fixed with ETFE tape. The structure is completely self-supporting, with no internal supports, and takes the form of a geodesic structure. The panels vary in size up to 9 m (29.5 ft) across, with the largest at the top of the structure.
The ETFE technology was supplied and installed by the firm Vector Foiltec, which is also responsible for ongoing maintenance of the cladding. The steel spaceframe and cladding package (with Vector Foiltec as ETFE subcontractor) was designed, supplied and installed by MERO (UK) PLC, who also jointly developed the overall scheme geometry with the architect, Nicholas Grimshaw & Partners.
The entire build project was managed by McAlpine Joint Venture.
The Core is the latest addition to the site and opened in September 2005. It provides the Eden Project with an education facility, incorporating classrooms and exhibition spaces designed to help communicate Eden's central message about the relationship between people and plants. Accordingly, the building has taken its inspiration from plants, most noticeable in the form of the soaring timber roof, which gives the building its distinctive shape.
Grimshaw developed the geometry of the copper-clad roof in collaboration with a sculptor, Peter Randall-Page, and Mike Purvis of structural engineers SKM Anthony Hunts. It is derived from phyllotaxis, which is the mathematical basis for nearly all plant growth; the "opposing spirals" found in many plants such as the seeds in a sunflower's head, pine cones, and pineapples. The copper was obtained from traceable sources, and the Eden Project is working with Rio Tinto Group to explore the possibility of encouraging further traceable supply routes for metals, which would enable users to avoid metals mined unethically. The services and acoustic, mechanical, and electrical engineering design was carried out by Buro Happold.
The Core is also home to art exhibitions throughout the year. A permanent installation entitled Seed, by Peter Randall-Page, occupies the anteroom. Seed is a large, 70 tonne egg-shaped stone installation standing some 13 feet (4.0 m) tall and displaying a complex pattern of protrusions that are based upon the geometric and mathematical principles that underlie plant growth.
The biomes provide diverse growing conditions, and many plants are on display.
The Eden Project includes environmental education focusing on the interdependence of plants and people; plants are labelled with their medicinal uses. The massive amounts of water required to create the humid conditions of the Tropical Biome, and to serve the toilet facilities, are all sanitised rain water that would otherwise collect at the bottom of the quarry. The only mains water used is for hand washing and for cooking. The complex also uses Green Tariff Electricity – the energy comes from one of the many wind turbines in Cornwall, which were among the first in Europe.
In December 2010 the Eden Project received permission to build a geothermal electricity plant which will generate approximately 4 MWe, enough to supply Eden and about 5,000 households. The project will involve geothermal heating as well as geothermal electricity. Cornwall Council and the European Union came up with the greater part of the £16.8m required to start the project. First a well will be sunk nearly 3 miles (4.5 km) into the granite crust underneath Eden.
Eden co-founder, Sir Tim Smit said, "Since we began, Eden has had a dream that the world should be powered by renewable energy. The sun can provide massive solar power and the wind has been harnessed by humankind for thousands of years, but because both are intermittent and battery technology cannot yet store all we need there is a gap. We believe the answer lies beneath our feet in the heat underground that can be accessed by drilling technology that pumps water towards the centre of the Earth and brings it back up superheated to provide us with heat and electricity".
Drilling began in May 2021, and it was expected the project would be completed by 2023.
In 2018, the Eden Project revealed its design for a new version of the project, located on the seafront in Morecambe, Lancashire. There will be biomes shaped like mussels and a focus on the marine environment. There will also be reimagined lidos, gardens, performance spaces, immersive experiences, and observatories.
Grimshaw are the architects for the project, which is expected to cost £80 million. The project is a partnership with the Lancashire Enterprise Partnership, Lancaster University, Lancashire County Council, and Lancaster City Council. In December 2018, the four local partners agreed to provide £1 million to develop the idea, which allowed the development of an outline planning application for the project. It is expected that there will be 500 jobs created and 8,000 visitors a day to the site.
Having been granted planning permission in January 2022 and with £50 million of levelling-up funding granted in January 2023, it is due to open in 2024 and predicted to benefit the North West economy by £200 million per year.
In May 2020, the Eden Project revealed plans to establish their first attraction in Scotland, and named Dundee as the proposed location. The city's Camperdown Park was widely touted to be the proposed location of the new attraction; however, in May 2021, it was announced that the Eden Project had chosen the site of the former gasworks in Dundee as the location. It was planned that the new development would result in 200 new jobs and "contribute £27m a year to the regional economy". The project is in partnership with Dundee City Council, the University of Dundee and the Northwood Charitable Trust.
In 2021, Eden Project announced that they would establish fourteen hectares of new wildflower habitat in areas across Dundee, including Morgan Academy and Caird Park.
In July 2023, new images were released depicting what the Dundee attraction would look like, accompanying the planning permission documents for the new attraction, which would be submitted by autumn 2023.
In 2020, Eastbourne Borough Council and the Eden Project announced a joint project to explore the viability of a new Eden site in the South Downs National Park.
In 2015, the Eden Project announced that it had reached an agreement to construct an Eden site in Qingdao, China. While the site had originally been slated to open by 2020, construction fell behind schedule due to the COVID-19 pandemic and the opening date was delayed to 2023. The new site is expected to focus on "water" and its central role in civilization and nature.
A planned Eden Project for the New Zealand city of Christchurch, to be called Eden Project New Zealand/Eden Project Aotearoa, is expected to be inaugurated in 2025. It is to be centred close to the Avon River, on a site largely razed as a result of the 2011 Christchurch Earthquake.
Since 2002, the Project has hosted a series of musical performances, called the Eden Sessions, usually held during the summer.
The 2024 sessions will be headlined by Fatboy Slim, Suede, Manic Street Preachers, The National, JLS, Crowded House, Rick Astley and Tom Grennan.
The Eden Project has appeared in various television shows and films such as the James Bond film Die Another Day, The Bad Education Movie, in the Netflix series The Last Bus, and in the CBeebies show Andy's Aquatic Adventure.
A weekly radio show called The Eden Radio Project is held every Thursday afternoon on Radio St Austell Bay.
On 18 November 2019, on the Trees A Crowd podcast, David Oakes interviewed the Eden Project's Head of Interpretation, Dr Jo Elworthy, about the site.
"title": "Environmental aspects"
},
{
"paragraph_id": 30,
"text": "In December 2010 the Eden Project received permission to build a geothermal electricity plant which will generate approx 4MWe, enough to supply Eden and about 5000 households. The project will involve geothermal heating as well as geothermal electricity. Cornwall Council and the European Union came up with the greater part of £16.8m required to start the project. First a well will be sunk nearly 3 miles (4.5 km) into the granite crust underneath Eden.",
"title": "Environmental aspects"
},
{
"paragraph_id": 31,
"text": "Eden co-founder, Sir Tim Smit said, \"Since we began, Eden has had a dream that the world should be powered by renewable energy. The sun can provide massive solar power and the wind has been harnessed by humankind for thousands of years, but because both are intermittent and battery technology cannot yet store all we need there is a gap. We believe the answer lies beneath our feet in the heat underground that can be accessed by drilling technology that pumps water towards the centre of the Earth and brings it back up superheated to provide us with heat and electricity\".",
"title": "Environmental aspects"
},
{
"paragraph_id": 32,
"text": "Drilling began in May 2021, and it was expected the project would be completed by 2023.",
"title": "Environmental aspects"
},
{
"paragraph_id": 33,
"text": "In 2018, the Eden Project revealed its design for a new version of the project, located on the seafront in Morecambe, Lancashire. There will be biomes shaped like mussels and a focus on the marine environment. There will also be reimagined lidos, gardens, performance spaces, immersive experiences, and observatories.",
"title": "Other projects"
},
{
"paragraph_id": 34,
"text": "Grimshaw are the architects for the project, which is expected to cost £80 million. The project is a partnership with the Lancashire Enterprise Partnership, Lancaster University, Lancashire County Council, and Lancaster City Council. In December 2018, the four local partners agreed to provide £1 million to develop the idea, which allowed the development of an outline planning application for the project. It is expected that there will be 500 jobs created and 8,000 visitors a day to the site.",
"title": "Other projects"
},
{
"paragraph_id": 35,
"text": "Having been granted planning permission in January 2022 and with £50 million of levelling-up funding granted in January 2023, it is due to open in 2024 and predicted to benefit the North West economy by £200 million per year.",
"title": "Other projects"
},
{
"paragraph_id": 36,
"text": "In May 2020, the Eden Project revealed plans to establish their first attraction in Scotland, and named Dundee as the proposed site of the location. The city's Camperdown Park was widely touted to be the proposed location of the new attraction however in May 2021, it was announced that the Eden Project had chosen the site of the former gasworks in Dundee as the location. It was planned that the new development would result in 200 new jobs and \"contribute £27m a year to the regional economy\". The project is in partnership with Dundee City Council, the University of Dundee and the Northwood Charitable Trust.",
"title": "Other projects"
},
{
"paragraph_id": 37,
"text": "In 2021, Eden Project announced that they would establish fourteen hectares of new wildflower habitat in areas across Dundee, including Morgan Academy and Caird Park.",
"title": "Other projects"
},
{
"paragraph_id": 38,
"text": "In July 2023, new images were released depicting what the Dundee attraction would look which accompanied the planning permission documents for the new attraction which would be submitted by autumn 2023.",
"title": "Other projects"
},
{
"paragraph_id": 39,
"text": "In 2020, Eastbourne Borough Council and the Eden Project announced a joint project to explore the viability of a new Eden site in the South Downs National Park.",
"title": "Other projects"
},
{
"paragraph_id": 40,
"text": "In 2015, the Eden Project announced that it had reached an agreement to construct an Eden site in Qingdao, China. While the site had originally been slated to open by 2020, construction fell behind schedule due to the COVID-19 pandemic and the opening date was delayed to 2023. The new site is expected to focus on \"water\" and its central role in civilization and nature.",
"title": "Other projects"
},
{
"paragraph_id": 41,
"text": "A planned Eden Project for the New Zealand city of Christchurch, to be called Eden Project New Zealand/Eden Project Aotearoa, is expected to be inaugurated in 2025. It is to be centred close to the Avon River, on a site largely razed as a result of the 2011 Christchurch Earthquake.",
"title": "Other projects"
},
{
"paragraph_id": 42,
"text": "Since 2002, the Project has hosted a series of musical performances, called the Eden Sessions, usually held during the summer.",
"title": "Eden Sessions"
},
{
"paragraph_id": 43,
"text": "The 2024 sessions will be headlined by Fatboy Slim, Suede, Manic Street Preachers, The National, JLS, Crowded House, Rick Astley and Tom Grennan.",
"title": "Eden Sessions"
},
{
"paragraph_id": 44,
"text": "The Eden Project has appeared in various television shows and films such as the James Bond film Die Another Day, The Bad Education Movie, in the Netflix series The Last Bus, and in the CBeebies show Andy's Aquatic Adventure.",
"title": "In the media"
},
{
"paragraph_id": 45,
"text": "A weekly radio show called The Eden Radio Project is held every Thursday afternoon on Radio St Austell Bay.",
"title": "In the media"
},
{
"paragraph_id": 46,
"text": "On 18 November 2019, on the Trees A Crowd podcast, David Oakes would interview the Eden Project's Head of Interpretation, Dr Jo Elworthy, about the site.",
"title": "In the media"
}
]
| The Eden Project is a visitor attraction in Cornwall, England. The project is located in a reclaimed china clay pit, 2 km (1.2 mi) from the town of St Blazey and 5 km (3 mi) from the larger town of St Austell. The complex is dominated by two huge enclosures consisting of adjoining domes that house thousands of plant species, and each enclosure emulates a natural biome. The biomes consist of hundreds of hexagonal and pentagonal ethylene tetrafluoroethylene (ETFE) inflated cells supported by geodesic tubular steel domes. The larger of the two biomes simulates a rainforest environment and the second, a Mediterranean environment. The attraction also has an outside botanical garden which is home to many plants and wildlife native to Cornwall and the UK in general; it also has many plants that provide an important and interesting backstory, for example, those with a prehistoric heritage. There are plans to build an Eden Project North in the seaside town of Morecambe, Lancashire, with a focus on the marine environment. | 2001-10-23T15:46:17Z | 2023-12-18T10:25:19Z | [
"Template:Use British English",
"Template:Lang-kw",
"Template:ISBN",
"Template:Short description",
"Template:Redirect",
"Template:Authority control",
"Template:Infobox building",
"Template:Cite news",
"Template:Reflist",
"Template:Official site",
"Template:Portal",
"Template:Div col end",
"Template:Div col",
"Template:Clear",
"Template:Cite web",
"Template:Citation",
"Template:OCLC",
"Template:Commons category",
"Template:Use dmy dates",
"Template:Convert"
]
| https://en.wikipedia.org/wiki/Eden_Project |
9,974 | European Commission | The European Commission (EC) is part of the executive of the European Union (EU), together with the European Council. It operates as a cabinet government, with 27 members of the Commission (directorial system, informally known as "Commissioners") headed by a President. It includes an administrative body of about 32,000 European civil servants. The Commission is divided into departments known as Directorates-General (DGs) that can be likened to departments or ministries each headed by a Director-General who is responsible to a Commissioner.
There is one member per member state, but members are bound by their oath of office to represent the general interest of the EU as a whole rather than their home state. The Commission President (currently Ursula von der Leyen) is proposed by the European Council (the 27 heads of state/governments) and elected by the European Parliament. The Council of the European Union then nominates the other members of the Commission in agreement with the nominated President, and the 27 members as a team are then subject to a vote of approval by the European Parliament. The current Commission is the Von der Leyen Commission, which took office in December 2019, following the European Parliament elections in May of the same year.
The European Commission derives from one of the five key institutions created in the supranational European Community system, following the proposal of Robert Schuman, French Foreign Minister, on 9 May 1950. Originating in 1951 as the High Authority in the European Coal and Steel Community, the commission has undergone numerous changes in power and composition under various presidents, involving three Communities.
The first Commission originated in 1951 as the nine-member "High Authority" under President Jean Monnet (see Monnet Authority). The High Authority was the supranational administrative executive of the new European Coal and Steel Community (ECSC). It took office first on 10 August 1952 in Luxembourg City. In 1958, the Treaties of Rome had established two new communities alongside the ECSC: the European Economic Community (EEC) and the European Atomic Energy Community (Euratom). However, their executives were called "Commissions" rather than "High Authorities". The reason for the change in name was the new relationship between the executives and the Council. Some states, such as France, expressed reservations over the power of the High Authority and wished to limit it by giving more power to the Council rather than the new executives.
Louis Armand led the first Commission of Euratom. Walter Hallstein led the first Commission of the EEC, holding the first formal meeting on 16 January 1958 at the Château of Val-Duchesse. It achieved agreement on a contentious cereal price accord, as well as making a positive impression upon third countries when it made its international debut at the Kennedy Round of General Agreement on Tariffs and Trade (GATT) negotiations. Hallstein notably began the consolidation of European law and started to have a notable impact on national legislation. Little heed was taken of his administration at first but, with help from the European Court of Justice, his Commission stamped its authority solidly enough to allow future Commissions to be taken more seriously. In 1965, however, accumulating differences between the French government of Charles de Gaulle and the other member states on various subjects (British entry, direct elections to Parliament, the Fouchet Plan and the budget) triggered the "empty chair" crisis, ostensibly over proposals for the Common Agricultural Policy. Although the institutional crisis was solved the following year, it cost Étienne Hirsch his presidency of Euratom and later Walter Hallstein the EEC presidency, despite his otherwise being viewed as the most 'dynamic' leader until Jacques Delors.
The three bodies, collectively named the European Executives, co-existed until 1 July 1967 when, under the Merger Treaty, they were combined into a single administration under President Jean Rey. Owing to the merger, the Rey Commission saw a temporary increase to 14 members—although subsequent Commissions were reduced back to nine, following the formula of one member for small states and two for larger states. The Rey Commission completed the Community's customs union in 1968 and campaigned for a more powerful, elected, European Parliament. Despite Rey being the first President of the combined communities, Hallstein is seen as the first President of the modern Commission.
The Malfatti and Mansholt Commissions followed with work on monetary co-operation and the first enlargement to the north in 1973. With that enlargement, the College of Commissioners membership increased to thirteen under the Ortoli Commission (the United Kingdom as a large member was granted two Commissioners), which dealt with the enlarged community during economic and international instability at that time. The external representation of the Community took a step forward when President Roy Jenkins, recruited to the presidency in January 1977 from his role as Home Secretary of the United Kingdom's Labour government, became the first President to attend a G8 summit on behalf of the Community. Following the Jenkins Commission, Gaston Thorn's Commission oversaw the Community's enlargement to the south, in addition to beginning work on the Single European Act.
The Commission headed by Jacques Delors was seen as giving the Community a sense of direction and dynamism. Delors and his College are also considered as the "founding fathers of the euro". The International Herald Tribune noted the work of Delors at the end of his second term in 1992: "Mr. Delors rescued the European Community from the doldrums. He arrived when Europessimism was at its worst. Although he was a little-known former French finance minister, he breathed life and hope into the EC and into the dispirited Brussels Commission. In his first term, from 1985 to 1988, he rallied Europe to the call of the single market, and when appointed to a second term he began urging Europeans toward the far more ambitious goals of economic, monetary, and political union".
The successor to Delors was Jacques Santer. As a result of a fraud and corruption scandal, the entire Santer Commission was forced by the Parliament to resign in 1999; a central role was played by Édith Cresson. These frauds were revealed by an internal auditor, Paul van Buitenen.
That was the first time a College of Commissioners had been forced to resign en masse, and represented a shift of power towards the Parliament. However, the Santer Commission did carry out work on the Treaty of Amsterdam and the euro. In response to the scandal, the European Anti-Fraud Office (OLAF) was created.
Following Santer, Romano Prodi took office. The Amsterdam Treaty had increased the commission's powers and Prodi was dubbed by the press as something akin to a Prime Minister. Powers were strengthened again; the Treaty of Nice, signed in 2001, gave the Presidents more power over the composition of the College of Commissioners.
José Manuel Barroso became president in 2004: the Parliament once again asserted itself in objecting to the proposed membership of the Barroso Commission. Owing to this opposition, Barroso was forced to reshuffle his College before taking office. The Barroso Commission was also the first full Commission since the enlargement in 2004 to 25 members; hence, the number of Commissioners at the end of the Prodi Commission had reached 30. As a result of the increase in the number of states, the Amsterdam Treaty triggered a reduction in the number of Commissioners to one per state, rather than two for the larger states.
Allegations of fraud and corruption were again raised in 2004 by former chief auditor Jules Muis. A Commission officer, Guido Strack, reported alleged fraud and abuses in his department in the years 2002–2004 to OLAF, and was fired as a result. In 2008, Paul van Buitenen (the former auditor known from the Santer Commission scandal) accused the European Anti-Fraud Office (OLAF) of a lack of independence and effectiveness.
Barroso's first Commission term expired on 31 October 2009. Under the Treaty of Nice, the first Commission to be appointed after the number of member states reached 27 would have to be reduced to "less than the number of Member States". The exact number of Commissioners was to be decided by a unanimous vote of the European Council, and membership would rotate equally between member states. Following the accession of Romania and Bulgaria in January 2007, this clause took effect for the next Commission. The Treaty of Lisbon, which came into force on 1 December 2009, mandated a reduction of the number of commissioners to two-thirds of member-states from 2014 unless the Council decided otherwise. Membership would rotate equally and no member state would have more than one Commissioner. However, the treaty was rejected by voters in Ireland in 2008 with one main concern being the loss of their Commissioner. Hence a guarantee given for a rerun of the vote was that the council would use its power to amend the number of Commissioners upwards. However, according to the treaties it still has to be fewer than the total number of members, thus it was proposed that the member state that does not get a Commissioner would get the post of High Representative – the so-called 26+1 formula. This guarantee (which may find its way into the next treaty amendment, probably in an accession treaty) contributed to the Irish approving the treaty in a second referendum in 2009.
Lisbon also combined the posts of European Commissioner for External Relations with the council's High Representative for the Common Foreign and Security Policy. This post, also a Vice-President of the Commission, would chair the Council of the European Union's foreign affairs meetings as well as handle the commission's external relations duties. The treaty further provides that the most recent European elections should be "taken into account" when appointing the President of the European Commission, and although candidates are still proposed by the European Council, the European Parliament "elects" candidates to the office, rather than "approves" them as under the Treaty of Nice.
The Barroso Commission is, in reaction to Euroscepticism, said to have toned down enforcement to increase integration.
In 2014, Jean-Claude Juncker became President of the European Commission.
Juncker appointed his previous campaign director and head of the transition team, Martin Selmayr, as his chief of cabinet. During the Juncker presidency Selmayr has been described as "the most powerful EU chief of staff ever."
In 2019, Ursula von der Leyen was appointed as President of the European Commission. She submitted the guidelines of her policy to the European Parliament on 16 July 2019, following her confirmation. She had not been considered a likely candidate (in general, the candidate is expected to be the lead candidate, or "Spitzenkandidat", of the European party that wins the European Parliament election). While the European People's Party had won the European Parliament election, it had performed worse than expected, and von der Leyen was nominated instead of Manfred Weber, its original candidate. On 9 September, the Council of the European Union declared a list of candidate-commissioners, put forward by the governments of each member state, which had to be officially approved by the Parliament.
The commission was set up from the start to act as an independent supranational authority separate from governments; it has been described as "the only body paid to think European". The members are proposed by their member state governments, one from each. However, they are bound to act independently – free from other influences such as those governments which appointed them. This is in contrast to the Council of the European Union, which represents governments, the European Parliament, which represents citizens, the Economic and Social Committee, which represents organised civil society, and the Committee of the Regions, which represents local and regional authorities.
Through Article 17 of the Treaty on European Union the commission has several responsibilities: to develop medium-term strategies; to draft legislation and arbitrate in the legislative process; to represent the EU in trade negotiations; to make rules and regulations, for example in competition policy; to draw up the budget of the European Union; and to scrutinise the implementation of the treaties and legislation. The rules of procedure of the European Commission set out the commission's operation and organisation.
Before the Treaty of Lisbon came into force, the executive power of the EU was held by the council: it conferred on the Commission such powers for it to exercise. However, the council was allowed to withdraw these powers, exercise them directly, or impose conditions on their use. This aspect has been changed by the Treaty of Lisbon, after which the Commission exercises its powers just by virtue of the treaties. Powers are more restricted than most national executives, in part due to the commission's lack of power over areas like foreign policy – that power is held by the Council of the European Union and the European Council, which some analysts have described as another executive.
Considering that under the Treaty of Lisbon, the European Council has become a formal institution with the power of appointing the commission, it could be said that the two bodies hold the executive power of the EU (the European Council also holds individual national executive powers). However, it is the Commission that currently holds most of the executive power over the European Union.
The Commission differs from the other institutions in that it alone has legislative initiative in the EU. Only the commission can make formal proposals for legislation: they cannot originate in the legislative branches. Under the Treaty of Lisbon, no legislative act is allowed in the field of the Common Foreign and Security Policy. In the other fields, the Council and Parliament can request legislation; in most cases the Commission initiates on the basis of these proposals. This monopoly is designed to ensure coordinated and coherent drafting of EU law. This monopoly has been challenged by some who claim the Parliament should also have the right, with most national parliaments holding the right in some respects. However, the Council and Parliament may request the commission to draft legislation, though the Commission does have the power to refuse to do so as it did in 2008 over transnational collective conventions. Under the Lisbon Treaty, EU citizens are also able to request the commission to legislate in an area via a petition carrying one million signatures, but this is not binding.
The commission's powers in proposing law have usually centred on economic regulation. It has put forward a large number of regulations based on a "precautionary principle". This means that pre-emptive regulation takes place if there is a credible hazard to the environment or human health: for example on tackling climate change and restricting genetically modified organisms. The European Commission has committed EU member states to carbon neutrality by 2050. This is opposed to weighting regulations for their effect on the economy. Thus, the Commission often proposes stricter legislation than other countries. Owing to the size of the European market, this has made EU legislation an important influence in the global market.
Recently the commission has moved into creating European criminal law. In 2006, a toxic waste spill off the coast of Côte d'Ivoire, from a European ship, prompted the commission to look into legislation against toxic waste. Some EU states at that time did not even have a crime against shipping toxic waste; this led the Commissioners Franco Frattini and Stavros Dimas to put forward the idea of "ecological crimes". Their right to propose criminal law was challenged in the European Court of Justice but upheld. As of 2007, the only other criminal law proposals which have been brought forward are on the intellectual property rights directive, and on an amendment to the 2002 counter-terrorism framework decision, outlawing terrorism‑related incitement, recruitment (especially via the internet) and training.
Once legislation is passed by the Council and Parliament, it is the Commission's responsibility to ensure it is implemented. It does this through the member states or through its agencies. In adopting the necessary technical measures, the Commission is assisted by committees made up of representatives of member states and of the public and private lobbies (a process known in jargon as "comitology"). Furthermore, the commission is responsible for the implementation of the EU budget, ensuring, along with the Court of Auditors, that EU funds are correctly spent.
In particular the Commission has a duty to ensure the treaties and law are upheld, potentially by taking member states or other institutions to the Court of Justice in a dispute. In this role it is known informally as the "Guardian of the Treaties". Finally, the Commission provides some external representation for the Union, alongside the member states and the Common Foreign and Security Policy, representing the Union in bodies such as the World Trade Organization. It is also usual for the President to attend meetings of the G7.
The commission is composed of a College of "Commissioners" of 27 members, including the President and vice-presidents. Even though each member is nominated on the basis of the suggestions made by the national governments, one per state, they do not represent their state in the commission. In practice, however, they do occasionally press for their national interest. Once proposed, the President delegates portfolios among each of the members. The power of a Commissioner largely depends upon their portfolio, and can vary over time. For example, the Education Commissioner has been growing in importance, in line with the rise in the importance of education and culture in European policy-making. Another example is the Competition Commissioner, who holds a highly visible position with global reach. Before the commission can assume office, the College as a whole must be approved by the Parliament. Commissioners are supported by their personal cabinet who give them political guidance, while the Civil Service (the DGs, see below) deal with technical preparation.
The President of the Commission is first proposed by the European Council, following a Qualified Majority Vote (QMV), taking into account the latest parliamentary elections (any person from the largest party can be picked); that candidate then faces a formal election in the European Parliament. Thus this serves as a form of indirect election. If the European Parliament fails to elect the candidate, the European Council shall propose another within one month.
Following the selection of the President, and the appointment of the High Representative by the European Council, each Commissioner is nominated by their member state (except for those states who provided the President and High Representative) in consultation with the Commission President, who is responsible for the allocation of portfolios. The President's proposed College of Commissioners is then subject to hearings at the European Parliament which will question them and then vote on their suitability as a whole. If the European Parliament submits a negative opinion of a candidate, the President must either reshuffle them or request a new candidate from the member state to avoid the College's outright rejection by the European Parliament. Once the College is approved by parliament, it is formally appointed following a QMV vote by the European Council.
Following the College's appointment, the President appoints a number of Vice-Presidents from among the commissioners. Vice-Presidents manage policy areas involving multiple Commissioners. One of these includes the High Representative, who is automatically one of the Vice-Presidents ex officio rather than by appointment and confirmation. Commonly referred to as the 'HR/VP' position, the High Representative also coordinates commissioners' activities involving the external relations and defence cooperation of the European Union. The von der Leyen Commission also created the position of more senior Executive Vice-Presidents, appointed from the three largest political groups in the European Parliament. Unlike the other Vice-Presidents, their mission is to manage the incumbent Commission's top priority policy areas, for which they receive additional support from a dedicated Directorate-General.
The European Parliament can dissolve the College of Commissioners as a whole following a vote of no-confidence, which requires a two-thirds vote.
Only the President can request the resignation of an individual Commissioner. However, individual Commissioners, by request of the council or Commission, can be compelled to retire on account of a breach of obligation(s) and if so ruled by the European Court of Justice (Art. 245 and 247, Treaty on the Functioning of the European Union).
The Barroso Commission took office in late 2004 after being delayed by objections from the Parliament, which forced a reshuffle. In 2007 the Commission increased from 25 to 27 members with the accession of Romania and Bulgaria who each appointed their own Commissioners. With the increasing size of the commission, Barroso adopted a more presidential style of control over the college, which earned him some criticism.
However, under Barroso, the commission began to lose ground to the larger member states as countries such as France, the UK and Germany sought to sideline its role. This has increased with the creation of the President of the European Council under the Treaty of Lisbon. There has also been a greater degree of politicisation within the Commission.
The commission is divided into departments known as Directorates-General (DGs) that can be likened to departments or ministries. Each covers a specific policy area such as agriculture or justice and citizens' rights or internal services such as human resources and translation and is headed by a director-general who is responsible to a commissioner. A commissioner's portfolio can be supported by numerous DGs; they prepare proposals for them and if approved by a majority of commissioners proposals go forward to the Parliament and Council for consideration. The Commission's civil service is headed by a Secretary General. The position is currently held by Ilze Juhansone. The rules of procedure of the European Commission set out the Commission's operation and organisation.
There has been criticism from a number of people that the highly fragmented DG structure wastes a considerable amount of time in turf wars as the different departments and Commissioners compete with each other. Furthermore, the DGs can exercise considerable control over a Commissioner with the Commissioner having little time to learn to assert control over their staff.
According to figures published by the Commission, 23,803 persons were employed by the Commission as officials and temporary agents in September 2012. In addition to these, 9230 "external staff" (e.g. Contractual agents, detached national experts, young experts, trainees etc.) were employed. The single largest DG is the Directorate-General for Translation, with a 2309-strong staff, while the largest group by nationality is Belgian (18.7%), probably due to a majority (17,664) of staff being based in the country.
Communication with the press is handled by the Directorate-General Communication. The Commission's chief spokesperson is Eric Mamer who holds the midday press briefings, commonly known as the "Midday Presser". It takes place every weekday in the Commission's press room at the Berlaymont where journalists may ask questions to the Commission officials on any topic and legitimately expect to get an "on the record" answer for live TV. Such a situation is unique in the world.
As an integral part of the Directorate-General for Communication, the Spokesperson's Service, in coordination with the Executive Communication Adviser in the President's Cabinet, supports the President and Commissioners so that they can communicate effectively. On political communication matters, the chief spokesperson reports directly to the President of the European Commission.
It has been noted by one researcher that the press releases issued by the Commission are uniquely political. A release often goes through several stages of drafting that emphasise the role of the Commission and are used "for justifying the EU and the Commission", increasing its length and complexity. Where there are multiple departments involved, a press release can also be a source of competition between areas of the Commission and Commissioners themselves. This also leads to an unusually high number of press releases, and is seen as a unique product of the EU's political set-up.
There is a larger press corps in Brussels than in Washington, D.C.; in 2020, media outlets in every Union member-state had a Brussels correspondent. Although there have been worldwide cuts in journalism, the volume of press releases and operations such as Europe by Satellite and EuroparlTV leads many news organisations to believe they can cover the EU from these sources and news agencies. In the face of high-level criticism, the Commission shut down Presseurop on 20 December 2013.
As the Commission is the executive branch, candidates are chosen individually by the 27 national governments. Within the EU, the legitimacy of the Commission is mainly drawn from the vote of approval that is required from the European Parliament, along with its power to dismiss the body. Eurosceptics have therefore raised concerns about the relatively low turnout (often less than 50%) in elections for the European Parliament since 1999. While that figure may be higher than that of some national elections, including the off-year elections of the United States Congress, the fact that there are no direct elections for the position of Commission President calls the position's legitimacy into question in the eyes of some Eurosceptics. The fact that the Commission can directly decide (albeit with oversight from specially formed 'comitology committees') on the shape and character of implementing legislation further raises concerns about democratic legitimacy.
Even though democratic structures and methods are changing, these changes are not mirrored by the emergence of a European civil society. The Treaty of Lisbon may go some way to resolving the perceived deficit in creating greater democratic controls on the Commission, including enshrining the procedure of linking elections to the selection of the Commission president. Historically, the Commission had indeed been seen as a technocratic expert body which, akin to institutions such as independent central banks, deals with technical areas of policy and therefore ought to be removed from party politics. From this viewpoint, electoral pressures would undermine the Commission's role as an independent regulator. Defenders of the Commission point out that legislation must be approved by the Council in all areas (the ministers of member states) and the European Parliament in most areas before it can be adopted, thus the amount of legislation which is adopted in any one country without the approval of its government is limited.
In 2009 the European Ombudsman published statistics of citizens' complaints against EU institutions, with most of them filed against the Commission (66%) and concerning lack of transparency (36%). In 2010 the Commission was sued for blocking access to documents on EU biofuel policy. This happened after media accused the Commission of blocking scientific evidence against biofuel subsidies. A lack of transparency, unclear lobbyist relations, conflicts of interest and excessive spending by the Commission were highlighted in a number of reports by internal and independent auditing organisations. It has also been criticised on IT-related issues, particularly with regard to Microsoft. In September 2020, the European Commission put forward an Anti-Racism Action Plan to tackle structural racism in the European Union, including measures to address the lack of racial diversity among the European decision makers in Brussels, as denounced by the #BrusselsSoWhite movement.
The European Commission has an Action Plan to enhance preparedness against chemical, biological, radiological and nuclear (CBRN) security risks as part of its anti-terrorism package released in October 2017. In recent times Europe has seen an increased threat level of CBRN attacks. As such, the European Commission's preparedness plan is important, said Steven Neville Chatfield, a director for the Centre for Emergency Preparedness and Response in the United Kingdom's Health Protection Agency. For the first time, the European Commission proposed that medical preparedness for CBRN attack threats is a high priority. "The European Commission's (EC) Action Plan to enhance preparedness against CBRN security risks is part of its anti-terrorism package released in October 2017, a strategy aimed at better protecting the more than 511 million citizens across the 27 member states of the European Union (EU)."
The European Commission organized a video conference of world leaders on 4 May 2020 to raise funds for COVID-19 vaccine development. US$8 billion was raised. The United States declined to join this video conference or to contribute funds.
In February 2020 the European Commission issued a new multi-year data strategy, pushing the digitalisation of all aspects of EU society for the benefit of civic and economic growth.
The goal of this data strategy is to create a single market for data in which data flows across the EU and across sectors while maintaining full respect for privacy and data protection, where access rules are fair, and where the European economy benefits as a global player from the new data economy.
The commission's political seat is in Brussels with the President's office and the commission's meeting room on the 13th floor of the Berlaymont building. The commission also operates out of numerous other buildings in Brussels and Luxembourg City. When the Parliament is meeting in Strasbourg, the Commissioners also meet there in the Winston Churchill building to attend the Parliament's debates. The Members of the Commission and their "cabinets" (immediate teams) are also based in the Berlaymont building in Brussels. Additionally, the European Commission has in-house scientific facilities that support it in Ispra, Italy; Petten, the Netherlands; Karlsruhe, Germany; Geel, Belgium; and Seville, Spain. In Grange, County Meath, Ireland, there is a Commission site hosting part of DG Santé. | [
{
"paragraph_id": 0,
"text": "The European Commission (EC) is part of the executive of the European Union (EU), together with the European Council. It operates as a cabinet government, with 27 members of the Commission (directorial system, informally known as \"Commissioners\") headed by a President. It includes an administrative body of about 32,000 European civil servants. The Commission is divided into departments known as Directorates-General (DGs) that can be likened to departments or ministries each headed by a Director-General who is responsible to a Commissioner.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There is one member per member state, but members are bound by their oath of office to represent the general interest of the EU as a whole rather than their home state. The Commission President (currently Ursula von der Leyen) is proposed by the European Council (the 27 heads of state/governments) and elected by the European Parliament. The Council of the European Union then nominates the other members of the Commission in agreement with the nominated President, and the 27 members as a team are then subject to a vote of approval by the European Parliament. The current Commission is the Von der Leyen Commission, which took office in December 2019, following the European Parliament elections in May of the same year.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The European Commission derives from one of the five key institutions created in the supranational European Community system, following the proposal of Robert Schuman, French Foreign Minister, on 9 May 1950. Originating in 1951 as the High Authority in the European Coal and Steel Community, the commission has undergone numerous changes in power and composition under various presidents, involving three Communities.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The first Commission originated in 1951 as the nine-member \"High Authority\" under President Jean Monnet (see Monnet Authority). The High Authority was the supranational administrative executive of the new European Coal and Steel Community (ECSC). It took office first on 10 August 1952 in Luxembourg City. In 1958, the Treaties of Rome had established two new communities alongside the ECSC: the European Economic Community (EEC) and the European Atomic Energy Community (Euratom). However, their executives were called \"Commissions\" rather than \"High Authorities\". The reason for the change in name was the new relationship between the executives and the Council. Some states, such as France, expressed reservations over the power of the High Authority and wished to limit it by giving more power to the Council rather than the new executives.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Louis Armand led the first Commission of Euratom. Walter Hallstein led the first Commission of the EEC, holding the first formal meeting on 16 January 1958 at the Château of Val-Duchesse. It achieved agreement on a contentious cereal price accord, as well as making a positive impression upon third countries when it made its international debut at the Kennedy Round of General Agreement on Tariffs and Trade (GATT) negotiations. Hallstein notably began the consolidation of European law and started to have a notable impact on national legislation. Little heed was taken of his administration at first but, with help from the European Court of Justice, his Commission stamped its authority solidly enough to allow future Commissions to be taken more seriously. In 1965, however, accumulating differences between the French government of Charles de Gaulle and the other member states on various subjects (British entry, direct elections to Parliament, the Fouchet Plan and the budget) triggered the \"empty chair\" crisis, ostensibly over proposals for the Common Agricultural Policy. Although the institutional crisis was solved the following year, it cost Étienne Hirsch his presidency of Euratom and later Walter Hallstein the EEC presidency, despite his otherwise being viewed as the most 'dynamic' leader until Jacques Delors.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The three bodies, collectively named the European Executives, co-existed until 1 July 1967 when, under the Merger Treaty, they were combined into a single administration under President Jean Rey. Owing to the merger, the Rey Commission saw a temporary increase to 14 members—although subsequent Commissions were reduced back to nine, following the formula of one member for small states and two for larger states. The Rey Commission completed the Community's customs union in 1968 and campaigned for a more powerful, elected, European Parliament. Despite Rey being the first President of the combined communities, Hallstein is seen as the first President of the modern Commission.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Malfatti and Mansholt Commissions followed with work on monetary co-operation and the first enlargement to the north in 1973. With that enlargement, the College of Commissioners membership increased to thirteen under the Ortoli Commission (the United Kingdom as a large member was granted two Commissioners), which dealt with the enlarged community during economic and international instability at that time. The external representation of the Community took a step forward when President Roy Jenkins, recruited to the presidency in January 1977 from his role as Home Secretary of the United Kingdom's Labour government, became the first President to attend a G8 summit on behalf of the Community. Following the Jenkins Commission, Gaston Thorn's Commission oversaw the Community's enlargement to the south, in addition to beginning work on the Single European Act.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Commission headed by Jacques Delors was seen as giving the Community a sense of direction and dynamism. Delors and his College are also considered as the \"founding fathers of the euro\". The International Herald Tribune noted the work of Delors at the end of his second term in 1992: \"Mr. Delors rescued the European Community from the doldrums. He arrived when Europessimism was at its worst. Although he was a little-known former French finance minister, he breathed life and hope into the EC and into the dispirited Brussels Commission. In his first term, from 1985 to 1988, he rallied Europe to the call of the single market, and when appointed to a second term he began urging Europeans toward the far more ambitious goals of economic, monetary, and political union\".",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The successor to Delors was Jacques Santer. As a result of a fraud and corruption scandal, the entire Santer Commission was forced by the Parliament to resign in 1999; a central role was played by Édith Cresson. These frauds were revealed by an internal auditor, Paul van Buitenen.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "That was the first time a College of Commissioners had been forced to resign en masse, and represented a shift of power towards the Parliament. However, the Santer Commission did carry out work on the Treaty of Amsterdam and the euro. In response to the scandal, the European Anti-Fraud Office (OLAF) was created.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Following Santer, Romano Prodi took office. The Amsterdam Treaty had increased the commission's powers and Prodi was dubbed by the press as something akin to a Prime Minister. Powers were strengthened again; the Treaty of Nice, signed in 2001, gave the Presidents more power over the composition of the College of Commissioners.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "José Manuel Barroso became president in 2004: the Parliament once again asserted itself in objecting to the proposed membership of the Barroso Commission. Owing to this opposition, Barroso was forced to reshuffle his College before taking office. The Barroso Commission was also the first full Commission since the enlargement in 2004 to 25 members; hence, the number of Commissioners at the end of the Prodi Commission had reached 30. As a result of the increase in the number of states, the Amsterdam Treaty triggered a reduction in the number of Commissioners to one per state, rather than two for the larger states.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Allegations of fraud and corruption were again raised in 2004 by former chief auditor Jules Muis. A Commission officer, Guido Strack, reported alleged fraud and abuses in his department in the years 2002–2004 to OLAF, and was fired as a result. In 2008, Paul van Buitenen (the former auditor known from the Santer Commission scandal) accused the European Anti-Fraud Office (OLAF) of a lack of independence and effectiveness.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Barroso's first Commission term expired on 31 October 2009. Under the Treaty of Nice, the first Commission to be appointed after the number of member states reached 27 would have to be reduced to \"less than the number of Member States\". The exact number of Commissioners was to be decided by a unanimous vote of the European Council, and membership would rotate equally between member states. Following the accession of Romania and Bulgaria in January 2007, this clause took effect for the next Commission. The Treaty of Lisbon, which came into force on 1 December 2009, mandated a reduction of the number of commissioners to two-thirds of member-states from 2014 unless the Council decided otherwise. Membership would rotate equally and no member state would have more than one Commissioner. However, the treaty was rejected by voters in Ireland in 2008 with one main concern being the loss of their Commissioner. Hence a guarantee given for a rerun of the vote was that the council would use its power to amend the number of Commissioners upwards. However, according to the treaties it still has to be fewer than the total number of members, thus it was proposed that the member state that does not get a Commissioner would get the post of High Representative – the so-called 26+1 formula. This guarantee (which may find its way into the next treaty amendment, probably in an accession treaty) contributed to the Irish approving the treaty in a second referendum in 2009.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Lisbon also combined the posts of European Commissioner for External Relations with the council's High Representative for the Common Foreign and Security Policy. This post, also a Vice-President of the Commission, would chair the Council of the European Union's foreign affairs meetings as well as the commission's external relations duties. The treaty further provides that the most recent European elections should be \"taken into account\" when appointing the President of the European Commission, and although they are still proposed by the European Council; the European Parliament \"elects\" candidates to the office, rather than \"approves\" them as under the Treaty of Nice.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Barroso Commission is, in reaction to Euroscepticism, said to have toned down enforcement to increase integration.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 2014, Jean-Claude Juncker became President of the European Commission.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Juncker appointed his previous campaign director and head of the transition team, Martin Selmayr, as his chief of cabinet. During the Juncker presidency Selmayr has been described as \"the most powerful EU chief of staff ever.\"",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 2019, Ursula von der Leyen was appointed as President of the European Commission. She submitted the guidelines of her policy to the European Parliament on 16 July 2019, following her confirmation. She had not been considered a likely candidate (in general, the elected candidate is determined, according to the results of the European election, as winner of the internal election into the dominant European party known as \"spitzenkandidat\"). While the European People's Party had won the European Parliament election, they had performed worse than expected and therefore nominated von der Leyen instead of Manfred Weber, their original candidate. On 9 September, the Council of the European Union declared a list of candidate-commissioners, which are sent to Brussels by the governments of each member state and which had to be officially approved by the parliament.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The commission was set up from the start to act as an independent supranational authority separate from governments; it has been described as \"the only body paid to think European\". The members are proposed by their member state governments, one from each. However, they are bound to act independently – free from other influences such as those governments which appointed them. This is in contrast to the Council of the European Union, which represents governments, the European Parliament, which represents citizens, the Economic and Social Committee, which represents organised civil society, and the Committee of the Regions, which represents local and regional authorities.",
"title": "Powers and functions"
},
{
"paragraph_id": 20,
"text": "Through Article 17 of the Treaty on European Union the commission has several responsibilities: to develop medium-term strategies; to draft legislation and arbitrate in the legislative process; to represent the EU in trade negotiations; to make rules and regulations, for example in competition policy; to draw up the budget of the European Union; and to scrutinise the implementation of the treaties and legislation. The rules of procedure of the European Commission set out the commission's operation and organisation.",
"title": "Powers and functions"
},
{
"paragraph_id": 21,
"text": "Before the Treaty of Lisbon came into force, the executive power of the EU was held by the council: it conferred on the Commission such powers for it to exercise. However, the council was allowed to withdraw these powers, exercise them directly, or impose conditions on their use. This aspect has been changed by the Treaty of Lisbon, after which the Commission exercises its powers just by virtue of the treaties. Powers are more restricted than most national executives, in part due to the commission's lack of power over areas like foreign policy – that power is held by the Council of the European Union and the European Council, which some analysts have described as another executive.",
"title": "Powers and functions"
},
{
"paragraph_id": 22,
"text": "Considering that under the Treaty of Lisbon, the European Council has become a formal institution with the power of appointing the commission, it could be said that the two bodies hold the executive power of the EU (the European Council also holds individual national executive powers). However, it is the Commission that currently holds most of the executive power over the European Union.",
"title": "Powers and functions"
},
{
"paragraph_id": 23,
"text": "The Commission differs from the other institutions in that it alone has legislative initiative in the EU. Only the commission can make formal proposals for legislation: they cannot originate in the legislative branches. Under the Treaty of Lisbon, no legislative act is allowed in the field of the Common Foreign and Security Policy. In the other fields, the Council and Parliament can request legislation; in most cases the Commission initiates on the basis of these proposals. This monopoly is designed to ensure coordinated and coherent drafting of EU law. This monopoly has been challenged by some who claim the Parliament should also have the right, with most national parliaments holding the right in some respects. However, the Council and Parliament may request the commission to draft legislation, though the Commission does have the power to refuse to do so as it did in 2008 over transnational collective conventions. Under the Lisbon Treaty, EU citizens are also able to request the commission to legislate in an area via a petition carrying one million signatures, but this is not binding.",
"title": "Powers and functions"
},
{
"paragraph_id": 24,
"text": "The commission's powers in proposing law have usually centred on economic regulation. It has put forward a large number of regulations based on a \"precautionary principle\". This means that pre-emptive regulation takes place if there is a credible hazard to the environment or human health: for example on tackling climate change and restricting genetically modified organisms. The European Commission has committed EU member states to carbon neutrality by 2050. This is opposed to weighting regulations for their effect on the economy. Thus, the Commission often proposes stricter legislation than other countries. Owing to the size of the European market, this has made EU legislation an important influence in the global market.",
"title": "Powers and functions"
},
{
"paragraph_id": 25,
"text": "Recently the commission has moved into creating European criminal law. In 2006, a toxic waste spill off the coast of Côte d'Ivoire, from a European ship, prompted the commission to look into legislation against toxic waste. Some EU states at that time did not even have a crime against shipping toxic waste; this led the Commissioners Franco Frattini and Stavros Dimas to put forward the idea of \"ecological crimes\". Their right to propose criminal law was challenged in the European Court of Justice but upheld. As of 2007, the only other criminal law proposals which have been brought forward are on the intellectual property rights directive, and on an amendment to the 2002 counter-terrorism framework decision, outlawing terrorism‑related incitement, recruitment (especially via the internet) and training.",
"title": "Powers and functions"
},
{
"paragraph_id": 26,
"text": "Once legislation is passed by the Council and Parliament, it is the Commission's responsibility to ensure it is implemented. It does this through the member states or through its agencies. In adopting the necessary technical measures, the Commission is assisted by committees made up of representatives of member states and of the public and private lobbies (a process known in jargon as \"comitology\"). Furthermore, the commission is responsible for the implementation of the EU budget, ensuring, along with the Court of Auditors, that EU funds are correctly spent.",
"title": "Powers and functions"
},
{
"paragraph_id": 27,
"text": "In particular the Commission has a duty to ensure the treaties and law are upheld, potentially by taking member states or other institutions to the Court of Justice in a dispute. In this role it is known informally as the \"Guardian of the Treaties\". Finally, the Commission provides some external representation for the Union, alongside the member states and the Common Foreign and Security Policy, representing the Union in bodies such as the World Trade Organization. It is also usual for the President to attend meetings of the G7.",
"title": "Powers and functions"
},
{
"paragraph_id": 28,
"text": "The commission is composed of a College of \"Commissioners\" of 27 members, including the President and vice-presidents. Even though each member is nominated on the basis of the suggestions made by the national governments, one per state, they do not represent their state in the commission. In practice, however, they do occasionally press for their national interest. Once proposed, the President delegates portfolios among each of the members. The power of a Commissioner largely depends upon their portfolio, and can vary over time. For example, the Education Commissioner has been growing in importance, in line with the rise in the importance of education and culture in European policy-making. Another example is the Competition Commissioner, who holds a highly visible position with global reach. Before the commission can assume office, the College as a whole must be approved by the Parliament. Commissioners are supported by their personal cabinet who give them political guidance, while the Civil Service (the DGs, see below) deal with technical preparation.",
"title": "College"
},
{
"paragraph_id": 29,
"text": "The President of the Commission is first proposed by the European Council, following a Qualified Majority Vote (QMV), taking into account the latest parliamentary elections (any person from the largest party can be picked); that candidate then faces a formal election in the European Parliament. Thus this serves as a form of indirect election. If the European Parliament fails to elect the candidate, the European Council shall propose another within one month.",
"title": "College"
},
{
"paragraph_id": 30,
"text": "Following the selection of the President, and the appointment of the High Representative by the European Council, each Commissioner is nominated by their member state (except for those states who provided the President and High Representative) in consultation with the Commission President, who is responsible for the allocation of portfolios. The President's proposed College of Commissioners is then subject to hearings at the European Parliament which will question them and then vote on their suitability as a whole. If the European Parliament submits a negative opinion of a candidate, the President must either reshuffle them or request a new candidate from the member state to avoid the College's outright rejection by the European Parliament. Once the College is approved by parliament, it is formally appointed following a QMV vote by the European Council.",
"title": "College"
},
{
"paragraph_id": 31,
"text": "Following the College's appointment, the President appoints a number of Vice-Presidents from among the commissioners. Vice-Presidents manage policy areas involving multiple Commissioners. One of these includes the High Representative, who is automatically one of the Vice-Presidents ex officio rather than by appointment and confirmation. Commonly referred to as the 'HR/VP' position, the High Representative also coordinates commissioners' activities involving the external relations and defence cooperation of the European Union. The von der Leyen Commission also created the position of more senior Executive Vice-Presidents, appointed from the three largest political groups in the European Parliament. Unlike the other Vice-Presidents, their mission is to manage the incumbent Commission's top priority policy areas, for which they receive additional support from a dedicated Directorate-General.",
"title": "College"
},
{
"paragraph_id": 32,
"text": "The European Parliament can dissolve the College of Commissioners as a whole following a vote of no-confidence, which requires a two-thirds vote.",
"title": "College"
},
{
"paragraph_id": 33,
"text": "Only the President can request the resignation of an individual Commissioner. However, individual Commissioners, by request of the council or Commission, can be compelled to retire on account of a breach of obligation(s) and if so ruled by the European Court of Justice (Art. 245 and 247, Treaty on the Functioning of the European Union).",
"title": "College"
},
{
"paragraph_id": 34,
"text": "The Barroso Commission took office in late 2004 after being delayed by objections from the Parliament, which forced a reshuffle. In 2007 the Commission increased from 25 to 27 members with the accession of Romania and Bulgaria who each appointed their own Commissioners. With the increasing size of the commission, Barroso adopted a more presidential style of control over the college, which earned him some criticism.",
"title": "College"
},
{
"paragraph_id": 35,
"text": "However, under Barroso, the commission began to lose ground to the larger member states as countries such as France, the UK and Germany sought to sideline its role. This has increased with the creation of the President of the European Council under the Treaty of Lisbon. There has also been a greater degree of politicisation within the Commission.",
"title": "College"
},
{
"paragraph_id": 36,
"text": "The commission is divided into departments known as Directorates-General (DGs) that can be likened to departments or ministries. Each covers a specific policy area such as agriculture or justice and citizens' rights or internal services such as human resources and translation and is headed by a director-general who is responsible to a commissioner. A commissioner's portfolio can be supported by numerous DGs; they prepare proposals for them and if approved by a majority of commissioners proposals go forward to the Parliament and Council for consideration. The Commission's civil service is headed by a Secretary General. The position is currently held by Ilze Juhansone. The rules of procedure of the European Commission set out the Commission's operation and organisation.",
"title": "Administration"
},
{
"paragraph_id": 37,
"text": "There has been criticism from a number of people that the highly fragmented DG structure wastes a considerable amount of time in turf wars as the different departments and Commissioners compete with each other. Furthermore, the DGs can exercise considerable control over a Commissioner with the Commissioner having little time to learn to assert control over their staff.",
"title": "Administration"
},
{
"paragraph_id": 38,
"text": "According to figures published by the Commission, 23,803 persons were employed by the Commission as officials and temporary agents in September 2012. In addition to these, 9230 \"external staff\" (e.g. Contractual agents, detached national experts, young experts, trainees etc.) were employed. The single largest DG is the Directorate-General for Translation, with a 2309-strong staff, while the largest group by nationality is Belgian (18.7%), probably due to a majority (17,664) of staff being based in the country.",
"title": "Administration"
},
{
"paragraph_id": 39,
"text": "Communication with the press is handled by the Directorate-General Communication. The Commission's chief spokesperson is Eric Mamer who holds the midday press briefings, commonly known as the \"Midday Presser\". It takes place every weekday in the Commission's press room at the Berlaymont where journalists may ask questions to the Commission officials on any topic and legitimately expect to get an \"on the record\" answer for live TV. Such a situation is unique in the world.",
"title": "Administration"
},
{
"paragraph_id": 40,
"text": "As an integral part of the Directorate-General for Communication, the Spokesperson's Service, in coordination with the Executive Communication Adviser in the President's Cabinet, supports the President and Commissioners so that they can communicate effectively. On political communication matters, the chief spokesperson reports directly to the President of the European Commission.",
"title": "Administration"
},
{
"paragraph_id": 41,
"text": "It has been noted by one researcher that the press releases issued by the Commission are uniquely political. A release often goes through several stages of drafting which emphasises the role of the Commission and is used \"for justifying the EU and the Commission\" increasing their length and complexity. Where there are multiple departments involved a press release can also be a source of competition between areas of the Commission and Commissioners themselves. This also leads to an unusually high number of press releases, and is seen as a unique product of the EU's political set-up.",
"title": "Administration"
},
{
"paragraph_id": 42,
"text": "There is a larger press corps in Brussels than Washington, D.C.; in 2020, media outlets in every Union member-state had a Brussels correspondent. Although there has been a worldwide cut in journalists, the considerable press releases and operations such as Europe by Satellite and EuroparlTV leads many news organisations to believe they can cover the EU from these source and news agencies. In the face of high-level criticism, the Commission shut down Presseurop on 20 December 2013.",
"title": "Administration"
},
{
"paragraph_id": 43,
"text": "As the Commission is the executive branch, candidates are chosen individually by the 27 national governments. Within the EU, the legitimacy of the Commission is mainly drawn from the vote of approval that is required from the European Parliament, along with its power to dismiss the body. Eurosceptics have therefore raised the concern of the relatively low turnout (often less than 50%) in elections for the European Parliament since 1999. While that figure may be higher than that of some national elections, including the off-year elections of the United States Congress, the fact that there are no direct elections for the position of Commission President calls the position's legitimacy into question in the eyes of some Eurosceptics. The fact that the Commission can directly decide (albeit with oversight from specially formed 'comitology committees') on the shape and character of implementing legislation further raises concerns about democratic legitimacy.",
"title": "Legitimacy and criticism"
},
{
"paragraph_id": 44,
"text": "Even though democratic structures and methods are changing there is not such a mirror in creating a European civil society. The Treaty of Lisbon may go some way to resolving the perceived deficit in creating greater democratic controls on the Commission, including enshrining the procedure of linking elections to the selection of the Commission president. Historically, the Commission had indeed been seen as a technocratic expert body which, akin with institutions such as independent central banks, deals with technical areas of policy and therefore ought to be removed from party politics. From this viewpoint, electoral pressures would undermine the Commission's role as an independent regulator. Defenders of the Commission point out that legislation must be approved by the Council in all areas (the ministers of member states) and the European Parliament in most areas before it can be adopted, thus the amount of legislation which is adopted in any one country without the approval of its government is limited.",
"title": "Legitimacy and criticism"
},
{
"paragraph_id": 45,
"text": "In 2009 the European ombudsman published statistics of citizens' complaints against EU institutions, with most of them filed against the Commission (66%) and concerning lack of transparency (36%). In 2010 the Commission was sued for blocking access to documents on EU biofuel policy. This happened after media accused the Commission of blocking scientific evidence against biofuel subsidies. Lack of transparency, unclear lobbyist relations, conflicts of interests and excessive spending of the Commission was highlighted in a number of reports by internal and independent auditing organisations. It has also been criticised on IT-related issues, particularly with regard to Microsoft. In September 2020, the European Commission put forward an Anti-Racism Action Plan to tackle the structural racism in the European Union, including measures to address the lack of racial diversity among the European decision makers in Brussels, as denounced by the #BrusselsSoWhite movement.",
"title": "Legitimacy and criticism"
},
{
"paragraph_id": 46,
"text": "The European Commission has an Action Plan to enhance preparedness against chemical, biological, radiological and nuclear (CBRN) security risks as part of its anti-terrorism package released in October 2017. In recent times Europe has seen an increased threat level of CBRN attacks. As such, the European Commission's preparedness plan is important, said Steven Neville Chatfield, a director for the Centre for Emergency Preparedness and Response in the United Kingdom's Health Protection Agency. For the first time, the European Commission proposed that medical preparedness for CBRN attack threats is a high priority. \"The European Commission's (EC) Action Plan to enhance preparedness against CBRN security risks is part of its anti-terrorism package released in October 2017, a strategy aimed at better protecting the more than 511 million citizens across the 27 member states of the European Union (EU).\"",
"title": "Initiatives"
},
{
"paragraph_id": 47,
"text": "The European Commission organized a video conference of world leaders on 4 May 2020 to raise funds for COVID-19 vaccine development. US$8 billion was raised. The United States declined to join this video conference or to contribute funds.",
"title": "Initiatives"
},
{
"paragraph_id": 48,
"text": "The European Commission issued a new multi-year data plan in February 2020 pushing the digitalization of all aspects of EU society for the benefit of civic and economic growth.",
"title": "Initiatives"
},
{
"paragraph_id": 49,
"text": "The goal of this data strategy is to create a single market for data in which data flows across the EU and across sectors while maintaining full respect for privacy and data protection, where access rules are fair, and where the European economy benefits enormously as a global player as a result of the new data economy.",
"title": "Initiatives"
},
{
"paragraph_id": 50,
"text": "The commission's political seat is in Brussels with the President's office and the commission's meeting room on the 13th floor of the Berlaymont building. The commission also operates out of numerous other buildings in Brussels and Luxembourg City. When the Parliament is meeting in Strasbourg, the Commissioners also meet there in the Winston Churchill building to attend the Parliament's debates. The Members of the Commission and their \"cabinets\" (immediate teams) are also based in the Berlaymont building in Brussels. Additionally, the European Commission has in-house scientific facilities that support it in: Ispra, Italy; Petten, the Netherlands; Karlsruhe, Germany; Geel, Belgium and Seville, Spain. In Grange, County Meath, Ireland there is a Commission site hosting part of DG Santè.",
"title": "Location"
}
]
| The European Commission (EC) is part of the executive of the European Union (EU), together with the European Council. It operates as a cabinet government, with 27 members of the Commission headed by a President. It includes an administrative body of about 32,000 European civil servants. The Commission is divided into departments known as Directorates-General (DGs) that can be likened to departments or ministries each headed by a Director-General who is responsible to a Commissioner. There is one member per member state, but members are bound by their oath of office to represent the general interest of the EU as a whole rather than their home state. The Commission President is proposed by the European Council and elected by the European Parliament. The Council of the European Union then nominates the other members of the Commission in agreement with the nominated President, and the 27 members as a team are then subject to a vote of approval by the European Parliament. The current Commission is the Von der Leyen Commission, which took office in December 2019, following the European Parliament elections in May of the same year. | 2001-10-24T10:05:46Z | 2023-12-09T22:11:06Z | [
"Template:Main",
"Template:Further",
"Template:Quantify",
"Template:From whom?",
"Template:Sister project links",
"Template:European Union topics",
"Template:Use British English",
"Template:Clear right",
"Template:Cite journal",
"Template:Presidents of European Commissions",
"Template:Structural evolution of the European Commission",
"Template:Portal",
"Template:Cite news",
"Template:Webarchive",
"Template:Eu-directorates-general",
"Template:For",
"Template:See also",
"Template:Citation needed",
"Template:According to whom",
"Template:Cite book",
"Template:Cite press release",
"Template:Charlemagne Prize recipients",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Cite web",
"Template:Short description",
"Template:-",
"Template:Cite magazine",
"Template:Politics of the European Union",
"Template:EUnum",
"Template:Cbignore",
"Template:Authority control",
"Template:Infobox executive government"
]
| https://en.wikipedia.org/wiki/European_Commission |
9,975 | Linear filter | Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant (or shift invariant) in which case they can be analyzed exactly using LTI ("linear time-invariant") system theory revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components (resistors, capacitors, inductors, and linear amplifiers) will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies (their frequency response), they are sometimes known as frequency filters.
Non real-time implementations of linear time-invariant filters need not be causal. Filters of more than one dimension are also used, such as in image processing. The general concept of linear filtering also extends into other fields and technologies such as statistics, data analysis, and mechanical engineering.
A linear time-invariant (LTI) filter can be uniquely specified by its impulse response h, and the output of any filter is mathematically expressed as the convolution of the input with that impulse response. The frequency response, given by the filter's transfer function H(ω), is an alternative characterization of the filter. Typical filter design goals are to realize a particular frequency response, that is, the magnitude of the transfer function |H(ω)|; the importance of the phase of the transfer function varies according to the application, inasmuch as the shape of a waveform can be distorted to a greater or lesser extent in the process of achieving a desired (amplitude) response in the frequency domain. The frequency response may be tailored to, for instance, eliminate unwanted frequency components from an input signal, or to limit an amplifier to signals within a particular band of frequencies.
The impulse response h of a linear time-invariant causal filter specifies the output that the filter would produce if it were to receive an input consisting of a single impulse at time 0. An "impulse" in a continuous time filter means a Dirac delta function; in a discrete time filter the Kronecker delta function would apply. The impulse response completely characterizes the response of any such filter, inasmuch as any possible input signal can be expressed as a (possibly infinite) combination of weighted delta functions. Multiplying the impulse response shifted in time according to the arrival of each of these delta functions by the amplitude of each delta function, and summing these responses together (according to the superposition principle, applicable to all linear systems) yields the output waveform.
Mathematically this is described as the convolution of a time-varying input signal x(t) with the filter's impulse response h, defined as:
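In standard notation, writing y for the output signal, the two forms are

y(t) = \int_{0}^{T} h(\tau)\, x(t-\tau)\, d\tau

y[n] = \sum_{i=0}^{N} h[i]\, x[n-i]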
The first form is the continuous-time form, which describes mechanical and analog electronic systems, for instance. The second equation is a discrete-time version used, for example, by digital filters implemented in software, so-called digital signal processing. The impulse response h completely characterizes any linear time-invariant (or shift-invariant in the discrete-time case) filter. The input x is said to be "convolved" with the impulse response h having a (possibly infinite) duration of time T (or of N sampling periods).
Filter design consists of finding a possible transfer function that can be implemented within certain practical constraints dictated by the technology or desired complexity of the system, followed by a practical design that realizes that transfer function using the chosen technology. The complexity of a filter may be specified according to the order of the filter.
Among the time-domain filters we here consider, there are two general classes of filter transfer functions that can approximate a desired frequency response. Very different mathematical treatments apply to the design of filters termed infinite impulse response (IIR) filters, characteristic of mechanical and analog electronics systems, and finite impulse response (FIR) filters, which can be implemented by discrete time systems such as computers (then termed digital signal processing).
Consider a physical system that acts as a linear filter, such as a system of springs and masses, or an analog electronic circuit that includes capacitors and/or inductors (along with other linear components such as resistors and amplifiers). When such a system is subject to an impulse (or any signal of finite duration) it responds with an output waveform that lasts past the duration of the input, eventually decaying exponentially in one or another manner, but never completely settling to zero (mathematically speaking). Such a system is said to have an infinite impulse response (IIR). The convolution integral (or summation) above extends over all time: T (or N) must be set to infinity.
For instance, consider a damped harmonic oscillator such as a pendulum, or a resonant L-C tank circuit. If the pendulum has been at rest and we were to strike it with a hammer (the "impulse"), setting it in motion, it would swing back and forth ("resonate"), say, with an amplitude of 10 cm. After 10 minutes, say, the pendulum would still be swinging but the amplitude would have decreased to 5 cm, half of its original amplitude. After another 10 minutes its amplitude would be only 2.5 cm, then 1.25 cm, etc. However it would never come to a complete rest, and we therefore call that response to the impulse (striking it with a hammer) "infinite" in duration.
The complexity of such a system is specified by its order N. N is often a constraint on the design of a transfer function since it specifies the number of reactive components in an analog circuit; in a digital IIR filter the number of computations required is proportional to N.
A filter implemented in a computer program (or a so-called digital signal processor) is a discrete-time system; a different (but parallel) set of mathematical concepts defines the behavior of such systems. Although a digital filter can be an IIR filter if the algorithm implementing it includes feedback, it is also possible to easily implement a filter whose impulse truly goes to zero after N time steps; this is called a finite impulse response (FIR) filter.
For instance, suppose one has a filter that, when presented with an impulse in a time series:
outputs a series that responds to that impulse at time 0 until time 4, and has no further response, such as:
Although the impulse response has lasted 4 time steps after the input, starting at time 5 it has truly gone to zero. The extent of the impulse response is finite, and this would be classified as a fourth-order FIR filter. The convolution integral (or summation) above need only extend to the full duration of the impulse response T, or the order N in a discrete time filter.
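As a minimal sketch of such a finite impulse response (the coefficient values and signal length below are invented purely for illustration), the behaviour can be reproduced by direct convolution in Python:

    import numpy as np

    # Hypothetical fourth-order FIR filter: five coefficients, so the response to a
    # unit impulse lasts from time 0 through time 4 and is exactly zero afterwards.
    h = np.array([1.0, -0.5, 0.25, -0.125, 0.0625])

    x = np.zeros(10)
    x[0] = 1.0                       # unit impulse at time 0

    y = np.convolve(x, h)[:len(x)]   # discrete-time convolution
    print(y)                         # nonzero only at samples 0 through 4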
Classical analog filters are IIR filters, and classical filter theory centers on the determination of transfer functions given by low order rational functions, which can be synthesized using the same small number of reactive components. Using digital computers, on the other hand, both FIR and IIR filters are straightforward to implement in software.
A digital IIR filter can generally approximate a desired filter response using less computing power than an FIR filter; however, this advantage is often unneeded given the increasing power of digital processors. The ease of designing and characterizing FIR filters makes them preferable to the filter designer (programmer) when ample computing power is available. Another advantage of FIR filters is that their impulse response can be made symmetric, which implies a response in the frequency domain that has zero phase at all frequencies (not considering a finite delay), which is impossible with any IIR filter.
The frequency response or transfer function |H(ω)| of a filter can be obtained if the impulse response is known, or directly through analysis using Laplace transforms, or in discrete-time systems the Z-transform. The frequency response also includes the phase as a function of frequency, however in many cases the phase response is of little or no interest. FIR filters can be made to have zero phase, but with IIR filters that is generally impossible. With most IIR transfer functions there are related transfer functions having a frequency response with the same magnitude but a different phase; in most cases the so-called minimum phase transfer function is preferred.
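For example, assuming SciPy is available, the frequency response of a digital filter can be evaluated directly from its coefficients (the coefficients below are made up for illustration):

    import numpy as np
    from scipy import signal

    b = np.array([0.25, 0.5, 0.25])        # illustrative FIR coefficients (impulse response)

    w, H = signal.freqz(b, worN=512)       # frequency response sampled on [0, pi)
    magnitude = np.abs(H)                  # |H(w)|, the magnitude response
    phase = np.unwrap(np.angle(H))         # phase response as a function of frequency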
Filters in the time domain are most often requested to follow a specified frequency response. Then, a mathematical procedure finds a filter transfer function that can be realized (within some constraints), and approximates the desired response to within some criterion. Common filter response specifications are described as follows:
Meeting a frequency response requirement with an FIR filter uses relatively straightforward procedures. In the most basic form, the desired frequency response itself can be sampled with a resolution of Δf and Fourier transformed to the time domain. This obtains the filter coefficients hi, which implement a zero-phase FIR filter that matches the frequency response at the sampled frequencies used. To better match a desired response, Δf must be reduced. However, the duration of the filter's impulse response, and the number of terms that must be summed for each output value (according to the above discrete-time convolution), is given by N = 1/(Δf T), where T is the sampling period of the discrete-time system (N-1 is also termed the order of an FIR filter). Thus the complexity of a digital filter and the computing time involved grow inversely with Δf, placing a higher cost on filter functions that better approximate the desired behavior. For the same reason, filter functions whose critical response is at lower frequencies (compared to the sampling frequency 1/T) require a higher-order, more computationally intensive FIR filter. An IIR filter can thus be much more efficient in such cases.
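A rough sketch of this frequency-sampling procedure is shown below; the grid size and the ideal low-pass target are assumptions chosen purely for illustration:

    import numpy as np

    M = 64                                  # number of frequency samples (illustrative)
    f = np.fft.fftfreq(M)                   # normalized frequencies, cycles per sample
    D = (np.abs(f) < 0.25).astype(float)    # desired response: ideal low-pass, cutoff 0.25

    h = np.fft.ifft(D).real                 # inverse transform gives the coefficients
    h = np.fft.fftshift(h)                  # centre the impulse response (linear phase)

The resulting filter matches the desired response exactly at the sampled frequencies; in practice a window is usually applied to h to control the behaviour between the samples.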
Elsewhere the reader may find further discussion of design methods for practical FIR filter design.
Since classical analog filters are IIR filters, there has been a long history of studying the range of possible transfer functions implementing various of the above desired filter responses in continuous time systems. Using transforms it is possible to convert these continuous time frequency responses to ones that are implemented in discrete time, for use in digital IIR filters. The complexity of any such filter is given by the order N, which describes the order of the rational function describing the frequency response. The order N is of particular importance in analog filters, because an N order electronic filter requires N reactive elements (capacitors and/or inductors) to implement. If a filter is implemented using, for instance, biquad stages using op-amps, N/2 stages are needed. In a digital implementation, the number of computations performed per sample is proportional to N. Thus the mathematical problem is to obtain the best approximation (in some sense) to the desired response using a smaller N, as we shall now illustrate.
Below are the frequency responses of several standard filter functions that approximate a desired response, optimized according to some criterion. These are all fifth-order low-pass filters, designed for a cutoff frequency of .5 in normalized units. Frequency responses are shown for the Butterworth, Chebyshev, inverse Chebyshev, and elliptic filters.
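Assuming SciPy is available, comparable fifth-order designs can be generated as follows (the ripple and attenuation figures, and the convention that 0.5 means half the Nyquist frequency, are assumptions for illustration):

    from scipy import signal

    N, Wn = 5, 0.5      # fifth order, cutoff 0.5 in normalized units
    rp, rs = 1, 40      # passband ripple and stopband attenuation in dB (illustrative)

    designs = {
        "Butterworth":       signal.butter(N, Wn),
        "Chebyshev":         signal.cheby1(N, rp, Wn),
        "Inverse Chebyshev": signal.cheby2(N, rs, Wn),
        "Elliptic":          signal.ellip(N, rp, rs, Wn),
    }

    for name, (b, a) in designs.items():
        w, H = signal.freqz(b, a)     # magnitude |H| can be plotted to compare the responses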
As is clear from the image, the elliptic filter is sharper than the others, but at the expense of ripples in both its passband and stopband. The Butterworth filter has the poorest transition but has a more even response, avoiding ripples in either the passband or stopband. A Bessel filter (not shown) has an even poorer transition in the frequency domain, but maintains the best phase fidelity of a waveform. Different applications emphasize different design requirements, leading to different choices among these (and other) optimizations, or requiring a filter of a higher order.
A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high pass filters.
An N order FIR filter can be implemented in a discrete time system using a computer program or specialized hardware in which the input signal is subject to N delay stages. The output of the filter is formed as the weighted sum of those delayed signals, as is depicted in the accompanying signal flow diagram. The response of the filter depends on the weighting coefficients denoted b0, b1, .... bN. For instance, if all of the coefficients were equal to unity, a so-called boxcar function, then it would implement a low-pass filter with a low frequency gain of N+1 and a frequency response given by the sinc function. Superior shapes for the frequency response can be obtained using coefficients derived from a more sophisticated design procedure.
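A minimal sketch of this structure, assuming SciPy is available and using the boxcar case mentioned above, is:

    import numpy as np
    from scipy import signal

    N = 7                              # filter order (illustrative)
    b = np.ones(N + 1)                 # boxcar weights: b0 = b1 = ... = bN = 1

    x = np.random.randn(100)           # arbitrary input signal
    y = signal.lfilter(b, 1.0, x)      # weighted sum of the current and N delayed inputs

    w, H = signal.freqz(b)             # sinc-shaped magnitude response; |H(0)| equals N + 1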
LTI system theory describes linear time-invariant (LTI) filters of all types. LTI filters can be completely described by their frequency response and phase response, the specification of which uniquely defines their impulse response, and vice versa. From a mathematical viewpoint, continuous-time IIR LTI filters may be described in terms of linear differential equations, and their impulse responses considered as Green's functions of the equation. Continuous-time LTI filters may also be described in terms of the Laplace transform of their impulse response, which allows all of the characteristics of the filter to be analyzed by considering the pattern of zeros and poles of their Laplace transform in the complex plane. Similarly, discrete-time LTI filters may be analyzed via the Z-transform of their impulse response.
Before the advent of computer filter synthesis tools, graphical tools such as Bode plots and Nyquist plots were extensively used as design tools. Even today, they are invaluable tools for understanding filter behavior. Reference books had extensive plots of frequency response, phase response, group delay, and impulse response for various types of filters, of various orders. They also contained tables of values showing how to implement such filters as RLC ladders, which was very useful when amplifying elements were expensive compared to passive components. Such a ladder can also be designed to have minimal sensitivity to component variation, a property hard to evaluate without computer tools.
Many different analog filter designs have been developed, each trying to optimise some feature of the system response. For practical filters, a custom design is sometimes desirable, one that can offer the best tradeoff between different design criteria, which may include component count and cost, as well as filter response characteristics.
These descriptions refer to the mathematical properties of the filter (that is, the frequency and phase response). These can be implemented as analog circuits (for instance, using a Sallen Key filter topology, a type of active filter), or as algorithms in digital signal processing systems.
Digital filters are much more flexible to synthesize and use than analog filters, where the constraints of the design permit their use. Notably, there is no need to consider component tolerances, and very high Q levels may be obtained.
FIR digital filters may be implemented by the direct convolution of the desired impulse response with the input signal. They can easily be designed to give a matched filter for any arbitrary pulse shape.
IIR digital filters are often more difficult to design, due to problems including dynamic range issues, quantization noise and instability. Typically digital IIR filters are designed as a series of digital biquad filters.
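For instance, assuming SciPy is available, an IIR design can be produced and run directly as a cascade of biquads (second-order sections); the order, ripple, and cutoff below are illustrative:

    import numpy as np
    from scipy import signal

    sos = signal.ellip(6, 1, 60, 0.3, output='sos')   # sixth-order elliptic low-pass as 3 biquads

    x = np.random.randn(1000)
    y = signal.sosfilt(sos, x)                        # filter the signal through the cascade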
All low-pass second-order continuous-time filters have a transfer function given by
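A standard form, assuming the conventional parameterization with natural frequency ω0, quality factor Q, and gain K, is

H(s) = \frac{K\,\omega_{0}^{2}}{s^{2} + \frac{\omega_{0}}{Q}\,s + \omega_{0}^{2}}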
All band-pass second-order continuous-time filters have a transfer function given by
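Under the same assumed parameterization, with ω0 now the centre frequency and ω0/Q the bandwidth, a standard band-pass form is

H(s) = \frac{K\,\frac{\omega_{0}}{Q}\,s}{s^{2} + \frac{\omega_{0}}{Q}\,s + \omega_{0}^{2}},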
where | [
{
"paragraph_id": 0,
"text": "Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant (or shift invariant) in which case they can be analyzed exactly using LTI (\"linear time-invariant\") system theory revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components (resistors, capacitors, inductors, and linear amplifiers) will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies (their frequency response), they are sometimes known as frequency filters.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Non real-time implementations of linear time-invariant filters need not be causal. Filters of more than one dimension are also used such as in Image processing. The general concept of linear filtering also extends into other fields and technologies such as statistics, data analysis, and mechanical engineering.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A linear time-invariant (LTI) filter can be uniquely specified by its impulse response h, and the output of any filter is mathematically expressed as the convolution of the input with that impulse response. The frequency response, given by the filter's transfer function H ( ω ) {\\displaystyle H(\\omega )} , is an alternative characterization of the filter. Typical filter design goals are to realize a particular frequency response, that is, the magnitude of the transfer function | H ( ω ) | {\\displaystyle |H(\\omega )|} ; the importance of the phase of the transfer function varies according to the application, inasmuch as the shape of a waveform can be distorted to a greater or lesser extent in the process of achieving a desired (amplitude) response in the frequency domain. The frequency response may be tailored to, for instance, eliminate unwanted frequency components from an input signal, or to limit an amplifier to signals within a particular band of frequencies.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 3,
"text": "The impulse response h of a linear time-invariant causal filter specifies the output that the filter would produce if it were to receive an input consisting of a single impulse at time 0. An \"impulse\" in a continuous time filter means a Dirac delta function; in a discrete time filter the Kronecker delta function would apply. The impulse response completely characterizes the response of any such filter, inasmuch as any possible input signal can be expressed as a (possibly infinite) combination of weighted delta functions. Multiplying the impulse response shifted in time according to the arrival of each of these delta functions by the amplitude of each delta function, and summing these responses together (according to the superposition principle, applicable to all linear systems) yields the output waveform.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 4,
"text": "Mathematically this is described as the convolution of a time-varying input signal x(t) with the filter's impulse response h, defined as:",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 5,
"text": "The first form is the continuous-time form, which describes mechanical and analog electronic systems, for instance. The second equation is a discrete-time version used, for example, by digital filters implemented in software, so-called digital signal processing. The impulse response h completely characterizes any linear time-invariant (or shift-invariant in the discrete-time case) filter. The input x is said to be \"convolved\" with the impulse response h having a (possibly infinite) duration of time T (or of N sampling periods).",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 6,
"text": "Filter design consists of finding a possible transfer function that can be implemented within certain practical constraints dictated by the technology or desired complexity of the system, followed by a practical design that realizes that transfer function using the chosen technology. The complexity of a filter may be specified according to the order of the filter.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 7,
"text": "Among the time-domain filters we here consider, there are two general classes of filter transfer functions that can approximate a desired frequency response. Very different mathematical treatments apply to the design of filters termed infinite impulse response (IIR) filters, characteristic of mechanical and analog electronics systems, and finite impulse response (FIR) filters, which can be implemented by discrete time systems such as computers (then termed digital signal processing).",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 8,
"text": "Consider a physical system that acts as a linear filter, such as a system of springs and masses, or an analog electronic circuit that includes capacitors and/or inductors (along with other linear components such as resistors and amplifiers). When such a system is subject to an impulse (or any signal of finite duration) it responds with an output waveform that lasts past the duration of the input, eventually decaying exponentially in one or another manner, but never completely settling to zero (mathematically speaking). Such a system is said to have an infinite impulse response (IIR). The convolution integral (or summation) above extends over all time: T (or N) must be set to infinity.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 9,
"text": "For instance, consider a damped harmonic oscillator such as a pendulum, or a resonant L-C tank circuit. If the pendulum has been at rest and we were to strike it with a hammer (the \"impulse\"), setting it in motion, it would swing back and forth (\"resonate\"), say, with an amplitude of 10 cm. After 10 minutes, say, the pendulum would still be swinging but the amplitude would have decreased to 5 cm, half of its original amplitude. After another 10 minutes its amplitude would be only 2.5 cm, then 1.25 cm, etc. However it would never come to a complete rest, and we therefore call that response to the impulse (striking it with a hammer) \"infinite\" in duration.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 10,
"text": "The complexity of such a system is specified by its order N. N is often a constraint on the design of a transfer function since it specifies the number of reactive components in an analog circuit; in a digital IIR filter the number of computations required is proportional to N.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 11,
"text": "A filter implemented in a computer program (or a so-called digital signal processor) is a discrete-time system; a different (but parallel) set of mathematical concepts defines the behavior of such systems. Although a digital filter can be an IIR filter if the algorithm implementing it includes feedback, it is also possible to easily implement a filter whose impulse truly goes to zero after N time steps; this is called a finite impulse response (FIR) filter.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 12,
"text": "For instance, suppose one has a filter that, when presented with an impulse in a time series:",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 13,
"text": "outputs a series that responds to that impulse at time 0 until time 4, and has no further response, such as:",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 14,
"text": "Although the impulse response has lasted 4 time steps after the input, starting at time 5 it has truly gone to zero. The extent of the impulse response is finite, and this would be classified as a fourth-order FIR filter. The convolution integral (or summation) above need only extend to the full duration of the impulse response T, or the order N in a discrete time filter.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 15,
"text": "Classical analog filters are IIR filters, and classical filter theory centers on the determination of transfer functions given by low order rational functions, which can be synthesized using the same small number of reactive components. Using digital computers, on the other hand, both FIR and IIR filters are straightforward to implement in software.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 16,
"text": "A digital IIR filter can generally approximate a desired filter response using less computing power than a FIR filter, however this advantage is more often unneeded given the increasing power of digital processors. The ease of designing and characterizing FIR filters makes them preferable to the filter designer (programmer) when ample computing power is available. Another advantage of FIR filters is that their impulse response can be made symmetric, which implies a response in the frequency domain that has zero phase at all frequencies (not considering a finite delay), which is absolutely impossible with any IIR filter.",
"title": "Impulse response and transfer function"
},
{
"paragraph_id": 17,
"text": "The frequency response or transfer function | H ( ω ) | {\\displaystyle |H(\\omega )|} of a filter can be obtained if the impulse response is known, or directly through analysis using Laplace transforms, or in discrete-time systems the Z-transform. The frequency response also includes the phase as a function of frequency, however in many cases the phase response is of little or no interest. FIR filters can be made to have zero phase, but with IIR filters that is generally impossible. With most IIR transfer functions there are related transfer functions having a frequency response with the same magnitude but a different phase; in most cases the so-called minimum phase transfer function is preferred.",
"title": "Frequency response"
},
{
"paragraph_id": 18,
"text": "Filters in the time domain are most often requested to follow a specified frequency response. Then, a mathematical procedure finds a filter transfer function that can be realized (within some constraints), and approximates the desired response to within some criterion. Common filter response specifications are described as follows:",
"title": "Frequency response"
},
{
"paragraph_id": 19,
"text": "Meeting a frequency response requirement with an FIR filter uses relatively straightforward procedures. In the most basic form, the desired frequency response itself can be sampled with a resolution of Δ f {\\displaystyle \\Delta f} and Fourier transformed to the time domain. This obtains the filter coefficients hi, which implements a zero phase FIR filter that matches the frequency response at the sampled frequencies used. To better match a desired response, Δ f {\\displaystyle \\Delta f} must be reduced. However the duration of the filter's impulse response, and the number of terms that must be summed for each output value (according to the above discrete time convolution) is given by N = 1 / ( Δ f T ) {\\displaystyle N=1/(\\Delta f\\,T)} where T is the sampling period of the discrete time system (N-1 is also termed the order of an FIR filter). Thus the complexity of a digital filter and the computing time involved, grows inversely with Δ f {\\displaystyle \\Delta f} , placing a higher cost on filter functions that better approximate the desired behavior. For the same reason, filter functions whose critical response is at lower frequencies (compared to the sampling frequency 1/T) require a higher order, more computationally intensive FIR filter. An IIR filter can thus be much more efficient in such cases.",
"title": "Frequency response"
},
{
"paragraph_id": 20,
"text": "Elsewhere the reader may find further discussion of design methods for practical FIR filter design.",
"title": "Frequency response"
},
{
"paragraph_id": 21,
"text": "Since classical analog filters are IIR filters, there has been a long history of studying the range of possible transfer functions implementing various of the above desired filter responses in continuous time systems. Using transforms it is possible to convert these continuous time frequency responses to ones that are implemented in discrete time, for use in digital IIR filters. The complexity of any such filter is given by the order N, which describes the order of the rational function describing the frequency response. The order N is of particular importance in analog filters, because an N order electronic filter requires N reactive elements (capacitors and/or inductors) to implement. If a filter is implemented using, for instance, biquad stages using op-amps, N/2 stages are needed. In a digital implementation, the number of computations performed per sample is proportional to N. Thus the mathematical problem is to obtain the best approximation (in some sense) to the desired response using a smaller N, as we shall now illustrate.",
"title": "Frequency response"
},
{
"paragraph_id": 22,
"text": "Below are the frequency responses of several standard filter functions that approximate a desired response, optimized according to some criterion. These are all fifth-order low-pass filters, designed for a cutoff frequency of .5 in normalized units. Frequency responses are shown for the Butterworth, Chebyshev, inverse Chebyshev, and elliptic filters.",
"title": "Frequency response"
},
{
"paragraph_id": 23,
"text": "As is clear from the image, the elliptic filter is sharper than the others, but at the expense of ripples in both its passband and stopband. The Butterworth filter has the poorest transition but has a more even response, avoiding ripples in either the passband or stopband. A Bessel filter (not shown) has an even poorer transition in the frequency domain, but maintains the best phase fidelity of a waveform. Different applications emphasize different design requirements, leading to different choices among these (and other) optimizations, or requiring a filter of a higher order.",
"title": "Frequency response"
},
{
"paragraph_id": 24,
"text": "A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high pass filters.",
"title": "Example implementations"
},
{
"paragraph_id": 25,
"text": "An N order FIR filter can be implemented in a discrete time system using a computer program or specialized hardware in which the input signal is subject to N delay stages. The output of the filter is formed as the weighted sum of those delayed signals, as is depicted in the accompanying signal flow diagram. The response of the filter depends on the weighting coefficients denoted b0, b1, .... bN. For instance, if all of the coefficients were equal to unity, a so-called boxcar function, then it would implement a low-pass filter with a low frequency gain of N+1 and a frequency response given by the sinc function. Superior shapes for the frequency response can be obtained using coefficients derived from a more sophisticated design procedure.",
"title": "Example implementations"
},
{
"paragraph_id": 26,
"text": "LTI system theory describes linear time-invariant (LTI) filters of all types. LTI filters can be completely described by their frequency response and phase response, the specification of which uniquely defines their impulse response, and vice versa. From a mathematical viewpoint, continuous-time IIR LTI filters may be described in terms of linear differential equations, and their impulse responses considered as Green's functions of the equation. Continuous-time LTI filters may also be described in terms of the Laplace transform of their impulse response, which allows all of the characteristics of the filter to be analyzed by considering the pattern of zeros and poles of their Laplace transform in the complex plane. Similarly, discrete-time LTI filters may be analyzed via the Z-transform of their impulse response.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 27,
"text": "Before the advent of computer filter synthesis tools, graphical tools such as Bode plots and Nyquist plots were extensively used as design tools. Even today, they are invaluable tools to understanding filter behavior. Reference books had extensive plots of frequency response, phase response, group delay, and impulse response for various types of filters, of various orders. They also contained tables of values showing how to implement such filters as RLC ladders - very useful when amplifying elements were expensive compared to passive components. Such a ladder can also be designed to have minimal sensitivity to component variation a property hard to evaluate without computer tools.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 28,
"text": "Many different analog filter designs have been developed, each trying to optimise some feature of the system response. For practical filters, a custom design is sometimes desirable, that can offer the best tradeoff between different design criteria, which may include component count and cost, as well as filter response characteristics.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 29,
"text": "These descriptions refer to the mathematical properties of the filter (that is, the frequency and phase response). These can be implemented as analog circuits (for instance, using a Sallen Key filter topology, a type of active filter), or as algorithms in digital signal processing systems.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 30,
"text": "Digital filters are much more flexible to synthesize and use than analog filters, where the constraints of the design permits their use. Notably, there is no need to consider component tolerances, and very high Q levels may be obtained.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 31,
"text": "FIR digital filters may be implemented by the direct convolution of the desired impulse response with the input signal. They can easily be designed to give a matched filter for any arbitrary pulse shape.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 32,
"text": "IIR digital filters are often more difficult to design, due to problems including dynamic range issues, quantization noise and instability. Typically digital IIR filters are designed as a series of digital biquad filters.",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 33,
"text": "All low-pass second-order continuous-time filters have a transfer function given by",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 34,
"text": "All band-pass second-order continuous-time filters have a transfer function given by",
"title": "Mathematics of filter design"
},
{
"paragraph_id": 35,
"text": "where",
"title": "Mathematics of filter design"
}
]
| Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant in which case they can be analyzed exactly using LTI system theory revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies, they are sometimes known as frequency filters. Non real-time implementations of linear time-invariant filters need not be causal. Filters of more than one dimension are also used such as in Image processing. The general concept of linear filtering also extends into other fields and technologies such as statistics, data analysis, and mechanical engineering. | 2002-02-25T15:51:15Z | 2023-12-20T08:38:24Z | [
"Template:ISBN",
"Template:Refbegin",
"Template:Cite book",
"Template:Refend",
"Template:More footnotes",
"Template:Linear analog electronic filter",
"Template:See also",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Linear_filter |
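As a minimal illustration of the direct-form FIR structure described in paragraph 25 of the Linear filter record above, in which each output sample is the weighted sum of the current and N delayed input samples with coefficients b0 through bN, the following Python sketch applies such a filter. The function and variable names are illustrative assumptions, not taken from the article.

```python
# Minimal sketch (assumed names) of the direct-form FIR filter described above:
# each output sample is the weighted sum of the current input sample and the
# N previous (delayed) input samples, with weighting coefficients b[0], ..., b[N].

def fir_filter(x, b):
    """Filter the sequence x with FIR coefficients b (length N + 1)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:          # samples before the start of x are treated as zero
                acc += bk * x[n - k]
        y.append(acc)
    return y

# A boxcar (all coefficients equal to one) acts as a crude low-pass filter;
# its low-frequency (DC) gain equals the number of taps, N + 1.
if __name__ == "__main__":
    boxcar = [1.0] * 5                 # N = 4, so N + 1 = 5 taps
    constant_input = [1.0] * 10        # a DC input signal
    print(fir_filter(constant_input, boxcar)[-1])   # settles at 5.0, the DC gain
```

Paragraphs 33 and 34 of the same record refer to the transfer functions of second-order continuous-time low-pass and band-pass filters without reproducing them in this extract; assuming the usual natural frequency ω0 and quality factor Q, the standard textbook forms are:

```latex
H_{\mathrm{LP}}(s) = \frac{\omega_0^{2}}{s^{2} + \frac{\omega_0}{Q}\,s + \omega_0^{2}}
\qquad
H_{\mathrm{BP}}(s) = \frac{\frac{\omega_0}{Q}\,s}{s^{2} + \frac{\omega_0}{Q}\,s + \omega_0^{2}}
```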
9,976 | Ergative case | In grammar, the ergative case (abbreviated erg) is the grammatical case that identifies a nominal phrase as the agent of a transitive verb in ergative–absolutive languages.
In such languages, the ergative case is typically marked (most salient), while the absolutive case is unmarked. Recent work in case theory has vigorously supported the idea that the ergative case identifies the agent (the intentful performer of an action) of a verb.
In Kalaallisut (Greenlandic) for example, the ergative case is used to mark subjects of transitive verbs and possessors of nouns. This syncretism with the genitive is commonly referred to as the relative case.
Nez Perce has a three-way nominal case system with both ergative (-nim) and accusative (-ne) plus an absolute (unmarked) case for intransitive subjects: hipáayna qíiwn ‘the old man arrived’; hipáayna wewúkiye ‘the elk arrived’; wewúkiyene péexne qíiwnim ‘the old man saw an elk’.
Sahaptin has an ergative noun case (with suffix -nɨm) that is limited to transitive constructions only when the direct object is 1st or 2nd person: iwapáatayaaš łmámanɨm ‘the old woman helped me’; paanáy iwapáataya łmáma ‘the old woman helped him/her’ (direct); páwapaataya łmámayin ‘the old woman helped him/her’ (inverse).
In languages with an optional ergative, the choice between marking the ergative case or not depends on semantic or pragmatic aspects such as marking focus on the argument.
Other languages that use the ergative case are Georgian, Chechen, and other Caucasian languages, Mayan languages, Mixe–Zoque languages, Wagiman and other Australian Aboriginal languages as well as Basque, Burushaski and Tibetan. Among the Indo-European languages, only Yaghnobi, the Kurdish varieties (including Kurmanji, Zazaki and Sorani) and Pashto among the Iranian languages, and Hindi/Urdu along with some other Indo-Aryan languages, are ergative.
The ergative case is also a feature of some constructed languages such as Na'vi, Ithkuil and Black Speech. | [
{
"paragraph_id": 0,
"text": "In grammar, the ergative case (abbreviated erg) is the grammatical case that identifies a nominal phrase as the agent of a transitive verb in ergative–absolutive languages.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In such languages, the ergative case is typically marked (most salient), while the absolutive case is unmarked. Recent work in case theory has vigorously supported the idea that the ergative case identifies the agent (the intentful performer of an action) of a verb.",
"title": "Characteristics"
},
{
"paragraph_id": 2,
"text": "In Kalaallisut (Greenlandic) for example, the ergative case is used to mark subjects of transitive verbs and possessors of nouns. This syncretism with the genitive is commonly referred to as the relative case.",
"title": "Characteristics"
},
{
"paragraph_id": 3,
"text": "Nez Perce has a three-way nominal case system with both ergative (-nim) and accusative (-ne) plus an absolute (unmarked) case for intransitive subjects: hipáayna qíiwn ‘the old man arrived’; hipáayna wewúkiye ‘the elk arrived’; wewúkiyene péexne qíiwnim ‘the old man saw an elk’.",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "Sahaptin has an ergative noun case (with suffix -nɨm) that is limited to transitive constructions only when the direct object is 1st or 2nd person: iwapáatayaaš łmámanɨm ‘the old woman helped me’; paanáy iwapáataya łmáma ‘the old woman helped him/her’ (direct); páwapaataya łmámayin ‘the old woman helped him/her’ (inverse).",
"title": "Characteristics"
},
{
"paragraph_id": 5,
"text": "In languages with an optional ergative, the choice between marking the ergative case or not depends on semantic or pragmatics aspects such as marking focus on the argument.",
"title": "Characteristics"
},
{
"paragraph_id": 6,
"text": "Other languages that use the ergative case are Georgian, Chechen, and other Caucasian languages, Mayan languages, Mixe–Zoque languages, Wagiman and other Australian Aboriginal languages as well as Basque, Burushaski and Tibetan. Among all Indo-European languages only, Yaghnobi, Kurdish language varieties (including Kurmanji, Zazaki and Sorani) and Pashto from Iranian languages and Hindi/Urdu, along with some other Indo-Aryan languages are ergative.",
"title": "Characteristics"
},
{
"paragraph_id": 7,
"text": "The ergative case is also a feature of some constructed languages such as Na'vi, Ithkuil and Black Speech.",
"title": "Characteristics"
}
]
| In grammar, the ergative case is the grammatical case that identifies a nominal phrase as the agent of a transitive verb in ergative–absolutive languages. | 2001-10-24T04:11:03Z | 2023-09-25T15:32:47Z | [
"Template:Cuneiform",
"Template:Cite web",
"Template:Cite Q",
"Template:Reflist",
"Template:Cite book",
"Template:Grammatical cases",
"Template:Short description",
"Template:Smallcaps",
"Template:Wiktionary"
]
| https://en.wikipedia.org/wiki/Ergative_case |
9,977 | Ewe | A ewe is a female sheep.
Ewe or EWE may also refer to: | [
{
"paragraph_id": 0,
"text": "A ewe is a female sheep.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Ewe or EWE may also refer to:",
"title": ""
}
]
| A ewe is a female sheep. Ewe or EWE may also refer to: | 2001-10-24T15:31:59Z | 2023-10-22T13:52:23Z | [
"Template:TOC right",
"Template:Disambiguation",
"Template:Wiktionary"
]
| https://en.wikipedia.org/wiki/Ewe |
9,978 | Essenes | The Essenes (/ˈɛsiːnz, ɛˈsiːnz/; Hebrew: אִסִּיִים, Isiyim; Greek: Ἐσσηνοί, Ἐσσαῖοι, or Ὀσσαῖοι, Essenoi, Essaioi, Ossaioi) were a mystic Jewish sect during the Second Temple period that flourished from the 2nd century BCE to the 1st century CE.
The Essene movement likely originated as a distinct group among Jews during Jonathan Apphus' time, driven by disputes over Jewish law and the belief that Jonathan's high priesthood was illegitimate. Most scholars think the Essenes seceded from the Zadokite priests. They saw themselves as the genuine remnant of Israel, upholding the true covenant with God, and attributed their interpretation of the Torah to their early leader, the Teacher of Righteousness, possibly a legitimate high priest. Embracing a conservative approach to Jewish law, they observed a strict hierarchy favoring priests (the Sons of Zadok) over laypeople, emphasized ritual purity, and held a dualistic worldview.
According to Jewish writers Josephus and Philo, the Essenes numbered around four thousand, and resided in various settlements throughout Judaea. Conversely, Roman writer Pliny the Elder positioned them somewhere above Ein Gedi, on the west side of the Dead Sea. Pliny relates in a few lines that the Essenes possess no money, had existed for thousands of generations, and that their priestly class ("contemplatives") did not marry. Josephus gave a detailed account of the Essenes in The Jewish War (c. 75 CE), with a shorter description in Antiquities of the Jews (c. 94 CE) and The Life of Flavius Josephus (c. 97 CE). Claiming firsthand knowledge, he lists the Essenoi as one of the three sects of Jewish philosophy alongside the Pharisees and Sadducees. He relates the same information concerning piety, celibacy; the absence of personal property and of money; the belief in communality; and commitment to a strict observance of Sabbath. He further adds that the Essenes ritually immersed in water every morning (a practice similar to the use of the mikveh for daily immersion found among some contemporary Hasidim), ate together after prayer, devoted themselves to charity and benevolence, forbade the expression of anger, studied the books of the elders, preserved secrets, and were very mindful of the names of the angels kept in their sacred writings.
The Essenes have gained fame in modern times as a result of the discovery of an extensive group of religious documents known as the Dead Sea Scrolls, which are commonly believed to be the Essenes' library. The scrolls were found at Qumran, an archaeological site situated along the northwestern shore of the Dead Sea, believed to have been the dwelling place of an Essene community. These documents preserve multiple copies of parts of the Hebrew Bible along with deuterocanonical and sectarian manuscripts, including writings such as the Community Rule, the Damascus Document, and the War Scroll, which provide valuable insights into the communal life, ideology and theology of the Essenes.
According to the conventional view, the Essenes disappeared after the First Jewish–Roman War, which also witnessed the destruction of the settlement at Qumran. Scholars have noted the absence of direct sources supporting this claim, raising the possibility of their endurance or the survival of related groups in the following centuries. Some researchers suggest that Essene teachings could have influenced other religious traditions, such as Early Christianity and Mandaeism.
Josephus uses the name Essenes in his two main accounts, The Jewish War 2.119, 158, 160 and Antiquities of the Jews, 13.171–2, but some manuscripts read here Essaion ("holding the Essenes in honour"; "a certain Essene named Manaemus"; "to hold all Essenes in honor"; "the Essenes").
In several places, however, Josephus has Essaios, which is usually assumed to mean Essene ("Judas of the Essaios race"; "Simon of the Essaios race"; "John the Essaios"; "those who are called by us Essaioi"; "Simon a man of the Essaios race"). Josephus identified the Essenes as one of the three major Jewish sects of that period.
Philo's usage is Essaioi, although he admits this Greek form of the original name, that according to his etymology signifies "holiness", to be inexact. Pliny's Latin text has Esseni.
Gabriele Boccaccini implies that a convincing etymology for the name Essene has not been found, but that the term applies to a larger group within Judea that also included the Qumran community.
It was proposed before the Dead Sea Scrolls were discovered that the name came into several Greek spellings from a Hebrew self-designation later found in some Dead Sea Scrolls, ʻosey haTorah, "'doers' or 'makers' of Torah". Although dozens of etymology suggestions have been published, this is the only etymology published before 1947 that was confirmed by Qumran text self-designation references, and it is gaining acceptance among scholars. It is recognized as the etymology of the form Ossaioi (and note that Philo also offered an O spelling) and Essaioi and Esseni spelling variations have been discussed by VanderKam, Goranson, and others. In medieval Hebrew (e.g. Sefer Yosippon) Hassidim "the Pious" replaces "Essenes". While this Hebrew name is not the etymology of Essaioi/Esseni, the Aramaic equivalent Hesi'im known from Eastern Aramaic texts has been suggested. Others suggest that Essene is a transliteration of the Hebrew word ḥiṣonim (ḥiṣon "outside"), which the Mishnah (e.g. Megillah 4:8) uses to describe various sectarian groups. Another theory is that the name was borrowed from a cult of devotees to Artemis in Anatolia, whose demeanor and dress somewhat resembled those of the group in Judea.
Flavius Josephus in Chapter 8 of "The Jewish War" states:
2. (119) For there are three philosophical sects among the Jews. The followers of the first of which are the Pharisees; of the second, the Sadducees; and the third sect, which pretends to a severer discipline, are called Essenes. These last are Jews by birth, and seem to have a greater affection for each other than other sects have.
According to Josephus, the Essenes had settled "not in one city" but "in large numbers in every town". Philo speaks of "more than four thousand" Essaioi living in "Palestine and Syria", more precisely, "in many cities of Judaea and in many villages and grouped in great societies of many members".
Pliny locates them "on the west side of the Dead Sea, away from the coast... [above] the town of Engeda".
Some modern scholars and archeologists have argued that Essenes inhabited the settlement at Qumran, a plateau in the Judean Desert along the Dead Sea, citing Pliny the Elder in support and giving credence that the Dead Sea Scrolls are the product of the Essenes. This theory, though not yet conclusively proven, has come to dominate the scholarly discussion and public perception of the Essenes.
The accounts by Josephus and Philo show that the Essenes led a strictly communal life—often compared to later Christian monasticism. Many of the Essene groups appear to have been celibate, but Josephus speaks also of another "order of Essenes" that observed the practice of being engaged for three years and then becoming married. According to Josephus, they had customs and observances such as collective ownership, electing a leader to attend to the interests of the group, and obedience to the orders from their leader. Also, they were forbidden from swearing oaths and from sacrificing animals. They controlled their tempers and served as channels of peace, carrying weapons only for protection against robbers. The Essenes chose not to possess slaves but served each other and, as a result of communal ownership, did not engage in trading. Josephus and Philo provide lengthy accounts of their communal meetings, meals, and religious celebrations. This communal living has led some scholars to view the Essenes as a group practicing social and material egalitarianism.
Despite their prohibition on swearing oaths, after a three-year probationary period, new members would take an oath that included a commitment to practice piety to God and righteousness toward humanity; maintain a pure lifestyle; abstain from criminal and immoral activities; transmit their rules uncorrupted; and preserve the books of the Essenes and the names of the angels. Their theology included belief in the immortality of the soul and that they would receive their souls back after death. Part of their activities included purification by water rituals which was supported by rainwater catchment and storage. According to the Community Rule, repentance was a prerequisite to water purification.
Ritual purification was a common practice among the peoples of Judea during this period and was thus not specific to the Essenes. A ritual bath or mikveh was found near many synagogues of the period continuing into modern times. Purity and cleanliness was considered so important to the Essenes that they would refrain from defecation on the Sabbath.
According to Joseph Lightfoot, the Church Father Epiphanius (writing in the 4th century CE) seems to make a distinction between two main groups within the Essenes: "Of those that came before his [Elxai, an Ossaean prophet] time and during it, the Ossaeans and the Nasaraeans" (Part 18). Epiphanius describes each group as follows:
The Nasaraean—they were Jews by nationality—originally from Gileaditis, Bashanitis and the Transjordan... They acknowledged Moses and believed that he had received laws—not this law, however, but some other. And so, they were Jews who kept all the Jewish observances, but they would not offer sacrifice or eat meat. They considered it unlawful to eat meat or make sacrifices with it. They claim that these Books are fictions, and that none of these customs were instituted by the fathers. This was the difference between the Nasaraean and the others...
After this Nasaraean sect in turn comes another closely connected with them, called the Ossaeans. These are Jews like the former... originally came from Nabataea, Ituraea, Moabitis, and Arielis, the lands beyond the basin of what sacred scripture called the Salt Sea... Though it is different from the other six of these seven sects, it causes schism only by forbidding the books of Moses like the Nasaraean.
We do not know much about the canon of the Essenes or what their attitude was towards the apocryphal writings; however, the Essenes perhaps did not esteem the book of Esther highly, as manuscripts of Esther are completely absent at Qumran, likely because of their opposition to mixed marriages and the use of different calendars.
The Essenes were unique for their time in opposing the practice of slave-ownership and slavery, which they regarded as unjust and ungodly, holding that all men had been born equal.
Josephus and Philo discuss the Essenes in detail. Most scholars believe that the community at Qumran that most likely produced the Dead Sea Scrolls was an offshoot of the Essenes. However, this theory has been disputed by some; for example, Norman Golb argues that the primary research on the Qumran documents and ruins (by Father Roland de Vaux, from the École Biblique et Archéologique de Jérusalem) lacked scientific method, and drew wrong conclusions that comfortably entered the academic canon. For Golb, the number of documents is too extensive and includes many different writing styles and calligraphies; the ruins seem to have been a fortress, used as a military base for a very long period of time—including the 1st century—so they therefore could not have been inhabited by the Essenes; and the large graveyard excavated in 1870, just 50 metres (160 ft) east of the Qumran ruins, was made of over 1200 tombs that included many women and children; Pliny clearly wrote that the Essenes who lived near the Dead Sea "had not one woman, had renounced all pleasure... and no one was born in their race". Golb's book presents observations about de Vaux's premature conclusions and their uncontroverted acceptance by the general academic community. He states that the documents probably stemmed from various libraries in Jerusalem, kept safe in the desert from the Roman invasions. Other scholars refute these arguments—particularly since Josephus describes some Essenes as allowing marriage.
Another issue is the relationship between the Essaioi and Philo's Therapeutae and Therapeutrides. He regarded the Therapeutae as a contemplative branch of the Essaioi who, he said, pursued an active life.
One theory on the formation of the Essenes suggests that the movement was founded by a Jewish high priest, dubbed by the Essenes the Teacher of Righteousness, whose office had been usurped by Jonathan (of priestly but not of Zadokite lineage), labeled the "man of lies" or "false priest". Others follow this line and a few argue that the Teacher of Righteousness was not only the leader of the Essenes at Qumran, but was also identical to the original Messianic figure about 150 years before the time of the Gospels. Fred Gladstone Bratton notes that
The Teacher of Righteousness of the Scrolls would seem to be a prototype of Jesus, for both spoke of the New Covenant; they preached a similar gospel; each was regarded as a Savior or Redeemer; and each was condemned and put to death by reactionary factions... We do not know whether Jesus was an Essene, but some scholars feel that he was at least influenced by them.
Lawrence Schiffman has argued that the Qumran community may be called Sadducean, and not Essene, since their legal positions retain a link with Sadducean tradition.
Rituals of the Essenes and Christianity have much in common; the Dead Sea Scrolls describe a meal of bread and wine that will be instituted by the messiah, and both the Essenes and Christians were eschatological communities in which judgement on the world could come at any time. The New Testament also possibly quotes writings used by the Qumran community. Luke 1:31-35 states "And now you will conceive in your womb and bear a son and you will name him Jesus. He will be great and will be called the son of the Most High...the son of God", which seems to echo 4Q 246, stating: "He will be called great and he will be called Son of God, and they will call him Son of the Most High...He will judge the earth in righteousness...and every nation will bow down to him".
Other similarities include high devotion to the faith even to the point of martyrdom, communal prayer, self denial and a belief in a captivity in a sinful world.
John the Baptist has also been argued to have been an Essene, as there are numerous parallels between John's mission and the Essenes, which suggests he perhaps was trained by the Essene community.
In the early church a book called the Odes of Solomon was written. The writer was likely a very early convert from the Essene community into Christianity. The book reflects a mixture of mystical ideas of the Essene community with Christian concepts.
Both the Essenes and Christians practiced voluntary celibacy and prohibited divorce. Both also used concepts of "light" and "darkness" for good and evil.
A few have argued that the Essenes had an idea of a pierced Messiah based on 4Q285; however, the interpretation of the text is ambiguous. Some scholars interpreted it as the Messiah being killed himself, while modern scholars mostly interpret it as the Messiah executing the enemies of Israel in an eschatological war.
Both the Essenes and Christians practiced a ritual of immersion in water; however, the Essenes observed it as a regular practice rather than a one-time event.
The Haran Gawaita uses the name Nasoraeans for the Mandaeans arriving from Jerusalem meaning guardians or possessors of secret rites and knowledge. Scholars such as Kurt Rudolph, Rudolf Macúch, Mark Lidzbarski and Ethel S. Drower connect the Mandaeans with the Nasaraeans described by Epiphanius, a group within the Essenes according to Joseph Lightfoot. Epiphanius says (29:6) that they existed before Jesus. That is questioned by some, but others accept the pre-Christian origin of the Nasaraeans.
Early religious concepts and terminologies recur in the Dead Sea Scrolls, and Yardena (Jordan) has been the name of every baptismal water in Mandaeism. One of the names for the Mandaean God Hayyi Rabbi, Mara d-Rabuta (Lord of Greatness) is found in the Genesis Apocryphon II, 4. Another early self-appellation is bhiri zidqa meaning 'elect of righteousness' or 'the chosen righteous', a term found in the Book of Enoch and Genesis Apocryphon II, 4. As Nasoraeans, Mandaeans believe that they constitute the true congregation of bnai nhura meaning 'Sons of Light', a term used by the Essenes. Mandaean scripture affirms that the Mandaeans descend directly from John the Baptist's original Nasoraean Mandaean disciples in Jerusalem. Similar to the Essenes, it is forbidden for a Mandaean to reveal the names of the angels to a gentile. Essene graves are oriented north–south and a Mandaean's grave must also be in the north–south direction so that if the dead Mandaean were stood upright, they would face north. Mandaeans have an oral tradition that some were originally vegetarian and also similar to the Essenes, they are pacifists.
The beit manda (beth manda) is described as biniana rab ḏ-srara ("the Great building of Truth") and bit tušlima ("house of Perfection") in Mandaean texts such as the Qolasta, Ginza Rabba, and the Mandaean Book of John. The only known literary parallels are in Essene texts from Qumran such as the Community Rule, which has similar phrases such as the "house of Perfection and Truth in Israel" (Community Rule 1QS VIII 9) and "house of Truth in Israel."
The Magharians or Magarites (Arabic: Al-Maghariyyah, 'people of the caves') were, according to Jacob Qirqisani, a Jewish sect founded in the 1st century BCE. Abraham Harkavy and others identify the Magharians with the Essenes, and their author referred to as the "Alexandrinian" with Philo (whose affinity for the Essenes is well-known), based on the following evidence: | [
{
"paragraph_id": 0,
"text": "The Essenes (/ˈɛsiːnz, ɛˈsiːnz/; Hebrew: אִסִּיִים, Isiyim; Greek: Ἐσσηνοί, Ἐσσαῖοι, or Ὀσσαῖοι, Essenoi, Essaioi, Ossaioi) were a mystic Jewish sect during the Second Temple period that flourished from the 2nd century BCE to the 1st century CE.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Essene movement likely originated as a distinct group among Jews during Jonathan Apphus' time, driven by disputes over Jewish law and the belief that Jonathan's high priesthood was illegitimate. Most scholars think the Essenes seceded from the Zadokite priests. They saw themselves as the genuine remnant of Israel, upholding the true covenant with God, and attributed their interpretation of the Torah to their early leader, the Teacher of Righteousness, possibly a legitimate high priest. Embracing a conservative approach to Jewish law, they observed a strict hierarchy favoring priests (the Sons of Zadok) over laypeople, emphasized ritual purity, and held a dualistic worldview.",
"title": ""
},
{
"paragraph_id": 2,
"text": "According to Jewish writers Josephus and Philo, the Essenes numbered around four thousand, and resided in various settlements throughout Judaea. Conversely, Roman writer Pliny the Elder positioned them somewhere above Ein Gedi, on the west side of the Dead Sea. Pliny relates in a few lines that the Essenes possess no money, had existed for thousands of generations, and that their priestly class (\"contemplatives\") did not marry. Josephus gave a detailed account of the Essenes in The Jewish War (c. 75 CE), with a shorter description in Antiquities of the Jews (c. 94 CE) and The Life of Flavius Josephus (c. 97 CE). Claiming firsthand knowledge, he lists the Essenoi as one of the three sects of Jewish philosophy alongside the Pharisees and Sadducees. He relates the same information concerning piety, celibacy; the absence of personal property and of money; the belief in communality; and commitment to a strict observance of Sabbath. He further adds that the Essenes ritually immersed in water every morning (a practice similar to the use of the mikveh for daily immersion found among some contemporary Hasidim), ate together after prayer, devoted themselves to charity and benevolence, forbade the expression of anger, studied the books of the elders, preserved secrets, and were very mindful of the names of the angels kept in their sacred writings.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Essenes have gained fame in modern times as a result of the discovery of an extensive group of religious documents known as the Dead Sea Scrolls, which are commonly believed to be the Essenes' library. The scrolls were found at Qumran, an archaeological site situated along the northwestern shore of the Dead Sea, believed to have been the dwelling place of an Essene community. These documents preserve multiple copies of parts of the Hebrew Bible along with deuterocanonical and sectarian manuscripts, including writings such as the Community Rule, the Damascus Document, and the War Scroll, which provide valuable insights into the communal life, ideology and theology of the Essenes.",
"title": ""
},
{
"paragraph_id": 4,
"text": "According to the conventional view, the Essenes disappeared after the First Jewish–Roman War, which also witnessed the destruction of the settlement at Qumran. Scholars have noted the absence of direct sources supporting this claim, raising the possibility of their endurance or the survival of related groups in the following centuries. Some researchers suggest that Essene teachings could have influenced other religious traditions, such as Early Christianity and Mandaeism.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Josephus uses the name Essenes in his two main accounts, The Jewish War 2.119, 158, 160 and Antiquities of the Jews, 13.171–2, but some manuscripts read here Essaion (\"holding the Essenes in honour\"; \"a certain Essene named Manaemus\"; \"to hold all Essenes in honor\"; \"the Essenes\").",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "In several places, however, Josephus has Essaios, which is usually assumed to mean Essene (\"Judas of the Essaios race\"; \"Simon of the Essaios race\"; \"John the Essaios\"; \"those who are called by us Essaioi\"; \"Simon a man of the Essaios race\"). Josephus identified the Essenes as one of the three major Jewish sects of that period.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "Philo's usage is Essaioi, although he admits this Greek form of the original name, that according to his etymology signifies \"holiness\", to be inexact. Pliny's Latin text has Esseni.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "Gabriele Boccaccini implies that a convincing etymology for the name Essene has not been found, but that the term applies to a larger group within Judea that also included the Qumran community.",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "It was proposed before the Dead Sea Scrolls were discovered that the name came into several Greek spellings from a Hebrew self-designation later found in some Dead Sea Scrolls, ʻosey haTorah, \"'doers' or 'makers' of Torah\". Although dozens of etymology suggestions have been published, this is the only etymology published before 1947 that was confirmed by Qumran text self-designation references, and it is gaining acceptance among scholars. It is recognized as the etymology of the form Ossaioi (and note that Philo also offered an O spelling) and Essaioi and Esseni spelling variations have been discussed by VanderKam, Goranson, and others. In medieval Hebrew (e.g. Sefer Yosippon) Hassidim \"the Pious\" replaces \"Essenes\". While this Hebrew name is not the etymology of Essaioi/Esseni, the Aramaic equivalent Hesi'im known from Eastern Aramaic texts has been suggested. Others suggest that Essene is a transliteration of the Hebrew word ḥiṣonim (ḥiṣon \"outside\"), which the Mishnah (e.g. Megillah 4:8) uses to describe various sectarian groups. Another theory is that the name was borrowed from a cult of devotees to Artemis in Anatolia, whose demeanor and dress somewhat resembled those of the group in Judea.",
"title": "Etymology"
},
{
"paragraph_id": 10,
"text": "Flavius Josephus in Chapter 8 of \"The Jewish War\" states:",
"title": "Etymology"
},
{
"paragraph_id": 11,
"text": "2.(119)For there are three philosophical sects among the Jews. The followers of the first of which are the Pharisees; of the second, the Sadducees; and the third sect, which pretends to a severer discipline, are called Essenes. These last are Jews by birth, and seem to have a greater affection for each other than other sects have.",
"title": "Etymology"
},
{
"paragraph_id": 12,
"text": "According to Josephus, the Essenes had settled \"not in one city\" but \"in large numbers in every town\". Philo speaks of \"more than four thousand\" Essaioi living in \"Palestine and Syria\", more precisely, \"in many cities of Judaea and in many villages and grouped in great societies of many members\".",
"title": "Location"
},
{
"paragraph_id": 13,
"text": "Pliny locates them \"on the west side of the Dead Sea, away from the coast... [above] the town of Engeda\".",
"title": "Location"
},
{
"paragraph_id": 14,
"text": "Some modern scholars and archeologists have argued that Essenes inhabited the settlement at Qumran, a plateau in the Judean Desert along the Dead Sea, citing Pliny the Elder in support and giving credence that the Dead Sea Scrolls are the product of the Essenes. This theory, though not yet conclusively proven, has come to dominate the scholarly discussion and public perception of the Essenes.",
"title": "Location"
},
{
"paragraph_id": 15,
"text": "The accounts by Josephus and Philo show that the Essenes led a strictly communal life—often compared to later Christian monasticism. Many of the Essene groups appear to have been celibate, but Josephus speaks also of another \"order of Essenes\" that observed the practice of being engaged for three years and then becoming married. According to Josephus, they had customs and observances such as collective ownership, electing a leader to attend to the interests of the group, and obedience to the orders from their leader. Also, they were forbidden from swearing oaths and from sacrificing animals. They controlled their tempers and served as channels of peace, carrying weapons only for protection against robbers. The Essenes chose not to possess slaves but served each other and, as a result of communal ownership, did not engage in trading. Josephus and Philo provide lengthy accounts of their communal meetings, meals, and religious celebrations. This communal living has led some scholars to view the Essenes as a group practicing social and material egalitarianism.",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 16,
"text": "Despite their prohibition on swearing oaths, after a three-year probationary period, new members would take an oath that included a commitment to practice piety to God and righteousness toward humanity; maintain a pure lifestyle; abstain from criminal and immoral activities; transmit their rules uncorrupted; and preserve the books of the Essenes and the names of the angels. Their theology included belief in the immortality of the soul and that they would receive their souls back after death. Part of their activities included purification by water rituals which was supported by rainwater catchment and storage. According to the Community Rule, repentance was a prerequisite to water purification.",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 17,
"text": "Ritual purification was a common practice among the peoples of Judea during this period and was thus not specific to the Essenes. A ritual bath or mikveh was found near many synagogues of the period continuing into modern times. Purity and cleanliness was considered so important to the Essenes that they would refrain from defecation on the Sabbath.",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 18,
"text": "According to Joseph Lightfoot, the Church Father Epiphanius (writing in the 4th century CE) seems to make a distinction between two main groups within the Essenes: \"Of those that came before his [Elxai, an Ossaean prophet] time and during it, the Ossaeans and the Nasaraeans.\"Part 18 Epiphanius describes each group as following:",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 19,
"text": "The Nasaraean—they were Jews by nationality—originally from Gileaditis, Bashanitis and the Transjordan... They acknowledged Moses and believed that he had received laws—not this law, however, but some other. And so, they were Jews who kept all the Jewish observances, but they would not offer sacrifice or eat meat. They considered it unlawful to eat meat or make sacrifices with it. They claim that these Books are fictions, and that none of these customs were instituted by the fathers. This was the difference between the Nasaraean and the others...",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 20,
"text": "After this Nasaraean sect in turn comes another closely connected with them, called the Ossaeans. These are Jews like the former... originally came from Nabataea, Ituraea, Moabitis, and Arielis, the lands beyond the basin of what sacred scripture called the Salt Sea... Though it is different from the other six of these seven sects, it causes schism only by forbidding the books of Moses like the Nasaraean.",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 21,
"text": "We do not know much about the canon of the Essenes, and what their attitude was towards the apocryphal writings, however the Essenes perhaps did not esteem the book of Esther highly as manuscripts of Esther are completely absent in Qumran, likely because of their opposition to mixed marriages and the use of different calendars.",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 22,
"text": "The Essenes were unique for their time for being against the practice of slave-ownership, and slavery, which they regarded as unjust and ungodly, regarding all men as having been born equal.",
"title": "Rules, customs, theology, and beliefs"
},
{
"paragraph_id": 23,
"text": "Josephus and Philo discuss the Essenes in detail. Most scholars believe that the community at Qumran that most likely produced the Dead Sea Scrolls was an offshoot of the Essenes. However, this theory has been disputed by some; for example, Norman Golb argues that the primary research on the Qumran documents and ruins (by Father Roland de Vaux, from the École Biblique et Archéologique de Jérusalem) lacked scientific method, and drew wrong conclusions that comfortably entered the academic canon. For Golb, the number of documents is too extensive and includes many different writing styles and calligraphies; the ruins seem to have been a fortress, used as a military base for a very long period of time—including the 1st century—so they therefore could not have been inhabited by the Essenes; and the large graveyard excavated in 1870, just 50 metres (160 ft) east of the Qumran ruins, was made of over 1200 tombs that included many women and children; Pliny clearly wrote that the Essenes who lived near the Dead Sea \"had not one woman, had renounced all pleasure... and no one was born in their race\". Golb's book presents observations about de Vaux's premature conclusions and their uncontroverted acceptance by the general academic community. He states that the documents probably stemmed from various libraries in Jerusalem, kept safe in the desert from the Roman invasions. Other scholars refute these arguments—particularly since Josephus describes some Essenes as allowing marriage.",
"title": "Scholarly discussion"
},
{
"paragraph_id": 24,
"text": "Another issue is the relationship between the Essaioi and Philo's Therapeutae and Therapeutrides. He regarded the Therapeutae as a contemplative branch of the Essaioi who, he said, pursued an active life.",
"title": "Scholarly discussion"
},
{
"paragraph_id": 25,
"text": "One theory on the formation of the Essenes suggests that the movement was founded by a Jewish high priest, dubbed by the Essenes the Teacher of Righteousness, whose office had been usurped by Jonathan (of priestly but not of Zadokite lineage), labeled the \"man of lies\" or \"false priest\". Others follow this line and a few argue that the Teacher of Righteousness was not only the leader of the Essenes at Qumran, but was also identical to the original Messianic figure about 150 years before the time of the Gospels. Fred Gladstone Bratton notes that",
"title": "Scholarly discussion"
},
{
"paragraph_id": 26,
"text": "The Teacher of Righteousness of the Scrolls would seem to be a prototype of Jesus, for both spoke of the New Covenant; they preached a similar gospel; each was regarded as a Savior or Redeemer; and each was condemned and put to death by reactionary factions... We do not know whether Jesus was an Essene, but some scholars feel that he was at least influenced by them.",
"title": "Scholarly discussion"
},
{
"paragraph_id": 27,
"text": "Lawrence Schiffman has argued that the Qumran community may be called Sadducean, and not Essene, since their legal positions retain a link with Sadducean tradition.",
"title": "Scholarly discussion"
},
{
"paragraph_id": 28,
"text": "Rituals of the Essenes and Christianity have much in common; the Dead Sea Scrolls describe a meal of bread and wine that will be instituted by the messiah, both the Essenes and Christians were eschatological communities, where judgement on the world would come at any time. The New Testament also possibly quotes writings used by the Qumran community. Luke 1:31-35 states \" And now you will conceive in your womb and bear a son and you will name him Jesus. He will be great and will be called the son of the Most High...the son of God\" which seems to echo 4Q 246, stating: \"He will be called great and he will be called Son of God, and they will call him Son of the Most High...He will judge the earth in righteousness...and every nation will bow down to him\".",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 29,
"text": "Other similarities include high devotion to the faith even to the point of martyrdom, communal prayer, self denial and a belief in a captivity in a sinful world.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 30,
"text": "John the Baptist has also been argued to have been an Essene, as there are numerous parallels between John's mission and the Essenes, which suggests he perhaps was trained by the Essene community.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 31,
"text": "In the early church a book called the Odes of Solomon was written. The writer was likely a very early convert from the Essene community into Christianity. The book reflects a mixture of mystical ideas of the Essene community with Christian concepts.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 32,
"text": "Both the Essenes and Christians practiced voluntary celibacy and prohibited divorce. Both also used concepts of \"light\" and \"darkness\" for good and evil.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 33,
"text": "A few have argued that the Essenes had an idea of a pierced Messiah based on 4Q285; however, the interpretation of the text is ambiguous. Some scholars interpreted it as the Messiah being killed himself, while modern scholars mostly interpret it as the Messiah executing the enemies of Israel in an eschatological war.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 34,
"text": "Both the Essenes and Christians practiced a ritual of immersion by water, however the Essenes had it as a regular practice instead of a one time event.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 35,
"text": "The Haran Gawaita uses the name Nasoraeans for the Mandaeans arriving from Jerusalem meaning guardians or possessors of secret rites and knowledge. Scholars such as Kurt Rudolph, Rudolf Macúch, Mark Lidzbarski and Ethel S. Drower connect the Mandaeans with the Nasaraeans described by Epiphanius, a group within the Essenes according to Joseph Lightfoot. Epiphanius says (29:6) that they existed before Jesus. That is questioned by some, but others accept the pre-Christian origin of the Nasaraeans.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 36,
"text": "Early religious concepts and terminologies recur in the Dead Sea Scrolls, and Yardena (Jordan) has been the name of every baptismal water in Mandaeism. One of the names for the Mandaean God Hayyi Rabbi, Mara d-Rabuta (Lord of Greatness) is found in the Genesis Apocryphon II, 4. Another early self-appellation is bhiri zidqa meaning 'elect of righteousness' or 'the chosen righteous', a term found in the Book of Enoch and Genesis Apocryphon II, 4. As Nasoraeans, Mandaeans believe that they constitute the true congregation of bnai nhura meaning 'Sons of Light', a term used by the Essenes. Mandaean scripture affirms that the Mandaeans descend directly from John the Baptist's original Nasoraean Mandaean disciples in Jerusalem. Similar to the Essenes, it is forbidden for a Mandaean to reveal the names of the angels to a gentile. Essene graves are oriented north–south and a Mandaean's grave must also be in the north–south direction so that if the dead Mandaean were stood upright, they would face north. Mandaeans have an oral tradition that some were originally vegetarian and also similar to the Essenes, they are pacifists.",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 37,
"text": "The beit manda (beth manda) is described as biniana rab ḏ-srara (\"the Great building of Truth\") and bit tušlima (\"house of Perfection\") in Mandaean texts such as the Qolasta, Ginza Rabba, and the Mandaean Book of John. The only known literary parallels are in Essene texts from Qumran such as the Community Rule, which has similar phrases such as the \"house of Perfection and Truth in Israel\" (Community Rule 1QS VIII 9) and \"house of Truth in Israel.\"",
"title": "Connection to other religious traditions"
},
{
"paragraph_id": 38,
"text": "The Magharians or Magarites (Arabic: Al-Maghariyyah, 'people of the caves') were, according to Jacob Qirqisani, a Jewish sect founded in the 1st century BCE. Abraham Harkavy and others identify the Magharians with the Essenes, and their author referred to as the \"Alexandrinian\" with Philo (whose affinity for the Essenes is well-known), based on the following evidence:",
"title": "Connection to other religious traditions"
}
]
| The Essenes were a mystic Jewish sect during the Second Temple period that flourished from the 2nd century BCE to the 1st century CE. The Essene movement likely originated as a distinct group among Jews during Jonathan Apphus' time, driven by disputes over Jewish law and the belief that Jonathan's high priesthood was illegitimate. Most scholars think the Essenes seceded from the Zadokite priests. They saw themselves as the genuine remnant of Israel, upholding the true covenant with God, and attributed their interpretation of the Torah to their early leader, the Teacher of Righteousness, possibly a legitimate high priest. Embracing a conservative approach to Jewish law, they observed a strict hierarchy favoring priests over laypeople, emphasized ritual purity, and held a dualistic worldview. According to Jewish writers Josephus and Philo, the Essenes numbered around four thousand, and resided in various settlements throughout Judaea. Conversely, Roman writer Pliny the Elder positioned them somewhere above Ein Gedi, on the west side of the Dead Sea. Pliny relates in a few lines that the Essenes possess no money, had existed for thousands of generations, and that their priestly class ("contemplatives") did not marry. Josephus gave a detailed account of the Essenes in The Jewish War, with a shorter description in Antiquities of the Jews and The Life of Flavius Josephus. Claiming firsthand knowledge, he lists the Essenoi as one of the three sects of Jewish philosophy alongside the Pharisees and Sadducees. He relates the same information concerning piety, celibacy; the absence of personal property and of money; the belief in communality; and commitment to a strict observance of Sabbath. He further adds that the Essenes ritually immersed in water every morning, ate together after prayer, devoted themselves to charity and benevolence, forbade the expression of anger, studied the books of the elders, preserved secrets, and were very mindful of the names of the angels kept in their sacred writings. The Essenes have gained fame in modern times as a result of the discovery of an extensive group of religious documents known as the Dead Sea Scrolls, which are commonly believed to be the Essenes' library. The scrolls were found at Qumran, an archaeological site situated along the northwestern shore of the Dead Sea, believed to have been the dwelling place of an Essene community. These documents preserve multiple copies of parts of the Hebrew Bible along with deuterocanonical and sectarian manuscripts, including writings such as the Community Rule, the Damascus Document, and the War Scroll, which provide valuable insights into the communal life, ideology and theology of the Essenes. According to the conventional view, the Essenes disappeared after the First Jewish–Roman War, which also witnessed the destruction of the settlement at Qumran. Scholars have noted the absence of direct sources supporting this claim, raising the possibility of their endurance or the survival of related groups in the following centuries. Some researchers suggest that Essene teachings could have influenced other religious traditions, such as Early Christianity and Mandaeism. | 2001-10-25T03:04:27Z | 2023-12-27T22:55:52Z | [
"Template:Blockquote",
"Template:Citation needed",
"Template:See also",
"Template:Rp",
"Template:Reflist",
"Template:Cite news",
"Template:Cite thesis",
"Template:Unreliable source?",
"Template:Sfn",
"Template:R",
"Template:Cite book",
"Template:Cite journal",
"Template:Short description",
"Template:C.",
"Template:NIE Poster",
"Template:Dead Sea Scrolls",
"Template:Jews and Judaism sidebar",
"Template:Script/Hebrew",
"Template:Page needed",
"Template:Citation",
"Template:Cite magazine",
"Template:Div col",
"Template:Authority control",
"Template:Redirect",
"Template:Infobox political party",
"Template:IPAc-en",
"Template:Convert",
"Template:Cite web",
"Template:Use dmy dates",
"Template:RP",
"Template:Cite encyclopedia",
"Template:Div col end",
"Template:Wikiquote",
"Template:Lang-ar"
]
| https://en.wikipedia.org/wiki/Essenes |
9,979 | Eyes Wide Shut | Eyes Wide Shut is a 1999 erotic mystery psychological drama film directed, produced, and co-written by Stanley Kubrick. It is based on the 1926 novella Traumnovelle (Dream Story) by Arthur Schnitzler, transferring the story's setting from early twentieth-century Vienna to 1990s New York City. The plot centers on a physician (Tom Cruise) who is shocked when his wife (Nicole Kidman) reveals that she had contemplated having an affair a year earlier. He then embarks on a night-long adventure, during which he infiltrates a masked orgy of an unnamed secret society.
Kubrick obtained the filming rights for Dream Story in the 1960s, considering it a perfect text for a film adaptation about sexual relations. He revived the project in the 1990s when he hired writer Frederic Raphael to help him with the adaptation. The film, which was mostly shot in England, apart from some exterior establishing shots, includes a detailed recreation of exterior Greenwich Village street scenes made at Pinewood Studios. The film's production, at 400 days, holds the Guinness World Record for the longest continuous film shoot.
Kubrick died of a heart attack six days after showing the final cut of Eyes Wide Shut to Warner Bros., making it the final film he directed. He reportedly considered it his "greatest contribution to the art of cinema". In order to ensure a theatrical R rating in the United States, Warner Bros. digitally altered several sexually explicit scenes during post-production. This version was premiered on July 13, 1999, before being released on July 16, to generally positive reviews from critics. Box office receipts for the film worldwide were about $162 million, making it Kubrick's highest-grossing film. The uncut version has since been released in DVD, HD DVD and Blu-ray Disc formats. Eyes Wide Shut has been included in several lists of the greatest films of the 1990s.
Dr. William "Bill" Harford and his wife Alice live in New York City with their daughter Helena. At a Christmas party hosted by patient Victor Ziegler, Bill reunites with old medical school classmate Nick Nightingale, who now plays piano professionally. An older Hungarian guest attempts to seduce Alice, while two young models try to seduce Bill. Host Victor interrupts with news of an overdose by Mandy, a young woman Victor was having sex with. Bill aids in Mandy's recovery.
The next night, while smoking marijuana, Alice and Bill discuss their unfulfilled temptations. Bill is not jealous of other men's attraction to Alice, believing women to be naturally faithful. Alice admits to fantasizing about a naval officer she met on vacation and considered leaving Bill and Helena. Bill is disturbed before being called to a patient's house. The patient's daughter, Marion, tries to seduce Bill, but he resists.
After leaving Marion's, Bill meets a prostitute named Domino. When Alice calls, he pays Domino for a non-sexual encounter and meets Nick at a jazz club. Nick describes a masked orgy in a mansion outside New York City at which he will play piano blindfolded, and gives Bill the password to enter the party. Bill goes to a costume store formerly owned by a patient of his in order to rent an outfit that will let him fit in at the masked orgy. Finding the store now owned by a man named Milich, he offers money to rent a costume; in the shop, he and Milich find Milich's young daughter with two men.
Bill goes to the mansion and gives the password, discovering a sexual ritual in progress. A masked woman warns him he is in danger. He is brought before the master of ceremonies who demands to know a second password for the house, revealing that the password Bill has is only to enter the grounds. Bill removes his mask at the demand of the master of ceremonies, but the woman who warned him intervenes. She insists on redeeming him, at a personal cost. Bill is let off with a warning to keep quiet.
Bill comes home feeling guilty and confused, only to find Alice laughing in her sleep. He wakes her up and she tearfully tells him about a dream where she was having sex with the naval officer and many other men, and laughing at the idea of Bill watching. The next day, Bill goes to Nick's hotel, but the desk clerk tells him that Nick left with two dangerous-looking men. Bill returns the costume, but realizes he has misplaced the mask, and learns that Milich has sold his teenage daughter into sex slavery. Milich implies that Bill can pay to have sex with his daughter if he likes.
In the afternoon, consumed by thoughts of his wife's infidelity, Bill leaves work early and returns to the site of the orgy. At the front gate, he is handed an envelope with a warning to stay away. That evening, Bill tries to call Marion, but hangs up when her fiancé answers. He decides to go to Domino's apartment to consummate their affair, but is met by her roommate, Sally. Although there is sexual tension between them, Sally informs Bill that Domino has just received news that she is HIV-positive. Bill leaves.
After leaving the apartment, Bill is followed by a mysterious figure. He discovers that an ex-beauty queen has died from an overdose and identifies her as Mandy at the morgue. Later, Ziegler summons him and admits to being a guest at the orgy. Ziegler reveals that there was no second password at all, and failing to know this is what outed Bill as an outsider. Ziegler assures Bill that the secret society only aims to intimidate him into silence but implies that they are capable of taking action if necessary. Bill is concerned about Nick's disappearance and Mandy's death, whom he correctly identifies as the masked orgy participant who sacrificed herself for him. Ziegler claims Nick is safe and that Mandy died from an accidental overdose due to drug addiction.
Bill returns home to find the rented mask on his pillow and confides in his wife, Alice, about the past two days. The next day they go Christmas shopping with their daughter and Bill apologizes to Alice. She suggests they do something "as soon as possible," to which Bill asks what she means and Alice simply responds with one word, "Fuck."
Eyes Wide Shut developed after Stanley Kubrick read Arthur Schnitzler's Dream Story in 1968, when Kubrick was looking for a project to follow 2001: A Space Odyssey. Kubrick was interested in adapting the story, and with the help of journalist Jay Cocks, bought the filming rights to the novella. For the following decade, Kubrick considered making the Dream Story adaptation a sex comedy "with a wild and somber streak running through it", starring Steve Martin or Woody Allen in the main role. Kubrick also considered Tom Hanks, Bill Murray, Dustin Hoffman, Warren Beatty, Albert Brooks, Alan Alda and Sam Shepard for the lead in the 1980s. The project was revived in 1994 when Kubrick hired Frederic Raphael to work on the script, updating the setting from early 20th century Vienna to late 20th century New York City. Kubrick invited his friend Michael Herr, who helped write Full Metal Jacket, to make revisions, but Herr declined for fear he would be underpaid and have to commit to a long production.
Arthur Schnitzler's 1926 novella Dream Story is set around Vienna after the turn of the century. The main characters are a couple named Fridolin and Albertina. The couple's home is a typical suburban middle-class home. Like the protagonist of the novel, Schnitzler was Jewish, lived in Vienna, and was a doctor, although he left medicine to write.
Kubrick frequently removed references to the Jewishness of characters in the novels he adapted. In Eyes Wide Shut, Frederic Raphael, who is Jewish, wanted to keep the Jewish background of the protagonists, but Kubrick disagreed and removed details that would identify characters as Jewish. Kubrick determined Bill should be a "Harrison Ford-ish goy" and created the surname of Harford as an allusion to the actor. In the film, Bill is taunted with homophobic slurs. In the novella, the taunters are members of an anti-Semitic college fraternity. In an introduction to a Penguin Classics edition of Dream Story, Raphael wrote that "Fridolin is not declared to be a Jew, but his feelings of cowardice, for failing to challenge his aggressor, echo the uneasiness of Austrian Jews in the face of Gentile provocation."
The novella is set during the Carnival, when people often wear masks to parties. The party that both husband and wife attend at the opening of the story is a Carnival Masquerade ball, whereas the film's story begins at Christmas time.
In the novella, the party (which is sparsely attended) uses "Denmark" as the password for entrance; that is significant in that Albertina had her infatuation with her soldier in Denmark; the film's password is "Fidelio". In early drafts of the screenplay, the password was "Fidelio Rainbow". Jonathan Rosenbaum noted that both passwords echo elements of one member of the couple's behaviour, though in opposite ways. The party in the novella consists mostly of nude ballroom dancing.
In the novella, the woman who "redeems" Fridolin at the party, saving him from punishment, is costumed as a nun, and most of the characters at the party are dressed as nuns or monks; Fridolin himself used a monk costume. This aspect was retained in the film's original screenplay, but was deleted in the filmed version.
The novella makes it clear that Fridolin at this point hates Albertina more than ever, thinking they are now lying together "like mortal enemies". It has been argued that the dramatic climax of the novella is actually Albertina's dream, and the film has shifted the focus to Bill's visit to the secret society's orgy, whose content is more shocking in the film.
The adaptation created a character with no counterpart in the novella: Ziegler, who represents both the high wealth and prestige to which Bill Harford aspires, and a connection between Bill's two worlds (his regular life, and the secret society organizing the ball). Critic Randy Rasmussen interprets Ziegler as representing Bill's worst self, much as in other Kubrick films; the title character in Dr. Strangelove represents the worst of the American national security establishment, Charles Grady represents the worst of Jack Torrance in The Shining, and Clare Quilty represents the worst of Humbert Humbert in Lolita.
More significantly, in the film, Ziegler gives a commentary on the whole story to Bill, including an explanation that the party incident, where Bill is apprehended, threatened, and ultimately redeemed by the woman's sacrifice, was staged. Whether this is to be believed or not, it is an exposition of Ziegler's view of the ways of the world as a member of the power elite.
When Warner Bros. president Terry Semel approved production, he asked Kubrick to cast a movie star, as "you haven't done that since Jack Nicholson [in The Shining]". Kubrick considered casting Alec Baldwin and Kim Basinger as Bill and Alice Harford. Cruise was in England because his wife Nicole Kidman was there filming The Portrait of a Lady (1996), and the pair eventually decided to visit Kubrick's estate. After that meeting, the director awarded them the roles. Kubrick also secured both actors' agreement not to commit to other projects until Eyes Wide Shut was completed. Jennifer Jason Leigh and Harvey Keitel were each cast in supporting roles and filmed by Kubrick. Reportedly due to scheduling conflicts, both had to drop out – first Keitel with Finding Graceland, then Leigh with eXistenZ – and they were replaced by Sydney Pollack and Marie Richardson in the final cut. By another account, however, Keitel quit after doing 68 takes for a scene of his character walking through a door. Kubrick offered Eva Herzigová a role in the film, but she declined.
In 2019, it was revealed that Cate Blanchett had provided the voice of the mysterious masked woman at the orgy party. Actress Abigail Good could not do a convincing American accent, and Cruise and Kidman ended up suggesting Blanchett for the dubbing, which occurred after Kubrick's death.
Principal photography began in November 1996. Kubrick's perfectionism led to script pages being rewritten on the set, with most scenes requiring numerous takes. The shoot went on for much longer than expected; the actress Vinessa Shaw was initially contracted for two weeks but ended up working for two months, while the actor Alan Cumming, who appears in one scene as a hotel clerk, auditioned six times before filming began. Due to the relentless nature of the production, the crew became exhausted and morale was reportedly low. Filming finally wrapped in June 1998. Guinness World Records recognized Eyes Wide Shut as the longest continuous film shoot, which ran "...for over 15 months, a period that included an unbroken shoot of 46 weeks".
Given Kubrick's fear of flying, the entire film was shot in England. Sound-stage work was completed at London's Pinewood Studios, including a detailed recreation of Greenwich Village. Kubrick's perfectionism went as far as sending workmen to Manhattan to measure street widths and note the locations of newspaper vending machines. Real New York footage was also shot to be rear-projected behind Cruise. Production was accompanied by a strict campaign of secrecy, aided by Kubrick's habit of working with a small crew on set. Outdoor locations included Hatton Garden for a Greenwich Village street, Hamleys for the toy store in the film's ending, and Mentmore Towers and Elveden Hall in Elveden, Suffolk, for the mansion. Larry Smith, who had previously served as a gaffer on both Barry Lyndon and The Shining, was chosen by Kubrick to be the film's cinematographer. Wherever possible, Smith made use of available light sources visible in the shots, such as lamps and Christmas tree lights; when this was insufficient, he used Chinese paper ball lamps or other film lighting to softly brighten the scene. The color was enhanced by push processing the film (emulsion), which brought out the intensity of the colors and emphasized highlights. This effect is evident in the Christmas party scene at Ziegler's house, with Smith noting that the push processing "made the lights appear to be much brighter than they were" and created a "wonderful warm glow."
Kubrick's perfectionism led him to oversee every visual element that would appear in a given frame, from props and furniture to the color of walls and other objects. One such element was the masks used in the orgy, which were inspired by the masked carnival balls attended by the protagonists in the novella. Costume designer Marit Allen explained that Kubrick felt they fit the scene as part of its imaginary world and ended up "creat[ing] the impression of menace, but without exaggeration". Venetian carnival masks were sent to London in great numbers, and Kubrick chose who would wear each piece. Paintings by Kubrick's wife Christiane and his daughter Katharina are featured as decorations.
Nicole Kidman revealed that her explicit scenes with the naval officer, played by Gary Goba, were filmed over three days and that Kubrick wanted them to be 'almost pornographic'.
After shooting had been completed, Kubrick entered a prolonged post-production process and on March 1, 1999, Kubrick showed a cut to Cruise, Kidman and the Warner Bros. executives. The director died six days later.
Jocelyn Pook wrote the original music for Eyes Wide Shut, but, as with other Kubrick films, it was also noted for its use of classical music. The opening title music is Shostakovich's Waltz No. 2 from the "Suite for Variety Stage Orchestra", misidentified as "Jazz Suite No. 2". One recurring piece is the second movement of György Ligeti's piano cycle "Musica ricercata". Kubrick originally intended to feature "Im Treibhaus" from Wagner's Wesendonck Lieder, but eventually replaced it with Ligeti's piece, feeling Wagner's song was "too beautiful". In the morgue scene, Franz Liszt's late solo piano piece "Nuages Gris" ("Grey Clouds") (1881) is heard. "Rex tremendae" from Mozart's Requiem plays as Bill walks into the café and reads of Mandy's death.
Pook was hired after choreographer Yolande Snaith rehearsed the masked ball orgy scene using Pook's composition "Backwards Priests" – which features a Romanian Orthodox Divine Liturgy recorded in a church in Baia Mare, played backwards – as a reference track. Kubrick then called the composer and asked if she had anything else "weird" like that song, which was reworked for the final cut of the scene under the title "Masked Ball". Pook ended up composing and recording four pieces of music, often based on her previous work, totaling 24 minutes. Her score relied mostly on string instruments – including a viola played by Pook herself – with no brass or woodwinds, as Pook "just couldn't justify these other textures", particularly because she wanted the tracks played over dialogue-heavy scenes to be "subliminal" and felt such instruments would be intrusive.
Another track in the orgy, "Migrations", features a Tamil song sung by Manickam Yogeswaran, a Carnatic singer. The original cut featured a scriptural recitation from the Bhagavad Gita, which Pook took from a previous Yogeswaran recording. After the South African Hindu Mahasabha, a Hindu group, protested against the use of the scripture, Warner Bros. issued a public apology and hired the singer to record a similar track to replace the chant.
The party at Ziegler's house features rearrangements of love songs such as "When I Fall in Love" and "It Had to Be You", used in increasingly ironic ways considering how Alice and Bill flirt with other people in the scene. As Kidman was nervous about doing nude scenes, Kubrick told her she could bring music to liven up the mood. When Kidman brought a Chris Isaak CD, Kubrick approved it and incorporated Isaak's song "Baby Did a Bad, Bad Thing" into both an early romantic embrace between Bill and Alice and the film's trailer.
The film was described by some reviewers, and partially marketed, as an erotic thriller, a categorization disputed by others. It is classified as such in the book The Erotic Thriller in Contemporary Cinema, by Linda Ruth Williams, and was described as such in news articles about Cruise and Kidman's lawsuit over assertions that they saw a sex therapist during filming. The positive review in Combustible Celluloid describes it as an erotic thriller upon first viewing, but actually a "complex story about marriage and sexuality". High-Def Digest also called it an erotic thriller.
However, reviewing the film at AboutFilm.com, Carlo Cavagna regards this as a misleading classification, as does Leo Goldsmith, writing at notcoming.com, and the review on Blu-ray.com. Writing in TV Guide, Maitland McDonagh writes "No one familiar with the cold precision of Kubrick's work will be surprised that this isn't the steamy erotic thriller a synopsis (or the ads) might suggest." Writing in general about the genre of 'erotic thriller' for CineAction in 2001, Douglas Keesey states that "whatever [Eyes Wide Shut's] actual type, [it] was at least marketed as an erotic thriller". Michael Koresky, writing in the 2006 issue of film journal Reverse Shot, writes "this director, who defies expectations at every turn and brings genre to his feet, was ... setting out to make neither the 'erotic thriller' that the press maintained nor an easily identifiable 'Kubrick film'". DVD Talk similarly dissociates the film from this genre.
In addition to relocating the story from Vienna in the 1900s to New York City in the 1990s, Kubrick changed the time-frame of Schnitzler's story from Mardi Gras to Christmas. Michael Koresky believed Kubrick did this because of the rejuvenating symbolism of Christmas. Mario Falsetto, on the other hand, notes that Christmas lights allow Kubrick to employ some of his distinct methods of shooting including using source location lighting, as he also did in Barry Lyndon. The New York Times notes that the film "gives an otherworldly radiance and personality to Christmas lights", and critic Randy Rasmussen notes that "colorful Christmas lights ... illuminate almost every location in the film." Harper's film critic, Lee Siegel, believes that the film's recurring motif is the Christmas tree, because it symbolizes the way that "Compared with the everyday reality of sex and emotion, our fantasies of gratification are ... pompous and solemn in the extreme ... For desire is like Christmas: it always promises more than it delivers." Author Tim Kreider notes that the "Satanic" mansion-party at Somerton is the only set in the film without a Christmas tree, stating that "Almost every set is suffused with the dreamlike, hazy glow of colored lights and tinsel." Furthermore, he argues that "Eyes Wide Shut, though it was released in summer, was the Christmas movie of 1999." Noting that Kubrick has shown viewers the dark side of Christmas consumerism, Louise Kaplan states that the film illustrates ways in which the "material reality of money" is shown replacing the spiritual values of Christmas, charity, and compassion. While virtually every scene has a Christmas tree, there is "no Christmas music or cheery Christmas spirit." Critic Alonso Duralde, in his book Have Yourself a Movie Little Christmas, categorized the film as a "Christmas movie for grownups", arguing that "Christmas weaves its way through the film from start to finish".
Historians, travel guide authors, novelists, and merchants of Venetian masks have noted that such masks have a long history of being worn during promiscuous activities. Authors Tim Kreider and Thomas Nelson have linked the film's use of them to Venice's reputation as a center of both eroticism and mercantilism. Nelson notes that the sex ritual combines elements of the Venetian Carnival and Catholic rites, in particular in the character of "Red Cloak", who simultaneously serves as Grand Inquisitor and King of Carnival. As such, Nelson argues that the sex ritual is a symbolic mirror of the darker truth behind the façade of Victor Ziegler's earlier Christmas party. Carolin Ruwe, in her book Symbols in Stanley Kubrick's Movie 'Eyes Wide Shut', argues that the mask is the prime symbol of the film. Its symbolic meaning is conveyed through its connection to the characters; as Tim Kreider points out, this can be seen in the masks in the prostitute's apartment and in her being renamed "Domino" in the film, after a type of Venetian mask. Unused early poster designs for the film by Kubrick's daughter Katharina used the motif of Venetian masks, but were rejected by the studio because they obscured the faces of the film's two stars.
Paintings and sculptures appear throughout the film, some historical and others by Kubrick's wife Christiane Kubrick and stepdaughter Katharina Kubrick Hobbs. The Harfords' home contains the majority of the works by Kubrick's family members, the exception being a painting of a nude reclining pregnant woman by Christiane Kubrick titled Paula On Red, which appears in Ziegler's bathroom during the overdose scene. At the beginning of the film, as Bill and Alice are saying goodbye to their daughter Helena and the babysitter, a painting by Christiane Kubrick titled "View from the Mentmore" can be seen hanging next to the Christmas tree. Mentmore Towers is an English country house in the south east of England that was used for filming the interior scenes of the Somerton house and the masked orgy.
During Ziegler's party, Bill is summoned to the bathroom to deal with an apparent overdose; as he climbs the spiral staircase, he passes Giulio Bergonzoli's sculpture Gli amori degli angeli (The Loves of the Angels), which stands at the foot of the staircase. The sculpture is said to be inspired by The Loves of the Angels, a poem by 19th-century poet Thomas Moore that describes three angels who fall in love with mortal women and share the password to heaven with them, resulting in their banishment. On its publication, the poem was controversial for its open eroticism.
When Bill enters a cafe towards the end of the film, two Pre-Raphaelite paintings can be seen hanging on parallel walls: Ophelia by John William Waterhouse and Astarte Syriaca by Dante Gabriel Rossetti. Waterhouse's Ophelia depicts the character of the same name in Shakespeare's tragedy Hamlet moments before her death. Astarte Syriaca depicts Astarte, the ancient Syrian goddess of love, with two symmetrical angels holding torches directly behind her. Both paintings mirror events within the film and, as Robert Wilkes writes, reflect its "mood of sensuality, ritualism, and exoticism". In the same cafe scene, a black-and-white print of a reclining woman is seen directly behind Bill as he sits down with a newspaper; in the following shot, the print is replaced with what Wilkes describes as a "more chaotic, nightmarish image" as Bill reads about the ex-beauty queen's apparent overdose.
Warner Bros. heavily promoted Eyes Wide Shut, while following Kubrick's secrecy campaign – to the point that the film's press kits contained no production notes, not even the director's suggestions to Semel regarding the marketing campaign, given one week prior to Kubrick's death. The first footage was shown to theater owners attending the 1999 ShoWest convention in Las Vegas. TV spots featured both Isaak and Ligeti's music from the soundtrack, while revealing little about the movie's plot. The film also appeared on the cover of Time magazine, and on show business programs such as Entertainment Tonight and Access Hollywood.
Eyes Wide Shut opened on July 16, 1999, in the United States. The film topped the weekend box office with $21.7 million from 2,411 screens. These numbers surpassed the studio's expectations of $20 million; the opening was Cruise's sixth consecutive chart-topper, Kubrick's biggest opening weekend, and the biggest for a film featuring Kidman and Cruise together. Eyes Wide Shut ended up grossing a total of $55,691,208 in the US. That total made it Kubrick's second-highest-grossing film in the country, behind 2001: A Space Odyssey, though both the opening and the final gross were considered a box office disappointment.
Shortly after its screening at the Venice Film Festival, Eyes Wide Shut had a British premiere on September 3, 1999, at the Warner Village cinema in Leicester Square. The film's wide opening occurred the following weekend and topped the U.K. charts with £1,189,672.
The film's international performance was more positive, with Kubrick's long-time assistant and brother-in-law Jan Harlan stating that "It was badly received in the Anglo-Saxon world, but it was very well received in the Latin world and Japan. In Italy, it was a huge hit." Overseas earnings of over $105 million led to a worldwide box office total of $162,091,208, making it the highest-grossing Kubrick film.
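The figures quoted above are internally consistent; as an illustrative check (not part of the cited reporting), subtracting the US gross from the worldwide total recovers the overseas share described as "over $105 million":

```python
# Illustrative consistency check on the box-office figures quoted above.
worldwide = 162_091_208    # worldwide gross stated in this section
domestic_us = 55_691_208   # US gross stated in this section
overseas = worldwide - domestic_us
print(f"overseas gross: ${overseas:,}")  # $106,400,000, i.e. "over $105 million"
```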
Eyes Wide Shut received generally positive reviews from critics. On Rotten Tomatoes, the film holds an approval rating of 76% based on 160 reviews, with an average rating of 7.5/10. The website's critical consensus reads, "Kubrick's intense study of the human psyche yields an impressive cinematic work." Metacritic gives the film a weighted average score of 68 out of 100 based on 34 reviews, indicating "generally favorable reviews". Over 50 critics listed the film among the best of 1999. French magazine Cahiers du Cinéma named it the best film of the year in its annual "top ten" list. However, audiences polled by CinemaScore gave the film an average grade of "D−" on an A+ to F scale.
In the Chicago Tribune, Michael Wilmington declared the film a masterpiece, lauding it as "provocatively conceived, gorgeously shot and masterfully executed ... Kubrick's brilliantly choreographed one-take scenes create a near-hypnotic atmosphere of commingled desire and dread." Nathan Rabin of The A.V. Club was also highly positive, arguing that "the film's primal, almost religious intensity and power is primarily derived from its multifaceted realization that disobeying the dictates of society and your conscience can be both terrifying and exhilarating. ... The film's depiction of sexual depravity and amorality could easily venture into the realm of camp in the hands of a lesser filmmaker, but Kubrick depicts primal evil in a way that somehow makes it seem both new and deeply terrifying."
Roger Ebert of the Chicago Sun-Times gave the film a score of three and a half stars out of four, writing, "Kubrick's great achievement in the film is to find and hold an odd, unsettling, sometimes erotic tone for the doctor's strange encounters." He praised the individual dream-like atmosphere of the separate scenes, and called the choice of Christmas-themed lighting "garish, like an urban sideshow".
Reviewer James Berardinelli stated that it was arguably one of Kubrick's best films. Along with considering Kidman "consistently excellent", he wrote that Kubrick "has something to say about the causes and effects of depersonalized sex", and praised the work as "thought-provoking and unsettling". Writing for The New York Times, reviewer Janet Maslin commented, "This is a dead-serious film about sexual yearnings, one that flirts with ridicule yet sustains its fundamental eeriness and gravity throughout. The dreamlike intensity of previous Kubrick visions is in full force here."
Some reviewers were unfavorable. One complaint was that the movie's pacing was too slow; while this may have been intended to convey a dream state, critics objected that it made actions and decisions seem laboured. Another complaint was that the film did not live up to its marketing as a "sexy film", thus defying audiences' expectations. Many critics, such as Manohla Dargis of LA Weekly, found the orgy scene to be "banal" and "surprisingly tame". While Kubrick's "pictorial talents" were described as "striking" by Rod Dreher of the New York Post, the pivotal scene was deemed by Stephen Hunter, writing for The Washington Post, the "dullest orgy [he'd] ever seen". Hunter elaborates on his criticism, stating that "Kubrick is annoyingly offhand while at the same time grindingly pedantic; plot points are made over and over again, things are explained till the dawn threatens to break in the east, and the movie stumbles along at a glacial pace". Owen Gleiberman of Entertainment Weekly complained about the inauthenticity of the New York setting, claiming that the soundstage used for the film's production didn't have "enough bustle" to capture the reality of New York. Paul Tatara of CNN described the film as a "slow-motion morality tale full of hot female bodies and thoroughly uneventful 'mystery'", while Andrew Sarris, writing for The New York Observer, criticised the film's "feeble attempts at melodramatic tension and suspense". David Edelstein of Slate dismissed it as "estranged from any period I recognize. Who are these people played by Cruise and Kidman, who act as if no one has ever made a pass at them and are so deeply traumatized by their newfound knowledge of sexual fantasies—the kind that mainstream culture absorbed at least half a century ago? Who are these aristocrats whose limos take them to secret masked orgies in Long Island mansions? Even dream plays need some grounding in the real world." J. Hoberman wrote that the film "feels like a rough draft at best."
Lee Siegel from Harper's felt that most critics responded mainly to the marketing campaign and did not address the film on its own terms. Others felt that American censorship took an esoteric film and made it even harder to understand. In his article "Grotesque Caricature", published in Postmodern Culture, Stefan Mattesich praises the film's nuanced caricatured elements, and states that the film's negation of conventional narrative elements is what resulted in its subsequent negative reception.
For the introduction to Michel Ciment's Kubrick: The Definitive Edition, Martin Scorsese wrote: "When Eyes Wide Shut came out a few months after Stanley Kubrick's death in 1999, it was severely misunderstood, which came as no surprise. If you go back and look at the contemporary reactions to any Kubrick picture (except the earliest ones), you'll see that all his films were initially misunderstood. Then, after five or ten years came the realization that 2001 or Barry Lyndon or The Shining was like nothing else before or since." In 2012, Slant Magazine ranked the film as the second greatest of the 1990s. The British Film Institute ranked the film at No. 19 on its list of "90 great films of the 1990s". The BBC listed it at number 61 in its list of the 100 greatest American films of all time.
Eyes Wide Shut was first released on VHS, LaserDisc, and DVD on March 7, 2000. The original DVD release corrects technical gaffes, including a reflected crew member, and alters a piece of Alice Harford's dialogue. Most home video releases remove the verse that was claimed to be cited from the sacred Hindu scripture Bhagavad Gita (although it was Pook's reworking of "Backwards Priests", as stated above). In the UK, Warner Home Video's 'rated 18' [no video altering] 1999 DVD release was in 4:3 full frame aspect ratio, with a note at the beginning that this was as Kubrick intended it to be shown [ratio as shot]. However, the film's length on this UK DVD is only 153 minutes, as opposed to the 159 minutes of other available DVD and Blu-ray versions. This is due to the transfer being done at 25 frames per second rather than the 24 at which the film was shot; no actual footage was cut.
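The six-minute discrepancy follows directly from the frame-rate difference; as a rough illustrative sketch (the frame count below is derived only from the runtimes quoted above), a 24 fps feature transferred at 25 fps plays back about 4% faster:

```python
# Rough check of the PAL speed-up described above: the same frames,
# transferred at 25 fps instead of the 24 fps at which they were shot,
# play back about 4% faster and therefore run shorter.
frames = 159 * 60 * 24              # frames in a 159-minute feature at 24 fps
pal_runtime_min = frames / 25 / 60  # the same frames played back at 25 fps
print(round(pal_runtime_min))       # 153, matching the UK DVD's quoted length
```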
On October 23, 2007, Warner Bros. released Eyes Wide Shut as a special edition DVD and in HD DVD and Blu-ray Disc formats. This was the first home video release to present the film in anamorphic 1.78:1 (16:9) format (the film was shown theatrically as soft matted 1.66:1 in Europe and 1.85:1 in the US and Japan). The previous DVD release used a 1.33:1 (4:3) aspect ratio. It is also the first American home video release to feature the uncut version. Although the earliest American DVD of the uncut version states on the cover that it includes both the R-rated and unrated editions, in actuality only the unrated edition is on the DVD.
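As a simplified illustration of how these ratios relate (assuming soft matting keeps the full frame width and crops only the height, which ignores anamorphic encoding details), the retained share of the open 1.33:1 frame can be computed directly:

```python
# Simplified soft-matting model: width is kept, height is cropped, so the
# retained share of the open 1.33:1 frame height is roughly 1.33 / target ratio.
OPEN_MATTE = 1.33
for target in (1.66, 1.78, 1.85):
    retained = OPEN_MATTE / target
    print(f"{target}:1 keeps about {retained:.0%} of the 1.33:1 frame height")
```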
Though Warner Bros. insisted that Kubrick had turned in his final cut before his death, the film was still in the final stages of post-production, which was therefore completed by the studio in collaboration with Kubrick's estate. Some have argued that the work that remained was minor and exclusively technical in nature, allowing the estate to faithfully complete the film based on the director's notes. However, decisions regarding sound mixing, scoring and color-correction would have necessarily been made without Kubrick's input. Furthermore, Kubrick had a history of continuing to edit his films up until the last minute, and in some cases even after initial public screenings, as had been the case with 2001: A Space Odyssey and The Shining.
Writing for Vanity Fair, Kubrick collaborator Michael Herr recalled a phone call from the director regarding the cut that would be screened for the Warner Bros. executives four days before his death:
... there was looping to be done and the music wasn't finished, lots of small technical fixes on color and sound; would I show work that wasn't finished? He had to show it to Tom and Nicole because they had to sign nudity releases, and to Terry Semel and Bob Daly of Warner Bros., but he hated it that he had to, and I could hear it in his voice that he did.
Garrett Brown, inventor of the Steadicam, has expressed that he considers Eyes Wide Shut to be an unfinished film:
I think Eyes Wide Shut was snatched up by the studio when Stanley died and they just grabbed the highest number Avid edit and ran off as if that was the movie. But it was three months before the movie was due to be released. I don't think there's a chance that was the movie he had in mind, or the music track and a lot of other things. It's a great shame because you know it's out there, but it doesn't feel to me as it's really his film.
Nicole Kidman, one of the stars of the film, briefly wrote from her perspective about the film's completion and its release coinciding with John F. Kennedy Jr.'s death:
There was a lot of interest in Eyes Wide Shut before it was released. But the weekend it came out, July 16, 1999, was the death of JFK Jr., his wife and her sister – a black, black weekend. And for Stanley to have died [on March 7, 1999, at age 70] before the film opened... Well, it all felt so dark and strange. Stanley had sent over the cut he considered done to us, Tom and I watched it in New York – and then he died.
Jan Harlan, Kubrick's brother-in-law and executive producer, reported that Kubrick was "very happy" with the film and considered it to be his "greatest contribution to the art of cinema".
R. Lee Ermey, an actor in Kubrick's film Full Metal Jacket, stated that Kubrick phoned him two weeks before his death to express his despondency over Eyes Wide Shut. "He told me it was a piece of shit", Ermey said in Radar magazine, "and that he was disgusted with it and that the critics were going to 'have him for lunch'. He said Cruise and Kidman had their way with him – exactly the words he used."
According to Todd Field, Kubrick's friend and an actor in Eyes Wide Shut, Ermey's claims do not accurately reflect Kubrick's essential attitude. Field's response appeared in an October 18, 2006, interview with Grouch Reviews:
The polite thing would be to say 'No comment'. But the truth is that ... let's put it this way, you've never seen two actors more completely subservient and prostrate themselves at the feet of a director. Stanley was absolutely thrilled with the film. He was still working on the film when he died. And he probably died because he finally relaxed. It was one of the happiest weekends of his life, right before he died, after he had shown the first cut to Terry, Tom and Nicole. He would have kept working on it, like he did on all of his films. But I know that from people around him personally, my partner who was his assistant for thirty years. And I thought about R. Lee Ermey for In the Bedroom. And I talked to Stanley a lot about that film, and all I can say is Stanley was adamant that I shouldn't work with him for all kinds of reasons that I won't get into because there is no reason to do that to anyone, even if they are saying slanderous things that I know are completely untrue.
Citing contractual obligations to deliver an R rating, Warner Bros. digitally altered the orgy scene for the film's American release, inserting additional figures to block graphic sexuality from view and thereby avoid an adults-only NC-17 rating that would have limited the film's financial viability. This alteration antagonized both film critics and cinephiles, who argued that Kubrick had never been shy about ratings (A Clockwork Orange was originally given an X-rating). The unrated version of Eyes Wide Shut was released in the United States on October 23, 2007, on DVD, HD DVD and Blu-ray formats.
Roger Ebert heavily criticized the technique of using digital images to mask the action. In his positive review of the film, he said it "should not have been done at all" and it is "symbolic of the moral hypocrisy of the rating system that it would force a great director to compromise his vision, while by the same process making his adult film more accessible to young viewers." Although Ebert has been frequently cited as calling the standard North American R-rated version the "Austin Powers" version of Eyes Wide Shut – referring to two scenes in Austin Powers: International Man of Mystery in which, through camera angles and coincidences, full frontal nudity is blocked from view in a comical way – his review stated that this joke referred to an early rough draft of the altered scene, never publicly released. | [
{
"paragraph_id": 0,
"text": "Eyes Wide Shut is a 1999 erotic mystery psychological drama film directed, produced, and co-written by Stanley Kubrick. It is based on the 1926 novella Traumnovelle (Dream Story) by Arthur Schnitzler, transferring the story's setting from early twentieth-century Vienna to 1990s New York City. The plot centers on a physician (Tom Cruise) who is shocked when his wife (Nicole Kidman) reveals that she had contemplated having an affair a year earlier. He then embarks on a night-long adventure, during which he infiltrates a masked orgy of an unnamed secret society.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Kubrick obtained the filming rights for Dream Story in the 1960s, considering it a perfect text for a film adaptation about sexual relations. He revived the project in the 1990s when he hired writer Frederic Raphael to help him with the adaptation. The film, which was mostly shot in England, apart from some exterior establishing shots, includes a detailed recreation of exterior Greenwich Village street scenes made at Pinewood Studios. The film's production, at 400 days, holds the Guinness World Record for the longest continuous film shoot.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Kubrick died of a heart attack six days after showing the final cut of Eyes Wide Shut to Warner Bros., making it the final film he directed. He reportedly considered it his \"greatest contribution to the art of cinema\". In order to ensure a theatrical R rating in the United States, Warner Bros. digitally altered several sexually explicit scenes during post-production. This version was premiered on July 13, 1999, before being released on July 16, to generally positive reviews from critics. Box office receipts for the film worldwide were about $162 million, making it Kubrick's highest-grossing film. The uncut version has since been released in DVD, HD DVD and Blu-ray Disc formats. Eyes Wide Shut has been included in several lists of the greatest films of the 1990s.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Dr. William \"Bill\" Harford and his wife Alice live in New York City with their daughter Helena. At a Christmas party hosted by patient Victor Ziegler, Bill reunites with old medical school classmate Nick Nightingale, who now plays piano professionally. An older Hungarian guest attempts to seduce Alice, while two young models try to seduce Bill. Host Victor interrupts with news of an overdose by Mandy, a young woman Victor was having sex with. Bill aids in Mandy's recovery.",
"title": "Plot"
},
{
"paragraph_id": 4,
"text": "The next night, while smoking marijuana, Alice and Bill discuss their unfulfilled temptations. Bill is not jealous of other men's attraction to Alice, believing women to be naturally faithful. Alice admits to fantasizing about a naval officer she met on vacation and considered leaving Bill and Helena. Bill is disturbed before being called to a patient's house. The patient's daughter, Marion, tries to seduce Bill, but he resists.",
"title": "Plot"
},
{
"paragraph_id": 5,
"text": "After leaving Marion's, Bill meets a prostitute named Domino. When Alice calls, he pays Domino for a non-sexual encounter and meets Nick at a jazz club. Nick describes a masked orgy in a mansion outside New York City at which he will play piano blindfolded, and gives Bill the password to enter the party. Bill goes to a costume store owned by a patient of his in order to buy an outfit to fit in at the masked orgy. Finding the costume store now owned by a man named Milich, he offers money to rent a costume from the shop, where he and Milich finds Milich's young daughter with two men.",
"title": "Plot"
},
{
"paragraph_id": 6,
"text": "Bill goes to the mansion and gives the password, discovering a sexual ritual in progress. A masked woman warns him he is in danger. He is brought before the master of ceremonies who demands to know a second password for the house, revealing that the password Bill has is only to enter the grounds. Bill removes his mask at the demand of the master of ceremonies, but the woman who warned him intervenes. She insists on redeeming him, at a personal cost. Bill is let off with a warning to keep quiet.",
"title": "Plot"
},
{
"paragraph_id": 7,
"text": "Bill comes home feeling guilty and confused, only to find Alice laughing in her sleep. He wakes her up and she tearfully tells him about a dream where she was having sex with the naval officer and many other men, and laughing at the idea of Bill watching. The next day, Bill goes to Nick's hotel, but the desk clerk tells him that Nick left with two dangerous-looking men. Bill returns the costume, but realizes he has misplaced the mask, and learns that Milich has sold his teenage daughter into sex slavery. Milich implies that Bill can pay to have sex with his daughter if he likes.",
"title": "Plot"
},
{
"paragraph_id": 8,
"text": "In the afternoon, consumed by thoughts of his wife's infidelity, Bill leaves work early and returns to the site of the orgy. At the front gate, he is handed an envelope with a warning to stay away. That evening, Bill tries to call Marion, but hangs up when her fiancé answers. He decides to go to Domino's apartment to consummate their affair, but is met by her roommate, Sally. Although there is sexual tension between them, Sally informs Bill that Domino has just received news that she is HIV-positive. Bill leaves.",
"title": "Plot"
},
{
"paragraph_id": 9,
"text": "After leaving the apartment, Bill is followed by a mysterious figure. He discovers that an ex-beauty queen has died from an overdose and identifies her as Mandy at the morgue. Later, Ziegler summons him and admits to being a guest at the orgy. Ziegler reveals that there was no second password at all, and failing to know this is what outed Bill as an outsider. Ziegler assures Bill that the secret society only aims to intimidate him into silence but implies that they are capable of taking action if necessary. Bill is concerned about Nick's disappearance and Mandy's death, whom he correctly identifies as the masked orgy participant who sacrificed herself for him. Ziegler claims Nick is safe and that Mandy died from an accidental overdose due to drug addiction.",
"title": "Plot"
},
{
"paragraph_id": 10,
"text": "Bill returns home to find the rented mask on his pillow and confides in his wife, Alice, about the past two days. The next day they go Christmas shopping with their daughter and Bill apologizes to Alice. She suggests they do something \"as soon as possible,\" to which Bill asks what she means and Alice simply responds with one word, \"Fuck.\"",
"title": "Plot"
},
{
"paragraph_id": 11,
"text": "Eyes Wide Shut developed after Stanley Kubrick read Arthur Schnitzler's Dream Story in 1968, when Kubrick was looking for a project to follow 2001: A Space Odyssey. Kubrick was interested in adapting the story, and with the help of journalist Jay Cocks, bought the filming rights to the novel. For the following decade, Kubrick considered making the Dream Story adaptation a sex comedy \"with a wild and somber streak running through it\", starring Steve Martin or Woody Allen in the main role. Kubrick also considered Tom Hanks, Bill Murray, Dustin Hoffman, Warren Beatty, Albert Brooks, Alan Alda and Sam Shepard for the lead in the 80s. The project was revived in 1994 when Kubrick hired Frederic Raphael to work on the script, updating the setting from early 20th century Vienna to late 20th century New York City. Kubrick invited his friend Michael Herr, who helped write Full Metal Jacket, to make revisions, but Herr declined for fear he would be underpaid and have to commit to a long production.",
"title": "Production"
},
{
"paragraph_id": 12,
"text": "Arthur Schnitzler's 1926 novella Dream Story is set around Vienna after the turn of the century. The main characters are a couple named Fridolin and Albertina. The couple's home is a typical suburban middle-class home. Like the protagonist of the novel, Schnitzler was Jewish, lived in Vienna, and was a doctor, although he left medicine to write.",
"title": "Production"
},
{
"paragraph_id": 13,
"text": "Kubrick frequently removed references to the Jewishness of characters in the novels he adapted. In Eyes Wide Shut, Frederic Raphael, who is Jewish, wanted to keep the Jewish background of the protagonists, but Kubrick disagreed and removed details that would identify characters as Jewish. Kubrick determined Bill should be a \"Harrison Ford-ish goy\" and created the surname of Harford as an allusion to the actor. In the film, Bill is taunted with homophobic slurs. In the novella, the taunters are members of an anti-Semitic college fraternity. In an introduction to a Penguin Classics edition of Dream Story, Raphael wrote that \"Fridolin is not declared to be a Jew, but his feelings of cowardice, for failing to challenge his aggressor, echo the uneasiness of Austrian Jews in the face of Gentile provocation.\"",
"title": "Production"
},
{
"paragraph_id": 14,
"text": "The novella is set during the Carnival, when people often wear masks to parties. The party that both husband and wife attend at the opening of the story is a Carnival Masquerade ball, whereas the film's story begins at Christmas time.",
"title": "Production"
},
{
"paragraph_id": 15,
"text": "In the novella, the party (which is sparsely attended) uses \"Denmark\" as the password for entrance; that is significant in that Albertina had her infatuation with her soldier in Denmark; the film's password is \"Fidelio\". In early drafts of the screenplay, the password was \"Fidelio Rainbow\". Jonathan Rosenbaum noted that both passwords echo elements of one member of the couple's behaviour, though in opposite ways. The party in the novella consists mostly of nude ballroom dancing.",
"title": "Production"
},
{
"paragraph_id": 16,
"text": "In the novella, the woman who \"redeems\" Fridolin at the party, saving him from punishment, is costumed as a nun, and most of the characters at the party are dressed as nuns or monks; Fridolin himself used a monk costume. This aspect was retained in the film's original screenplay, but was deleted in the filmed version.",
"title": "Production"
},
{
"paragraph_id": 17,
"text": "The novella makes it clear that Fridolin at this point hates Albertina more than ever, thinking they are now lying together \"like mortal enemies\". It has been argued that the dramatic climax of the novella is actually Albertina's dream, and the film has shifted the focus to Bill's visit to the secret society's orgy, whose content is more shocking in the film.",
"title": "Production"
},
{
"paragraph_id": 18,
"text": "The adaptation created a character with no counterpart in the novella: Ziegler, who represents both the high wealth and prestige to which Bill Harford aspires, and a connection between Bill's two worlds (his regular life, and the secret society organizing the ball). Critic Randy Rasmussen interprets Ziegler as representing Bill's worst self, much as in other Kubrick films; the title character in Dr. Strangelove represents the worst of the American national security establishment, Charles Grady represents the worst of Jack Torrance in The Shining, and Clare Quilty represents the worst of Humbert Humbert in Lolita.",
"title": "Production"
},
{
"paragraph_id": 19,
"text": "More significantly, in the film, Ziegler gives a commentary on the whole story to Bill, including an explanation that the party incident, where Bill is apprehended, threatened, and ultimately redeemed by the woman's sacrifice, was staged. Whether this is to be believed or not, it is an exposition of Ziegler's view of the ways of the world as a member of the power elite.",
"title": "Production"
},
{
"paragraph_id": 20,
"text": "When Warner Bros. president Terry Semel approved production, he asked Kubrick to cast a movie star as \"you haven't done that since Jack Nicholson [in The Shining]\". Kubrick considered casting Alec Baldwin and Kim Basinger as Bill and Alice Harford. Cruise was in England because his wife Nicole Kidman was there filming The Portrait of a Lady (1996), and the pair eventually decided to visit Kubrick's estate. After that meeting, the director awarded them the roles. Kubrick also managed to make both not commit to other projects until Eyes Wide Shut was completed. Jennifer Jason Leigh and Harvey Keitel each were cast in supporting roles and filmed by Kubrick. Reportedly due to scheduling conflicts, both had to drop out – first Keitel with Finding Graceland, then Leigh with eXistenZ – and they were replaced by Sydney Pollack and Marie Richardson in the final cut. Although, Keitel quit after doing 68 takes for a scene of his character walking through the door. Kubrick offered Eva Herzigová a role in the film, but she declined.",
"title": "Production"
},
{
"paragraph_id": 21,
"text": "In 2019, it was revealed that Cate Blanchett had provided the voice of the mysterious masked woman at the orgy party. Actress Abigail Good could not do a convincing American accent, and Cruise and Kidman ended up suggesting Blanchett for the dubbing, which occurred after Kubrick's death.",
"title": "Production"
},
{
"paragraph_id": 22,
"text": "Principal photography began in November 1996. Kubrick's perfectionism led to script pages being rewritten on the set with most scenes requiring numerous takes. The shoot went on for much longer than expected; the actress Vinessa Shaw was initially contracted for two weeks but ended up working for two months while the actor Alan Cumming, who appears in one scene as a hotel clerk, auditioned six times before the filming process. Due to the relentless nature of the production, the crew became exhausted and were reported to have been impacted by low morale. Filming finally wrapped in June 1998. The Guinness World Records recognized Eyes Wide Shut as the longest constant movie shoot that ran \"...for over 15 months, a period that included an unbroken shoot of 46 weeks\".",
"title": "Production"
},
{
"paragraph_id": 23,
"text": "Given Kubrick's fear of flying, the entire film was shot in England. Sound-stage works were completed at London's Pinewood Studios which included a detailed recreation of Greenwich Village. Kubrick's perfectionism went as far as sending workmen to Manhattan to measure street widths and note newspaper vending machine locations. Real New York footage was also shot to be rear projected behind Cruise. Production was followed by a strong campaign of secrecy helped by Kubrick always working with a short team on set. Outdoor locations included Hatton Garden for a Greenwich Village street, Hamleys for the toy store from the film's ending, and Mentmore Towers and Elveden Hall in Elveden, Suffolk, England for the mansion. Larry Smith, who had first served as a gaffer on both Barry Lyndon and The Shining, was chosen by Kubrick to be the film's cinematographer. Wherever possible, Smith made use of available light sources visible in the shots such as lamps and Christmas tree lights, but when this was insufficient he used Chinese paper ball lamps to softly brighten the scene and/or other types of film lighting. The color was enhanced by push processing the film reels (emulsion) which helped bring out the intensity of the color and emphasize highlights. This effect is evident in the Christmas party scene at Ziegler's house, with Smith noting that the push processing \"made the lights appear to be much brighter than they were\" and created a \"wonderful warm glow.\"",
"title": "Production"
},
{
"paragraph_id": 24,
"text": "Kubrick's perfectionism led him to oversee every visual element that would appear in a given frame, from props and furniture to the color of walls and other objects. One such element were the masks used in the orgy which were inspired by the masked carnival balls visited by the protagonists in the novel. Costume designer Marit Allen explained that Kubrick felt they fit in that scene for being part of the imaginary world and ended up \"creat[ing] the impression of menace, but without exaggeration\". As many masks as were used in the Venetian carnival were sent to London and Kubrick chose who would wear each piece. The paintings of Kubrick's wife Christiane and his daughter Katherina are featured as decorations.",
"title": "Production"
},
{
"paragraph_id": 25,
"text": "Nicole Kidman revealed that her explicit scenes with the naval officer, played by Gary Goba, were filmed over three days and that Kubrick wanted them to be 'almost pornographic'.",
"title": "Production"
},
{
"paragraph_id": 26,
"text": "After shooting had been completed, Kubrick entered a prolonged post-production process and on March 1, 1999, Kubrick showed a cut to Cruise, Kidman and the Warner Bros. executives. The director died six days later.",
"title": "Production"
},
{
"paragraph_id": 27,
"text": "Jocelyn Pook wrote the original music for Eyes Wide Shut but, like other Kubrick movies, the film was noted for its use of classical music. The opening title music is Shostakovich's Waltz No. 2 from \"Suite for Variety Stage Orchestra\", misidentified as \"Jazz Suite No. 2\". One recurring piece is the second movement of György Ligeti's piano cycle \"Musica ricercata\". Kubrick originally intended to feature \"Im Treibhaus\" from Wagner's Wesendonck Lieder, but the director eventually replaced it with Ligeti's tune feeling Wagner's song was \"too beautiful\". In the morgue scene, Franz Liszt's late solo piano piece, \"Nuages Gris\" (\"Grey Clouds\") (1881), is heard. \"Rex tremendae\" from Mozart's Requiem plays as Bill walks into the café and reads of Mandy's death.",
"title": "Music"
},
{
"paragraph_id": 28,
"text": "Pook was hired after choreographer Yolande Snaith rehearsed the masked ball orgy scene using Pook's composition \"Backwards Priests\" – which features a Romanian Orthodox Divine Liturgy recorded in a church in Baia Mare, played backwards – as a reference track. Kubrick then called the composer and asked if she had anything else \"weird\" like that song, which was reworked for the final cut of the scene, with the title \"Masked Ball\". Pook ended up composing and recording four pieces of music, many times based on her previous work, totaling 24 minutes. The composer's work ended up having mostly string instruments – including a viola played by Pook herself – with no brass or woodwinds as Pook \"just couldn't justify these other textures\", particularly as she wanted the tracks played on dialogue-heavy scenes to be \"subliminal\" and felt such instruments would be intrusive.",
"title": "Music"
},
{
"paragraph_id": 29,
"text": "Another track in the orgy, \"Migrations\", features a Tamil song sung by Manickam Yogeswaran, a Carnatic singer. The original cut featured a scriptural recitation from the Bhagavad Gita, which Pook took from a previous Yogeswaran recording. South African Hindu Mahasabha, a Hindu group, protested against the scripture being used, Warner Bros. issued a public apology, and hired the singer to record a similar track to replace the chant.",
"title": "Music"
},
{
"paragraph_id": 30,
"text": "The party at Ziegler's house features rearrangements of love songs such as \"When I Fall in Love\" and \"It Had to Be You\", used in increasingly ironic ways considering how Alice and Bill flirt with other people in the scene. As Kidman was nervous about doing nude scenes, Kubrick stated she could bring music to liven up. When Kidman brought a Chris Isaak CD, Kubrick approved it, and incorporated Isaak's song \"Baby Did a Bad, Bad Thing\" to both an early romantic embrace of Bill and Alice and the film's trailer.",
"title": "Music"
},
{
"paragraph_id": 31,
"text": "The film was described by some reviewers, and partially marketed, as an erotic thriller, a categorization disputed by others. It is classified as such in the book The Erotic Thriller in Contemporary Cinema, by Linda Ruth Williams, and was described as such in news articles about Cruise and Kidman's lawsuit over assertions that they saw a sex therapist during filming. The positive review in Combustible Celluloid describes it as an erotic thriller upon first viewing, but actually a \"complex story about marriage and sexuality\". High-Def Digest also called it an erotic thriller.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 32,
"text": "However, reviewing the film at AboutFilm.com, Carlo Cavagna regards this as a misleading classification, as does Leo Goldsmith, writing at notcoming.com, and the review on Blu-ray.com. Writing in TV Guide, Maitland McDonagh writes \"No one familiar with the cold precision of Kubrick's work will be surprised that this isn't the steamy erotic thriller a synopsis (or the ads) might suggest.\" Writing in general about the genre of 'erotic thriller' for CineAction in 2001, Douglas Keesey states that \"whatever [Eyes Wide Shut's] actual type, [it] was at least marketed as an erotic thriller\". Michael Koresky, writing in the 2006 issue of film journal Reverse Shot, writes \"this director, who defies expectations at every turn and brings genre to his feet, was ... setting out to make neither the 'erotic thriller' that the press maintained nor an easily identifiable 'Kubrick film'\". DVD Talk similarly dissociates the film from this genre.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 33,
"text": "In addition to relocating the story from Vienna in the 1900s to New York City in the 1990s, Kubrick changed the time-frame of Schnitzler's story from Mardi Gras to Christmas. Michael Koresky believed Kubrick did this because of the rejuvenating symbolism of Christmas. Mario Falsetto, on the other hand, notes that Christmas lights allow Kubrick to employ some of his distinct methods of shooting including using source location lighting, as he also did in Barry Lyndon. The New York Times notes that the film \"gives an otherworldly radiance and personality to Christmas lights\", and critic Randy Rasmussen notes that \"colorful Christmas lights ... illuminate almost every location in the film.\" Harper's film critic, Lee Siegel, believes that the film's recurring motif is the Christmas tree, because it symbolizes the way that \"Compared with the everyday reality of sex and emotion, our fantasies of gratification are ... pompous and solemn in the extreme ... For desire is like Christmas: it always promises more than it delivers.\" Author Tim Kreider notes that the \"Satanic\" mansion-party at Somerton is the only set in the film without a Christmas tree, stating that \"Almost every set is suffused with the dreamlike, hazy glow of colored lights and tinsel.\" Furthermore, he argues that \"Eyes Wide Shut, though it was released in summer, was the Christmas movie of 1999.\" Noting that Kubrick has shown viewers the dark side of Christmas consumerism, Louise Kaplan states that the film illustrates ways in which the \"material reality of money\" is shown replacing the spiritual values of Christmas, charity, and compassion. While virtually every scene has a Christmas tree, there is \"no Christmas music or cheery Christmas spirit.\" Critic Alonso Duralde, in his book Have Yourself a Movie Little Christmas, categorized the film as a \"Christmas movie for grownups\", arguing that \"Christmas weaves its way through the film from start to finish\".",
"title": "Themes and interpretations"
},
{
"paragraph_id": 34,
"text": "Historians, travel guide authors, novelists, and merchants of Venetian masks have noted that these have a long history of being worn during promiscuous activities. Authors Tim Kreider and Thomas Nelson have linked the film's usage of these to Venice's reputation as a center of both eroticism and mercantilism. Nelson notes that the sex ritual combines elements of Venetian Carnival and Catholic rites, in particular, the character of \"Red Cloak\" who simultaneously serves as Grand Inquisitor and King of Carnival. As such, Nelson argues that the sex ritual is a symbolic mirror of the darker truth behind the façade of Victor Ziegler's earlier Christmas party. Carolin Ruwe, in her book Symbols in Stanley Kubrick's Movie 'Eyes Wide Shut', argues that the mask is the prime symbol of the film. Its symbolic meaning is represented through its connection to the characters in the film; as Tim Kreider points out, this can be seen through the masks in the prostitute's apartment and her being renamed as \"Domino\" in the film, which is a type of Venetian Mask. Unused early poster designs for the film by Kubrick's daughter Katharina used the motif of Venetian masks, but were rejected by the studio because they obscured the faces of the film's two stars.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 35,
"text": "Paintings and sculptures appear throughout the film, some historical and others painted by Kubrick's wife Christiane Kubrick and step daughter Katharina Kubrick Hobbs. The home of the Harford's contains the majority of the works painted by Kubrick's family members, with the exception being a painting of a nude reclining pregnant woman by Christiane Kubrick title Paula On Red that appears in Ziegler's bathroom during the overdose scene. In the beginning of the film, as Bill and Alice are saying goodbye to their daughter Helena and the babysitter, a painting by Christiane Kubrick titled \"View from the Mentmore\" can be seen hanging next to the Christmas tree. Mentmore Towers is an English country house in the south east of England that was used for filming the interior scenes of the Somerton house and the masked orgy.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 36,
"text": "During Ziegler's party, Bill is summoned to the bathroom to deal with an apparent overdose, as he climbs the spiral staircase he passes Giulio Bergonzoli's sculpture Gli amori degli angeli (The Loves of Angels) which is at the foot of the staircase. This sculpture is said to be inspired by a poem titled The Loves of The Angels by 19th-century poet Thomas Moore, the poem itself describes the story of three angels who fall in love with mortal women and share the password to heaven with them resulting in their banishment. At the time of the poem's release, it was received with controversy due to the open eroticism throughout.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 37,
"text": "When Bill enters a cafe towards the end of the film, two Pre-Raphaelite paintings can be seen hanging on parallel walls, Ophelia by John William Waterhouse and Astarte Syriaca by Dante Gabriel Rossetti. Waterhouse's Ophelia depicts the character by the same name in Shakespeare's tragedy Hamlet moments before her death. Astarte Syriaca depicts Astarte, the ancient Syrian goddess of love, as well as two symmetrical angels holding torches directly behind her. Both paintings mirror events within the film and, as Robert Wilkes writes, reflect its \"mood of sensuality, ritualism, and exoticism\". In the same cafe scene, a black-and-white print of a reclining woman is seen directly behind Bill as he sits down with a newspaper, in the proceeding shot the print is replaced with what Wilkes describes as a \"more chaotic, nightmarish image\" as Bill reads about the ex-beauty queen's apparent overdose.",
"title": "Themes and interpretations"
},
{
"paragraph_id": 38,
"text": "Warner Bros. heavily promoted Eyes Wide Shut, while following Kubrick's secrecy campaign – to the point that the film's press kits contained no production notes, not even the director's suggestions to Semel regarding the marketing campaign, given one week prior to Kubrick's death. The first footage was shown to theater owners attending the 1999 ShoWest convention in Las Vegas. TV spots featured both Isaak and Ligeti's music from the soundtrack, while revealing little about the movie's plot. The film also appeared on the cover of Time magazine, and on show business programs such as Entertainment Tonight and Access Hollywood.",
"title": "Release"
},
{
"paragraph_id": 39,
"text": "Eyes Wide Shut opened on July 16, 1999, in the United States. The film topped the week-end box office, with $21.7 million from 2,411 screens. These numbers surpassed the studio's expectations of $20 million, and became both Cruise's sixth consecutive chart topper and Kubrick's highest opening week-end as well as the highest featuring Kidman and Cruise together. Eyes Wide Shut ended up grossing a total of $55,691,208 in the US. The numbers put it as Kubrick's second-highest-grossing film in the country, behind 2001: A Space Odyssey, but both were considered a box office disappointment.",
"title": "Release"
},
{
"paragraph_id": 40,
"text": "Shortly after its screening at the Venice Film Festival, Eyes Wide Shut had a British premiere on September 3, 1999, at the Warner Village cinema in Leicester Square. The film's wide opening occurred the following week-end, and topped the U.K. charts with £1,189,672.",
"title": "Release"
},
{
"paragraph_id": 41,
"text": "The international performances for Eyes Wide Shut were more positive, with Kubrick's long-time assistant and brother-in-law Jan Harlan stating that \"It was badly received in the Anglo-Saxon world, but it was very well received in the Latin world and Japan. In Italy, it was a huge hit.\" Overseas earnings of over $105 million led to a $162,091,208 box office run world-wide, turning it into the highest-grossing Kubrick film.",
"title": "Release"
},
{
"paragraph_id": 42,
"text": "Eyes Wide Shut received generally positive reviews from critics. On Rotten Tomatoes, the film holds an approval rating of 76% based on 160 reviews, with an average rating of 7.5/10. The website's critical consensus reads, \"Kubrick's intense study of the human psyche yields an impressive cinematic work.\" Metacritic gives the film a weighted average score of 68 out of 100 based on 34 reviews, indicating \"generally favorable reviews\". Over 50 critics listed the film among the best of 1999. French magazine Cahiers du Cinéma named it the best film of the year in its annual \"top ten\" list. However, audiences polled by CinemaScore gave the film an average grade of \"D−\" on an A+ to F scale.",
"title": "Reception"
},
{
"paragraph_id": 43,
"text": "In the Chicago Tribune, Michael Wilmington declared the film a masterpiece, lauding it as \"provocatively conceived, gorgeously shot and masterfully executed ... Kubrick's brilliantly choreographed one-take scenes create a near-hypnotic atmosphere of commingled desire and dread.\" Nathan Rabin of The A.V. Club was also highly positive, arguing that \"the film's primal, almost religious intensity and power is primarily derived from its multifaceted realization that disobeying the dictates of society and your conscience can be both terrifying and exhilarating. ... The film's depiction of sexual depravity and amorality could easily venture into the realm of camp in the hands of a lesser filmmaker, but Kubrick depicts primal evil in a way that somehow makes it seem both new and deeply terrifying.\"",
"title": "Reception"
},
{
"paragraph_id": 44,
"text": "Roger Ebert of the Chicago Sun-Times gave the film a score of three and a half stars out of four, writing, \"Kubrick's great achievement in the film is to find and hold an odd, unsettling, sometimes erotic tone for the doctor's strange encounters.\" He praised the individual dream-like atmosphere of the separate scenes, and called the choice of Christmas-themed lighting \"garish, like an urban sideshow\".",
"title": "Reception"
},
{
"paragraph_id": 45,
"text": "Reviewer James Berardinelli stated that it was arguably one of Kubrick's best films. Along with considering Kidman \"consistently excellent\", he wrote that Kubrick \"has something to say about the causes and effects of depersonalized sex\", and praised the work as \"thought-provoking and unsettling\". Writing for The New York Times, reviewer Janet Maslin commented, \"This is a dead-serious film about sexual yearnings, one that flirts with ridicule yet sustains its fundamental eeriness and gravity throughout. The dreamlike intensity of previous Kubrick visions is in full force here.\"",
"title": "Reception"
},
{
"paragraph_id": 46,
"text": "Some reviewers were unfavorable. One complaint was that the movie's pacing was too slow; while this may have been intended to convey a dream state, critics objected that it made actions and decisions seem laboured. Another complaint was that it did not live up to the expectation of it being a \"sexy film\" which is what it had been marketed as, thus defying audiences' expectations. Many critics, such as Manohla Dargis of LA Weekly, found the prolific orgy scene to be \"banal\" and \"surprisingly tame\". While Kubrick's \"pictorial talents\" were described as \"striking\" by Rod Dreher of the New York Post, the pivotal scene was deemed by Stephen Hunter, writing for The Washington Post, as the \"dullest orgy [he'd] ever seen\". Hunter elaborates on his criticism, and states that \"Kubrick is annoyingly offhand while at the same time grindingly pedantic; plot points are made over and over again, things are explained till the dawn threatens to break in the east, and the movie stumbles along at a glacial pace\". Owen Gleiberman of Entertainment Weekly complained about the inauthenticity of the New York setting, claiming that the soundstage used for the film's production didn't have \"enough bustle\" to capture the reality of New York. Paul Tatara of CNN described the film as a \"slow-motion morality tale full of hot female bodies and thoroughly uneventful 'mystery'\", while Andrew Sarris writing for The New York Observer criticised the film's \"feeble attempts at melodramatic tension and suspense\". David Edelstein of Slate dismissed it as \"estranged from any period I recognize. Who are these people played by Cruise and Kidman, who act as if no one has ever made a pass at them and are so deeply traumatized by their newfound knowledge of sexual fantasies—the kind that mainstream culture absorbed at least half a century ago? Who are these aristocrats whose limos take them to secret masked orgies in Long Island mansions? Even dream plays need some grounding in the real world.\" J. Hoberman wrote that the film \"feels like a rough draft at best.\"",
"title": "Reception"
},
{
"paragraph_id": 47,
"text": "Lee Siegel from Harper's felt that most critics responded mainly to the marketing campaign and did not address the film on its own terms. Others felt that American censorship took an esoteric film and made it even harder to understand. In his article \"Grotesque Caricature\", published in Postmodern Culture, Stefan Mattesich praises the film's nuanced caricatured elements, and states that the film's negation of conventional narrative elements is what resulted in its subsequent negative reception.",
"title": "Reception"
},
{
"paragraph_id": 48,
"text": "For the introduction to Michel Ciment's Kubrick: The Definitive Edition, Martin Scorsese wrote: \"When Eyes Wide Shut came out a few months after Stanley Kubrick's death in 1999, it was severely misunderstood, which came as no surprise. If you go back and look at the contemporary reactions to any Kubrick picture (except the earliest ones), you'll see that all his films were initially misunderstood. Then, after five or ten years came the realization that 2001 or Barry Lyndon or The Shining was like nothing else before or since.\" In 2012, Slant Magazine ranked the film as the second greatest of the 1990s. British Film Institute ranked the film at No. 19 on its list of \"90 great films of the 1990s\". The BBC listed it number 61 in its list of the 100 greatest American films of all time.",
"title": "Reception"
},
{
"paragraph_id": 49,
"text": "Eyes Wide Shut was first released on VHS, LaserDisc, and DVD on March 7, 2000. The original DVD release corrects technical gaffes, including a reflected crew member, and altering a piece of Alice Harford's dialogue. Most home videos remove the verse that was claimed to be cited from the sacred Hindu scripture Bhagavad Gita (although it was Pook's reworking of \"Backwards Priests\" as stated above). In the UK, Warner Home Video's 'rated 18' [no video altering] 1999 DVD release was in 4:3 full frame aspect ratio, with a note at the beginning that this was as Kubrick intended it to be shown [ratio as shot]. However, the film's length on this UK DVD is only 153 minutes, as opposed to the 159 minutes of other available DVD and Blu-ray versions. This is due to the transfer being done at 25 frames per secondes rather than 24 as shot; no actual footage was cut.",
"title": "Reception"
},
{
"paragraph_id": 50,
"text": "On October 23, 2007, Warner's released Eyes Wide Shut in a special edition DVD, plus the HD DVD and Blu-ray disc formats. This is the first home video release that presents the film in anamorphic 1.78:1 (16:9) format (the film was shown theatrically as soft matted 1.66:1 in Europe and 1.85:1 in the US and Japan). The previous DVD release used a 1.33:1 (4:3) aspect ratio. It is also the first American home video release to feature the uncut version. Although the earliest American DVD of the uncut version states on the cover that it includes both the R-rated and unrated editions, in actuality only the unrated edition is on the DVD.",
"title": "Reception"
},
{
"paragraph_id": 51,
"text": "Though Warner Bros. insisted that Kubrick had turned in his final cut before his death, the film was still in the final stages of post-production, which was therefore completed by the studio in collaboration with Kubrick's estate. Some have argued that the work that remained was minor and exclusively technical in nature, allowing the estate to faithfully complete the film based on the director's notes. However, decisions regarding sound mixing, scoring and color-correction would have necessarily been made without Kubrick's input. Furthermore, Kubrick had a history of continuing to edit his films up until the last minute, and in some cases even after initial public screenings, as had been the case with 2001: A Space Odyssey and The Shining.",
"title": "Controversies"
},
{
"paragraph_id": 52,
"text": "Writing for Vanity Fair, Kubrick collaborator Michael Herr recalled a phone call from the director regarding the cut that would be screened for the Warner Bros. executives four days before his death:",
"title": "Controversies"
},
{
"paragraph_id": 53,
"text": "... there was looping to be done and the music wasn't finished, lots of small technical fixes on color and sound; would I show work that wasn't finished? He had to show it to Tom and Nicole because they had to sign nudity releases, and to Terry Semel and Bob Daly of Warner Bros., but he hated it that he had to, and I could hear it in his voice that he did.",
"title": "Controversies"
},
{
"paragraph_id": 54,
"text": "Garrett Brown, inventor of the Steadicam, has expressed that he considers Eyes Wide Shut to be an unfinished film:",
"title": "Controversies"
},
{
"paragraph_id": 55,
"text": "I think Eyes Wide Shut was snatched up by the studio when Stanley died and they just grabbed the highest number Avid edit and ran off as if that was the movie. But it was three months before the movie was due to be released. I don't think there's a chance that was the movie he had in mind, or the music track and a lot of other things. It's a great shame because you know it's out there, but it doesn't feel to me as it's really his film.",
"title": "Controversies"
},
{
"paragraph_id": 56,
"text": "Nicole Kidman, one of the stars of the film, briefly wrote about the completion of the film and the release of the film being at the same time of John F. Kennedy Jr.'s death from her perspective:",
"title": "Controversies"
},
{
"paragraph_id": 57,
"text": "There was a lot of interest in Eyes Wide Shut before it was released. But the weekend it came out, July 16, 1999, was the death of JFK Jr., his wife and her sister – a black, black weekend. And for Stanley to have died [on March 7, 1999, at age 70] before the film opened... Well, it all felt so dark and strange. Stanley had sent over the cut he considered done to us, Tom and I watched it in New York – and then he died.",
"title": "Controversies"
},
{
"paragraph_id": 58,
"text": "Jan Harlan, Kubrick's brother-in-law and executive producer, reported that Kubrick was \"very happy\" with the film and considered it to be his \"greatest contribution to the art of cinema\".",
"title": "Controversies"
},
{
"paragraph_id": 59,
"text": "R. Lee Ermey, an actor in Kubrick's film Full Metal Jacket, stated that Kubrick phoned him two weeks before his death to express his despondency over Eyes Wide Shut. \"He told me it was a piece of shit\", Ermey said in Radar magazine, \"and that he was disgusted with it and that the critics were going to 'have him for lunch'. He said Cruise and Kidman had their way with him – exactly the words he used.\"",
"title": "Controversies"
},
{
"paragraph_id": 60,
"text": "According to Todd Field, Kubrick's friend and an actor in Eyes Wide Shut, Ermey's claims do not accurately reflect Kubrick's essential attitude. Field's response appeared in an October 18, 2006, interview with Grouch Reviews:",
"title": "Controversies"
},
{
"paragraph_id": 61,
"text": "The polite thing would be to say 'No comment'. But the truth is that ... let's put it this way, you've never seen two actors more completely subservient and prostrate themselves at the feet of a director. Stanley was absolutely thrilled with the film. He was still working on the film when he died. And he probably died because he finally relaxed. It was one of the happiest weekends of his life, right before he died, after he had shown the first cut to Terry, Tom and Nicole. He would have kept working on it, like he did on all of his films. But I know that from people around him personally, my partner who was his assistant for thirty years. And I thought about R. Lee Ermey for In the Bedroom. And I talked to Stanley a lot about that film, and all I can say is Stanley was adamant that I shouldn't work with him for all kinds of reasons that I won't get into because there is no reason to do that to anyone, even if they are saying slanderous things that I know are completely untrue.",
"title": "Controversies"
},
{
"paragraph_id": 62,
"text": "Citing contractual obligations to deliver an R rating, Warner Bros. digitally altered the orgy for the film's American release by blocking out graphic sexuality using additional figures to obscure the view in order to avoid an adults-only NC-17 rating that would have limited its financial viability. This alteration antagonized both film critics and cinephiles, who argued that Kubrick had never been shy about ratings (A Clockwork Orange was originally given an X-rating). The unrated version of Eyes Wide Shut was released in the United States on October 23, 2007, on DVD, HD DVD and Blu-ray formats.",
"title": "Controversies"
},
{
"paragraph_id": 63,
"text": "Roger Ebert heavily criticized the technique of using digital images to mask the action. In his positive review of the film, he said it \"should not have been done at all\" and it is \"symbolic of the moral hypocrisy of the rating system that it would force a great director to compromise his vision, while by the same process making his adult film more accessible to young viewers.\" Although Ebert has been frequently cited as calling the standard North American R-rated version the \"Austin Powers\" version of Eyes Wide Shut – referring to two scenes in Austin Powers: International Man of Mystery in which, through camera angles and coincidences, full frontal nudity is blocked from view in a comical way – his review stated that this joke referred to an early rough draft of the altered scene, never publicly released.",
"title": "Controversies"
}
]
| Eyes Wide Shut is a 1999 erotic mystery psychological drama film directed, produced, and co-written by Stanley Kubrick. It is based on the 1926 novella Traumnovelle by Arthur Schnitzler, transferring the story's setting from early twentieth-century Vienna to 1990s New York City. The plot centers on a physician who is shocked when his wife reveals that she had contemplated having an affair a year earlier. He then embarks on a night-long adventure, during which he infiltrates a masked orgy of an unnamed secret society. Kubrick obtained the filming rights for Dream Story in the 1960s, considering it a perfect text for a film adaptation about sexual relations. He revived the project in the 1990s when he hired writer Frederic Raphael to help him with the adaptation. The film, which was mostly shot in England, apart from some exterior establishing shots, includes a detailed recreation of exterior Greenwich Village street scenes made at Pinewood Studios. The film's production, at 400 days, holds the Guinness World Record for the longest continuous film shoot. Kubrick died of a heart attack six days after showing the final cut of Eyes Wide Shut to Warner Bros., making it the final film he directed. He reportedly considered it his "greatest contribution to the art of cinema". In order to ensure a theatrical R rating in the United States, Warner Bros. digitally altered several sexually explicit scenes during post-production. This version was premiered on July 13, 1999, before being released on July 16, to generally positive reviews from critics. Box office receipts for the film worldwide were about $162 million, making it Kubrick's highest-grossing film. The uncut version has since been released in DVD, HD DVD and Blu-ray Disc formats. Eyes Wide Shut has been included in several lists of the greatest films of the 1990s. | 2001-10-25T12:40:12Z | 2023-12-30T17:10:34Z | [
"Template:AFI film",
"Template:Use mdy dates",
"Template:RT data",
"Template:Won",
"Template:Cite book",
"Template:Cite news",
"Template:AllMovie title",
"Template:Metacritic film",
"Template:Short description",
"Template:Infobox film",
"Template:Cite magazine",
"Template:Harvnb",
"Template:Sister project links",
"Template:Mojo title",
"Template:Nom",
"Template:Cite journal",
"Template:Authority control",
"Template:Arthur Schnitzler",
"Template:By whom",
"Template:'\"",
"Template:Webarchive",
"Template:Cite press release",
"Template:Better source needed",
"Template:Stanley Kubrick",
"Template:Other uses",
"Template:Reflist",
"Template:Cite web",
"Template:Cite video",
"Template:Letterboxd title",
"Template:Rotten Tomatoes",
"Template:Cast listing",
"Template:Sfn",
"Template:'s",
"Template:Blockquote",
"Template:IMDb title",
"Template:TCMDb title",
"Template:Use American English"
]
| https://en.wikipedia.org/wiki/Eyes_Wide_Shut |
9,986 | Outline of education | The following outline is provided as an overview of and topical guide to education:
Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, morals, beliefs, habits, and personal development.
There are many types of potential educational aims and objectives, irrespective of the specific subject being learned. Some can cross multiple school disciplines.
In addition, research methods are drawn from many social research and psychological fields. | [
{
"paragraph_id": 0,
"text": "The following outline is provided as an overview of and topical guide to education:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, morals, beliefs, habits, and personal development.",
"title": ""
},
{
"paragraph_id": 2,
"text": "There are many types of potential educational aims and objectives, irrespective of the specific subject being learned. Some can cross multiple school disciplines.",
"title": "Types of educational goals and outcomes"
},
{
"paragraph_id": 3,
"text": "In addition, research methods are drawn from many social research and psychological fields.",
"title": "Educational research"
}
]
| The following outline is provided as an overview of and topical guide to education: Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, morals, beliefs, habits, and personal development. | 2002-02-25T15:51:15Z | 2023-10-30T12:08:38Z | [
"Template:Short description",
"Template:TOC limit",
"Template:Main",
"Template:Portal",
"Template:Outline footer",
"Template:Organize section",
"Template:Philosophy of education",
"Template:See also",
"Template:Schools",
"Template:Education stages",
"Template:Reflist",
"Template:Sister project links",
"Template:World topic"
]
| https://en.wikipedia.org/wiki/Outline_of_education |
9,987 | Outline of engineering | The following outline is provided as an overview of and topical guide to engineering:
Engineering is the scientific discipline and profession that applies scientific theories, mathematical methods, and empirical evidence to design, create, and analyze technological solutions cognizant of safety, human factors, physical laws, regulations, practicality, and cost.
History of engineering
Engineering education
Regulation and licensure in engineering | [
{
"paragraph_id": 0,
"text": "The following outline is provided as an overview of and topical guide to engineering:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Engineering is the scientific discipline and profession that applies scientific theories, mathematical methods, and empirical evidence to design, create, and analyze technological solutions cognizant of safety, human factors, physical laws, regulations, practicality, and cost.",
"title": ""
},
{
"paragraph_id": 2,
"text": "History of engineering",
"title": "History of engineering"
},
{
"paragraph_id": 3,
"text": "Engineering education",
"title": "Engineering education and certification"
},
{
"paragraph_id": 4,
"text": "Regulation and licensure in engineering",
"title": "Engineering education and certification"
}
]
| The following outline is provided as an overview of and topical guide to engineering: Engineering is the scientific discipline and profession that applies scientific theories, mathematical methods, and empirical evidence to design, create, and analyze technological solutions cognizant of safety, human factors, physical laws, regulations, practicality, and cost. | 2001-10-26T01:40:02Z | 2023-11-05T19:22:52Z | [
"Template:Short description",
"Template:TOC limit",
"Template:Portal",
"Template:Sister project links",
"Template:Outline footer"
]
| https://en.wikipedia.org/wiki/Outline_of_engineering |
9,988 | Outline of entertainment | The following outline provides an overview of and topical guide to entertainment and the entertainment industry:
Entertainment is any activity which provides a diversion or permits people to amuse themselves in their leisure time, and may also provide fun, enjoyment, and laughter. People may create their own entertainment, such as when they spontaneously invent a game; participate actively in an activity they find entertaining, such as when they play sport as a hobby; or consume an entertainment product passively, such as when they attend a performance.
The entertainment industry (informally known as show business or show biz) is part of the tertiary sector of the economy and includes many sub-industries devoted to entertainment. However, the term is often used in the mass media to describe the mass media companies that control the distribution and manufacture of mass media entertainment. In the popular parlance, the term show biz in particular connotes the commercially popular performing arts, especially musical theatre, vaudeville, comedy, film, fun, and music. It applies to every aspect of entertainment including cinema, television, radio, theatre, and music. | [
{
"paragraph_id": 0,
"text": "The following outline provides an overview of and topical guide to entertainment and the entertainment industry:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Entertainment is any activity which provides a diversion or permits people to amuse themselves in their leisure time, and may also provide fun, enjoyment, and laughter. People may create their own entertainment, such as when they spontaneously invent a game; participate actively in an activity they find entertaining, such as when they play sport as a hobby; or consume an entertainment product passively, such as when they attend a performance.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The entertainment industry (informally known as show business or show biz) is part of the tertiary sector of the economy and includes many sub-industries devoted to entertainment. However, the term is often used in the mass media to describe the mass media companies that control the distribution and manufacture of mass media entertainment. In the popular parlance, the term show biz in particular connotes the commercially popular performing arts, especially musical theatre, vaudeville, comedy, film, fun, and music. It applies to every aspect of entertainment including cinema, television, radio, theatre, and music.",
"title": ""
}
]
| The following outline provides an overview of and topical guide to entertainment and the entertainment industry: Entertainment is any activity which provides a diversion or permits people to amuse themselves in their leisure time, and may also provide fun, enjoyment, and laughter. People may create their own entertainment, such as when they spontaneously invent a game; participate actively in an activity they find entertaining, such as when they play sport as a hobby; or consume an entertainment product passively, such as when they attend a performance. The entertainment industry is part of the tertiary sector of the economy and includes many sub-industries devoted to entertainment. However, the term is often used in the mass media to describe the mass media companies that control the distribution and manufacture of mass media entertainment. In the popular parlance, the term show biz in particular connotes the commercially popular performing arts, especially musical theatre, vaudeville, comedy, film, fun, and music. It applies to every aspect of entertainment including cinema, television, radio, theatre, and music. | 2002-02-25T15:51:15Z | 2023-12-09T04:55:50Z | [
"Template:Short description",
"Template:TOC limit",
"Template:Sister project links",
"Template:Outline footer"
]
| https://en.wikipedia.org/wiki/Outline_of_entertainment |
9,992 | List of contemporary ethnic groups | The following is a list of contemporary ethnic groups. There has been constant debate over the classification of ethnic groups. Membership of an ethnic group tends to be associated with shared ancestry, history, homeland, language or dialect and cultural heritage; where the term "culture" specifically includes aspects such as religion, mythology and ritual, cuisine, dressing (clothing) style and other factors.
By the nature of the concept, ethnic groups tend to be divided into subgroups, which may themselves be or not be identified as independent ethnic groups depending on the source consulted.
The following groups are commonly identified as "ethnic groups", as opposed to ethno-linguistic phyla, national groups, racial groups or similar. | [
{
"paragraph_id": 0,
"text": "The following is a list of contemporary ethnic groups. There has been constant debate over the classification of ethnic groups. Membership of an ethnic group tends to be associated with shared ancestry, history, homeland, language or dialect and cultural heritage; where the term \"culture\" specifically includes aspects such as religion, mythology and ritual, cuisine, dressing (clothing) style and other factors.",
"title": ""
},
{
"paragraph_id": 1,
"text": "By the nature of the concept, ethnic groups tend to be divided into subgroups, which may themselves be or not be identified as independent ethnic groups depending on the source consulted.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The following groups are commonly identified as \"ethnic groups\", as opposed to ethno-linguistic phyla, national groups, racial groups or similar.",
"title": "Ethnic groups"
}
]
| The following is a list of contemporary ethnic groups. There has been constant debate over the classification of ethnic groups. Membership of an ethnic group tends to be associated with shared ancestry, history, homeland, language or dialect and cultural heritage; where the term "culture" specifically includes aspects such as religion, mythology and ritual, cuisine, dressing (clothing) style and other factors. By the nature of the concept, ethnic groups tend to be divided into subgroups, which may themselves be or not be identified as independent ethnic groups depending on the source consulted. | 2001-10-30T23:07:46Z | 2023-12-31T23:51:41Z | [
"Template:Citation needed",
"Template:Cite web",
"Template:Blockquote",
"Template:Ethnicity",
"Template:Dynamic list",
"Template:Refn",
"Template:Cite journal",
"Template:Compact TOC",
"Template:Ill",
"Template:Notelist",
"Template:Citation",
"Template:Short description",
"Template:More citations needed",
"Template:See also",
"Template:Reflist",
"Template:Cite book",
"Template:Webarchive",
"Template:Indigenous peoples by continent",
"Template:Use dmy dates"
]
| https://en.wikipedia.org/wiki/List_of_contemporary_ethnic_groups |
9,993 | Edda | "Edda" (/ˈɛdə/; Old Norse Edda, plural Eddur) is an Old Norse term that has been applied by modern scholars to the collective of two Medieval Icelandic literary works: what is now known as the Prose Edda and an older collection of poems (without an original title) now known as the Poetic Edda. The term historically referred only to the Prose Edda, but this usage has fallen out of favour because of confusion with the other work. Both works were written down in Iceland during the 13th century in Icelandic, although they contain material from earlier traditional sources, reaching back into the Viking Age. The books provide the main sources for medieval skaldic tradition in Iceland and for Norse mythology.
The Edda has been criticized for imposing Snorri Sturluson’s own Christian views on Norse mythology. In particular the clean-cut explanation of what happens to a soul after death as understood in the Edda contradicts other sources on death in Norse mythology.
At least five hypotheses have been suggested for the origins of the word edda:
The Poetic Edda, also known as Sæmundar Edda or the Elder Edda, is a collection of Old Norse poems from the Icelandic medieval manuscript Codex Regius ("Royal Book"). Along with the Prose Edda, the Poetic Edda is the most expansive source on Norse mythology. The first part of the Codex Regius preserves poems that narrate the creation and foretold destruction and rebirth of the Old Norse mythological world as well as individual myths about gods concerning Norse deities. The poems in the second part narrate legends about Norse heroes and heroines, such as Sigurd, Brynhildr and Gunnar.
It consists of two parts. The first part has 10 songs about gods, and the second one has 19 songs about heroes.
The Codex Regius was written in the 13th century, but nothing is known of its whereabouts until 1643, when it came into the possession of Brynjólfur Sveinsson, then the Church of Iceland's Bishop of Skálholt. At that time, versions of the Prose Edda were well known in Iceland, but scholars speculated that there once was another Edda—an Elder Edda—which contained the pagan poems Snorri quotes in his book. When the Codex Regius was discovered, it seemed that this speculation had proven correct. Brynjólfur attributed the manuscript to Sæmundr the Learned, a larger-than-life 12th century Icelandic priest. While this attribution is rejected by modern scholars, the name Sæmundar Edda is still sometimes encountered.
Bishop Brynjólfur sent the Codex Regius as a present to King Christian IV of Denmark, hence the name Codex Regius. For centuries it was stored in the Royal Library in Copenhagen but in 1971 it was returned to Iceland.
The Prose Edda, sometimes referred to as the Younger Edda or Snorri's Edda, is an Icelandic manual of poetics which also contains many mythological stories. Its purpose was to enable Icelandic poets and readers to understand the subtleties of alliterative verse, and to grasp the mythological allusions behind the many kennings that were used in skaldic poetry.
It was written by the Icelandic scholar and historian Snorri Sturluson around 1220. It survives in four known manuscripts and three fragments, written down from about 1300 to about 1600.
The Prose Edda consists of a Prologue and three separate books: Gylfaginning, concerning the creation and foretold destruction and rebirth of the Norse mythical world; Skáldskaparmál, a dialogue between Ægir, a Norse god connected with the sea, and Bragi, the skaldic god of poetry; and Háttatal, a demonstration of verse forms used in Norse mythology. | [
{
"paragraph_id": 0,
"text": "\"Edda\" (/ˈɛdə/; Old Norse Edda, plural Eddur) is an Old Norse term that has been applied by modern scholars to the collective of two Medieval Icelandic literary works: what is now known as the Prose Edda and an older collection of poems (without an original title) now known as the Poetic Edda. The term historically referred only to the Prose Edda, but this usage has fallen out of favour because of confusion with the other work. Both works were written down in Iceland during the 13th century in Icelandic, although they contain material from earlier traditional sources, reaching back into the Viking Age. The books provide the main sources for medieval skaldic tradition in Iceland and for Norse mythology.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Edda has been criticized for imposing Snorri Sturluson’s own Christian views on Norse mythology. In particular the clean-cut explanation of what happens to a soul after death as understood in the Edda contradicts other sources on death in Norse mythology.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At least five hypotheses have been suggested for the origins of the word edda:",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "The Poetic Edda, also known as Sæmundar Edda or the Elder Edda, is a collection of Old Norse poems from the Icelandic medieval manuscript Codex Regius (\"Royal Book\"). Along with the Prose Edda, the Poetic Edda is the most expansive source on Norse mythology. The first part of the Codex Regius preserves poems that narrate the creation and foretold destruction and rebirth of the Old Norse mythological world as well as individual myths about gods concerning Norse deities. The poems in the second part narrate legends about Norse heroes and heroines, such as Sigurd, Brynhildr and Gunnar.",
"title": "The Poetic Edda"
},
{
"paragraph_id": 4,
"text": "It consists of two parts. The first part has 10 songs about gods, and the second one has 19 songs about heroes.",
"title": "The Poetic Edda"
},
{
"paragraph_id": 5,
"text": "The Codex Regius was written in the 13th century, but nothing is known of its whereabouts until 1643, when it came into the possession of Brynjólfur Sveinsson, then the Church of Iceland's Bishop of Skálholt. At that time, versions of the Prose Edda were well known in Iceland, but scholars speculated that there once was another Edda—an Elder Edda—which contained the pagan poems Snorri quotes in his book. When the Codex Regius was discovered, it seemed that this speculation had proven correct. Brynjólfur attributed the manuscript to Sæmundr the Learned, a larger-than-life 12th century Icelandic priest. While this attribution is rejected by modern scholars, the name Sæmundar Edda is still sometimes encountered.",
"title": "The Poetic Edda"
},
{
"paragraph_id": 6,
"text": "Bishop Brynjólfur sent the Codex Regius as a present to King Christian IV of Denmark, hence the name Codex Regius. For centuries it was stored in the Royal Library in Copenhagen but in 1971 it was returned to Iceland.",
"title": "The Poetic Edda"
},
{
"paragraph_id": 7,
"text": "The Prose Edda, sometimes referred to as the Younger Edda or Snorri's Edda, is an Icelandic manual of poetics which also contains many mythological stories. Its purpose was to enable Icelandic poets and readers to understand the subtleties of alliterative verse, and to grasp the mythological allusions behind the many kennings that were used in skaldic poetry.",
"title": "The Prose Edda"
},
{
"paragraph_id": 8,
"text": "It was written by the Icelandic scholar and historian Snorri Sturluson around 1220. It survives in four known manuscripts and three fragments, written down from about 1300 to about 1600.",
"title": "The Prose Edda"
},
{
"paragraph_id": 9,
"text": "The Prose Edda consists of a Prologue and three separate books: Gylfaginning, concerning the creation and foretold destruction and rebirth of the Norse mythical world; Skáldskaparmál, a dialogue between Ægir, a Norse god connected with the sea, and Bragi, the skaldic god of poetry; and Háttatal, a demonstration of verse forms used in Norse mythology.",
"title": "The Prose Edda"
}
]
| "Edda" is an Old Norse term that has been applied by modern scholars to the collective of two Medieval Icelandic literary works: what is now known as the Prose Edda and an older collection of poems now known as the Poetic Edda. The term historically referred only to the Prose Edda, but this usage has fallen out of favour because of confusion with the other work. Both works were written down in Iceland during the 13th century in Icelandic, although they contain material from earlier traditional sources, reaching back into the Viking Age. The books provide the main sources for medieval skaldic tradition in Iceland and for Norse mythology. The Edda has been criticized for imposing Snorri Sturluson’s own Christian views on Norse mythology. In particular the clean-cut explanation of what happens to a soul after death as understood in the Edda contradicts other sources on death in Norse mythology. | 2001-10-27T15:37:02Z | 2023-11-27T09:11:13Z | [
"Template:Cite book",
"Template:IPAc-en",
"Template:Main",
"Template:Reflist",
"Template:Poetic Edda",
"Template:Short description",
"Template:Other uses",
"Template:Fact",
"Template:Cite journal",
"Template:Cite EB1911",
"Template:Norse mythology",
"Template:Old Norse topics",
"Template:Cite web",
"Template:ISBN",
"Template:Cite AmCyc",
"Template:Cite EB9",
"Template:Prose Edda",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Edda |
9,994 | Ephemeris time | The term ephemeris time (often abbreviated ET) can in principle refer to time in association with any ephemeris (itinerary of the trajectory of an astronomical object). In practice it has been used more specifically to refer to:
Most of the following sections relate to the ephemeris time of the 1952 standard.
An impression has sometimes arisen that ephemeris time was in use from 1900: this probably arose because ET, though proposed and adopted in the period 1948–1952, was defined in detail using formulae that made retrospective use of the epoch date of 1900 January 0 and of Newcomb's Tables of the Sun.
The ephemeris time of the 1952 standard leaves a continuing legacy, through its historical unit ephemeris second which became closely duplicated in the length of the current standard SI second (see below: Redefinition of the second).
Ephemeris time (ET), adopted as standard in 1952, was originally designed as an approach to a uniform time scale, to be freed from the effects of irregularity in the rotation of the Earth, "for the convenience of astronomers and other scientists", for example for use in ephemerides of the Sun (as observed from the Earth), the Moon, and the planets. It was proposed in 1948 by G M Clemence.
From the time of John Flamsteed (1646–1719) it had been believed that the Earth's daily rotation was uniform. But in the later nineteenth and early twentieth centuries, with increasing precision of astronomical measurements, it began to be suspected, and was eventually established, that the rotation of the Earth (i.e. the length of the day) showed irregularities on short time scales, and was slowing down on longer time scales. The evidence was compiled by W de Sitter (1927) who wrote "If we accept this hypothesis, then the 'astronomical time', given by the Earth's rotation, and used in all practical astronomical computations, differs from the 'uniform' or 'Newtonian' time, which is defined as the independent variable of the equations of celestial mechanics". De Sitter offered a correction to be applied to the mean solar time given by the Earth's rotation to get uniform time.
Other astronomers of the period also made suggestions for obtaining uniform time, including A Danjon (1929), who suggested in effect that observed positions of the Moon, Sun and planets, when compared with their well-established gravitational ephemerides, could better and more uniformly define and determine time.
Thus the aim developed, to provide a new time scale for astronomical and scientific purposes, to avoid the unpredictable irregularities of the mean solar time scale, and to replace for these purposes Universal Time (UT) and any other time scale based on the rotation of the Earth around its axis, such as sidereal time.
The American astronomer G M Clemence (1948) made a detailed proposal of this type based on the results of the English Astronomer Royal H Spencer Jones (1939). Clemence (1948) made it clear that his proposal was intended "for the convenience of astronomers and other scientists only" and that it was "logical to continue the use of mean solar time for civil purposes".
De Sitter and Clemence both referred to the proposal as 'Newtonian' or 'uniform' time. D Brouwer suggested the name 'ephemeris time'.
Following this, an astronomical conference held in Paris in 1950 recommended "that in all cases where the mean solar second is unsatisfactory as a unit of time by reason of its variability, the unit adopted should be the sidereal year at 1900.0, that the time reckoned in this unit be designated ephemeris time", and gave Clemence's formula (see Definition of ephemeris time (1952)) for translating mean solar time to ephemeris time.
The International Astronomical Union approved this recommendation at its 1952 general assembly. Practical introduction took some time (see Use of ephemeris time in official almanacs and ephemerides); ephemeris time (ET) remained a standard until superseded in the 1970s by further time scales (see Revision).
During the currency of ephemeris time as a standard, the details were revised a little. The unit was redefined in terms of the tropical year at 1900.0 instead of the sidereal year; and the standard second was defined first as 1/31556925.975 of the tropical year at 1900.0, and then as the slightly modified fraction 1/31556925.9747 instead, finally being redefined in 1967/8 in terms of the cesium atomic clock standard (see below).
Although ET is no longer directly in use, it leaves a continuing legacy. Its successor time scales, such as TDT, as well as the atomic time scale IAT (TAI), were designed with a relationship that "provides continuity with ephemeris time". ET was used for the calibration of atomic clocks in the 1950s. Close equality between the ET second and the later SI second (as defined with reference to the cesium atomic clock) has been verified to within 1 part in 10^10.
In this way, decisions made by the original designers of ephemeris time influenced the length of today's standard SI second, and in turn, this has a continuing influence on the number of leap seconds which have been needed for insertion into current broadcast time scales, to keep them approximately in step with mean solar time.
Ephemeris time was defined in principle by the orbital motion of the Earth around the Sun (but its practical implementation was usually achieved in another way, see below). Its detailed definition was based on Simon Newcomb's Tables of the Sun (1895), implemented in a new way to accommodate certain observed discrepancies:
In the introduction to Tables of the Sun, the basis of the tables (p. 9) includes a formula for the Sun's mean longitude at a time, indicated by interval T (in units of Julian centuries of 36525 mean solar days), reckoned from Greenwich Mean Noon on 0 January 1900:
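Ls = 279° 41' 48".04 + 129,602,768".13 T + 1".089 T² . . . . . (1)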
Spencer Jones' work of 1939 showed that differences between the observed positions of the Sun and the predicted positions given by Newcomb's formula demonstrated the need for the following correction to the formula:
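ΔLs = + 1".00 + 2".97 T + 1".23 T² + 0.0748 B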
where "the times of observation are in Universal time, not corrected to Newtonian time," and 0.0748B represents an irregular fluctuation calculated from lunar observations.
Thus, a conventionally corrected form of Newcomb's formula, incorporating the corrections on the basis of mean solar time, would be the sum of the two preceding expressions:
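Ls = 279° 41' 48".04 + 129,602,768".13 T + 1".089 T² + 1".00 + 2".97 T + 1".23 T² + 0.0748 B . . . . . (2)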
Clemence's 1948 proposal, however, did not adopt such a correction of mean solar time. Instead, the same numbers were used as in Newcomb's original uncorrected formula (1), but now applied somewhat prescriptively, to define a new time and time scale implicitly, based on the real position of the Sun:
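Ls = 279° 41' 48".04 + 129,602,768".13 E + 1".089 E² . . . . . (3)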
With this reapplication, the time variable, now given as E, represents time in ephemeris centuries of 36525 ephemeris days of 86400 ephemeris seconds each. The 1961 official reference summarized the concept as such: "The origin and rate of ephemeris time are defined to make the Sun's mean longitude agree with Newcomb's expression"
From the comparison of formulae (2) and (3), both of which express the same real solar motion in the same real time but defined on separate time scales, Clemence arrived at an explicit expression, estimating the difference in seconds of time between ephemeris time and mean solar time, in the sense (ET-UT):
δt = + 24ˢ.349 + 72ˢ.3165 T + 29ˢ.949 T² + 1.821 B . . . . . (4)
with the 24.349 seconds of time corresponding to the 1.00" in ΔLs. Clemence's formula (today superseded by more modern estimations) was included in the original conference decision on ephemeris time. In view of the fluctuation term, practical determination of the difference between ephemeris time and UT depended on observation. Inspection of the formulae above shows that the (ideally constant) units of ephemeris time have been, for the whole of the twentieth century, very slightly shorter than the corresponding (but not precisely constant) units of mean solar time (which, besides their irregular fluctuations, tend to lengthen gradually). This finding is consistent with the modern results of Morrison and Stephenson (see article ΔT).
Although ephemeris time was defined in principle by the orbital motion of the Earth around the Sun, it was usually measured in practice by the orbital motion of the Moon around the Earth. These measurements can be considered as secondary realizations (in a metrological sense) of the primary definition of ET in terms of the solar motion, after a calibration of the mean motion of the Moon with respect to the mean motion of the Sun.
Reasons for the use of lunar measurements were practically based: the Moon moves against the background of stars about 13 times as fast as the Sun's corresponding rate of motion, and the accuracy of time determinations from lunar measurements is correspondingly greater.
When ephemeris time was first adopted, time scales were still based on astronomical observation, as they always had been. The accuracy was limited by the accuracy of optical observation, and corrections of clocks and time signals were published in arrear.
A few years later, with the invention of the cesium atomic clock, an alternative offered itself. Increasingly, after the calibration in 1958 of the cesium atomic clock by reference to ephemeris time, cesium atomic clocks running on the basis of ephemeris seconds began to be used and kept in step with ephemeris time. The atomic clocks offered a further secondary realization of ET, on a quasi-real time basis that soon proved to be more useful than the primary ET standard: not only more convenient, but also more precisely uniform than the primary standard itself. Such secondary realizations were used and described as 'ET', with an awareness that the time scales based on the atomic clocks were not identical to that defined by the primary ephemeris time standard, but rather, an improvement over it on account of their closer approximation to uniformity. The atomic clocks gave rise to the atomic time scale, and to what was first called Terrestrial Dynamical Time and is now Terrestrial Time, defined to provide continuity with ET.
The availability of atomic clocks, together with the increasing accuracy of astronomical observations (which meant that relativistic corrections were at least in the foreseeable future no longer going to be small enough to be neglected), led to the eventual replacement of the ephemeris time standard by more refined time scales including terrestrial time and barycentric dynamical time, to which ET can be seen as an approximation.
In 1976, the IAU resolved that the theoretical basis for its then-current (since 1952) standard of Ephemeris Time was non-relativistic, and that therefore, beginning in 1984, Ephemeris Time would be replaced by two relativistic timescales intended to constitute dynamical timescales: Terrestrial Dynamical Time (TDT) and Barycentric Dynamical Time (TDB). Difficulties were recognized, which led to these, in turn, being superseded in the 1990s by time scales Terrestrial Time (TT), Geocentric Coordinate Time GCT (TCG) and Barycentric Coordinate Time BCT (TCB).
High-precision ephemerides of sun, moon and planets were developed and calculated at the Jet Propulsion Laboratory (JPL) over a long period, and the latest available were adopted for the ephemerides in the Astronomical Almanac starting in 1984. Although not an IAU standard, the ephemeris time argument Teph has been in use at that institution since the 1960s. The time scale represented by Teph has been characterized as a relativistic coordinate time that differs from Terrestrial Time only by small periodic terms with an amplitude not exceeding 2 milliseconds of time: it is linearly related to, but distinct (by an offset and constant rate which is of the order of 0.5 s/a) from the TCB time scale adopted in 1991 as a standard by the IAU. Thus for clocks on or near the geoid, Teph (within 2 milliseconds), but not so closely TCB, can be used as approximations to Terrestrial Time, and via the standard ephemerides Teph is in widespread use.
Partly in acknowledgement of the widespread use of Teph via the JPL ephemerides, IAU resolution 3 of 2006 (re-)defined Barycentric Dynamical Time (TDB) as a current standard. As re-defined in 2006, TDB is a linear transformation of TCB. The same IAU resolution also stated (in note 4) that the "independent time argument of the JPL ephemeris DE405, which is called Teph" (here the IAU source cites), "is for practical purposes the same as TDB defined in this Resolution". Thus the new TDB, like Teph, is essentially a more refined continuation of the older ephemeris time ET and (apart from the < 2 ms periodic fluctuations) has the same mean rate as that established for ET in the 1950s.
Ephemeris time based on the standard adopted in 1952 was introduced into the Astronomical Ephemeris (UK) and the American Ephemeris and Nautical Almanac, replacing UT in the main ephemerides in the issues for 1960 and after. (But the ephemerides in the Nautical Almanac, by then a separate publication for the use of navigators, continued to be expressed in terms of UT.) The ephemerides continued on this basis through 1983 (with some changes due to adoption of improved values of astronomical constants), after which, for 1984 onwards, they adopted the JPL ephemerides.
Previous to the 1960 change, the 'Improved Lunar Ephemeris' had already been made available in terms of ephemeris time for the years 1952—1959 (computed by W J Eckert from Brown's theory with modifications recommended by Clemence (1948)).
Successive definitions of the unit of ephemeris time are mentioned above (History). The value adopted for the 1956/1960 standard second:
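the fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time,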
was obtained from the linear time-coefficient in Newcomb's expression for the solar mean longitude (above), taken and applied with the same meaning for the time as in formula (3) above. The relation with Newcomb's coefficient can be seen from:
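Reconstructed here from Newcomb's linear coefficient of 129 602 768.13″ of mean longitude per Julian century (a value from his Tables that is not restated in this section), together with a full circle of 360 × 60 × 60 = 1 296 000″ and a Julian century of 36 525 days of 86 400 seconds, the relation is

\[ \frac{1}{31\,556\,925.9747} \;=\; \frac{129\,602\,768.13}{360\times 60\times 60\times 36\,525\times 86\,400}, \]

that is, the ephemeris second is the interval in which the Sun's mean longitude, advancing at Newcomb's tabulated mean rate, covers 1/31 556 925.9747 of a full circle (about 0.041″).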
Caesium atomic clocks became operational in 1955, and quickly confirmed that the rotation of the Earth fluctuated irregularly. This established the unsuitability of the mean solar second of Universal Time as a measure of time interval for the most precise purposes. After three years of comparisons with lunar observations, Markowitz et al. (1958) determined that the ephemeris second corresponded to 9 192 631 770 ± 20 cycles of the chosen caesium resonance.
Following this, in 1967/68, the General Conference on Weights and Measures (CGPM) replaced the definition of the SI second by the following:
The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
Although this is an independent definition that does not refer to the older basis of ephemeris time, it uses the same quantity as the value of the ephemeris second measured by the caesium clock in 1958. This SI second, referred to atomic time, was later verified by Markowitz (1988) to be in agreement, within 1 part in 10^10, with the second of ephemeris time as determined from lunar observations.
For practical purposes the length of the ephemeris second can be taken as equal to the length of the second of Barycentric Dynamical Time (TDB) or Terrestrial Time (TT) or its predecessor TDT.
The difference between ET and UT is called ΔT; it changes irregularly, but the long-term trend is parabolic, decreasing from ancient times until the nineteenth century, and increasing since then at a rate corresponding to an increase in the solar day length of 1.7 ms per century (see leap seconds).
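The quoted rate of increase and the parabolic long-term shape are consistent with each other. As a back-of-the-envelope check (the parabolic coefficient attributed below to Morrison and Stephenson is quoted from memory, not from this article): if the excess of the day over 86 400 SI seconds grows linearly at 1.7 ms per century, then the excess accumulated over t centuries is roughly

\[ \Delta T(t) \;\approx\; \tfrac{1}{2}\times 1.7\ \mathrm{ms}\times 36\,525\ \mathrm{days}\times t^{2} \;\approx\; 31\,t^{2}\ \mathrm{seconds}, \]

which is close to the parabolic fit of about \(-20 + 32\,t^{2}\) seconds (t in centuries from around 1820) derived by Morrison and Stephenson from historical observations.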
International Atomic Time (TAI) was set equal to UT2 at 1 January 1958 0:00:00. At that time, ΔT was already about 32.18 seconds. The difference between Terrestrial Time (TT) (the successor to ephemeris time) and atomic time was later defined as follows: TT = TAI + 32.184 seconds exactly.
This difference may be assumed constant: the rates of TT and TAI are designed to be identical.
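A minimal sketch of what this constant offset means in practice, assuming only the 32.184 s relation stated above (the function name and the use of Julian Dates are illustrative choices, not part of any standard library):

```python
TT_MINUS_TAI = 32.184  # seconds; fixed offset between TT and TAI, as noted above

def tai_to_tt_jd(jd_tai: float) -> float:
    """Convert a Julian Date on the TAI scale to the TT scale.

    Because the offset is constant, the conversion is a simple shift of
    32.184 s = 0.0003725 day; for this purpose the ephemeris second, the
    TT second and the TAI (SI) second can be treated as the same length.
    """
    return jd_tai + TT_MINUS_TAI / 86400.0

# Example: noon on 2000-01-01 (TAI) expressed as a TT Julian Date.
print(tai_to_tt_jd(2451545.0))  # -> about 2451545.0003725
```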
9,995 | EastEnders | EastEnders is a British television soap opera created by Julia Smith and Tony Holland which has been broadcast on BBC One since February 1985. Set in the fictional borough of Walford in the East End of London, the programme follows the stories of local residents and their families as they go about their daily lives. Within eight months of the show's original launch, it had reached the number one spot in BARB's television ratings, and has consistently remained among the top-rated series in Britain. Four EastEnders episodes are listed in the all-time top 10 most-watched programmes in the UK, including the number one spot, when over 30 million watched the 1986 Christmas Day episode. EastEnders has been important in the history of British television drama, tackling many subjects that are considered to be controversial or taboo in British culture, and portraying a social life previously unseen on UK mainstream television.
Since co-creator Holland was from a large family in the East End, a theme heavily featured in EastEnders is strong families, and each character is supposed to have their own place in the fictional community. The Beales, Brannings, Mitchells, Slaters and the Watts are some of the families that have been central to the soap's notable and dramatic storylines. EastEnders has been filmed at the BBC Elstree Centre since its inception, with a set that is outdoors and open to weather. In 2014, the BBC announced plans to rebuild the set entirely. Filming commenced on the new set in January 2022, and it was first used on-screen in March 2022. Demolition on the old set commenced in November 2022.
EastEnders has received both praise and criticism for many of its storylines, which have dealt with difficult themes including violence, rape, murder and abuse. It has been criticised for various storylines, including the 2010 baby swap storyline, which attracted over 6,000 complaints, as well as complaints of showing too much violence and allegations of national and racial stereotypes. However, EastEnders has also been commended for representing real-life issues and spreading awareness on social topics. The cast and crew of the show have received and been nominated for various awards.
In March 1983, under two years before EastEnders' first episode was broadcast, the show was a vague idea in the mind of a handful of BBC executives, who decided that what BBC One needed was a popular bi-weekly drama series that would attract the kind of mass audiences that ITV were getting with Coronation Street. The first people to whom David Reid, then head of series and serials, turned were Julia Smith and Tony Holland, a well established producer/script editor team who had first worked together on Z-Cars. The outline that Reid presented was vague: two episodes a week, 52 weeks a year. After the concept was put to them on 14 March 1983, Smith and Holland then went about putting their ideas down on paper; they decided it would be set in the East End of London. Granada Television gave Smith unrestricted access to the Coronation Street production for a month so that she could get a sense of how a continuing drama was produced.
There was anxiety at first that the viewing public would not accept a new soap set in the south of England, though research commissioned by lead figures in the BBC revealed that southerners would accept a northern soap, northerners would accept a southern soap and those from the Midlands, as Julia Smith herself pointed out, did not mind where it was set as long as it was somewhere else. This was the beginning of a close and continuing association between EastEnders and audience research, which, though commonplace today, was something of a revolution in practice.
The show's creators were both Londoners, but when they researched Victorian squares, they found massive changes in areas they thought they knew well; however, delving further into the East End of London, they found exactly what they had been searching for: a real East End spirit, an inward-looking quality, a distrust of strangers and authority figures, a sense of territory and community that the creators summed up as "Hurt one of us and you hurt us all".
When developing EastEnders, both Smith and Holland looked at influential models like Coronation Street, but they found that it offered a rather outdated and nostalgic view of working-class life. Only after EastEnders began, and featured the characters of Tony Carpenter and Kelvin Carpenter, did Coronation Street start to feature black characters, for example. They came to the conclusion that Coronation Street had grown old with its audience, and that EastEnders would have to attract a younger, more socially extensive audience, ensuring that it had the longevity to retain it for many years thereafter. They also looked at Brookside, but found there was a lack of central meeting points for the characters, making it difficult for the writers to intertwine different storylines, so EastEnders was set in Albert Square.
A previous UK soap set in an East End market was ATV's Market in Honey Lane; however, between 1967 and 1969, this show, which graduated from one showing a week to two in three separate series (the latter series being shown in different time slots across the ITV network), was very different in style and approach from EastEnders. The British Film Institute described Market in Honey Lane thus: "It was not an earth-shaking programme, and certainly not pioneering in any revolutionary ideas in technique and production, but simply proposed itself to the casual viewer as a mildly pleasant affair."
The target launch date was originally January 1985. Smith and Holland had eleven months in which to write, cast and shoot the whole thing; however, in February 1984, they did not even have a title or a place to film. Both Smith and Holland were unhappy about the January 1985 launch date, favouring November or even September 1984 when seasonal audiences would be higher, but the BBC stayed firm, and Smith and Holland had to concede that, with the massive task of getting the Elstree Studios operational, January was the most realistic date; however, this was later to be changed to February.
The project had a number of working titles: Square Dance, Round the Square, Round the Houses, London Pride and East 8. It was the latter that stuck (E8 is the postcode for Hackney) in the early months of the creative process; however, the show was renamed after many casting agents mistakenly thought the show was to be called Estate, and the fictional postcode E20 was created, instead of using E8. Julia Smith came up with the name Eastenders after she and Holland had spent months telephoning theatrical agents and asking "Do you have any real East Enders on your books?" Smith thought "Eastenders" "looked ugly written down" and was "hard to say", so decided to capitalise the second "e".
After they decided on the filming location of BBC Elstree Centre in south Hertfordshire, Smith and Holland set about creating the 23 characters needed, in just 14 days. They took a holiday in Playa de los Pocillos, Lanzarote, and started to create the characters. Holland created the Beale and Fowler family, drawing on his own background. His mother, Ethel Holland, was one of four sisters raised in Walthamstow. Her eldest sister, Lou, had married a man named Albert Beale and had two children, named Peter and Pauline. These family members were the basis for Lou Beale, Pete Beale and Pauline Fowler. Holland also created Pauline's unemployed husband Arthur Fowler, their children Mark and Michelle, Pete's wife Kathy and their son Ian. Smith used her personal memories of East End residents she met when researching Victorian squares. Ethel Skinner was based on an old woman she met in a pub, with ill-fitting false teeth, and a "face to rival a neon sign", holding a Yorkshire Terrier in one hand and a pint of Guinness in the other.
Other characters created included Jewish doctor Harold Legg, the Anglo-Cypriot Osman family (Ali, Sue and baby Hassan), black father and son Tony and Kelvin Carpenter, single mother Mary Smith and Bangladeshi couple Saeed and Naima Jeffery. Jack, Pearl and Tracey Watts were created to bring "flash, trash, and melodrama" to the Square (they were later renamed Den, Angie and Sharon). The characters of Andy O'Brien and Debbie Wilkins were created to show a modern couple with upwardly mobile pretensions, and Lofty Holloway to show an outsider, someone who did not fit in with other residents. It was decided that he would be a former soldier, as Holland's personal experiences of ex-soldiers were that they had trouble fitting into society after being in the army. When they compared the characters they had created, Smith and Holland realised they had created a cross-section of East End residents. The Beale and Fowler family represented the old families of the East End, who had always been there. The Osmans, Jefferys and Carpenters represented the more modern diverse ethnic community of the East End. Debbie, Andy and Mary represented more modern-day individuals.
Once they had decided on their 23 characters, they returned to London for a meeting with the BBC. Everyone agreed that EastEnders would be tough, violent on occasion, funny and sharp—set in Margaret Thatcher's Britain—and it would start with a bang (namely the death of Reg Cox). They decided that none of their existing characters were wicked enough to have killed Reg, so a 24th character, Nick Cotton was added to the line-up. He was a racist thug, who often tried to lead other young characters astray. When all the characters had been created, Smith and Holland set about casting the actors, which also involved the input of lead director Matthew Robinson, who supervised auditions with the other directors at the outset, Vivienne Cozens and Peter Edwards.
Through the next few months, the set was growing rapidly at Elstree, and a composer and designer had been commissioned to create the title sequence. Simon May wrote the theme music and Alan Jeapes created the visuals. The visual images were taken from an aircraft flying over the East End of London at 1000 feet. Approximately 800 photographs were taken and pieced together to create one big image. The credits were later updated when the Millennium Dome was built.
The launch was delayed until February 1985 due to a delay in the chat show Wogan, which was to be part of the major revamp in BBC1's schedules. Smith was uneasy about the late start as EastEnders no longer had the winter months to build up a loyal following before the summer ratings lull. The press were invited to Elstree to meet the cast and see the lot, and stories immediately started circulating about the show, about a rivalry with ITV (who were launching their own market-based soap, Albion Market) and about the private lives of the cast. Anticipation and rumour grew in equal measure until the first transmission at 7 p.m. on 19 February 1985. Neither Holland nor Smith could watch; they both instead returned to the place where it all began, Albertine's Wine Bar on Wood Lane. The next day, viewing figures were confirmed at 17 million. The reviews were largely favourable, although, after three weeks on air, BBC1's early evening share had returned to the pre-EastEnders figure of seven million, though EastEnders then climbed to highs of up to 23 million later on in the year. Following the launch, both group discussions and telephone surveys were conducted to test audience reaction to early episodes.
Press coverage of EastEnders, which was already intense, went into overdrive once the show was broadcast. With public interest so high, the media began investigating the private lives of the show's popular stars. Within days, a scandalous headline appeared – "EASTENDERS STAR IS A KILLER". This referred to Leslie Grantham, and his prison sentence for the murder of a taxi driver in an attempted robbery nearly 20 years earlier. This shocking tell-all style set the tone for relations between Albert Square and the press for the next 20 years.
The show's first episode attracted some 17 million viewers, and it continued to attract high viewing figures from then on. By Christmas 1985, the tabloids could not get enough of the soap. "Exclusives" about EastEnders storylines and the actors on the show became a staple of tabloid buyers' daily reading.
In 1987, the show featured the first same-sex kiss on a British soap, when Colin Russell (Michael Cashman) kissed boyfriend Barry Clarke on the forehead. This was followed in January 1989, less than a year after legislation came into effect in the UK, prohibiting the "promotion of homosexuality" by local authorities, by the first on-the-mouth gay kiss in a British soap when Colin kissed a new character, Guido Smith (Nicholas Donovan), an episode that was watched by 17 million people.
Writer Colin Brake suggested that 1989 was a year of big change for EastEnders, both behind the cameras and in front of them. Original production designer Keith Harris left the show, and Holland and Smith both decided that the time had come to move on too, their final contribution coinciding with the exit of one of EastEnders' most successful characters, Den Watts (Leslie Grantham). Producer Mike Gibbon was given the task of running the show, and he enlisted the most experienced writers to take over the storylining of the programme, including Charlie Humphreys, Jane Hollowood and Tony McHale.
According to Brake, the departure of two of the soap's most popular characters, Den and Angie Watts (Anita Dobson), left a void in the programme, which needed to be filled. In addition, several other long-running characters left the show that year, including Sue and Ali Osman (Sandy Ratcliff and Nejdet Salih) and their family; Donna Ludlow (Matilda Ziegler); Carmel Jackson (Judith Jacob) and Colin Russell (Michael Cashman). Brake indicated that the production team decided that 1989 was to be a year of change in Walford, commenting, "it was almost as if Walford itself was making a fresh start".
By the end of 1989, EastEnders had acquired a new executive producer, Michael Ferguson, who had previously been a successful producer on ITV's The Bill. Brake suggested that Ferguson was responsible for bringing in a new sense of vitality and creating a programme that was more in touch with the real world than it had been over the previous year.
A new era began in 1990, with the introduction of Phil Mitchell (Steve McFadden) and Grant Mitchell (Ross Kemp)—the Mitchell brothers—successful characters who would go on to dominate the soap thereafter. As the new production team cleared the way for new characters and a new direction, all of the characters introduced under Gibbon were axed from the show at the start of the year. Ferguson introduced other characters and was responsible for storylines including HIV, Alzheimer's disease and murder. After a successful revamp of the soap, Ferguson decided to leave EastEnders in July 1991. Ferguson was succeeded by both Leonard Lewis and Helen Greaves, who initially shared the role as Executive Producer for EastEnders. Lewis and Greaves formulated a new regime for EastEnders, giving the writers of the serial more authority in storyline progression, with the script department providing "guidance rather than prescriptive episode storylines". By the end of 1992, Greaves had left, and Lewis became executive and series producer. He left EastEnders in 1994 after the BBC controllers demanded an extra episode a week, taking its weekly airtime from 60 to 90 minutes. Lewis felt that producing an hour of "reasonable quality drama" a week was the maximum that any broadcasting system could generate without loss of integrity. Having set up the transition to the new schedule, the first trio of episodes—dubbed The Vic siege—marked Lewis's departure from the programme. Barbara Emile then became the Executive Producer of EastEnders, remaining with EastEnders until early 1995. She was succeeded by Corinne Hollingworth.
Hollingworth's contributions to the soap were awarded in 1997 when EastEnders won the BAFTA for Best Drama Series. Hollingworth shared the award with the next Executive Producer, Jane Harris. Harris was responsible for the critically panned Ireland episodes and Cindy Beale's attempted assassination of Ian Beale, which brought in an audience of 23 million in 1996, roughly four million more than Coronation Street. In 1998 Matthew Robinson was appointed as the Executive Producer of EastEnders. During his reign, EastEnders won the BAFTA for "Best Soap" in consecutive years 1999 and 2000 and many other awards. Robinson also earned tabloid soubriquet "Axeman of Albert Square" after sacking a large number of characters in one hit, and several more thereafter. In their place, Robinson introduced new long-running characters including Melanie Healy, Jamie Mitchell, Lisa Shaw, Steve Owen and Billy Mitchell.
John Yorke became the Executive Producer of EastEnders in 2000. Yorke was given the task of introducing the soap's fourth weekly episode. He axed the majority of the Di Marco family, except Beppe di Marco, and helped introduce popular characters such as the Slater family. In what Mal Young described as "two of EastEnders' most successful years", Yorke was responsible for highly rated storylines such as "Who Shot Phil?", Ethel Skinner's death, Jim Branning and Dot Cotton's marriage, Trevor Morgan's domestic abuse of his wife Little Mo Morgan, and Kat Slater's revelation to her daughter Zoe Slater that she was her mother.
In 2002, Louise Berridge succeeded Yorke as the Executive Producer. During her time at EastEnders, Berridge introduced popular characters such as Alfie Moon, Dennis Rickman, Chrissie Watts, Jane Beale, Stacey Slater and the critically panned Indian Ferreira family.
Berridge was responsible for some ratings success stories, such as Alfie and Kat Slater's relationship, Janine Butcher getting her comeuppance, Trevor Morgan and Jamie Mitchell's death storylines and the return of one of the greatest soap icons, Den Watts, who had been presumed dead for 14 years. His return in late 2003 was watched by over 16 million viewers, putting EastEnders back at number one in the ratings war with Coronation Street; however, other storylines, such as one about a kidney transplant involving the Ferreiras, were not well received, and although Den Watts's return proved to be a ratings success, the British press branded the plot unrealistic and felt that it questioned the show's credibility. A severe press backlash followed after Den's actor, Leslie Grantham, was outed in an internet sex scandal, which coincided with a swift decline in viewer ratings. The scandal led to Grantham's departure from the soap, but the occasion was used to mark the 20th anniversary of EastEnders, with an episode showing Den's murder at the Queen Vic pub.
On 21 September 2004, Berridge quit as executive producer of EastEnders following continued criticism of the show. Kathleen Hutchison was swiftly appointed as the Executive Producer of EastEnders, and was tasked with quickly turning the fortunes of the soap. During her time at the soap, Hutchison axed multiple characters, and reportedly ordered the rewriting of numerous scripts. Newspapers reported on employee dissatisfaction with Hutchison's tenure at EastEnders. In January 2005, Hutchison left the soap and John Yorke (who, by this time, was the BBC Controller of Continuing Drama Series) took total control of the show himself and became acting Executive Producer for a short period, before appointing Kate Harwood to the role. Harwood stayed at EastEnders for 20 months before being promoted by the BBC. The highly anticipated return of Ross Kemp as Grant Mitchell in October 2005 proved to be a sudden major ratings success, with the first two episodes consolidating to ratings of 13.21 to 13.34 million viewers. On Friday 11 November 2005, EastEnders was the first British drama to feature a two-minute silence. This episode later went on to win the British Soap Award for "Best Single Episode". In October 2006, Diederick Santer took over as Executive Producer of EastEnders. He introduced several characters to the show, including ethnic minority and homosexual characters to make the show 'feel more 21st Century'. Santer also reintroduced past and popular characters to the programme.
On 2 March 2007, BBC signed a deal with Google to put videos on YouTube. A behind the scenes video of EastEnders, hosted by Matt Di Angelo, who played Deano Wicks on the show, was put on the site the same day, and was followed by another on 6 March 2007. In April 2007, EastEnders became available to view on mobile phones, via 3G technology, for 3, Vodafone and Orange customers. On 21 April 2007, the BBC launched a new advertising campaign using the slogan "There's more to EastEnders". The first television advert showed Dot Branning with a refugee baby, Tomas, whom she took in under the pretence of being her grandson. The second and third featured Stacey Slater and Dawn Swann, respectively. There have also been adverts in magazines and on radio.
In 2009, producers introduced a limit on the number of speaking parts in each episode due to budget cuts, with an average of 16 characters per episode. The decision was criticised by Martin McGrath of Equity, who said: "Trying to produce quality TV on the cheap is doomed to fail." The BBC responded by saying they had been working that way for some time and it had not affected the quality of the show.
From 4 February 2010, CGI was used in the show for the first time, with the addition of computer-generated trains.
EastEnders celebrated its 25th anniversary on 19 February 2010. Santer came up with several plans to mark the occasion, including the show's first episode to be broadcast live, the second wedding between Ricky Butcher and Bianca Jackson and the return of Bianca's relatives, mother Carol Jackson, and siblings Robbie Jackson, Sonia Fowler and Billie Jackson. He told entertainment website Digital Spy, "It's really important that the feel of the week is active and exciting and not too reflective. There'll be those moments for some of our longer-serving characters that briefly reflect on themselves and how they've changed. The characters don't know that it's the 25th anniversary of anything, so it'd be absurd to contrive too many situations in which they're reflective on the past. The main engine of that week is great stories that'll get people talking." The live episode featured the death of Bradley Branning (Charlie Clements) at the conclusion of the "Who Killed Archie?" storyline, which saw Bradley's wife Stacey Slater (Lacey Turner) reveal that she was the murderer. Viewing figures peaked at 16.6 million, which was the highest viewed episode in seven years. Other events to mark the anniversary were a spin-off DVD, EastEnders: Last Tango in Walford, and an Internet spin-off, EastEnders: E20.
Santer officially left EastEnders in March 2010, and was replaced by Bryan Kirkwood. Kirkwood's first signing was the reintroduction of characters Alfie Moon (Shane Richie) and Kat Moon (Jessie Wallace), and his first new character was Vanessa Gold, played by Zöe Lucker. In April and May 2010, Kirkwood axed eight characters from the show. Barbara Windsor also left her role as Peggy Mitchell, which left a hole in the show that Kirkwood decided to fill by bringing back Kat and Alfie, a move he said would "herald the new era of EastEnders." EastEnders started broadcasting in high definition on 25 December 2010. Old sets had to be rebuilt, so The Queen Victoria set was burnt down in a storyline (and in reality) to facilitate this.
In November 2011, a storyline showed character Billy Mitchell, played by Perry Fenwick, selected to be a torch bearer for the 2012 Summer Olympics. In reality, Fenwick carried the torch through the setting of Albert Square, with live footage shown in the episode on 23 July 2012. This was the second live broadcast of EastEnders. In 2012, Kirkwood chose to leave his role as executive producer and was replaced by Lorraine Newman. The show lost many of its significant characters during this period. Newman stepped down as executive producer after 16 months in the job in 2013 after the soap was criticised for its boring storylines and its lowest-ever viewing figures of around 4.8 million. Dominic Treadwell-Collins was appointed as the new executive producer on 19 August 2013 and was credited on 9 December. He axed multiple characters from the show and introduced the extended Carter family. He also introduced a long-running storyline, "Who Killed Lucy Beale?", which peaked during the show's 30th anniversary in 2015 with a week of live episodes. Treadwell-Collins announced his departure from EastEnders on 18 February 2016.
Sean O'Connor, former EastEnders series story producer and then-editor on radio soap opera The Archers, was announced to be taking over the role. Treadwell-Collins left on 6 May and O'Connor's first credited episode was broadcast on 11 July. Although O'Connor's first credited episode aired in July, his own creative work was not seen onscreen until late September. Additionally, Oliver Kent was brought in as the Head of Continuing Drama Series for BBC Scripted Studios, meaning that Kent would oversee EastEnders along with O'Connor. O'Connor's approach to the show was to have a firmer focus on realism, which he said was being "true to EastEnders' DNA and [finding] a way of capturing what it would be like if Julia Smith and Tony Holland were making the show now." He said that "EastEnders has always had a distinctly different tone from the other soaps but over time we've diluted our unique selling point. I think we need to be ourselves and go back to the origins of the show and what made it successful in the first place. It should be entertaining but it should also be informative—that's part of our unique BBC compact with the audience. It shouldn't just be a distraction from your own life, it should be an exploration of the life shared by the audience and the characters." O'Connor planned to stay with EastEnders until the end of 2017, but announced his departure on 23 June 2017 with immediate effect, saying he wanted to concentrate on a career in film. John Yorke returned as a temporary executive consultant. Kent said, "John Yorke is a Walford legend and I am thrilled that he will be joining us for a short period to oversee the show and to help us build on Sean's legacy while we recruit a long-term successor." Yorke initially returned for three months but his contract was later extended.
In July 2018, a special episode was aired as part of a knife crime storyline. This episode showed the funeral of Shakil Kazemi (Shaheen Jafargholi), interspersed with real people talking about their true-life experiences of knife crime. On 8 August 2018, it was announced that Kate Oates, who had previously been a producer on the ITV soap operas Emmerdale and Coronation Street, would become Senior Executive Producer of EastEnders, as well as of Holby City and Casualty. Oates began her role in October, and continued to work with Yorke until the end of the year to "ensure a smooth handover". It was also announced that Oates was looking for an Executive Producer to work under her. Jon Sen was announced on 10 December 2018 to be taking on the role of executive producer.
In late 2016, popularity and viewership of EastEnders began to decline, with viewers criticising the storylines during the O'Connor reign, such as the killing of the Mitchell sisters and a storyline centred around the local bin collection. However, since Yorke and Oates' reigns, opinions towards the storylines have become more favourable, with storylines such as Ruby Allen's (Louisa Lytton) sexual consent, which featured a special episode that "broke new ground", and knife crime, both of which have created "vital" discussions. The soap won the award for Best Continuing Drama at the 2019 British Academy Television Awards, its first high-profile award since 2016; however, in June 2019, EastEnders suffered its lowest ever ratings of 2.4 million due to its airing at 7 pm because of the BBC's coverage of the 2019 FIFA Women's World Cup. As of 2019, the soap is one of the most watched series on BBC iPlayer and averages around 5 million viewers per episode. The soap enjoyed a record-breaking year on the streaming platform in 2019, with viewers requesting to stream or download the show 234 million times, up 10% on 2018. The Christmas Day episode in 2019 became EastEnders' biggest ever episode on BBC iPlayer, with 2.14 million viewer requests.
In February 2020, EastEnders celebrated its 35th anniversary with a stunt on the River Thames leading to the death of Dennis Rickman Jr (Bleu Landau).
It was announced on 18 March 2020 that production had been suspended on EastEnders and other BBC Studios continuing dramas in light of new government guidelines during the COVID-19 pandemic, and that broadcast of the show would be reduced to two 30-minute episodes per week, broadcast on Mondays and Tuesdays, respectively. A spokesperson confirmed that the decision was made to reduce transmission so that EastEnders could remain on-screen for longer. Two months later, Charlotte Moore, the director of content at the BBC, announced plans for a return to production. She confirmed that EastEnders would return to filming during June 2020 and that there would be a transmission break between episodes filmed before and after production paused. It was also confirmed that when production recommenced, social distancing measures would be utilised and the show's cast would be required to do their own hair and make-up, which is normally done by a make-up artist.
It was announced on 3 June 2020 that EastEnders would go on a transmission break following the broadcast of episode 6124 on 16 June. A behind-the-scenes show, EastEnders: Secrets From The Square, aired in the show's place during the transmission break, hosted by television personality Stacey Dooley. The first episode of each week featured exclusive interviews with the show's cast, while the second was a repeat of "iconic" episodes of the show. Beginning on 22 June 2020, Dooley interviewed two cast members together in the show's restaurant set while observing social distancing measures. Kate Phillips, the controller of BBC Entertainment, explained that EastEnders: Secrets From The Square would be the "perfect opportunity to celebrate the show" during its absence. Jon Sen, the show's executive producer, expressed his excitement at the new series, dubbing it "a unique opportunity to see from the cast themselves just what it is like to be part of EastEnders".
Plans for the show's return to transmission were announced on 12 June 2020. It was confirmed that after the transmission break, the show would temporarily broadcast four 20-minute episodes per week, until it could return to its normal output. Sen explained that the challenges in producing and filming the show had led to the show's reduced output, but also stated that the crew had been "trialing techniques, filming methods and new ways of working" to prepare the show for its return. Filming recommenced on 29 June, with episodes airing from 7 September 2020.
On 9 April 2021, following the death of Prince Philip, Duke of Edinburgh, the episode of EastEnders that was due to be aired that night was postponed along with the final of MasterChef. In May 2021, it was announced that from 14 June 2021, boxsets of episodes would be uploaded to BBC iPlayer each Monday for three weeks. Executive producer Sen explained that, given the biennial scheduling conflicts that the UEFA European Championship and the FIFA World Cup cause for the soap, premiering four episodes on the streaming service would be beneficial for fans of the show who want to watch at their own chosen pace. Sen also confirmed that the episodes would still air on BBC One throughout the week. The release of these boxsets was extended for a further five weeks, due to similar impacts caused by the 2020 Summer Olympics.
On 12 October 2021, it was announced that EastEnders would partake in a special week-long crossover event involving multiple British soaps to promote the topic of climate change ahead of the 2021 United Nations Climate Change Conference. During the week, beginning from 1 November, a social media clip featuring Maria Connor from Coronation Street was featured on the programme while Cindy Cunningham from Hollyoaks was also referenced. Similar clips featuring the show's own characters (Bailey Baker and Peter Beale) were featured on Doctors and Emmerdale during the week.
In November 2021, it was announced that Sen would step down from his role as executive producer, and would be succeeded by former story producer Chris Clenshaw. Sen's final credited episode as executive producer was broadcast on 10 March 2022 and coincided with a week of episodes that saw the arrest of serial killer Gray Atkins (Toby-Alexander Smith). From the week commencing on 7 March 2022, the show has been broadcast from Monday to Thursday in a 7:30 pm slot, making it the first time in the show's history that the programme began airing permanently on Wednesdays. On 2 June 2022, EastEnders aired an episode celebrating the Platinum Jubilee of Elizabeth II. Charles, Prince of Wales and Camilla, Duchess of Cornwall guest starred in the episode; it also marked the first executive producer credit for Clenshaw. Clenshaw's first major decision as executive producer was the axing of five series regulars: Peter Beale (Dayle Hudson), Stuart Highway (Ricky Champ), Jada Lennox (Kelsey Calladine-Smith), Dana Monroe (Barbara Smith) and Lola Pearce (Danielle Harold). Viewers criticised the decision, feeling that some of the characters had potential to add to the soap. Clenshaw has since overseen the returns of Alfie Moon (Shane Richie) and Yolande Trueman (Angela Wynter), the recasts of Amy Mitchell (Ellie Dadd) and Ricky Branning (Frankie Day), as well as the reintroduction of Cindy Beale (Michelle Collins), who returned from the dead after 25 years. Public opinion on Clenshaw then changed and he has been credited with improving ratings and garnering critical acclaim for the soap, with EastEnders winning the award for Best British Soap at the 2023 British Soap Awards and the award for Serial Drama at the 28th National Television Awards under his leadership.
The central focus of EastEnders is Albert Square, a fictional Victorian square in the fictional London Borough of Walford. In the show's narrative, Albert Square is a 19th-century square named after Prince Albert (1819–1861), the husband of Queen Victoria (1819–1901, reigned 1837–1901); accordingly, at its centre is The Queen Victoria Public House (also known as The Queen Vic or The Vic). The show's producers based the square's design on Fassett Square in Dalston, which also has a nearby market at Ridley Road. The postcode for that area, E8, was one of the working titles for the series. The name Walford is both a street in Dalston where Tony Holland lived and a blend of Walthamstow and Stratford, the areas of Greater London where the creators were born. Other parts of the Square and set interiors are based on other locations. The railway bridge is based on one near BBC Television Centre which carries the Hammersmith & City line over Wood Lane W12, and the Queen Vic on the former College Park Hotel pub in Willesden, at the end of Scrubs Lane at its junction with Harrow Road NW10, a couple of miles from BBC Television Centre.
Walford East is a fictional London Underground station for Walford, and a tube map that was first seen on air in 1996 showed Walford East between Bow Road and West Ham, in the actual location of Bromley-by-Bow on the District and Hammersmith & City lines.
Walford has the postal district of E20. It was named as if Walford were part of the actual E postcode area which covers much of east London, the E standing for Eastern. E20 was entirely fictional when it was created, as London East postal districts stopped at E18 at the time. The show's creators opted for E20 instead of E19 as it was thought to sound better.
In March 2011, Royal Mail allocated the E20 postal district to the 2012 Olympic Park. In September 2011, the postcode for Albert Square was revealed in an episode as E20 6PQ.
EastEnders is built around the idea of relationships and strong families, with each character having a place in the community. This theme encompasses the whole Square, making the entire community a family of sorts, prey to upsets and conflict, but pulling together in times of trouble. Co-creator Tony Holland was from a large East End family, and such families have typified EastEnders. The first central family was the combination of the Fowler family, consisting of Pauline Fowler (Wendy Richard), her husband Arthur (Bill Treacher), and teenage children Mark (David Scarboro/Todd Carty) and Michelle (Susan Tully). Pauline's family, the Beales, consisted of Pauline's twin brother Pete Beale (Peter Dean), his wife Kathy (Gillian Taylforth) and their teenage son Ian (Adam Woodyatt). Pauline and Pete's domineering mother Lou Beale (Anna Wing) lived with Pauline and her family. Holland drew on the names of his own family for the characters.
The Watts and Mitchell families have been central to many notable EastEnders storylines: the show was dominated by the Watts in the 1980s, with the 1990s focusing on the Mitchells and Butchers. The early 2000s saw a shift in attention towards the newly introduced female Slater clan, before a renewal of emphasis on the restored Watts family beginning in 2003. In 2006, EastEnders became largely dominated by the Mitchell, Masood and Branning families, though the early 2010s also saw a renewed focus on the Moon and Slater families, and, from 2013 onwards, the Carters. In 2016, the Fowlers were revived and merged with the Slaters, with Martin Fowler (James Bye) marrying Stacey Slater (Lacey Turner). The late 2010s saw the newly introduced Taylor family become central to the show's main storylines, and in 2019, the first Sikh family, the Panesars, were introduced. The early 2020s were dominated by the Mitchells, Brannings, Panesars and Slaters, as well as the newly introduced Knight family. Key people involved in the production of EastEnders have stressed how important the idea of strong families is to the programme.
EastEnders has an emphasis on strong family matriarchs, with examples including Pauline Fowler (Wendy Richard) and Peggy Mitchell (Barbara Windsor), helping to attract a female audience. John Yorke, the BBC's former head of drama production, put this down to Tony Holland's "gay sensibility, which showed a love for strong women". The matriarchal role is one that has been seen in various incarnations since the programme's inception, often depicted as the centre of the family unit. The original matriarch was Lou Beale (Anna Wing), though later examples include Mo Harris (Laila Morse), Pat Butcher (Pam St Clement), Zainab Masood (Nina Wadia), Cora Cross (Ann Mitchell), Kathy Beale (Gillian Taylforth), Jean Slater (Gillian Wright), and Suki Panesar (Balvinder Sopal). These characters are often seen as loud and interfering but, most importantly, responsible for the well-being of the family.
The show often includes strong, brassy, long-suffering women who exhibit diva-like behaviour and stoically battle through an array of tragedy and misfortune. Such characters include Angie Watts (Anita Dobson), Kathy Beale (Gillian Taylforth), Sharon Watts (Letitia Dean), Pat Butcher (Pam St Clement), Peggy Mitchell (Barbara Windsor), Kat Slater (Jessie Wallace), Denise Fox (Diane Parish), Tanya Branning (Jo Joyner) and Linda Carter (Kellie Bright). Conversely there are female characters who handle tragedy less well, depicted as eternal victims and endless sufferers, who include Ronnie Mitchell (Samantha Womack), Little Mo Mitchell (Kacey Ainsworth), Laura Beale (Hannah Waterman), Sue Osman (Sandy Ratcliff), Lisa Fowler (Lucy Benjamin), Mel Owen (Tamzin Outhwaite) and Rainie Cross (Tanya Franks). The 'tart with a heart' is another recurring character. Often, their promiscuity masks a hidden vulnerability and a desire to be loved. Such characters have included Pat Butcher (Pam St Clement), Tiffany Mitchell (Martine McCutcheon) and Kat Slater (Jessie Wallace).
A gender balance in the show is maintained via the inclusion of various "macho" male personalities such as Phil Mitchell (Steve McFadden), Grant Mitchell (Ross Kemp), Dan Sullivan (Craig Fairbrass), and George Knight (Colin Salmon), "bad boys" such as Den Watts (Leslie Grantham), Sean Slater (Robert Kazinsky), Michael Moon (Steve John Shepherd), Derek Branning (Jamie Foreman), Vincent Hubbard (Richard Blackwood), and Ravi Gulati (Aaron Thiara) and "heartthrobs" such as Simon Wicks (Nick Berry), Joe Wicks (Paul Nicholls), Jamie Mitchell (Jack Ryder), Dennis Rickman (Nigel Harman), Joey Branning (David Witts), Kush Kazemi (Davood Ghadami) and Zack Hudson (James Farrar). Another recurring male character type is the smartly dressed businessman, often involved in gang culture and crime and seen as a local authority figure. Examples include Steve Owen (Martin Kemp), Jack Dalton (Hywel Bennett), Andy Hunter (Michael Higgs), Johnny Allen (Billy Murray), Derek Branning (Jamie Foreman), and Nish Panesar (Navin Chowdhry). Following criticism aimed at the show's over-emphasis on "gangsters" in 2005, such characters have been significantly reduced. Another recurring male character seen in EastEnders is the "loser" or "soft touch", males often comically under the thumb of their female counterparts, which have included Arthur Fowler (Bill Treacher), Ricky Butcher (Sid Owen), Garry Hobbs (Ricky Groves), Lofty Holloway (Tom Watt), Billy Mitchell (Perry Fenwick) and Howie Danes (Delroy Atkinson).
Other recurring character types that have appeared throughout the serial are "cheeky-chappies" Pete Beale (Peter Dean), Alfie Moon (Shane Richie), Garry Hobbs (Ricky Groves) and Kush Kazemi (Davood Ghadami), "lost girls" such as Mary Smith (Linda Davidson), Donna Ludlow (Matilda Ziegler), Mandy Salter (Nicola Stapleton), Janine Butcher (Charlie Brooks), Zoe Slater (Michelle Ryan), Whitney Dean (Shona McGarty), and Hayley Slater (Katie Jarvis), delinquents such as Stacey Slater (Lacey Turner), Jay Brown (Jamie Borthwick), Lola Pearce (Danielle Harold), Bobby Beale (Eliot Carrington/Clay Milner Russell) and Keegan Baker (Zack Morris), "villains" such as Nick Cotton (John Altman), Trevor Morgan (Alex Ferns), May Wright (Amanda Drew), Yusef Khan (Ace Bhatti), Archie Mitchell (Larry Lamb), Dean Wicks (Matt Di Angelo), Stuart Highway (Ricky Champ) and Gray Atkins (Toby-Alexander Smith), "bitches" such as Cindy Beale (Michelle Collins), Janine Butcher (Charlie Brooks), Chrissie Watts (Tracy-Ann Oberman), Suzy Branning (Maggie O'Neill), Lucy Beale (Melissa Suffield/Hetti Bywater), Clare Bates (Gemma Bissix), Abi Branning (Lorna Fitzgerald), Babe Smith (Annette Badland) and Suki Panesar (Balvinder Sopal), "brawlers" or "fighters" such as Mary Smith (Linda Davidson), Bianca Jackson (Patsy Palmer), Kat Slater (Jessie Wallace), Stacey Slater (Lacey Turner), Shirley Carter (Linda Henry), Chelsea Fox (Tiana Benjamin/Zaraah Abrahams), Roxy Mitchell (Rita Simons) and Karen Taylor (Lorraine Stanley), and cockney "wide boys" or "wheeler dealers" such as Frank Butcher (Mike Reid), Alfie Moon (Shane Richie), Kevin Wicks (Phil Daniels), Darren Miller (Charlie G. Hawkins), Fatboy (Ricky Norwood), Jay Brown (Jamie Borthwick) and Kheerat Panesar (Jaz Deol).
Over the years EastEnders has typically featured a number of elderly residents, who are used to show vulnerability, nostalgia, stalwart-like attributes and are sometimes used for comedic purposes. The original elderly residents included Lou Beale (Anna Wing), Ethel Skinner (Gretchen Franklin) and Dot Cotton (June Brown). Over the years they have been joined by the likes of Mo Butcher (Edna Doré), Jules Tavernier (Tommy Eytle), Marge Green (Pat Coombs), Nellie Ellis (Elizabeth Kelly), Jim Branning (John Bardon), Charlie Slater (Derek Martin), Mo Harris (Laila Morse), Patrick Trueman (Rudolph Walker), Cora Cross (Ann Mitchell), Les Coker (Roger Sloman), Rose Cotton (Polly Perkins), Pam Coker (Lin Blakley), Stan Carter (Timothy West), Babe Smith (Annette Badland), Claudette Hubbard (Ellen Thomas), Sylvie Carter (Linda Marlowe), Ted Murray (Christopher Timothy), Joyce Murray (Maggie Steed), Arshad Ahmed (Madhav Sharma), Mariam Ahmed (Indira Joshi) and Vi Highway (Gwen Taylor). The programme has more recently included a higher number of teenagers and successful young adults in a bid to capture the younger television audience. This has spurred criticism, most notably from the actress Anna Wing, who portrayed Lou Beale in the show. She commented, "I don't want to be disloyal, but I think you need a few mature people in a soap because they give it backbone and body... if all the main people are young it gets a bit thin and inexperienced. It gets too lightweight."
EastEnders has been known to feature a "comedy double-act", originally demonstrated with the characters of Dot and Ethel, whose friendship was one of the serial's most enduring. Other examples include Paul Priestly (Mark Thrippleton) and Trevor Short (Phil McDermott). In 1989 especially, characters were brought in who were deliberately conceived as comic or light-hearted. Such characters included Julie Cooper (Louise Plowright)—a brassy maneater; Marge Green—a batty older lady played by veteran comedy actress Pat Coombs; Trevor Short (Phil McDermott)—the "village idiot"; his friend, northern heartbreaker Paul Priestly (Mark Thrippleton); wheeler-dealer Vince Johnson (Hepburn Graham) and Laurie Bates (Gary Powell), who became Pete Beale's (Peter Dean) sparring partner. The majority of EastEnders' characters are working-class. Middle-class characters do occasionally become regulars, but have been less successful and rarely become long-term characters. In the main, middle-class characters exist as villains, such as James Willmott-Brown (William Boyde), May Wright (Amanda Drew), Stella Crawford (Sophie Thompson), Yusef Khan (Ace Bhatti) and Gray Atkins (Toby-Alexander Smith), or are used to promote positive liberal influences, such as Colin Russell (Michael Cashman), Rachel Kominski (Jacquetta May) and Derek Harkinson (Ian Lavender).
EastEnders has always featured a culturally diverse cast which has included black, Asian, Turkish, Polish and Latvian characters. "The expansion of minority representation signals a move away from the traditional soap opera format, providing more opportunities for audience identification with the characters and hence a wider appeal". Despite this, the programme has been criticised by the Commission for Racial Equality, who argued in 2002 that EastEnders was not giving a realistic representation of the East End's "ethnic make-up". They suggested that the average proportion of visible minority faces on EastEnders was substantially lower than the actual ethnic minority population in East London boroughs, and it, therefore, reflected the East End in the 1960s, not the East End of the 2000s. The programme has since attempted to address these issues. A sari shop was opened and various characters of different ethnicities were introduced throughout 2006 and 2007, including the Fox family, the Ahmeds, and various background artists. This was part of producer Diederick Santer's plan to "diversify", to make EastEnders "feel more 21st century". EastEnders has had varying success with ethnic minority characters. Possibly the least successful were the Indian Ferreira family, who were not well received by critics or viewers and were dismissed as unrealistic by the Asian community in the UK.
EastEnders has been praised for its portrayal of characters with disabilities, including Adam Best (David Proud) (spina bifida), Noah Chambers (Micah Thomas) and Frankie Lewis (Rose Ayling-Ellis) (deaf), Jean Slater (Gillian Wright) and her daughter Stacey (Lacey Turner) (bipolar disorder), Janet Mitchell (Grace) (Down syndrome), Jim Branning (John Bardon) (stroke) and Dinah Wilson (Anjela Lauren Smith) (multiple sclerosis). The show also features a large number of gay, lesbian and bisexual characters (see list of soap operas with LGBT characters), including Colin Russell (Michael Cashman), Barry Clark (Gary Hailes), Simon Raymond (Andrew Lynford), Tony Hills (Mark Homer), Sonia Fowler (Natalie Cassidy), Naomi Julien (Petra Letang), Tina Carter (Luisa Bradshaw-White), Tosh Mackintosh (Rebecca Scroggs), Christian Clarke (John Partridge), Syed Masood (Marc Elliott), Ben Mitchell (Harry Reid/Max Bowden), Paul Coker (Jonny Labey), Iqra Ahmed (Priya Davdra), Ash Panesar (Gurlaine Kaur Garcha), Bernadette Taylor (Clair Norris), Callum Highway (Tony Clay) and Eve Unwin (Heather Peace). Kyle Slater (Riley Carter Millington), a transgender character, was introduced in 2015.
EastEnders has a high cast turnover and characters are regularly changed to facilitate storylines or refresh the format. The show has also become known for the return of characters after they have left. Sharon Watts (Letitia Dean) returned in August 2012 for her third stint on the show. Den Watts (Leslie Grantham) returned in September 2003, 14 years after he was believed to have died, a feat repeated by Kathy Beale (Gillian Taylforth) in 2015 and Cindy Beale (Michelle Collins) in 2023. Speaking extras, including Tracey the barmaid (Jane Slaughter), who has been in the show since the first episode in 1985, have made appearances throughout the show's duration without being the focus of any major storylines. The character of Nick Cotton (John Altman) gained a reputation for making constant exits and returns from the programme's first year until the character died in 2015.
As of June 2023, Gillian Taylforth, Letitia Dean and Adam Woodyatt are the only members of the original cast remaining in the show, in their roles of Kathy Beale, Sharon Watts and Ian Beale respectively. Tracey is the longest-serving female character in the show, having appeared since 1985, albeit as a minor character.
EastEnders programme makers took the decision that the show was to be about "everyday life" in the inner city "today" and regarded it as a "slice of life". Creator/producer Julia Smith declared that "We don't make life, we reflect it". She also said, "We decided to go for a realistic, fairly outspoken type of drama which could encompass stories about homosexuality, rape, unemployment, racial prejudice, etc., in a believable context. Above all, we wanted realism". In 2011, the head of BBC drama, John Yorke, said that the real East End had changed significantly since EastEnders started, and the show no longer truly reflected real life, but that it had an "emotional truthfulness" and was partly "true to the original vision" and partly "adapt[ing] to a changing world", adding that "If it was a show where every house cost a fortune and everyone drove a Lexus, it wouldn't be EastEnders. You have to show shades of that change, but certain things are immutable, I would argue, like The Vic and the market."
In the 1980s, EastEnders featured "gritty" storylines involving drugs and crime, representing the issues faced by working-class Britain under Thatcherism. Storylines included the cot death of 14-month-old Hassan Osman, Nick Cotton's (John Altman) homophobia, racism and murder of Reg Cox (Johnnie Clayton), Arthur Fowler's (Bill Treacher) unemployment reflecting the recession of the 1980s, the rape of Kathy Beale (Gillian Taylforth) in 1988 by James Willmott-Brown (William Boyde) and Michelle Fowler's (Susan Tully) teenage pregnancy. The show also dealt with prostitution, mixed-race relationships, shoplifting, sexism, divorce, domestic violence and mugging. In 1989, the programme came under criticism in the British media for being too depressing, and according to writer Colin Brake, the programme makers were determined to change this. In 1989, there was a deliberate attempt to increase the lighter, more comic aspects of life in Albert Square. This led to the introduction of some characters who were deliberately conceived as comic or light-hearted. Brake suggested that humour was an important element in EastEnders' storylines during 1989, with a greater amount of slapstick and light comedy than before. He classed 1989's changes as a brave experiment, and suggested that while some found this period of EastEnders entertaining, many other viewers felt that the comedy stretched the programme's credibility. Although the programme still covered many issues in 1989, such as domestic violence, drugs, rape and racism, Brake reflected that the new emphasis on a more balanced mix between "light and heavy storylines" gave the illusion that the show had lost a "certain edge".
As the show progressed into the 1990s, EastEnders still featured hard-hitting issues such as Mark Fowler (Todd Carty) revealing he was HIV positive in 1991, the death of his wife Gill (Susanna Dawson) from an AIDS-related illness in 1992, murder, adoption, abortion, Peggy Mitchell's (Barbara Windsor) battle with breast cancer, and Phil Mitchell's (Steve McFadden) alcoholism and violence towards wife Kathy. Mental health issues were confronted in 1996 when 16-year-old Joe Wicks developed schizophrenia following the off-screen death of his sister in a car crash. The long-running storyline of Mark Fowler's HIV was so successful in raising awareness that in 1999, a survey by the National Aids Trust found teenagers got most of their information about HIV from the soap, though one campaigner noted that in some ways the storyline was not reflective of what was happening at the time as the condition was more common among the gay community. Still, heterosexual Mark struggled with various issues connected to his HIV status, including public fears of contamination, a marriage breakdown connected to his inability to have children and the side effects of combination therapies.
In the early 2000s, EastEnders covered the issue of euthanasia with Ethel Skinner's (Gretchen Franklin) death in a pact with her friend Dot Cotton (June Brown), the unveiling of Kat Slater's (Jessie Wallace) sexual abuse by her uncle Harry (Michael Elphick) as a child (which led to the birth of her daughter Zoe (Michelle Ryan), who had been brought up to believe that Kat was her sister), the domestic abuse of Little Mo Morgan (Kacey Ainsworth) by husband Trevor (Alex Ferns) (which involved marital rape and culminated in Trevor's death after he tried to kill Little Mo in a fire), Sonia Jackson (Natalie Cassidy) giving birth at the age of 15 and then putting her baby up for adoption, and Janine Butcher's (Charlie Brooks) prostitution, agoraphobia and drug addiction. The soap also tackled the issue of mental illness and carers of people who have mental conditions, illustrated with mother and daughter Jean (Gillian Wright) and Stacey Slater (Lacey Turner); Jean has bipolar disorder, and teenage daughter Stacey was her carer (this storyline won a Mental Health Media Award in September 2006). Stacey went on to struggle with the disorder herself. The issue of illiteracy was highlighted by the characters of middle-aged Keith (David Spinx) and his young son Darren (Charlie G. Hawkins). EastEnders has also covered the issue of Down syndrome, as Billy (Perry Fenwick) and Honey Mitchell's (Emma Barton) baby, Janet Mitchell (Grace), was born with the condition in 2006. EastEnders covered child abuse with its storyline involving Phil Mitchell's (Steve McFadden) 11-year-old son Ben (Charlie Jones) and lawyer girlfriend Stella Crawford (Sophie Thompson), and child grooming involving the characters Tony King (Chris Coghill) as the perpetrator and Whitney Dean (Shona McGarty) as the victim.
Aside from this, soap opera staples of youthful romance, jealousy, domestic rivalry, gossip and extramarital affairs are regularly featured, with high-profile storylines occurring several times a year. Whodunits also feature regularly, including the "Who Shot Phil?" story arc in 2001 that attracted over 19 million viewers and was one of the biggest successes in British soap television; the "Who Killed Archie?" storyline, which was revealed in a special live episode of the show that drew a peak of 17 million viewers; and the "Who Killed Lucy Beale?" saga.
The exterior set for the fictional Albert Square is located in the permanent backlot of the BBC Elstree Centre, Borehamwood, Hertfordshire, at 51°39′32″N 0°16′40″W, and is outdoors and open to the weather. It was initially built in 1984, at a cost of £750,000, with a specification that it should last for at least 15 years. The EastEnders lot was designed by Keith Harris, a senior designer within the production team, together with supervising art directors Peter Findley and Gina Parr. The main buildings on the square originally consisted of hollow shells, constructed from marine plywood facades mounted onto steel frames. The lower walls, pavements, etc., were constructed of real brick and tarmac. The set had to be made to look as if it had been standing for years. This was achieved by a number of means, including chipping the pavements, using chemicals to crack the top layer of the paintwork, using varnish to create damp patches underneath the railway bridge, and building garden walls in such a way that they appeared to sag. The final touches were added in summer 1984; these included a telephone box, a telegraph pole provided by British Telecom, lampposts provided by Hertsmere Borough Council, and a number of vehicles parked on the square. On each set, appliances such as the gas cookers, the launderette's washing machines and The Queen Victoria's beer pumps are fully functional.
The walls were intentionally built crooked to give them an aged appearance. The drains around the set are real, so rainwater can flow naturally from the streets. The square was built in two phases: only three sides, plus Bridge Street, were built initially in 1984, in time to be used for the show's first episode. In 1986, Harris added an extension to the set, building the fourth side of Albert Square, and in 1987, Turpin Road began to be featured more, which included buildings such as The Dagmar.
In 1993, George Street was added, and soon after Walford East Underground station was built, to create further locations when EastEnders went from two to three episodes per week. The set was constructed by the BBC in-house construction department under construction manager Mike Hagan. Most of the buildings on Albert Square have no interior filming space, with a few exceptions, and most do not have rears or gardens. Some interior shots are filmed in the actual buildings.
In February 2008, it was reported that the set would transfer to Pinewood Studios in Buckinghamshire, where a new set would be built as the set was looking "shabby", with its flaws showing up on high-definition television broadcasts; however, by April 2010 a follow-up report confirmed that Albert Square would remain at Elstree Studios for at least another four years, taking the set through its 25th anniversary. The set was consequently rebuilt for high definition on the same site, using mostly real brick with some areas using a new improved plastic brick. Throughout rebuilding filming would still take place, and so scaffolding was often seen on screen during the process, with some storylines written to accommodate the rebuilding, such as the Queen Vic fire.
In 2014, then executive producer Dominic Treadwell-Collins said that he wanted Albert Square to look like a real-life east London neighbourhood so that the soap would "better reflect the more fashionable areas of east London beloved of young professionals", giving a flavour of the "creeping gentrification" of east London. He added: "It should feel more like London. It's been frozen in aspic for too long." The BBC announced that it would rebuild the EastEnders set to secure the long-term future of the show, with completion expected in 2018. The new set was to provide a modern, upgraded exterior filming resource for EastEnders and would copy the appearance of the existing buildings, but be 20 per cent bigger, in order to enable greater editorial ambition and improve working conditions for staff. A temporary set would be created on-site to enable filming to continue while the permanent structure was rebuilt.
In May 2016 the rebuild was delayed until 2020 and forecast to cost in excess of £15 million, although the main part of the set was scheduled to be ready for filming in May 2019. In December 2018, it was revealed that the new set was by then planned to cost £59 million, but a National Audit Office (NAO) report stated that it would actually cost £86.7 million and be completed two-and-a-half years later than planned, in 2023; the NAO concluded that the BBC "could not provide value for money on the project". The NAO's forecast cost is more than the annual combined budget for BBC Radio 1 and Radio 2. The BBC said the new set would be more suitable for HD filming and better reflect the modern East End of London. In March 2019 there was criticism from a group of MPs about how the BBC had handled the redevelopment of the set. In March 2020, during the suspension of filming, the interior sets were used for a new adaptation of Talking Heads, marking the first time that they had been used for anything other than EastEnders. In January 2022 the new £86.7m exterior set of EastEnders was officially unveiled by the BBC, replacing the original set built in 1984, with scenes filmed on the new set first appearing in episodes broadcast that spring.
The majority of EastEnders episodes are filmed at the BBC Elstree Centre in Borehamwood, Hertfordshire. In January 1987, EastEnders had three production teams each comprising a director, production manager, production assistant and assistant floor manager. Other permanent staff included the producer's office, script department and designer, meaning between 30 and 35 people would be working full-time on EastEnders, rising to 60 to 70 on filming days. When the number of episodes was increased to four per week, more studio space was needed, so Top of the Pops was moved from its studio at Elstree to BBC Television Centre in April 2001. Episodes are produced in "quartets" of four episodes, each of which starts filming on a Tuesday and takes nine days to record. Each day, between 25 and 30 scenes are recorded. During the filming week, actors can film for as many as eight to twelve episodes. Exterior scenes are filmed on a specially constructed film lot, and interior scenes take place in six studios. The episodes are usually filmed about six to eight weeks in advance of broadcast. During the winter period, filming can take place up to twelve weeks in advance, due to less daylight for outdoor filming sessions. This time difference has been known to cause problems when filming outdoor scenes. On 8 February 2007, heavy snow fell on the set and filming had to be cancelled as the scenes due to be filmed on the day were to be transmitted in April. EastEnders is normally recorded using four cameras. When a quartet is completed, it is edited by the director, videotape editor and script supervisor. The producer then reviews the edits and decides if anything needs to be re-edited, which the director will do. A week later, sound is added to the episodes and they are technically reviewed, and are ready for transmission if they are deemed of acceptable quality.
Although episodes are predominantly recorded weeks before they are broadcast, EastEnders occasionally includes current events in its episodes. In 1987, EastEnders covered the general election. Using a plan devised by co-creators Smith and Holland, five minutes of material was cut from four of the pre-recorded episodes preceding the election. These were replaced by specially recorded election material, including representatives from each major party, and a scene recorded on the day after the election reflecting the result, which was broadcast the following Tuesday. The result of the 2010 general election was referenced in the episode broadcast on 7 May 2010. During the 2006 FIFA World Cup, actors filmed short scenes following the tournament's events that were edited into the following episode. Last-minute scenes have also been recorded to reference the fiftieth anniversary of the end of the Second World War in 1995, the two-minute silence on Remembrance Day 2005 (2005 also being the sixtieth anniversary of the end of the Second World War and the 200th anniversary of the Battle of Trafalgar), Barack Obama's election victory in 2008, the death of Michael Jackson in 2009, the 2010 Comprehensive Spending Review, Andy Murray winning the Men's Singles at the 2013 Wimbledon Championships, the wedding of Prince William and Catherine Middleton, the birth of Prince George of Wales, Scotland voting against independence in 2014, and the 100th anniversary of the start of the First World War.
EastEnders is often filmed on location, away from the studios in Borehamwood. Sometimes an entire quartet is filmed on location, which serves a practical function: it is the result of EastEnders making a "double bank", when an extra week's worth of episodes is recorded at the same time as the regular schedule, enabling production of the programme to stop for a two-week break at Christmas. These episodes often air in late June or early July and again in late October or early November. The first time this happened was in December 1985, when Pauline (Wendy Richard) and Arthur Fowler (Bill Treacher) travelled to Southend-on-Sea to find their son Mark, who had run away from home. In 1986, EastEnders filmed overseas for the first time, in Venice; this was also the first time it was not filmed on videotape, as a union rule at the time prevented producers taking a video crew abroad and a film crew had to be used instead. In 2011, it was reported that eight per cent of the series is filmed on location.
If scenes during a normal week are to be filmed on location, this is done during the normal recording week. Off-set locations that have been used for filming include Clacton (1989), Devon (September 1990), Hertfordshire (used for scenes set in Gretna Green in July 1991), Portsmouth (November 1991), Milan (1997), Ireland (1997), Amsterdam (December 1999), Brighton (2001) and Portugal (2003). In 2003, filming took place at the Loch Fyne Hotel and Leisure Club in Inveraray, the Arkinglass Estate in Cairndow and the Grims Dyke Hotel in Harrow Weald, north London, for a week of episodes set in Scotland. The episode shown on 9 April 2007 featured scenes filmed at St Giles Church and The Blacksmiths Arms public house in Wormshill, the Ringlestone Inn two miles away, Court Lodge Farm in Stansted, Kent, and the Port of Dover.
Other locations have included the court house, a disused office block, Evershed House, and St Peter's Church, all in St Albans, an abandoned mental facility in Worthing, and a wedding dress shop in Muswell Hill, north London. A week of episodes in 2011 saw filming take place on a beach in Thorpe Bay and a pier in Southend-on-Sea—during which a stuntman was injured when a gust of wind threw him off balance and he fell onto rocks— with other scenes filmed on the Essex coast. In 2012, filming took place in Keynsham, Somerset. In January 2013, on-location filming at Grahame Park in Colindale, north London, was interrupted by at least seven youths who threw a firework at the set and threatened to cut members of the crew. In October 2013, scenes were filmed on a road near London Southend Airport in Essex.
EastEnders has featured seven live broadcasts. For its 25th anniversary in February 2010, a live episode was broadcast in which Stacey Slater (Lacey Turner) was revealed as Archie Mitchell's (Larry Lamb) killer. Turner was told only 30 minutes before the live episode, and to maintain suspense, she whispered the revelation to her character's former lover and father-in-law, Max Branning, in the very final moments of the live show. Many other cast members only found out at the same time as the public, when the episode was broadcast. On 23 July 2012, a segment of that evening's episode was screened live as Billy Mitchell (Perry Fenwick) carried the Olympic flame around Walford in preparation for the 2012 Summer Olympics. In February 2015, for the soap's 30th anniversary, five episodes in a week featured live inserts. Episodes airing on Tuesday 17, Wednesday 18 and Thursday 19 (which featured an hour-long episode and a second episode) all featured at least one live insert. The show revealed that the killer of Lucy Beale (Hetti Bywater) was her younger brother, Bobby (Eliot Carrington), during the second episode on the Thursday, after a ten-month mystery regarding who killed her. In a flashback episode which revisited the night of the murder, Bobby was revealed to have killed his sister. The aftermath episode, which aired on Friday 20, was completely live and explained Lucy's death in detail. Carrington was told he was Lucy's killer on Monday 16, while Laurie Brett (who plays Bobby's adoptive mother, Jane) was informed in November, due to the character playing a major role in the cover-up of Lucy's murder. Bywater only discovered that Bobby was responsible for Lucy's death on the morning of Thursday 19 February, several hours before the scenes revealing Bobby as Lucy's killer were filmed.
Each episode should run for 27 minutes and 15 seconds; if any episode runs over or under, it is the job of post-production to cut or add scenes where appropriate. As noted in the 1994 behind-the-scenes book EastEnders: The First 10 Years, after filming, tapes were sent to the videotape editor, who then edited the scenes together into an episode. The videotape editor used the director's notes to know which scenes the director wanted to appear in a particular episode. The producer might then ask for further changes to be made. The episode was then copied onto D3 video. The final process was to add the audio, which included background noise such as a train or jukebox music, and to check that the episode met the BBC's technical standard for broadcasting.
Since 2010, EastEnders no longer uses tapes in the recording or editing process. After footage is recorded, the material is sent digitally to the post-production team. The editors then assemble all the scenes recorded for the director to view and note any changes that are needed. The sound team also have the capability to access the edited episode, enabling them to dub the sound and create the final version.
According to the book How to Study Television, in 1995 EastEnders cost the BBC £40,000 per episode on average. A 2012 agreement between the BBC, the Writers' Guild of Great Britain and the Personal Managers' Association set out the pay rate for EastEnders scripts as £137.70 per minute of transmission time (£4,131 for 30 minutes), which is 85 per cent of the rate for scripts for other BBC television series. The writers would be paid 75 per cent of that fee for any repeats of the episode. In 2011, it was reported that actors receive a per-episode fee of between £400 and £1,200, and are guaranteed a certain number of episodes per year, perhaps as few as 30 or as many as 100, therefore annual salaries could range from £12,000 to £200,000 depending on the popularity of a character. Some actors' salaries were leaked in 2006, revealing that Natalie Cassidy (Sonia Fowler) was paid £150,000, Cliff Parisi (Minty Peterson) received £220,000, Barbara Windsor (Peggy Mitchell) and Steve McFadden (Phil Mitchell) each received £360,000 and Wendy Richard (Pauline Fowler) had a salary of £370,000. In 2017, it was revealed that Danny Dyer (Mick Carter) and Adam Woodyatt (Ian Beale) were the highest-paid actors in EastEnders, earning between £200,000 and £249,999, followed by Laurie Brett (Jane Beale), Letitia Dean (Sharon Watts), Tameka Empson (Kim Fox), Linda Henry (Shirley Carter), Scott Maslen (Jack Branning), Diane Parish (Denise Fox), Gillian Taylforth (Kathy Beale) and Lacey Turner (Stacey Slater), earning between £150,000 and £199,999.
A 2011 report from the National Audit Office (NAO) showed that EastEnders had an annual budget of £29.9 million. Of that, £2.9 million was spent on scripts and £6.9 million went towards paying actors, extras and chaperones for child actors. According to the NAO, BBC executives approved £500,000 of additional funding for the 25th anniversary live episode (19 February 2010). With a total cost of £696,000, the difference was covered from the 2009–2010 series budget for EastEnders. When repeats and omnibus editions are shown, the BBC pays additional fees to cast and scriptwriters and incurs additional editing costs, which in the period 2009–2010 amounted to £5.5 million. According to a Radio Times article, across 212 episodes this works out at £141,000 per episode, or 3.5p per viewer hour.
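As an illustrative check of that arithmetic (the per-episode figure follows directly from dividing the annual budget by the episode count; the assumed audience of roughly eight million viewers per half-hour episode is not from the Radio Times article, but is broadly consistent with the ratings figures discussed later in this section):

\[
\frac{\pounds 29{,}900{,}000}{212\ \text{episodes}} \approx \pounds 141{,}000\ \text{per episode}, \qquad \frac{\pounds 141{,}000}{8{,}000{,}000\ \text{viewers} \times 0.5\ \text{h}} \approx \pounds 0.035 = 3.5\text{p per viewer hour}.
\]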
In 2014, two new studios were built and equipped with low-energy lighting, which has saved approximately 90,000 kWh per year. A carbon literacy course was run with the heads of departments of EastEnders attending, and as a result, representatives from each department agreed to meet quarterly to share new sustainability ideas. Paper usage was reduced by 50 per cent across script distribution and other weekly documents, and by 20 per cent across all other paper use. The production team now use recycled paper and recycled stationery.
The move to working online has also saved the cost of transporting and distributing 2,500 DVDs per year. Sets, costumes, paste pots and paint are all recycled by the design department. Cars used by the studio are low-emission vehicles, and the production team take more energy-efficient generators out on location. Caterers no longer use polystyrene cups, and recycling on location must be provided.
As a result of these sustainability measures, EastEnders was awarded albert+, an award that recognises a production's commitment to becoming more eco-friendly. The albert+ logo was first shown at the end of the EastEnders titles for episode 5281 on 9 May 2016.
Since 1985, EastEnders has remained at the centre of BBC One's primetime schedule. From 2001 to 2022, it was broadcast at 7:30 pm on Tuesday and Thursday, and 8 pm on Monday and Friday. EastEnders was originally broadcast twice weekly at 7:00 pm on Tuesdays and Thursdays from 19 February 1985; however, in September 1985 the two episodes were moved to 7:30 pm as Michael Grade did not want the soap running in direct competition with Emmerdale Farm, and this remained the same until 7 April 1994. The BBC had originally planned to take advantage of the "summer break" that Emmerdale Farm usually took to capitalise on ratings, but ITV added extra episodes and repeats so that Emmerdale Farm was not taken off the air over the summer. Realising the futility of the situation, Grade decided to move the show to the later 7:30 pm slot.
EastEnders' output then increased to three times a week on Mondays, Tuesdays and Thursdays from 11 April 1994 until 2 August 2001. From 10 August 2001, EastEnders added its fourth weekly episode, shown on Fridays. This caused some controversy, as the first Friday episode clashed with Coronation Street, which had been moved to 8 pm to make way for an hour-long episode of rural soap Emmerdale. In this first head-to-head battle, EastEnders claimed victory over its rival.
In early 2003, viewers could watch episodes of EastEnders on the digital channel BBC Three before they were broadcast on BBC One. This was to coincide with the relaunch of the channel and helped BBC Three break the one million viewers mark for the first time, with 1.03 million watching to see Mark Fowler's departure. According to the EastEnders website, an average of 208 episodes are broadcast each year.
On 21 February 2022, it was announced that from 7 March 2022, EastEnders would begin airing from Monday to Thursday at 7:30 pm, therefore no longer airing on a Friday. This meant that EastEnders would clash with Emmerdale, but the producers stated that due to the importance of online streaming figures, they were not concerned about the soaps clashing on the live television guides.
The omnibus edition, a compilation of the week's episodes in a continuous sequence, originally aired on BBC One on Sunday afternoons until 1 April 2012, when it was moved to a late Friday night or early Saturday morning slot, commencing on 6 April 2012, though the exact time differed. It reverted to a weekend daytime slot from January 2013, on BBC Two. In 2014, the omnibus moved back to around midnight on Friday nights, and in April 2015 it was axed, following detailed audience research, the introduction of 30-day catch-up on BBC iPlayer and the planning of BBC One +1. The last omnibus on the BBC was shown on 24 April 2015. While W was showing same-day repeats of EastEnders, it also revived the weekend omnibus, starting on 20 February 2016.
From 20 February to 26 May 1995, as part of the programme's 10th Anniversary celebrations, episodes from 1985 were repeated each weekday morning at 10 am, starting from episode one. Four specially selected episodes from 1985, 1986 and 1987 were also repeated on BBC1 on Friday evenings at 8 pm under the banner "The Unforgettable EastEnders". These included the wedding of Michelle Fowler and Lofty Holloway, the revelation of the father of Michelle's baby, a two-hander between Dot Cotton and Ethel Skinner and the 1986 Christmas episode featuring Den Watts presenting Angie Watts with divorce papers.
EastEnders was regularly repeated at 10 pm on BBC Choice from the channel's launch in 1998, a practice continued by BBC Three for many years until mid-2012, when the repeat moved to 10:30 pm. From 25 December 2010 to 29 April 2011 and from 31 July to 13 August 2012, the show was repeated on BBC HD in a simulcast with BBC Three. In 2015, the BBC Three repeat moved back to 10 pm. In February 2016, the repeat moved to W, the rebranded Watch, after BBC Three became an online-only channel. W stopped showing EastEnders in April 2018. Following the reinstatement of BBC Three as a linear channel in 2022, the nightly 'narrative repeat' was not reinstated; instead, the channel retransmits that week's four BBC One episodes at the weekend, airing two episodes on each of Saturday and Sunday evenings, unless live sports or music/events coverage takes precedence. Episodes of EastEnders are available on-demand through BBC iPlayer for 30 days after their original screening.
On 1 December 2012, the BBC uploaded the first 54 episodes of EastEnders to YouTube, and on 23 July 2013 uploaded a further 14 episodes, bringing the total to 68. These have since been taken down. In April 2018, it was announced that the channel Drama would show weekday repeats from 6 August 2018, which are also available on-demand on the UKTV Play catch-up service for 30 days after broadcast. In December 2019, Christmas episodes were added to BritBox UK.
EastEnders is broadcast around the world in many English-speaking countries. New Zealand became the first to broadcast EastEnders overseas, the first episode being shown on 30 August 1985. This was followed by the Netherlands on 8 December 1986, Australia on 5 January 1987, Norway on 27 April, and Barcelona on 30 June (dubbed into Catalan). On 9 July 1987, it was announced that the show would be aired in the United States on PBS. BBC Worldwide licensed 200 hours of EastEnders for broadcast in Serbia on RTS (dubbed into Serbian); it began airing the first episode in December 1997. The series was broadcast in the United States until BBC America ceased broadcasts of the serial in 2003, amidst fan protests. In June 2004, the satellite television provider Dish Network picked up EastEnders, broadcasting episodes starting at the point where BBC America had ceased broadcasting them, offering the series as a pay-per-view item. Episodes air two months behind the UK schedule. Episodes from prior years are still shown on various PBS stations in the US. Since 7 March 2017, EastEnders has been available in the United States on demand, 24 hours after it has aired in the United Kingdom via BritBox, a joint venture between BBC and ITV.
The series was screened in Australia by ABC TV from 1987 until 1991. It now airs in Australia on satellite and streaming services via BBC UKTV, from Mondays to Thursdays, 7:50 pm–8:30 pm, with two advertisement breaks of five minutes each. Episodes are shown roughly one week after their UK broadcast. In New Zealand, it was shown by TVNZ on TVNZ 1 for several years, and then on Prime each weekday afternoon. It is now shown on BBC UKTV from Mondays to Thursdays at 8 pm, roughly two weeks behind the UK.
EastEnders is shown on BBC Entertainment (formerly BBC Prime) in Europe and in Africa, where it is approximately six episodes behind the UK. It was also shown on BBC Prime in Asia, but when the channel was replaced by BBC Entertainment, it ceased broadcasting the series. In Canada, EastEnders was shown on BBC Canada until 2010, at which point it was picked up by VisionTV.
In Ireland, EastEnders was shown on TV3 from September 1998 until March 2001, when it moved to RTÉ One after RTÉ lost the rights to air rival soap Coronation Street to TV3. Additionally, episodes of EastEnders are available on-demand through RTÉ Online for seven days after their original screening.
In 1991, the BBC sold the programme's format rights to the Dutch production company IDTV, and the programme was renamed Het Oude Noorden (translation: Old North). The Dutch version was rewritten from existing EastEnders scripts. The schedule remained the same as EastEnders' twice-weekly episodes; however, notable changes included the setting being moved to Rotterdam rather than London, characters being given Dutch names (Den and Angie became Ger and Ankie), and The Queen Victoria pub being renamed "Cade Faas".
Barbara Jurgen, who rewrote the scripts for a Dutch audience, said: "The power of the show is undeniable. The scripts are full of hard, sharp drama, plus great one-liners which will translate well to Holland." The Dutch version began broadcasting on VARA on 13 March 1993 but was cancelled after 20 episodes.
On 26 December 1988, the first EastEnders "bubble" was shown, titled "CivvyStreet". Since then, "Return of Nick Cotton" (2000), "Ricky & Bianca" (2002), "Dot's Story" (2003), "Perfectly Frank" (2003) and "Pat and Mo" (2004) have all been broadcast, each episode looking into lives of various characters and revealing part of their backstories or lives since leaving EastEnders. In 1993, the two-part story "Dimensions in Time", a charity cross-over with Doctor Who, was shown.
In 1998, EastEnders Revealed was launched on BBC Choice (now BBC Three). The show takes a look behind the scenes of EastEnders and investigates particular places, characters or families within the soap. An episode of EastEnders Revealed that was commissioned for BBC Three attracted 611,000 viewers. As part of the BBC's digital push, EastEnders Xtra was introduced in 2005. The show was presented by Angellica Bell and was available to digital viewers at 8:30 pm on Monday nights. It was also shown after the Sunday omnibus. The series went behind the scenes of the show and spoke to some of the cast members. A new breed of behind-the-scenes programmes has been broadcast on BBC Three since 1 December 2006. These are all documentaries related to current storylines in EastEnders, in a similar format to EastEnders Revealed, though not using the EastEnders Revealed name.
In October 2009, a 12-part Internet spin-off series entitled EastEnders: E20 was announced. The series was conceived by executive producer Diederick Santer "as a way of nurturing new, young talent, both on- and off-screen, and exploring the stories of the soaps' anonymous bystanders." E20 features a group of sixth-form characters and targets the "Hollyoaks demographic". It was written by a team of young writers and was shown three times a week on the EastEnders website from 8 January 2010. A second 10-part series started in September 2010, with twice-weekly episodes available online and an omnibus on BBC Three. A third series of 15 episodes started in September 2011.
EastEnders and rival soap opera Coronation Street took part in a crossover episode for Children in Need on 19 November 2010 called "East Street". On 4 April 2015, EastEnders confirmed plans for a BBC One series featuring Kat and Alfie Moon. The six-part drama, Kat & Alfie: Redwater, was created by executive producer Dominic Treadwell-Collins and his team. In the spin-off, the Moons visit Ireland where they "search for answers to some very big questions".
Until its closure, BBC Store released 553 EastEnders episodes from various years, including the special episode "CivvyStreet", available to buy as digital downloads.
An example of EastEnders' popularity is that after episodes, electricity use in the United Kingdom rises significantly as viewers who have waited for the show to end begin boiling water for tea, a phenomenon known as TV pickup. Over five minutes, power demand rises by 3 GW, the equivalent of 1.5 to 1.75 million kettles. National Grid personnel watch the show to know when the closing credits begin so they can prepare for the surge, asking for additional power from France if necessary.
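The kettle equivalence follows from the implied power draw of a domestic kettle, roughly 1.7 to 2 kW (an assumed figure used here for illustration, not one stated by National Grid):

\[
\frac{3\ \text{GW}}{2\ \text{kW per kettle}} = 1.5\ \text{million kettles}, \qquad \frac{3\ \text{GW}}{1.7\ \text{kW per kettle}} \approx 1.75\ \text{million kettles}.
\]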
EastEnders is the BBC's most consistent programme in terms of ratings, and as of 2021, episodes typically receive between 4 and 6 million viewers. EastEnders' two biggest ratings rivals are the ITV soaps Coronation Street (produced by Granada Television in Manchester) and Emmerdale (produced by Yorkshire Television in Leeds).
The launch show in 1985 attracted 17.35 million viewers. On 25 July 1985, the show rose to first position in BBC One's weekly top 10 for the first time. The highest-rated episode of EastEnders is the Christmas Day 1986 episode, which attracted a combined 30.15 million viewers who tuned into either the original transmission or the omnibus to see Den Watts hand over divorce papers to his wife Angie. This remains the highest-rated episode of a soap in British television history.
In 2001, EastEnders clashed with Coronation Street for the first time. EastEnders won the battle with 8.4 million viewers (a 41% share), while Coronation Street lagged behind with 7.3 million viewers (a 34% share). On 21 September 2004, Louise Berridge, the then executive producer, quit following criticism of the show. The following day the show received its lowest ratings at that time (6.2 million) when ITV scheduled an hour-long episode of Emmerdale against it; Emmerdale was watched by 8.1 million viewers. The poor ratings prompted the press to report that viewers were bored with implausible and ill-thought-out storylines. Under new producers, EastEnders and Emmerdale continued to clash at times, and Emmerdale tended to come out on top, giving EastEnders lower than average ratings. In 2006, EastEnders regularly attracted between 8 and 12 million viewers in official ratings. EastEnders received its second-lowest ratings on 17 May 2007, when 4.0 million viewers tuned in; this was also the lowest-ever audience share, at just 19.6 per cent, and was attributed to a conflicting one-hour special episode of Emmerdale on ITV1. Ratings for the 10 pm EastEnders repeat on BBC Three that night, however, reached an all-time high of 1.4 million, and there have been times when EastEnders had higher ratings than Emmerdale despite the two going head-to-head.
The ratings increased in 2010, thanks to the "Who Killed Archie?" storyline, the second wedding of Ricky Butcher (Sid Owen) and Bianca Jackson (Patsy Palmer), and the show's first live episode on 19 February 2010. The live episode averaged 15.6 million viewers, peaking at 16.6 million in the final five minutes of broadcast. In January 2010, the average audience was higher than that of Coronation Street for the first time in three years. During the 30th anniversary week, which included live elements and the climax of the "Who Killed Lucy Beale?" storyline, 10.84 million viewers tuned in for the hour-long 30th anniversary episode itself on 19 February 2015 (peaking at 11.9 million). Later the same evening, a special flashback episode averaged 10.3 million viewers and peaked at 11.2 million. The following day, the anniversary week was rounded off with another fully live episode (the second after 2010), with 9.97 million viewers watching the aftermath of the reveal, as the Beale family found out the truth about Lucy's killer and decided to keep it a secret. In 2013, the average audience share for an episode was around 30 per cent.
Due to the impact of the COVID-19 pandemic on the soap, EastEnders suffered a ratings drop after 2020. Despite once being the highest-rated soap, it dropped to third in the rankings in 2021, behind Coronation Street and Emmerdale, with 4.09 million viewers. The BBC's head of drama, Piers Wenger, explained that because the episode duration had been shortened and the airtime frequently changed, the audience no longer knew when to watch it. Digital Spy attributed the ratings drop to "lacklustre storylines" and thought that storylines on rival soaps were better. Later that year, EastEnders suffered its lowest rating ever, with 1.7 million viewers watching live. The Daily Mirror's Jamie Roberts felt that viewers had "turned their back" on the soap due to its lack of interesting stories and iconic characters. Ratings expert Stephen Price also noted that the drop was partly due to the rise of streaming services.
EastEnders has received both praise and criticism for most of its storylines, which have dealt with difficult themes, such as violence, rape, murder and child abuse.
The social critic Mary Whitehouse argued at the time that EastEnders represented a violation of "family viewing time" and that it undermined the watershed policy. She regarded EastEnders as a fundamental assault on the family and morality itself, making reference to its representation of family life and its emphasis on psychological and emotional violence. She was also critical of language such as "bleeding", "bloody hell", "bastard" and "for Christ's sake"; however, Whitehouse also praised the programme, describing Michelle Fowler's decision not to have an abortion as a "very positive storyline". She also felt that EastEnders had been cleaned up as a result of her protests, though she later commented that it had returned to its old ways. Her criticisms were widely reported in the tabloid press as ammunition in its existing hostility towards the BBC. The stars of Coronation Street in particular aligned themselves with Whitehouse, gaining headlines such as "STREETS AHEAD! RIVALS LASH SEEDY EASTENDERS" and "CLEAN UP SOAP! Street Star Bill Lashes "Steamy" EastEnders".
EastEnders has been criticised for being too violent, most notably during a domestic violence storyline between Little Mo Morgan (Kacey Ainsworth) and her husband Trevor Morgan (Alex Ferns). As EastEnders is shown pre-watershed, there were worries that some scenes in this storyline were too graphic for its audience. Complaints about a scene in which Little Mo's face was pushed into gravy on Christmas Day were upheld by the Broadcasting Standards Council; however, a helpline after this episode attracted over 2,000 calls. Erin Pizzey, who became internationally famous for having started one of the first women's refuges, said that EastEnders had done more to raise the issue of violence against women in one story than she had done in 25 years. The character of Phil Mitchell (played by Steve McFadden since early 1990) has been criticised on several occasions for glorifying violence and being a bad role model for children. On one occasion, a scene in an episode broadcast in October 2002, in which Phil brutally beat his godson Jamie Mitchell (Jack Ryder), drew 31 complaints from viewers.
In 2003, cast member Shaun Williamson, who was in the final months of his role as Barry Evans, said that the programme had become much grittier over the preceding 10 to 15 years, and found it "frightening" that parents let their young children watch.
In 2005, the BBC was accused of anti-religious bias by a House of Lords committee, who cited EastEnders as an example. Indarjit Singh, editor of the Sikh Messenger and patron of the World Congress of Faiths, said: "EastEnders' Dot Cotton is an example. She quotes endlessly from the Bible and it ridicules religion to some extent." In July 2010, complaints were received following a storyline in which Christian minister Lucas Johnson (Don Gilet) committed a number of murders that he believed were his duty to God, with viewers claiming that the storyline was offensive to Christians.
In 2008, EastEnders, along with Coronation Street, was criticised by Martin McGuinness, then Northern Ireland's deputy first minister, for "the level of concentration around the pub" and the "antics portrayed in The [...] Queen Vic".
In 2017, viewers complained on Twitter about scenes implying that Keanu Taylor (Danny Walters) is the father of his 15-year-old sister Bernadette Taylor's (Clair Norris) unborn baby, with the pair agreeing to keep the pregnancy secret from their mother, Karen Taylor (Lorraine Stanley); however, the baby's father is revealed as one of Bernadette's school friends.
In 1997, several episodes were shot and set in Ireland, drawing criticism for portraying the Irish in a negatively stereotypical way. Ted Barrington, the Irish Ambassador to the UK at the time, described the portrayal of Ireland as an "unrepresentative caricature", stating he was worried by the negative stereotypes and the images of drunkenness, backwardness and isolation. Jana Bennett, the BBC's then director of production, later apologised for the episodes, stating on BBC1's news bulletin: "It is clear that a significant number of viewers have been upset by the recent episodes of EastEnders, and we are very sorry, because the production team and programme makers did not mean to cause any offence." A year later, BBC chairman Christopher Bland admitted that as a result of the Irish-set EastEnders episodes, the station failed in its pledge to represent all groups accurately and avoid reinforcing prejudice.
In 2008, the show was criticised for stereotyping its Asian and black characters through the portrayal of a black single mother, Denise Fox (Diane Parish), and an Asian shopkeeper, Zainab Masood (Nina Wadia). There has also been criticism that the programme does not authentically portray the ethnic diversity of the population of East London, with the programme being "twice as white" as the real East End.
In 1992, writer David Yallop successfully sued the BBC for £68,000 after it was revealed he had been hired by producer Mike Gibbon in 1989 to pen several controversial storylines in an effort to "slim down" the cast; however, after Gibbon left the programme, executive producers chose not to use Yallop's storylines, which put the BBC in breach of the contract Yallop had signed with them. Unused storylines penned by Yallop, which were revealed in the press during the trial, included the death of Cindy Beale's (Michelle Collins) infant son Steven; Sufia Karim (Rani Singh) being killed during a shotgun raid at the corner shop; Pauline Fowler (Wendy Richard) dying of undiscovered cancer; and an IRA explosion at the Walford community centre, killing Pete Beale (Peter Dean) and Diane Butcher (Sophie Lawrence), and leaving Simon Wicks (Nick Berry) paralysed below the waist. A suicide was also planned, but the character this storyline was assigned to was not revealed.
Some storylines have provoked high levels of viewer complaints. In August 2006, a scene involving Carly Wicks (Kellie Shirley) and Jake Moon (Joel Beckett) having sex on the floor of Scarlet nightclub, and another scene involving Owen Turner (Lee Ross) violently attacking Denise Fox (Diane Parish), prompted 129 and 128 complaints, respectively.
In March 2008, scenes showing Tanya Branning (Jo Joyner) and her boyfriend Sean Slater (Robert Kazinsky) burying Tanya's husband Max (Jake Wood) alive attracted many complaints. The UK communications regulator Ofcom later found that the episodes depicting the storyline were in breach of the 2005 Broadcasting Code. They contravened the rules regarding protection of children by appropriate scheduling, appropriate depiction of violence before the 9 p.m. watershed and appropriate depiction of potentially offensive content. In September 2008, EastEnders began a grooming and paedophilia storyline involving characters Tony King (Chris Coghill), Whitney Dean (Shona McGarty), Bianca Jackson (Patsy Palmer), Lauren Branning (Madeline Duggan) and Peter Beale (Thomas Law). The storyline attracted over 200 complaints.
In December 2010, Ronnie Branning (Samantha Womack) swapped her newborn baby, who had died of cot death, with Kat Moon's (Jessie Wallace) living baby. Around 3,400 complaints were received, with viewers branding the storyline "insensitive", "irresponsible" and "desperate". Roz Laws from the Sunday Mercury called the plot "shocking and ridiculous" and asked "are we really supposed to believe that Kat won't recognise that the baby looks different?" The Foundation for the Study of Infant Deaths (FSID) praised the storyline, and its director Joyce Epstein explained, "We are very grateful to EastEnders for their accurate depiction of the devastating effect that the sudden death of an infant can have on a family. We hope that this story will help raise the public's awareness of cot death, which claims 300 babies' lives each year." By 7 January, the storyline had generated the most complaints in the show's history: the BBC received about 8,500 complaints, and media regulator Ofcom received 374. Despite the controversy, EastEnders pulled in ratings highs of 9–10 million throughout the duration of the storyline.
In October 2014, the BBC defended a storyline after receiving 278 complaints about the 6 October 2014 episode in which pub landlady Linda Carter (Kellie Bright) was raped by Dean Wicks (Matt Di Angelo). On 17 November 2014, it was announced that Ofcom would investigate the storyline. On 5 January 2015, the storyline was cleared by Ofcom. An Ofcom spokesman said: "After carefully investigating complaints about this scene, Ofcom found the BBC took appropriate steps to limit offence to viewers. This included a warning before the episode and implying the assault, rather than depicting it. Ofcom also took into account the programme's role in presenting sometimes challenging or distressing social issues."
In 2022, EastEnders aired its first male rape scene, which saw Lewis Butler (Aidan O'Callaghan) rape Ben Mitchell (Max Bowden). The BBC received complaints from viewers who were unhappy with the content in the episode. Viewers felt that the scenes were too violent and graphic for a pre-watershed time slot. The BBC responded by stating: "EastEnders has been a pre-watershed BBC One staple for over 37 years and has a rich history of dealing with challenging and difficult issues and Ben's story is one of these. We have worked closely with organisations and experts in the field to tell this story which we hope will raise awareness of sexual assaults and the issues surrounding them. We are always mindful of the timeslot in which EastEnders is shown and we took great care to signpost this storyline prior to transmission, through on-air continuity and publicity as well as providing a BBC Action Line at the end of the episode which offers advice and support to those affected by the issue".
In 2010, EastEnders came under criticism from the police for the way they were portrayed during the "Who Killed Archie?" storyline. During the storyline, DCI Jill Marsden (Sophie Stanton) and DC Wayne Hughes (Jamie Treacher) talk to locals about the case and Hughes accepts a bribe. The police claimed that such scenes were "damaging" to their reputation and added that the character DC Deanne Cunningham (Zoë Henry) was "irritatingly inaccurate". In response to the criticism, EastEnders apologised for offending real-life detectives and confirmed that they use a police consultant for such storylines.
In October 2012, a storyline in which Lola Pearce (Danielle Harold) was forced to hand over her baby Lexi was criticised by the charity The Who Cares? Trust, which called the storyline an "unhelpful portrayal" and said it had already received calls from members of the public who were "distressed about the EastEnders scene where a social worker snatches a baby from its mother's arms". The scenes were also condemned by the British Association of Social Workers (BASW), which called the BBC "too lazy and arrogant" to correctly portray the child protection process and said that the baby was taken "without sufficient grounds to do so". Bridget Robb, acting chief of the BASW, said the storyline provoked "real anger among a profession well used to a less than accurate public and media perception of their jobs ... EastEnders' shabby portrayal of an entire profession has made a tough job even tougher."
Since its premiere in 1985, EastEnders has had a large impact on British popular culture. It has frequently been referred to in many different media, including songs and television programmes.
Many books have been written about EastEnders. Notably, from 1985 to 1988, author and television writer Hugh Miller wrote 17 novels, detailing the lives of many of the show's original characters before 1985, the point at which the on-screen events begin.
Kate Lock also wrote four novels centred on more recent characters: Steve Owen (Martin Kemp), Grant Mitchell (Ross Kemp), Bianca Jackson (Patsy Palmer) and Tiffany Mitchell (Martine McCutcheon). Lock also wrote a character guide entitled Who's Who in EastEnders (ISBN 978-0-563-55178-2) in 2000, examining main characters from the first 15 years of the show.
Show creators Julia Smith and Tony Holland also wrote a book about the show in 1987, entitled EastEnders: The Inside Story (ISBN 978-0-563-20601-9), telling the story of how the show made it to screen. Two special anniversary books have been written about the show: EastEnders: The First 10 Years: A Celebration (ISBN 978-0-563-37057-4) by Colin Brake in 1995 and EastEnders: 20 Years in Albert Square (ISBN 978-0-563-52165-5) by Rupert Smith in 2005. | [
{
"paragraph_id": 0,
"text": "EastEnders is a British television soap opera created by Julia Smith and Tony Holland which has been broadcast on BBC One since February 1985. Set in the fictional borough of Walford in the East End of London, the programme follows the stories of local residents and their families as they go about their daily lives. Within eight months of the show's original launch, it had reached the number one spot in BARB's television ratings, and has consistently remained among the top-rated series in Britain. Four EastEnders episodes are listed in the all-time top 10 most-watched programmes in the UK, including the number one spot, when over 30 million watched the 1986 Christmas Day episode. EastEnders has been important in the history of British television drama, tackling many subjects that are considered to be controversial or taboo in British culture, and portraying a social life previously unseen on UK mainstream television.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since co-creator Holland was from a large family in the East End, a theme heavily featured in EastEnders is strong families, and each character is supposed to have their own place in the fictional community. The Beales, Brannings, Mitchells, Slaters and the Watts are some of the families that have been central to the soap's notable and dramatic storylines. EastEnders has been filmed at the BBC Elstree Centre since its inception, with a set that is outdoors and open to weather. In 2014, the BBC announced plans to rebuild the set entirely. Filming commenced on the new set in January 2022, and it was first used on-screen in March 2022. Demolition on the old set commenced in November 2022.",
"title": ""
},
{
"paragraph_id": 2,
"text": "EastEnders has received both praise and criticism for many of its storylines, which have dealt with difficult themes including violence, rape, murder and abuse. It has been criticised for various storylines, including the 2010 baby swap storyline, which attracted over 6,000 complaints, as well as complaints of showing too much violence and allegations of national and racial stereotypes. However, EastEnders has also been commended for representing real-life issues and spreading awareness on social topics. The cast and crew of the show have received and been nominated for various awards.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In March 1983, under two years before EastEnders' first episode was broadcast, the show was a vague idea in the mind of a handful of BBC executives, who decided that what BBC One needed was a popular bi-weekly drama series that would attract the kind of mass audiences that ITV were getting with Coronation Street. The first people to whom David Reid, then head of series and serials, turned were Julia Smith and Tony Holland, a well established producer/script editor team who had first worked together on Z-Cars. The outline that Reid presented was vague: two episodes a week, 52 weeks a year. After the concept was put to them on 14 March 1983, Smith and Holland then went about putting their ideas down on paper; they decided it would be set in the East End of London. Granada Television gave Smith unrestricted access to the Coronation Street production for a month so that she could get a sense of how a continuing drama was produced.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "There was anxiety at first that the viewing public would not accept a new soap set in the south of England, though research commissioned by lead figures in the BBC revealed that southerners would accept a northern soap, northerners would accept a southern soap and those from the Midlands, as Julia Smith herself pointed out, did not mind where it was set as long as it was somewhere else. This was the beginning of a close and continuing association between EastEnders and audience research, which, though commonplace today, was something of a revolution in practice.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The show's creators were both Londoners, but when they researched Victorian squares, they found massive changes in areas they thought they knew well; however, delving further into the East End of London, they found exactly what they had been searching for: a real East End spirit, an inward-looking quality, a distrust of strangers and authority figures, a sense of territory and community that the creators summed up as \"Hurt one of us and you hurt us all\".",
"title": "History"
},
{
"paragraph_id": 6,
"text": "When developing EastEnders, both Smith and Holland looked at influential models like Coronation Street, but they found that it offered a rather outdated and nostalgic view of working-class life. Only after EastEnders began, and featured the characters of Tony Carpenter and Kelvin Carpenter, did Coronation Street start to feature black characters, for example. They came to the conclusion that Coronation Street had grown old with its audience, and that EastEnders would have to attract a younger, more socially extensive audience, ensuring that it had the longevity to retain it for many years thereafter. They also looked at Brookside, but found there was a lack of central meeting points for the characters, making it difficult for the writers to intertwine different storylines, so EastEnders was set in Albert Square.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A previous UK soap set in an East End market was ATV's Market in Honey Lane; however, between 1967 and 1969, this show, which graduated from one showing a week to two in three separate series (the latter series being shown in different time slots across the ITV network) was very different in style and approach from EastEnders. The British Film Institute described Market in Honey Lane thus: \"It was not an earth-shaking programme, and certainly not pioneering in any revolutionary ideas in technique and production, but simply proposed itself to the casual viewer as a mildly pleasant affair.\"",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The target launch date was originally January 1985. Smith and Holland had eleven months in which to write, cast and shoot the whole thing; however, in February 1984, they did not even have a title or a place to film. Both Smith and Holland were unhappy about the January 1985 launch date, favouring November or even September 1984 when seasonal audiences would be higher, but the BBC stayed firm, and Smith and Holland had to concede that, with the massive task of getting the Elstree Studios operational, January was the most realistic date; however, this was later to be changed to February.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The project had a number of working titles: Square Dance, Round the Square, Round the Houses, London Pride and East 8. It was the latter that stuck (E8 is the postcode for Hackney) in the early months of creative process; however, the show was renamed after many casting agents mistakenly thought the show was to be called Estate, and the fictional postcode E20 was created, instead of using E8. Julia Smith came up with the name Eastenders after she and Holland had spent months telephoning theatrical agents and asking \"Do you have any real East Enders on your books?\" Smith thought \"Eastenders\" \"looked ugly written down\" and was \"hard to say\", so decided to capitalise the second \"e\".",
"title": "History"
},
{
"paragraph_id": 10,
"text": "After they decided on the filming location of BBC Elstree Centre in south Hertfordshire, Smith and Holland set about creating the 23 characters needed, in just 14 days. They took a holiday in Playa de los Pocillos, Lanzarote, and started to create the characters. Holland created the Beale and Fowler family, drawing on his own background. His mother, Ethel Holland, was one of four sisters raised in Walthamstow. Her eldest sister, Lou, had married a man named Albert Beale and had two children, named Peter and Pauline. These family members were the basis for Lou Beale, Pete Beale and Pauline Fowler. Holland also created Pauline's unemployed husband Arthur Fowler, their children Mark and Michelle, Pete's wife Kathy and their son Ian. Smith used her personal memories of East End residents she met when researching Victorian squares. Ethel Skinner was based on an old woman she met in a pub, with ill-fitting false teeth, and a \"face to rival a neon sign\", holding a Yorkshire Terrier in one hand and a pint of Guinness in the other.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Other characters created included Jewish doctor Harold Legg, the Anglo-Cypriot Osman family (Ali, Sue and baby Hassan), black father and son Tony and Kelvin Carpenter, single mother Mary Smith and Bangladeshi couple Saeed and Naima Jeffery. Jack, Pearl and Tracey Watts were created to bring \"flash, trash, and melodrama\" to the Square (they were later renamed Den, Angie and Sharon). The characters of Andy O'Brien and Debbie Wilkins were created to show a modern couple with outwardly mobile pretensions, and Lofty Holloway to show an outsider; someone who did not fit in with other residents. It was decided that he would be a former soldier, as Holland's personal experiences of ex-soldiers were that they had trouble fitting into society after being in the army. When they compared the characters they had created, Smith and Holland realised they had created a cross-section of East End residents. The Beale and Fowler family represented the old families of the East End, who had always been there. The Osmans, Jefferys and Carpenters represented the more modern diverse ethnic community of the East End. Debbie, Andy and Mary represented more modern-day individuals.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Once they had decided on their 23 characters, they returned to London for a meeting with the BBC. Everyone agreed that EastEnders would be tough, violent on occasion, funny and sharp—set in Margaret Thatcher's Britain—and it would start with a bang (namely the death of Reg Cox). They decided that none of their existing characters were wicked enough to have killed Reg, so a 24th character, Nick Cotton was added to the line-up. He was a racist thug, who often tried to lead other young characters astray. When all the characters had been created, Smith and Holland set about casting the actors, which also involved the input of lead director Matthew Robinson, who supervised auditions with the other directors at the outset, Vivienne Cozens and Peter Edwards.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Through the next few months, the set was growing rapidly at Elstree, and a composer and designer had been commissioned to create the title sequence. Simon May wrote the theme music and Alan Jeapes created the visuals. The visual images were taken from an aircraft flying over the East End of London at 1000 feet. Approximately 800 photographs were taken and pieced together to create one big image. The credits were later updated when the Millennium Dome was built.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The launch was delayed until February 1985 due to a delay in the chat show Wogan, that was to be a part of the major revamp in BBC1's schedules. Smith was uneasy about the late start as EastEnders no longer had the winter months to build up a loyal following before the summer ratings lull. The press were invited to Elstree to meet the cast and see the lot, and stories immediately started circulating about the show, about a rivalry with ITV (who were launching their own market-based soap, Albion Market) and about the private lives of the cast. Anticipation and rumour grew in equal measure until the first transmission at 7 p.m. on 19 February 1985. Neither Holland nor Smith could watch; they both instead returned to the place where it all began, Albertine's Wine Bar on Wood Lane. The next day, viewing figures were confirmed at 17 million. The reviews were largely favourable, although, after three weeks on air, BBC1's early evening share had returned to the pre-EastEnders figure of seven million, though EastEnders then climbed to highs of up to 23 million later on in the year. Following the launch, both group discussions and telephone surveys were conducted to test audience reaction to early episodes.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Press coverage of EastEnders, which was already intense, went into overdrive once the show was broadcast. With public interest so high, the media began investigating the private lives of the show's popular stars. Within days, a scandalous headline appeared – \"EASTENDERS STAR IS A KILLER\". This referred to Leslie Grantham, and his prison sentence for the murder of a taxi driver in an attempted robbery nearly 20 years earlier. This shocking tell-all style set the tone for relations between Albert Square and the press for the next 20 years.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The show's first episode attracted some 17 million viewers, and it continued to attract high viewing figures from then on. By Christmas 1985, the tabloids could not get enough of the soap. \"Exclusives\" about EastEnders storylines and the actors on the show became a staple of tabloid buyers' daily reading.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1987, the show featured the first same-sex kiss on a British soap, when Colin Russell (Michael Cashman) kissed boyfriend Barry Clarke on the forehead. This was followed in January 1989, less than a year after legislation came into effect in the UK, prohibiting the \"promotion of homosexuality\" by local authorities, by the first on-the-mouth gay kiss in a British soap when Colin kissed a new character, Guido Smith (Nicholas Donovan), an episode that was watched by 17 million people.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Writer Colin Brake suggested that 1989 was a year of big change for EastEnders, both behind the cameras and in front of them. Original production designer Keith Harris left the show, and Holland and Smith both decided that the time had come to move on too, their final contribution coinciding with the exit of one of EastEnders' most successful characters, Den Watts (Leslie Grantham). Producer Mike Gibbon was given the task of running the show, and he enlisted the most experienced writers to take over the storylining of the programme, including Charlie Humphreys, Jane Hollowood and Tony McHale.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "According to Brake, the departure of two of the soap's most popular characters, Den and Angie Watts (Anita Dobson), left a void in the programme, which needed to be filled. In addition, several other long-running characters left the show that year, including Sue and Ali Osman (Sandy Ratcliff and Nejdet Salih) and their family; Donna Ludlow (Matilda Ziegler); Carmel Jackson (Judith Jacob) and Colin Russell (Michael Cashman). Brake indicated that the production team decided that 1989 was to be a year of change in Walford, commenting, \"it was almost as if Walford itself was making a fresh start\".",
"title": "History"
},
{
"paragraph_id": 20,
"text": "By the end of 1989, EastEnders had acquired a new executive producer, Michael Ferguson, who had previously been a successful producer on ITV's The Bill. Brake suggested that Ferguson was responsible for bringing in a new sense of vitality and creating a programme that was more in touch with the real world than it had been over the previous year.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "A new era began in 1990, with the introduction of Phil Mitchell (Steve McFadden) and Grant Mitchell (Ross Kemp)—the Mitchell brothers—successful characters who would go on to dominate the soap thereafter. As the new production team cleared the way for new characters and a new direction, all of the characters introduced under Gibbon were axed from the show at the start of the year. Ferguson introduced other characters and was responsible for storylines including HIV, Alzheimer's disease and murder. After a successful revamp of the soap, Ferguson decided to leave EastEnders in July 1991. Ferguson was succeeded by both Leonard Lewis and Helen Greaves, who initially shared the role as Executive Producer for EastEnders. Lewis and Greaves formulated a new regime for EastEnders, giving the writers of the serial more authority in storyline progression, with the script department providing \"guidance rather than prescriptive episode storylines\". By the end of 1992, Greaves had left, and Lewis became executive and series producer. He left EastEnders in 1994 after the BBC controllers demanded an extra episode a week, taking its weekly airtime from 60 to 90 minutes. Lewis felt that producing an hour of \"reasonable quality drama\" a week was the maximum that any broadcasting system could generate without loss of integrity. Having set up the transition to the new schedule, the first trio of episodes—dubbed The Vic siege—marked Lewis's departure from the programme. Barbara Emile then became the Executive Producer of EastEnders, remaining with EastEnders until early 1995. She was succeeded by Corinne Hollingworth.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Hollingworth's contributions to the soap were awarded in 1997 when EastEnders won the BAFTA for Best Drama Series. Hollingworth shared the award with the next Executive Producer, Jane Harris. Harris was responsible for the critically panned Ireland episodes and Cindy Beale's attempted assassination of Ian Beale, which brought in an audience of 23 million in 1996, roughly four million more than Coronation Street. In 1998 Matthew Robinson was appointed as the Executive Producer of EastEnders. During his reign, EastEnders won the BAFTA for \"Best Soap\" in consecutive years 1999 and 2000 and many other awards. Robinson also earned tabloid soubriquet \"Axeman of Albert Square\" after sacking a large number of characters in one hit, and several more thereafter. In their place, Robinson introduced new long-running characters including Melanie Healy, Jamie Mitchell, Lisa Shaw, Steve Owen and Billy Mitchell.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "John Yorke became the Executive Producer of EastEnders in 2000. Yorke was given the task of introducing the soap's fourth weekly episode. He axed the majority of the Di Marco family, except Beppe di Marco, and helped introduce popular characters such as the Slater family. As what Mal Young described as \"two of EastEnders' most successful years\", Yorke was responsible for highly rated storylines such as \"Who Shot Phil?\", Ethel Skinner's death, Jim Branning and Dot Cotton's marriage, Trevor Morgan's domestic abuse of his wife Little Mo Morgan, and Kat Slater's revelation to her daughter Zoe Slater that she was her mother.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 2002, Louise Berridge succeeded Yorke as the Executive Producer. During her time at EastEnders, Berridge introduced popular characters such as Alfie Moon, Dennis Rickman, Chrissie Watts, Jane Beale, Stacey Slater and the critically panned Indian Ferreira family.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Berridge was responsible for some ratings success stories, such as Alfie and Kat Slater's relationship, Janine Butcher getting her comeuppance, Trevor Morgan and Jamie Mitchell's death storylines and the return of one of the greatest soap icons, Den Watts, who had been presumed dead for 14 years. His return in late 2003 was watched by over 16 million viewers, putting EastEnders back at number one in the rating war with the Coronation Street; however, other storylines, such as one about a kidney transplant involving the Ferreiras, were not well received, and although Den Watts's return proved to be a ratings success, the British press branded the plot unrealistic and felt that it questioned the show's credibility. A severe press backlash followed after Den's actor, Leslie Grantham, was outed in an internet sex scandal, which coincided with a swift decline in viewer ratings. The scandal led to Grantham's departure from the soap, but the occasion was used to mark the 20th anniversary of EastEnders, with an episode showing Den's murder at the Queen Vic pub.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "On 21 September 2004, Berridge quit as executive producer of EastEnders following continued criticism of the show. Kathleen Hutchison was swiftly appointed as the Executive Producer of EastEnders, and was tasked with quickly turning the fortunes of the soap. During her time at the soap Hutchison axed multiple characters, and reportedly ordered the rewriting of numerous scripts. Newspapers reported on employee dissatisfaction with Hutchison's tenure at EastEnders. In January 2005, Hutchison left the soap and John Yorke (who by this time, was the BBC Controller of Continuing Drama Series) took total control of the show himself and became acting Executive Producer for a short period, before appointing Kate Harwood to the role. Harwood stayed at EastEnders for 20 months before being promoted by the BBC. The highly anticipated return of Ross Kemp as Grant Mitchell in October 2005 proved to be a sudden major ratings success, with the first two episodes consolidating to ratings of 13.21 to 13.34 million viewers. On Friday 11 November 2005, EastEnders was the first British drama to feature a two-minute silence. This episode later went on to win British Soap Award for \"Best Single Episode\". In October 2006, Diederick Santer took over as Executive Producer of EastEnders. He introduced several characters to the show, including ethnic minority and homosexual characters to make the show 'feel more 21st Century'. Santer also reintroduced past and popular characters to the programme.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "On 2 March 2007, BBC signed a deal with Google to put videos on YouTube. A behind the scenes video of EastEnders, hosted by Matt Di Angelo, who played Deano Wicks on the show, was put on the site the same day, and was followed by another on 6 March 2007. In April 2007, EastEnders became available to view on mobile phones, via 3G technology, for 3, Vodafone and Orange customers. On 21 April 2007, the BBC launched a new advertising campaign using the slogan \"There's more to EastEnders\". The first television advert showed Dot Branning with a refugee baby, Tomas, whom she took in under the pretence of being her grandson. The second and third featured Stacey Slater and Dawn Swann, respectively. There have also been adverts in magazines and on radio.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In 2009, producers introduced a limit on the number of speaking parts in each episode due to budget cuts, with an average of 16 characters per episode. The decision was criticised by Martin McGrath of Equity, who said: \"Trying to produce quality TV on the cheap is doomed to fail.\" The BBC responded by saying they had been working that way for some time and it had not affected the quality of the show.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "From 4 February 2010, CGI was used in the show for the first time, with the addition of computer-generated trains.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "EastEnders celebrated its 25th anniversary on 19 February 2010. Santer came up with several plans to mark the occasion, including the show's first episode to be broadcast live, the second wedding between Ricky Butcher and Bianca Jackson and the return of Bianca's relatives, mother Carol Jackson, and siblings Robbie Jackson, Sonia Fowler and Billie Jackson. He told entertainment website Digital Spy, \"It's really important that the feel of the week is active and exciting and not too reflective. There'll be those moments for some of our longer-serving characters that briefly reflect on themselves and how they've changed. The characters don't know that it's the 25th anniversary of anything, so it'd be absurd to contrive too many situations in which they're reflective on the past. The main engine of that week is great stories that'll get people talking.\" The live episode featured the death of Bradley Branning (Charlie Clements) at the conclusion of the \"Who Killed Archie?\" storyline, which saw Bradley's wife Stacey Slater (Lacey Turner) reveal that she was the murderer. Viewing figures peaked at 16.6 million, which was the highest viewed episode in seven years. Other events to mark the anniversary were a spin-off DVD, EastEnders: Last Tango in Walford, and an Internet spin-off, EastEnders: E20.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Santer officially left EastEnders in March 2010, and was replaced by Bryan Kirkwood. Kirkwood's first signing was the reintroduction of characters Alfie Moon (Shane Richie) and Kat Moon (Jessie Wallace), and his first new character was Vanessa Gold, played by Zöe Lucker. In April and May 2010, Kirkwood axed eight characters from the show, Barbara Windsor left her role of Peggy Mitchell, which left a hole in the show, which Kirkwood decided to fill by bringing back Kat and Alfie, which he said would \"herald the new era of EastEnders.\" EastEnders started broadcasting in high definition on 25 December 2010. Old sets had to be rebuilt, so The Queen Victoria set was burnt down in a storyline (and in reality) to facilitate this.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In November 2011, a storyline showed character Billy Mitchell, played by Perry Fenwick, selected to be a torch bearer for the 2012 Summer Olympics. In reality, Fenwick carried the torch through the setting of Albert Square, with live footage shown in the episode on 23 July 2012. This was the second live broadcast of EastEnders. In 2012, Kirkwood chose to leave his role as executive producer and was replaced by Lorraine Newman. The show lost many of its significant characters during this period. Newman stepped down as executive producer after 16 months in the job in 2013 after the soap was criticised for its boring storylines and its lowest-ever figures pointing at around 4.8 million. Dominic Treadwell-Collins was appointed as the new executive producer on 19 August 2013 and was credited on 9 December. He axed multiple characters from the show and introduced the extended Carter family. He also introduced a long-running storyline, \"Who Killed Lucy Beale?\", which peaked during the show's 30th anniversary in 2015 with a week of live episodes. Treadwell-Collins announced his departure from EastEnders on 18 February 2016.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Sean O'Connor, former EastEnders series story producer and then-editor on radio soap opera The Archers, was announced to be taking over the role. Treadwell-Collins left on 6 May and O'Connor's first credited episode was broadcast on 11 July Although O'Connor's first credited episode aired in July, his own creative work was not seen onscreen until late September. Additionally, Oliver Kent was brought in as the Head of Continuing Drama Series for BBC Scripted Studios, meaning that Kent would oversee EastEnders along with O'Connor. O'Connor's approach to the show was to have a firmer focus on realism, which he said was being \"true to EastEnders' DNA and [finding] a way of capturing what it would be like if Julia Smith and Tony Holland were making the show now.\" He said that \"EastEnders has always had a distinctly different tone from the other soaps but over time we've diluted our unique selling point. I think we need to be ourselves and go back to the origins of the show and what made it successful in the first place. It should be entertaining but it should also be informative—that's part of our unique BBC compact with the audience. It shouldn't just be a distraction from your own life, it should be an exploration of the life shared by the audience and the characters.\" O'Connor planned to stay with EastEnders until the end of 2017, but announced his departure on 23 June 2017 with immediate effect, saying he wanted to concentrate on a career in film. John Yorke returned as a temporary executive consultant. Kent said, \"John Yorke is a Walford legend and I am thrilled that he will be joining us for a short period to oversee the show and to help us build on Sean's legacy while we recruit a long-term successor.\" Yorke initially returned for three months but his contract was later extended.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In July 2018, a special episode was aired as part of a knife crime storyline. This episode, which showed the funeral of Shakil Kazemi (Shaheen Jafargholi) interspersed with real people talking about their true-life experiences of knife crime. On 8 August 2018, it was announced that Kate Oates, who has previously been a producer on the ITV soap operas Emmerdale and Coronation Street, would become Senior Executive Producer of EastEnders, as well of Holby City and Casualty. Oates began her role in October, and continued to work with Yorke until the end of the year to \"ensure a smooth handover\". It was also announced that Oates was looking for an Executive Producer to work under her. Jon Sen was announced on 10 December 2018 to be taking on the role of executive producer.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In late 2016, popularity and viewership of EastEnders began to decline, with viewers criticising the storylines during the O'Connor reign, such as the killing of the Mitchell sisters and a storyline centred around the local bin collection. Although, since Yorke and Oates' reigns, opinions towards the storylines have become more favourable, with storylines such as Ruby Allen's (Louisa Lytton) sexual consent, which featured a special episode which \"broke new ground\" and knife crime, both of which have created \"vital\" discussions. The soap won the award for Best Continuing Drama at the 2019 British Academy Television Awards; its first high-profile award since 2016; however, in June 2019, EastEnders suffered its lowest ever ratings of 2.4 million due to its airing at 7 pm because of the BBC's coverage of the 2019 FIFA Women's World Cup. As of 2019, the soap is one of the most watched series on BBC iPlayer and averages around 5 million viewers per episode. The soap enjoyed a record-breaking year on the streaming platform in 2019, with viewers requesting to stream or download the show 234 million times, up 10% on 2018. The Christmas Day episode in 2019 became EastEnders biggest ever episode on BBC iPlayer, with 2.14 million viewer requests.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "In February 2020, EastEnders celebrated its 35th anniversary with a stunt on the River Thames leading to the death of Dennis Rickman Jr (Bleu Landau).",
"title": "History"
},
{
"paragraph_id": 37,
"text": "It was announced on 18 March 2020 that production had been suspended on EastEnders and other BBC Studios continuing dramas in light of new government guidelines following the COVID-19 pandemic, and that broadcast of the show would be reduced to two 30-minute episodes per week, broadcast on Mondays and Tuesdays, respectively. A spokesperson confirmed that the decision was made to reduce transmission so that EastEnders could remain on-screen for longer. Two months later, Charlotte Moore, the director of content at the BBC, announced plans for a return to production. She confirmed that EastEnders would return to filming during June 2020 and that there would be a transmission break between episodes filmed before and after production paused. When production recommences, social distancing measures will be utilised and the show's cast will be required to do their own hair and make-up, which is normally done by a make-up artist.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "It was announced on 3 June 2020 that EastEnders would go on a transmission break following the broadcast of episode 6124 on 16 June. A behind-the-scenes show, EastEnders: Secrets From The Square, will air in the show's place during the transmission break and is hosted by television personality Stacey Dooley. The first episode of the week features exclusive interviews with the show's cast, while the second episode will be a repeat of \"iconic\" episodes of the show. Beginning on 22 June 2020, Dooley interviews two cast members together in the show's restaurant set while observing social distancing measures. Kate Phillips, the controller of BBC Entertainment, explained that EastEnders: Secrets From The Square would be the \"perfect opportunity to celebrate the show\" in the absence of the show. Jon Sen, the show's executive producer, expressed his excitement at the new series, dubbing it \"a unique opportunity to see from the cast themselves just what it is like to be part of EastEnders\".",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Plans for the show's return to transmission were announced on 12 June 2020. It was confirmed that after the transmission break, the show would temporarily broadcast four 20-minute episodes per week, until it can return to its normal output. Sen explained that the challenges in production and filming of the show has led to the show's reduced output, but also stated that the crew had been \"trialing techniques, filming methods and new ways of working\" to prepare the show for its return. Filming recommenced on 29 June, with episodes airing from 7 September 2020.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "On 9 April 2021, following the death of Prince Philip, Duke of Edinburgh, the episode of EastEnders that was due to be aired that night was postponed along with the final of Masterchef. In May 2021, it was announced that from 14 June 2021, boxsets of episodes would be uploaded to BBC iPlayer each Monday for three weeks. Executive producer Sen explained that the bi-annual scheduling conflicts that the UEFA European Championship and the FIFA World Cup cause to the soap, premiering four episodes on the streaming service would be beneficial for fans of the show who want to watch at their own chosen pace. Sen also confirmed that the episodes will still air on BBC One throughout the week. The release of these boxsets was extended for a further five weeks, due to similar impacts caused by the 2020 Summer Olympics.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "On 12 October 2021, it was announced that EastEnders would partake in a special week-long crossover event involving multiple British soaps to promote the topic of climate change ahead of the 2021 United Nations Climate Change Conference. During the week, beginning from 1 November, a social media clip featuring Maria Connor from Coronation Street was featured on the programme while Cindy Cunningham from Hollyoaks was also referenced. Similar clips featuring the show's own characters (Bailey Baker and Peter Beale) were featured on Doctors and Emmerdale during the week.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "In November 2021, it was announced that Sen would step down from his role as executive producer, and would be succeeded by former story producer, Chris Clenshaw. Sen's final credited episode as executive producer was broadcast on 10 March 2022 and coincided in a week of episodes that saw the arrest of serial killer Gray Atkins (Toby-Alexander Smith). From the week commencing on 7 March 2022, the show has been broadcast every weekday from Monday to Thursday in a 7:30 pm slot, making it the first time in the show's history that the programme began airing permanently on Wednesdays. On 2 June 2022, EastEnders aired an episode celebrating the Platinum Jubilee of Elizabeth II. Charles, Prince of Wales and Camilla, Duchess of Cornwall guest starred in the episode; it also marked the first executive producer credit for Clenshaw. Clenshaw's first major decision as executive producer was the axing of five series regulars: Peter Beale (Dayle Hudson), Stuart Highway (Ricky Champ), Jada Lennox (Kelsey Calladine-Smith), Dana Monroe (Barbara Smith) and Lola Pearce (Danielle Harold). Viewers criticised the decision, feeling that some of the characters had potential to add to the soap. Clenshaw has since overseen the returns of Alfie Moon (Shane Richie) and Yolande Trueman (Angela Wynter), the recasts of Amy Mitchell (Ellie Dadd) and Ricky Branning (Frankie Day), as well as the reintroduction of Cindy Beale (Michelle Collins), who returned from the dead after 25 years. Public opinion on Clenshaw then changed and he has been credited for improving ratings and garnering critical acclaim for the soap, with EastEnders winning the award for Best British Soap at the 2023 British Soap Awards and the award for Serial Drama at the 28th National Television Awards under his leadership.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "The central focus of EastEnders is the fictional Victorian square Albert Square in the fictional London Borough of Walford. In the show's narrative, Albert Square is a 19th-century street, named after Prince Albert (1819–1861), the husband of Queen Victoria (1819–1901, reigned 1837–1901). Thus, central to Albert Square is The Queen Victoria Public House (also known as The Queen Vic or The Vic). The show's producers based the square's design on Fassett Square in Dalston. There is also a market close to Fassett Square at Ridley Road. The postcode for the area, E8, was one of the working titles for the series. The name Walford is both a street in Dalston where Tony Holland lived and a blend of Walthamstow and Stratford—the areas of Greater London where the creators were born. Other parts of the Square and set interiors are based on other locations. The railway bridge is based upon one near BBC Television Centre which carries the Hammersmith & City line over Wood Lane W12, and the Queen Vic on the former College Park Hotel pub in Willesden at the end of Scrubs Lane at the junction with Harrow Road NW10 just a couple of miles from BBC Television Centre.",
"title": "Setting"
},
{
"paragraph_id": 44,
"text": "Walford East is a fictional London Underground station for Walford, and a tube map that was first seen on air in 1996 showed Walford East between Bow Road and West Ham, in the actual location of Bromley-by-Bow on the District and Hammersmith & City lines.",
"title": "Setting"
},
{
"paragraph_id": 45,
"text": "Walford has the postal district of E20. It was named as if Walford were part of the actual E postcode area which covers much of east London, the E standing for Eastern. E20 was entirely fictional when it was created, as London East postal districts stopped at E18 at the time. The show's creators opted for E20 instead of E19 as it was thought to sound better.",
"title": "Setting"
},
{
"paragraph_id": 46,
"text": "In March 2011, Royal Mail allocated the E20 postal district to the 2012 Olympic Park. In September 2011, the postcode for Albert Square was revealed in an episode as E20 6PQ.",
"title": "Setting"
},
{
"paragraph_id": 47,
"text": "EastEnders is built around the idea of relationships and strong families, with each character having a place in the community. This theme encompasses the whole Square, making the entire community a family of sorts, prey to upsets and conflict, but pulling together in times of trouble. Co-creator Tony Holland was from a large East End family, and such families have typified EastEnders. The first central family was the combination of the Fowler family, consisting of Pauline Fowler (Wendy Richard), her husband Arthur (Bill Treacher), and teenage children Mark (David Scarboro/Todd Carty) and Michelle (Susan Tully). Pauline's family, the Beales, consisted of Pauline's twin brother Pete Beale (Peter Dean), his wife Kathy (Gillian Taylforth) and their teenage son Ian (Adam Woodyatt). Pauline and Pete's domineering mother Lou Beale (Anna Wing) lived with Pauline and her family. Holland drew on the names of his own family for the characters.",
"title": "Characters"
},
{
"paragraph_id": 48,
"text": "The Watts and Mitchell families have been central to many notable EastEnders storylines, the show having been dominated by the Watts in the 1980s, with the 1990s focusing on the Mitchells and Butchers. The early 2000s saw a shift in attention towards the newly introduced female Slater clan, before a renewal of emphasis upon the restored Watts family beginning in 2003. In 2006, EastEnders became largely dominated by the Mitchell, Masood and Branning families, though the early 2010s also saw a renewed focus on the Moon and Slater family, and, from 2013 onwards, the Carters. In 2016, the Fowlers were revived and merged with the Slaters, with Martin Fowler (James Bye) marrying Stacey Slater (Lacey Turner). The late 2010s saw the newly introduced Taylor family become central to the show's main storylines, and in 2019, the first Sikh family, the Panesars, were introduced. The early 2020s was dominated by the Mitchells, Brannings, Panesars, Slaters, as well as the newly introduced Knight family. Key people involved in the production of EastEnders have stressed how important the idea of strong families is to the programme.",
"title": "Characters"
},
{
"paragraph_id": 49,
"text": "EastEnders has an emphasis on strong family matriarchs, with examples including Pauline Fowler (Wendy Richard) and Peggy Mitchell (Barbara Windsor), helping to attract a female audience. John Yorke, the former BBC's head of drama production, put this down to Tony Holland's \"gay sensibility, which showed a love for strong women\". The matriarchal role is one that has been seen in various reincarnations since the programme's inception, often depicted as the centre of the family unit. The original matriarch was Lou Beale (Anna Wing), though later examples include Mo Harris (Laila Morse), Pat Butcher (Pam St Clement), Zainab Masood (Nina Wadia), Cora Cross (Ann Mitchell), Kathy Beale (Gillian Taylforth), Jean Slater (Gillian Wright), and Suki Panesar (Balvinder Sopal). These characters are often seen as being loud and interfering but most importantly, responsible for the well-being of the family.",
"title": "Characters"
},
{
"paragraph_id": 50,
"text": "The show often includes strong, brassy, long-suffering women who exhibit diva-like behaviour and stoically battle through an array of tragedy and misfortune. Such characters include Angie Watts (Anita Dobson), Kathy Beale (Gillian Taylforth), Sharon Watts (Letitia Dean), Pat Butcher (Pam St Clement), Peggy Mitchell (Barbara Windsor), Kat Slater (Jessie Wallace), Denise Fox (Diane Parish), Tanya Branning (Jo Joyner) and Linda Carter (Kellie Bright). Conversely there are female characters who handle tragedy less well, depicted as eternal victims and endless sufferers, who include Ronnie Mitchell (Samantha Womack), Little Mo Mitchell (Kacey Ainsworth), Laura Beale (Hannah Waterman), Sue Osman (Sandy Ratcliff), Lisa Fowler (Lucy Benjamin), Mel Owen (Tamzin Outhwaite) and Rainie Cross (Tanya Franks). The 'tart with a heart' is another recurring character. Often, their promiscuity masks a hidden vulnerability and a desire to be loved. Such characters have included Pat Butcher (Pam St Clement), Tiffany Mitchell (Martine McCutcheon) and Kat Slater (Jessie Wallace).",
"title": "Characters"
},
{
"paragraph_id": 51,
"text": "A gender balance in the show is maintained via the inclusion of various \"macho\" male personalities such as Phil Mitchell (Steve McFadden), Grant Mitchell (Ross Kemp), Dan Sullivan (Craig Fairbrass), and George Knight (Colin Salmon), \"bad boys\" such as Den Watts (Leslie Grantham), Sean Slater (Robert Kazinsky), Michael Moon (Steve John Shepherd), Derek Branning (Jamie Foreman), Vincent Hubbard (Richard Blackwood), and Ravi Gulati (Aaron Thiara) and \"heartthrobs\" such as Simon Wicks (Nick Berry), Joe Wicks (Paul Nicholls), Jamie Mitchell (Jack Ryder), Dennis Rickman (Nigel Harman), Joey Branning (David Witts), Kush Kazemi (Davood Ghadami) and Zack Hudson (James Farrar). Another recurring male character type is the smartly dressed businessman, often involved in gang culture and crime and seen as a local authority figure. Examples include Steve Owen (Martin Kemp), Jack Dalton (Hywel Bennett), Andy Hunter (Michael Higgs), Johnny Allen (Billy Murray), Derek Branning (Jamie Foreman), and Nish Panesar (Navin Chowdhry). Following criticism aimed at the show's over-emphasis on \"gangsters\" in 2005, such characters have been significantly reduced. Another recurring male character seen in EastEnders is the \"loser\" or \"soft touch\", males often comically under the thumb of their female counterparts, which have included Arthur Fowler (Bill Treacher), Ricky Butcher (Sid Owen), Garry Hobbs (Ricky Groves), Lofty Holloway (Tom Watt), Billy Mitchell (Perry Fenwick) and Howie Danes (Delroy Atkinson).",
"title": "Characters"
},
{
"paragraph_id": 52,
"text": "Other recurring character types that have appeared throughout the serial are \"cheeky-chappies\" Pete Beale (Peter Dean), Alfie Moon (Shane Richie), Garry Hobbs (Ricky Groves) and Kush Kazemi (Davood Ghadami), \"lost girls\" such as Mary Smith (Linda Davidson), Donna Ludlow (Matilda Ziegler), Mandy Salter (Nicola Stapleton), Janine Butcher (Charlie Brooks), Zoe Slater (Michelle Ryan), Whitney Dean (Shona McGarty), and Hayley Slater (Katie Jarvis), delinquents such as Stacey Slater (Lacey Turner), Jay Brown (Jamie Borthwick), Lola Pearce (Danielle Harold), Bobby Beale (Eliot Carrington/Clay Milner Russell) and Keegan Baker (Zack Morris), \"villains\" such as Nick Cotton (John Altman), Trevor Morgan (Alex Ferns), May Wright (Amanda Drew), Yusef Khan (Ace Bhatti), Archie Mitchell (Larry Lamb), Dean Wicks (Matt Di Angelo), Stuart Highway (Ricky Champ) and Gray Atkins (Toby-Alexander Smith), \"bitches\" such as Cindy Beale (Michelle Collins), Janine Butcher (Charlie Brooks), Chrissie Watts (Tracy-Ann Oberman), Suzy Branning (Maggie O'Neill), Lucy Beale (Melissa Suffield/Hetti Bywater), Clare Bates (Gemma Bissix), Abi Branning (Lorna Fitzgerald), Babe Smith (Annette Badland) and Suki Panesar (Balvinder Sopal), \"brawlers\" or \"fighters\" such as Mary Smith (Linda Davidson), Bianca Jackson (Patsy Palmer), Kat Slater (Jessie Wallace), Stacey Slater (Lacey Turner), Shirley Carter (Linda Henry), Chelsea Fox (Tiana Benjamin/Zaraah Abrahams), Roxy Mitchell (Rita Simons) and Karen Taylor (Lorraine Stanley), and cockney \"wide boys\" or \"wheeler dealers\" such as Frank Butcher (Mike Reid), Alfie Moon (Shane Richie), Kevin Wicks (Phil Daniels), Darren Miller (Charlie G. Hawkins), Fatboy (Ricky Norwood), Jay Brown (Jamie Borthwick) and Kheerat Panesar (Jaz Deol).",
"title": "Characters"
},
{
"paragraph_id": 53,
"text": "Over the years EastEnders has typically featured a number of elderly residents, who are used to show vulnerability, nostalgia, stalwart-like attributes and are sometimes used for comedic purposes. The original elderly residents included Lou Beale (Anna Wing), Ethel Skinner (Gretchen Franklin) and Dot Cotton (June Brown). Over the years they have been joined by the likes of Mo Butcher (Edna Doré), Jules Tavernier (Tommy Eytle), Marge Green (Pat Coombs), Nellie Ellis (Elizabeth Kelly), Jim Branning (John Bardon), Charlie Slater (Derek Martin), Mo Harris (Laila Morse), Patrick Trueman (Rudolph Walker), Cora Cross (Ann Mitchell), Les Coker (Roger Sloman), Rose Cotton (Polly Perkins), Pam Coker (Lin Blakley), Stan Carter (Timothy West), Babe Smith (Annette Badland), Claudette Hubbard (Ellen Thomas), Sylvie Carter (Linda Marlowe), Ted Murray (Christopher Timothy), Joyce Murray (Maggie Steed), Arshad Ahmed (Madhav Sharma), Mariam Ahmed (Indira Joshi) and Vi Highway (Gwen Taylor). The programme has more recently included a higher number of teenagers and successful young adults in a bid to capture the younger television audience. This has spurred criticism, most notably from the actress Anna Wing, who portrayed Lou Beale in the show. She commented, \"I don't want to be disloyal, but I think you need a few mature people in a soap because they give it backbone and body... if all the main people are young it gets a bit thin and inexperienced. It gets too lightweight.\"",
"title": "Characters"
},
{
"paragraph_id": 54,
"text": "EastEnders has been known to feature a \"comedy double-act\", originally demonstrated with the characters of Dot and Ethel, whose friendship was one of the serial's most enduring. Other examples include Paul Priestly (Mark Thrippleton) and Trevor Short (Phil McDermott), In 1989 especially, characters were brought in who were deliberately conceived as comic or light-hearted. Such characters included Julie Cooper (Louise Plowright)—a brassy maneater; Marge Green—a batty older lady played by veteran comedy actress Pat Coombs; Trevor Short (Phil McDermott)—the \"village idiot\"; his friend, northern heartbreaker Paul Priestly (Mark Thrippleton); wheeler-dealer Vince Johnson (Hepburn Graham) and Laurie Bates (Gary Powell), who became Pete Beale's (Peter Dean) sparring partner. The majority of EastEnders' characters are working-class. Middle-class characters do occasionally become regulars, but have been less successful and rarely become long-term characters. In the main, middle-class characters exist as villains, such as James Wilmott-Brown (William Boyde), May Wright (Amanda Drew), Stella Crawford (Sophie Thompson), Yusef Khan (Ace Bhatti) and Gray Atkins (Toby-Alexander Smith) or are used to promote positive liberal influences, such as Colin Russell (Michael Cashman), Rachel Kominski (Jacquetta May) and Derek Harkinson (Ian Lavender).",
"title": "Characters"
},
{
"paragraph_id": 55,
"text": "EastEnders has always featured a culturally diverse cast which has included black, Asian, Turkish, Polish and Latvian characters. \"The expansion of minority representation signals a move away from the traditional soap opera format, providing more opportunities for audience identification with the characters and hence a wider appeal\". Despite this, the programme has been criticised by the Commission for Racial Equality, who argued in 2002 that EastEnders was not giving a realistic representation of the East End's \"ethnic make-up\". They suggested that the average proportion of visible minority faces on EastEnders was substantially lower than the actual ethnic minority population in East London boroughs, and it, therefore, reflected the East End in the 1960s, not the East End of the 2000s. The programme has since attempted to address these issues. A sari shop was opened and various characters of different ethnicities were introduced throughout 2006 and 2007, including the Fox family, the Ahmeds, and various background artists. This was part of producer Diederick Santer's plan to \"diversify\", to make EastEnders \"feel more 21st century\". EastEnders has had varying success with ethnic minority characters. Possibly the least successful were the Indian Ferreira family, who were not well received by critics or viewers and were dismissed as unrealistic by the Asian community in the UK.",
"title": "Characters"
},
{
"paragraph_id": 56,
"text": "EastEnders has been praised for its portrayal of characters with disabilities, including Adam Best (David Proud) (spina bifida), Noah Chambers (Micah Thomas) and Frankie Lewis (Rose Ayling-Ellis) (deaf), Jean Slater (Gillian Wright) and her daughter Stacey (Lacey Turner) (bipolar disorder), Janet Mitchell (Grace) (Down syndrome), Jim Branning (John Bardon) (stroke) and Dinah Wilson (Anjela Lauren Smith) (multiple sclerosis). The show also features a large number of gay, lesbian and bisexual characters (see list of soap operas with LGBT characters), including Colin Russell (Michael Cashman), Barry Clark (Gary Hailes), Simon Raymond (Andrew Lynford), Tony Hills (Mark Homer), Sonia Fowler (Natalie Cassidy), Naomi Julien (Petra Letang), Tina Carter (Luisa Bradshaw-White), Tosh Mackintosh (Rebecca Scroggs), Christian Clarke (John Partridge), Syed Masood (Marc Elliott), Ben Mitchell (Harry Reid/Max Bowden), Paul Coker (Jonny Labey), Iqra Ahmed (Priya Davdra), Ash Panesar (Gurlaine Kaur Garcha), Bernadette Taylor (Clair Norris), Callum Highway (Tony Clay) and Eve Unwin (Heather Peace). Kyle Slater (Riley Carter Millington), a transgender character, was introduced in 2015.",
"title": "Characters"
},
{
"paragraph_id": 57,
"text": "EastEnders has a high cast turnover and characters are regularly changed to facilitate storylines or refresh the format. The show has also become known for the return of characters after they have left the show. Sharon Watts (Letitia Dean) returned in August 2012 for her third stint on the show. Den Watts (Leslie Grantham) returned 14 years after he was believed to have died in September 2003, a feat repeated by Kathy Beale (Gillian Taylforth) in 2015, and Cindy Beale (Michelle Collins) in 2023. Speaking extras, including Tracey the barmaid (Jane Slaughter) (who has been in the show since the first episode in 1985), have made appearances throughout the show's duration, without being the focus of any major storylines. The character of Nick Cotton (John Altman) gained a reputation for making constant exits and returns since the programme's first year until the character died in 2015.",
"title": "Characters"
},
{
"paragraph_id": 58,
"text": "As of June 2023, Gillian Taylforth, Letitia Dean and Adam Woodyatt are the only members of the original cast remaining in the show, in their roles of Kathy Beale, Sharon Watts and Ian Beale respectively. Tracey is the longest-serving female character in the show, having appeared since 1985, albeit as a minor character.",
"title": "Characters"
},
{
"paragraph_id": 59,
"text": "EastEnders programme makers took the decision that the show was to be about \"everyday life\" in the inner city \"today\" and regarded it as a \"slice of life\". Creator/producer Julia Smith declared that \"We don't make life, we reflect it\". She also said, \"We decided to go for a realistic, fairly outspoken type of drama which could encompass stories about homosexuality, rape, unemployment, racial prejudice, etc., in a believable context. Above all, we wanted realism\". In 2011, the head of BBC drama, John Yorke, said that the real East End had changed significantly since EastEnders started, and the show no longer truly reflected real life, but that it had an \"emotional truthfulness\" and was partly \"true to the original vision\" and partly \"adapt[ing] to a changing world\", adding that \"If it was a show where every house cost a fortune and everyone drove a Lexus, it wouldn't be EastEnders. You have to show shades of that change, but certain things are immutable, I would argue, like The Vic and the market.\"",
"title": "Storylines"
},
{
"paragraph_id": 60,
"text": "In the 1980s, EastEnders featured \"gritty\" storylines involving drugs and crime, representing the issues faced by working-class Britain under Thatcherism. Storylines included the cot death of 14-month-old Hassan Osman, Nick Cotton's (John Altman) homophobia, racism and murder of Reg Cox (Johnnie Clayton), Arthur Fowler's (Bill Treacher) unemployment reflecting the recession of the 1980s, the rape of Kathy Beale (Gillian Taylforth) in 1988 by James Willmott-Brown (William Boyde) and Michelle Fowler's (Susan Tully) teenage pregnancy. The show also dealt with prostitution, mixed-race relationships, shoplifting, sexism, divorce, domestic violence and mugging. In 1989, the programme came under criticism in the British media for being too depressing, and according to writer Colin Brake, the programme makers were determined to change this. In 1989, there was a deliberate attempt to increase the lighter, more comic aspects of life in Albert Square. This led to the introduction of some characters who were deliberately conceived as comic or light-hearted. Brake suggested that humour was an important element in EastEnders' storylines during 1989, with a greater amount of slapstick and light comedy than before. He classed 1989's changes as a brave experiment, and suggested that while some found this period of EastEnders entertaining, many other viewers felt that the comedy stretched the programme's credibility. Although the programme still covered many issues in 1989, such as domestic violence, drugs, rape and racism, Brake reflected that the new emphasis on a more balanced mix between \"light and heavy storylines\" gave the illusion that the show had lost a \"certain edge\".",
"title": "Storylines"
},
{
"paragraph_id": 61,
"text": "As the show progressed into the 1990s, EastEnders still featured hard-hitting issues such as Mark Fowler (Todd Carty) revealing he was HIV positive in 1991, the death of his wife Gill (Susanna Dawson) from an AIDS-related illness in 1992, murder, adoption, abortion, Peggy Mitchell's (Barbara Windsor) battle with breast cancer, and Phil Mitchell's (Steve McFadden) alcoholism and violence towards wife Kathy. Mental health issues were confronted in 1996 when 16-year-old Joe Wicks developed schizophrenia following the off-screen death of his sister in a car crash. The long-running storyline of Mark Fowler's HIV was so successful in raising awareness that in 1999, a survey by the National Aids Trust found teenagers got most of their information about HIV from the soap, though one campaigner noted that in some ways the storyline was not reflective of what was happening at the time as the condition was more common among the gay community. Still, heterosexual Mark struggled with various issues connected to his HIV status, including public fears of contamination, a marriage breakdown connected to his inability to have children and the side effects of combination therapies.",
"title": "Storylines"
},
{
"paragraph_id": 62,
"text": "In the early 2000s, EastEnders covered the issue of euthanasia with Ethel Skinner's (Gretchen Franklin) death in a pact with her friend Dot Cotton (June Brown), the unveiling of Kat Slater's (Jessie Wallace) sexual abuse by her uncle Harry (Michael Elphick) as a child (which led to the birth of her daughter Zoe (Michelle Ryan), who had been brought up to believe that Kat was her sister), the domestic abuse of Little Mo Morgan (Kacey Ainsworth) by husband Trevor (Alex Ferns) (which involved marital rape and culminated in Trevor's death after he tried to kill Little Mo in a fire), Sonia Jackson (Natalie Cassidy) giving birth at the age of 15 and then putting her baby up for adoption, and Janine Butcher's (Charlie Brooks) prostitution, agoraphobia and drug addiction. The soap also tackled the issue of mental illness and carers of people who have mental conditions, illustrated with mother and daughter Jean (Gillian Wright) and Stacey Slater (Lacey Turner); Jean has bipolar disorder, and teenage daughter Stacey was her carer (this storyline won a Mental Health Media Award in September 2006). Stacey went on to struggle with the disorder herself. The issue of illiteracy was highlighted by the characters of middle-aged Keith (David Spinx) and his young son Darren (Charlie G. Hawkins). EastEnders has also covered the issue of Down syndrome, as Billy (Perry Fenwick) and Honey Mitchell's (Emma Barton) baby, Janet Mitchell (Grace), was born with the condition in 2006. EastEnders covered child abuse with its storyline involving Phil Mitchell's (Steve McFadden) 11-year-old son Ben (Charlie Jones) and lawyer girlfriend Stella Crawford (Sophie Thompson), and child grooming involving the characters Tony King (Chris Coghill) as the perpetrator and Whitney Dean (Shona McGarty) as the victim.",
"title": "Storylines"
},
{
"paragraph_id": 63,
"text": "Aside from this, soap opera staples of youthful romance, jealousy, domestic rivalry, gossip and extramarital affairs are regularly featured, with high-profile storylines occurring several times a year. Whodunits also feature regularly, including the \"Who Shot Phil?\" story arc in 2001 that attracted over 19 million viewers and was one of the biggest successes in British soap television; the \"Who Killed Archie?\" storyline, which was revealed in a special live episode of the show that drew a peak of 17 million viewers; and the \"Who Killed Lucy Beale?\" saga.",
"title": "Storylines"
},
{
"paragraph_id": 64,
"text": "The exterior set for the fictional Albert Square is located in the permanent backlot of the BBC Elstree Centre, Borehamwood, Hertfordshire, at 51°39′32″N 0°16′40″W / 51.65889°N 0.27778°W / 51.65889; -0.27778, and is outdoors and open to the weather. It was initially built in 1984 with a specification that it should last for at least 15 years at a cost of £750,000. The EastEnders lot was designed by Keith Harris, who was a senior designer within the production team together with supervising art directors Peter Findley and Gina Parr. The main buildings on the square consisted originally of hollow shells, constructed from marine plywood facades mounted onto steel frames. The lower walls, pavements, etc., were constructed of real brick and tarmac. The set had to be made to look as if it had been standing for years. This was done by a number of means, including chipping the pavements, using chemicals to crack the top layer of the paint work, using varnish to create damp patches underneath the railway bridge, and making garden walls in such a way they appeared to sag. The final touches were added in summer 1984, these included a telephone box, telegraph pole that was provided by British Telecom, lampposts that were provided by Hertsmere Borough Council and a number of vehicles parked on the square. On each set all the appliances are fully functional such as gas cookers, the laundry washing machines and The Queen Victoria beer pumps.",
"title": "Production"
},
{
"paragraph_id": 65,
"text": "The walls were intentionally built crooked to give them an aged appearance. The drains around the set are real so rainwater can naturally flow from the streets. The square was built in two phases with only three sides being built, plus Bridge Street, to begin with in 1984, in time to be used for the show's first episode. Then in 1986, Harris added an extension to the set, building the fourth side of Albert Square, and in 1987, Turpin Road began to be featured more, which included buildings such as The Dagmar.",
"title": "Production"
},
{
"paragraph_id": 66,
"text": "In 1993, George Street was added, and soon after Walford East Underground station was built, to create further locations when EastEnders went from two to three episodes per week. The set was constructed by the BBC in-house construction department under construction manager Mike Hagan. Most of the buildings on Albert Square have no interior filming space, with a few exceptions, and most do not have rears or gardens. Some interior shots are filmed in the actual buildings.",
"title": "Production"
},
{
"paragraph_id": 67,
"text": "In February 2008, it was reported that the set would transfer to Pinewood Studios in Buckinghamshire, where a new set would be built as the set was looking \"shabby\", with its flaws showing up on high-definition television broadcasts; however, by April 2010 a follow-up report confirmed that Albert Square would remain at Elstree Studios for at least another four years, taking the set through its 25th anniversary. The set was consequently rebuilt for high definition on the same site, using mostly real brick with some areas using a new improved plastic brick. Throughout rebuilding filming would still take place, and so scaffolding was often seen on screen during the process, with some storylines written to accommodate the rebuilding, such as the Queen Vic fire.",
"title": "Production"
},
{
"paragraph_id": 68,
"text": "In 2014, then executive producer Dominic Treadwell-Collins said that he wanted Albert Square to look like a real-life east London neighbourhood so that the soap would \"better reflect the more fashionable areas of east London beloved of young professionals\" giving a flavour of the \"creeping gentrification\" of east London. He added: \"It should feel more like London. It's been frozen in aspic for too long.\" The BBC announced that they would rebuild the EastEnders set to secure the long-term future of the show, with completion expected to be in 2018. The set will provide a modern, upgraded exterior filming resource for EastEnders, and will copy the appearance of the existing buildings; however, it will be 20 per cent bigger, in order to enable greater editorial ambition and improve working conditions for staff. A temporary set will be created on-site to enable filming to continue while the permanent structure is rebuilt.",
"title": "Production"
},
{
"paragraph_id": 69,
"text": "In May 2016 the rebuild was delayed until 2020 and forecast to cost in excess of £15 million, although the main part of the set is scheduled to be able to start filming in May 2019. In December 2018, it was revealed that the new set was now planned to cost £59 million but a National Audit Office (NAO) report stated that it would actually cost £86.7 million and be completed two-and-a-half years later than planned, in 2023; the NAO concluded that the BBC \"could not provide value for money on the project\". The NAO's forecast cost is more than the annual combined budget for BBC Radio 1 and Radio 2. The BBC said the new set would be more suitable for HD filming and better reflect the modern East End of London. In March 2019 there was criticism from a group of MPs about how the BBC handled the redevelopment of the set. In March 2020 during the suspension of filming, the interior sets were used for a new adaptation of Talking Heads. This marked the first time that it had been used for anything other than EastEnders. In January 2022 the new £86.7m exterior set of EastEnders was officially unveiled by the BBC replacing the original set built in 1984. The new scenes from the new set will first appear from new episodes airing in spring.",
"title": "Production"
},
{
"paragraph_id": 70,
"text": "The majority of EastEnders episodes are filmed at the BBC Elstree Centre in Borehamwood, Hertfordshire. In January 1987, EastEnders had three production teams each comprising a director, production manager, production assistant and assistant floor manager. Other permanent staff included the producer's office, script department and designer, meaning between 30 and 35 people would be working full-time on EastEnders, rising to 60 to 70 on filming days. When the number of episodes was increased to four per week, more studio space was needed, so Top of the Pops was moved from its studio at Elstree to BBC Television Centre in April 2001. Episodes are produced in \"quartets\" of four episodes, each of which starts filming on a Tuesday and takes nine days to record. Each day, between 25 and 30 scenes are recorded. During the filming week, actors can film for as many as eight to twelve episodes. Exterior scenes are filmed on a specially constructed film lot, and interior scenes take place in six studios. The episodes are usually filmed about six to eight weeks in advance of broadcast. During the winter period, filming can take place up to twelve weeks in advance, due to less daylight for outdoor filming sessions. This time difference has been known to cause problems when filming outdoor scenes. On 8 February 2007, heavy snow fell on the set and filming had to be cancelled as the scenes due to be filmed on the day were to be transmitted in April. EastEnders is normally recorded using four cameras. When a quartet is completed, it is edited by the director, videotape editor and script supervisor. The producer then reviews the edits and decides if anything needs to be re-edited, which the director will do. A week later, sound is added to the episodes and they are technically reviewed, and are ready for transmission if they are deemed of acceptable quality.",
"title": "Production"
},
{
"paragraph_id": 71,
"text": "Although episodes are predominantly recorded weeks before they are broadcast, occasionally, EastEnders includes current events in their episodes. In 1987, EastEnders covered the general election. Using a plan devised by co-creators Smith and Holland, five minutes of material was cut from four of the pre-recorded episodes preceding the election. These were replaced by specially recorded election material, including representatives from each major party, and a scene recorded on the day after the election reflecting the result, which was broadcast the following Tuesday. The result of the 2010 general election was referenced on 7 May 2010 episode. During the 2006 FIFA World Cup, actors filmed short scenes following the tournament's events that were edited into the programme in the following episode. Last-minute scenes have also been recorded to reference the fiftieth anniversary of the end of the Second World War in 1995, the two-minute silence on Remembrance Day 2005 (2005 also being the year for the sixtieth anniversary of the end of the Second World War and the 200th anniversary of the Battle of Trafalgar), Barack Obama's election victory in 2008, the death of Michael Jackson in 2009, the 2010 Comprehensive Spending Review, Andy Murray winning the Men's Singles at the 2013 Wimbledon Championships, the Wedding of Prince William and Catherine Middleton, the birth of Prince George of Wales, Scotland voting no against independence in 2014, and the 100th anniversary of the beginning of the Great War.",
"title": "Production"
},
{
"paragraph_id": 72,
"text": "EastEnders is often filmed on location, away from the studios in Borehamwood. Sometimes an entire quartet is filmed on location, which has a practical function and are the result of EastEnders making a \"double bank\", when an extra week's worth of episodes are recorded at the same time as the regular schedule, enabling the production of the programme to stop for a two-week break at Christmas. These episodes often air in late June or early July and again in late October or early November. The first time this happened was in December 1985 when Pauline (Wendy Richard) and Arthur Fowler (Bill Treacher) travelled to the Southend-on-Sea to find their son Mark, who had run away from home. In 1986, EastEnders filmed overseas for the first time, in Venice, and this was also the first time it was not filmed on videotape, as a union rule at the time prevented producers taking a video crew abroad and a film crew had to be used instead. In 2011, it was reported that eight per cent of the series is filmed on location.",
"title": "Production"
},
{
"paragraph_id": 73,
"text": "If scenes during a normal week are to be filmed on location, this is done during the normal recording week. Off-set locations that have been used for filming include Clacton (1989), Devon (September 1990), Hertfordshire (used for scenes set in Gretna Green in July 1991), Portsmouth (November 1991), Milan (1997), Ireland (1997), Amsterdam (December 1999), Brighton (2001) and Portugal (2003). In 2003, filming took place at Loch Fyne Hotel and Leisure Club in Inveraray, The Arkinglass Estate in Cairndow and Grims Dyke Hotel, Harrow Weald, north London, for a week of episodes set in Scotland. The episode shown on 9 April 2007 featured scenes filmed at St Giles Church and The Blacksmiths Arms public house in Wormshill, the Ringlestone Inn, two miles away and Court Lodge Farm in Stansted, Kent. and the Port of Dover, Kent. .",
"title": "Production"
},
{
"paragraph_id": 74,
"text": "Other locations have included the court house, a disused office block, Evershed House, and St Peter's Church, all in St Albans, an abandoned mental facility in Worthing, and a wedding dress shop in Muswell Hill, north London. A week of episodes in 2011 saw filming take place on a beach in Thorpe Bay and a pier in Southend-on-Sea—during which a stuntman was injured when a gust of wind threw him off balance and he fell onto rocks— with other scenes filmed on the Essex coast. In 2012, filming took place in Keynsham, Somerset. In January 2013, on-location filming at Grahame Park in Colindale, north London, was interrupted by at least seven youths who threw a firework at the set and threatened to cut members of the crew. In October 2013, scenes were filmed on a road near London Southend Airport in Essex.",
"title": "Production"
},
{
"paragraph_id": 75,
"text": "EastEnders has featured seven live broadcasts. For its 25th anniversary in February 2010, a live episode was broadcast in which Stacey Slater (Lacey Turner) was revealed as Archie Mitchell's (Larry Lamb) killer. Turner was told only 30 minutes before the live episode and to maintain suspense, she whispers this revelation to former lover and current father-in-law, Max Branning, in the very final moments of the live show. Many other cast members only found out at the same time as the public, when the episode was broadcast. On 23 July 2012, a segment of that evening's episode was screened live as Billy Mitchell (Perry Fenwick) carried the Olympic flame around Walford in preparation for the 2012 Summer Olympics. In February 2015, for the soap's 30th anniversary, five episodes in a week featured live inserts throughout them. Episodes airing on Tuesday 17, Wednesday 18 and Thursday 19 (which featured an hour long episode and a second episode) all featured at least one live insert. The show revealed that the killer of Lucy Beale (Hetti Bywater) was her younger brother, Bobby (Eliot Carrington), during the second episode on Thursday, after a ten-month mystery regarding who killed her. In a flashback episode which revisited the night of the murder, Bobby was revealed to have killed his sister. The aftermath episode, which aired on Friday 20, was completely live and explained in detail Lucy's death. Carrington was told he was Lucy's killer on Monday 16, while Laurie Brett (who plays Bobby's adoptive mother, Jane) was informed in November, due to the character playing a huge role in the cover-up of Lucy's murder. Bywater only discovered Bobby was responsible for Lucy's death on the morning of Thursday, 19 February, several hours before they filmed the scenes revealing Bobby as Lucy's killer.",
"title": "Production"
},
{
"paragraph_id": 76,
"text": "Each episode should run for 27 minutes and 15 seconds; however, if any episode runs over or under then it is the job of post-production to cut or add scenes where appropriate. As noted in the 1994 behind-the-scenes book, EastEnders: The First 10 Years, after filming, tapes were sent to the videotape editor, who then edited the scenes together into an episode. The videotape editor used the director's notes so they knew which scenes the director wanted to appear in a particular episode. The producer might have asked for further changes to be made. The episode was then copied onto D3 video. The final process was to add the audio which included background noise such as a train or a jukebox music and to check it met the BBC's technical standard for broadcasting.",
"title": "Production"
},
{
"paragraph_id": 77,
"text": "Since 2010, EastEnders no longer uses tapes in the recording or editing process. After footage is recorded, the material is sent digitally to the post-production team. The editors then assemble all the scenes recorded for the director to view and note any changes that are needed. The sound team also have the capability to access the edited episode, enabling them to dub the sound and create the final version.",
"title": "Production"
},
{
"paragraph_id": 78,
"text": "According to the book How to Study Television, in 1995 EastEnders cost the BBC £40,000 per episode on average. A 2012 agreement between the BBC, the Writers' Guild of Great Britain and the Personal Managers' Association set out the pay rate for EastEnders scripts as £137.70 per minute of transmission time (£4,131 for 30 minutes), which is 85 per cent of the rate for scripts for other BBC television series. The writers would be paid 75 per cent of that fee for any repeats of the episode. In 2011, it was reported that actors receive a per-episode fee of between £400 and £1,200, and are guaranteed a certain number of episodes per year, perhaps as few as 30 or as many as 100, therefore annual salaries could range from £12,000 to £200,000 depending on the popularity of a character. Some actors' salaries were leaked in 2006, revealing that Natalie Cassidy (Sonia Fowler) was paid £150,000, Cliff Parisi (Minty Peterson) received £220,000, Barbara Windsor (Peggy Mitchell) and Steve McFadden (Phil Mitchell) each received £360,000 and Wendy Richard (Pauline Fowler) had a salary of £370,000. In 2017, it was revealed that Danny Dyer (Mick Carter) and Adam Woodyatt (Ian Beale) were the highest-paid actors in EastEnders, earning between £200,000 and £249,999, followed by Laurie Brett (Jane Beale), Letitia Dean (Sharon Watts), Tameka Empson (Kim Fox), Linda Henry (Shirley Carter), Scott Maslen (Jack Branning), Diane Parish (Denise Fox), Gillian Taylforth (Kathy Beale) and Lacey Turner (Stacey Slater), earning between £150,000 and £199,999.",
"title": "Production"
},
{
"paragraph_id": 79,
"text": "A 2011 report from the National Audit Office (NAO) showed that EastEnders had an annual budget of £29.9 million. Of that, £2.9 million was spent on scripts and £6.9 million went towards paying actors, extras and chaperones for child actors. According to the NAO, BBC executives approved £500,000 of additional funding for the 25th anniversary live episode (19 February 2010). With a total cost of £696,000, the difference was covered from the 2009–2010 series budget for EastEnders. When repeats and omnibus editions are shown, the BBC pays additional fees to cast and scriptwriters and incurs additional editing costs, which in the period 2009–2010, amounted to £5.5 million. According to a Radio Times article for 212 episodes it works out at £141,000 per episode or 3.5p per viewer hour.",
"title": "Production"
},
{
"paragraph_id": 80,
"text": "In 2014, two new studios were built and they were equipped with low-energy lighting which has saved approximately 90,000 kwh per year. A carbon literacy course was run with Heads of Departments of EastEnders attending and as a result, representatives from each department agreed to meet quarterly to share new sustainability ideas. The paper usage was reduced by 50 per cent across script distribution and other weekly documents and 20 per cent across all other paper usage. The production team now use recycled paper and recycled stationery.",
"title": "Production"
},
{
"paragraph_id": 81,
"text": "Also changes to working online has also saved transportation cost of distribution 2,500 DVDs per year. Sets, costumes, paste pots and paint are all recycled by the design department. Cars used by the studio are low emission vehicles and the production team take more efficient energy efficient generators out on location. Caterers no longer use polystyrene cups and recycling on location must be provided.",
"title": "Production"
},
{
"paragraph_id": 82,
"text": "As a result of EastEnders' sustainability, it was awarded albert+, an award that recognises the production's commitment to becoming a more eco-friendly television production. The albert+ logo was first shown at the end of the EastEnders titles for episode 5281 on 9 May 2016.",
"title": "Production"
},
{
"paragraph_id": 83,
"text": "Since 1985, EastEnders has remained at the centre of BBC One's primetime schedule. From 2001 to 2022, it was broadcast at 7:30 pm on Tuesday and Thursday, and 8 pm on Monday and Friday. EastEnders was originally broadcast twice weekly at 7:00 pm on Tuesdays and Thursdays from 19 February 1985; however, in September 1985 the two episodes were moved to 7:30 pm as Michael Grade did not want the soap running in direct competition with Emmerdale Farm, and this remained the same until 7 April 1994. The BBC had originally planned to take advantage of the \"summer break\" that Emmerdale Farm usually took to capitalise on ratings, but ITV added extra episodes and repeats so that Emmerdale Farm was not taken off the air over the summer. Realising the futility of the situation, Grade decided to move the show to the later 7:30 pm slot.",
"title": "Scheduling"
},
{
"paragraph_id": 84,
"text": "EastEnders output then increased to three times a week on Mondays, Tuesdays and Thursdays from 11 April 1994 until 2 August 2001. From 10 August 2001, EastEnders then added its fourth episode (shown on Fridays). This caused some controversy as the first Friday episode clashed with Coronation Street, which was moved to 8 pm to make way for an hour-long episode of rural soap Emmerdale. In this first head-to-head battle, EastEnders claimed victory over its rival.",
"title": "Scheduling"
},
{
"paragraph_id": 85,
"text": "In early 2003, viewers could watch episodes of EastEnders on digital channel BBC Three before they were broadcast on BBC One. This was to coincide with the relaunch of the channel and helped BBC Three break the one million viewers mark for the first time with 1.03 million who watched to see Mark Fowler's departure. According to the EastEnders website, there are, on average, 208 episodes outputted each year.",
"title": "Scheduling"
},
{
"paragraph_id": 86,
"text": "On 21 February 2022, it was announced that from 7 March 2022, EastEnders would begin airing from Monday to Thursday at 7:30 pm, therefore no longer airing on a Friday. This meant that EastEnders would clash with Emmerdale, but the producers stated that due to the importance of online streaming figures, they were not concerned about the soaps clashing on the live television guides.",
"title": "Scheduling"
},
{
"paragraph_id": 87,
"text": "The omnibus edition, a compilation of the week's episodes in a continuous sequence, originally aired on BBC One on Sunday afternoons, until 1 April 2012, when it was changed to a late Friday night or early Saturday morning slot, commencing on 6 April 2012, though the exact time differed. It reverted to a weekend daytime slot as from January 2013 on BBC Two. In 2014, the omnibus moved back to around midnight on Friday nights, and in April 2015, the omnibus was axed, following detailed audience research and the introduction of 30-day catch up on BBC iPlayer and the planning of BBC One +1. The last omnibus on the BBC was shown on 24 April 2015. While W was showing same-day repeats of EastEnders, they also returned the weekend omnibus, starting on 20 February 2016.",
"title": "Scheduling"
},
{
"paragraph_id": 88,
"text": "From 20 February to 26 May 1995, as part of the programme's 10th Anniversary celebrations, episodes from 1985 were repeated each weekday morning at 10 am, starting from episode one. Four specially selected episodes from 1985, 1986 and 1987 were also repeated on BBC1 on Friday evenings at 8 pm under the banner \"The Unforgettable EastEnders\". These included the wedding of Michelle Fowler and Lofty Holloway, the revelation of the father of Michelle's baby, a two-hander between Dot Cotton and Ethel Skinner and the 1986 Christmas episode featuring Den Watts presenting Angie Watts with divorce papers.",
"title": "Scheduling"
},
{
"paragraph_id": 89,
"text": "EastEnders was regularly repeated at 10 pm on BBC Choice from the channel's launch in 1998, a practice continued by BBC Three for many years until mid-2012 with the repeat moving to 10:30 pm. From 25 December 2010 – 29 April 2011 and 31 July 2012 – 13 August 2012 to the show was repeated on BBC HD in a Simulcast with BBC Three. In 2015, the BBC Three repeat moved back to 10 pm. In February 2016, the repeat moved to W, the rebranded Watch, after BBC Three became an online-only channel. W stopped showing EastEnders in April 2018. Following the reinstatement of BBC Three as a linear channel in 2022, the nightly 'narrative repeat' was not reinstated; instead, the channel retransmits that week's four BBC One episodes at the weekend, airing two episodes on each of Saturday and Sunday evenings, unless live sports or music/events coverage takes precedence. Episodes of EastEnders are available on-demand through BBC iPlayer for 30 days after their original screening.",
"title": "Scheduling"
},
{
"paragraph_id": 90,
"text": "On 1 December 2012, the BBC uploaded the first 54 episodes of EastEnders to YouTube, and on 23 July 2013 they uploaded a further 14 episodes bringing the total to 68. These have since been taken down. In April 2018, it was announced that Drama would be showing repeats starting 6 August 2018 during weekdays and they are also available on-demand on the UKTV Play catch-up service for 30 days after the broadcast. In December 2019, Christmas episodes were added to Britbox UK.",
"title": "Scheduling"
},
{
"paragraph_id": 91,
"text": "EastEnders is broadcast around the world in many English-speaking countries. New Zealand became the first to broadcast EastEnders overseas, the first episode being shown on 30 August 1985. This was followed by the Netherlands on 8 December 1986, Australia on 5 January 1987, Norway on 27 April, and Barcelona on 30 June (dubbed into Catalan). On 9 July 1987, it was announced that the show would be aired in the United States on PBS. BBC Worldwide licensed 200 hours of EastEnders for broadcast in Serbia on RTS (dubbed into Serbian); it began airing the first episode in December 1997. The series was broadcast in the United States until BBC America ceased broadcasts of the serial in 2003, amidst fan protests. In June 2004, the satellite television provider Dish Network picked up EastEnders, broadcasting episodes starting at the point where BBC America had ceased broadcasting them, offering the series as a pay-per-view item. Episodes air two months behind the UK schedule. Episodes from prior years are still shown on various PBS stations in the US. Since 7 March 2017, EastEnders has been available in the United States on demand, 24 hours after it has aired in the United Kingdom via BritBox, a joint venture between BBC and ITV.",
"title": "Scheduling"
},
{
"paragraph_id": 92,
"text": "The series was screened in Australia by ABC TV from 1987 until 1991. It is aired in Australia on Satellite & Streaming services on BBC UKTV, from Mondays to Thursdays 7:50 pm–8:30 pm with two advertisement breaks of five minutes each. Episodes are shown roughly one week after their UK broadcast. In New Zealand, it was shown by TVNZ on TVNZ 1 for several years, and then on Prime each weekday afternoon. It is shown on BBC UKTV from Mondays to Thursdays at 8 pm. Episodes are roughly two weeks behind the UK.",
"title": "Scheduling"
},
{
"paragraph_id": 93,
"text": "EastEnders is shown on BBC Entertainment (formerly BBC Prime) in Europe and in Africa, where it is approximately six episodes behind the UK. It was also shown on BBC Prime in Asia, but when the channel was replaced by BBC Entertainment, it ceased broadcasting the series. In Canada, EastEnders was shown on BBC Canada until 2010, at which point it was picked up by VisionTV.",
"title": "Scheduling"
},
{
"paragraph_id": 94,
"text": "In Ireland, EastEnders was shown on TV3 from September 1998 until March 2001, when it moved over to RTÉ One, after RTÉ lost to TV3 the rights to air rival soap Coronation Street. Additionally, episodes of EastEnders are available on-demand through RTÉ Online for seven days after their original screening.",
"title": "Scheduling"
},
{
"paragraph_id": 95,
"text": "In 1991 the BBC sold the programme's format rights to a Dutch production company IDTV, the programme was renamed Het Oude Noorden (Translation: Old North). The Dutch version was re-written from already existing EastEnders scripts. The schedule remained the same as EastEnders twice weekly episodes; however, some notable changes included the programme is now set in Rotterdam rather than London, characters are given Dutch names (Den and Angie became Ger and Ankie) and The Queen Victoria pub is renamed \"Cade Faas\".",
"title": "International versions"
},
{
"paragraph_id": 96,
"text": "According to Barbara Jurgen who re-wrote the scripts for a Dutch audience he said \"The power of the show is undeniable. The Scripts are full of hard, sharp drama, plus great one-liners which will translate well to Holland.\" The Dutch version began broadcasting on VARA 13 March 1993 but was cancelled after 20 episodes.",
"title": "International versions"
},
{
"paragraph_id": 97,
"text": "On 26 December 1988, the first EastEnders \"bubble\" was shown, titled \"CivvyStreet\". Since then, \"Return of Nick Cotton\" (2000), \"Ricky & Bianca\" (2002), \"Dot's Story\" (2003), \"Perfectly Frank\" (2003) and \"Pat and Mo\" (2004) have all been broadcast, each episode looking into lives of various characters and revealing part of their backstories or lives since leaving EastEnders. In 1993, the two-part story \"Dimensions in Time\", a charity cross-over with Doctor Who, was shown.",
"title": "Spin-offs and merchandise"
},
{
"paragraph_id": 98,
"text": "In 1998, EastEnders Revealed was launched on BBC Choice (now BBC Three). The show takes a look behind the scenes of the EastEnders and investigates particular places, characters or families within EastEnders. An episode of EastEnders Revealed that was commissioned for BBC Three attracted 611,000 viewers. As part of the BBC's digital push, EastEnders Xtra was introduced in 2005. The show was presented by Angellica Bell and was available to digital viewers at 8:30 pm on Monday nights. It was also shown after the Sunday omnibus. The series went behind the scenes of the show and spoke to some of the cast members. A new breed of behind-the-scenes programmes have been broadcast on BBC Three since 1 December 2006. These are all documentaries related to current storylines in EastEnders, in a similar format to EastEnders Revealed, though not using the EastEnders Revealed name.",
"title": "Spin-offs and merchandise"
},
{
"paragraph_id": 99,
"text": "In October 2009, a 12-part Internet spin-off series entitled EastEnders: E20 was announced. The series was conceived by executive producer Diederick Santer \"as a way of nurturing new, young talent, both on- and off-screen, and exploring the stories of the soaps' anonymous bystanders.\" E20 features a group of sixth-form characters and targets the \"Hollyoaks demographic\". It was written by a team of young writers and was shown three times a week on the EastEnders website from 8 January 2010. A second 10-part series started in September 2010, with twice-weekly episodes available online and an omnibus on BBC Three. A third series of 15 episodes started in September 2011.",
"title": "Spin-offs and merchandise"
},
{
"paragraph_id": 100,
"text": "EastEnders and rival soap opera Coronation Street took part in a crossover episode for Children in Need on 19 November 2010 called \"East Street\". On 4 April 2015, EastEnders confirmed plans for a BBC One series featuring Kat and Alfie Moon. The six-part drama, Kat & Alfie: Redwater, was created by executive producer Dominic Treadwell-Collins and his team. In the spin-off, the Moons visit Ireland where they \"search for answers to some very big questions\".",
"title": "Spin-offs and merchandise"
},
{
"paragraph_id": 101,
"text": "Until its closure, BBC Store released 553 EastEnders episodes from various years, including the special episode \"CivvyStreet\", available to buy as digital downloads.",
"title": "Spin-offs and merchandise"
},
{
"paragraph_id": 102,
"text": "An example of EastEnders' popularity is that after episodes, electricity use in the United Kingdom rises significantly as viewers who have waited for the show to end begin boiling water for tea, a phenomenon known as TV pickup. Over five minutes, power demand rises by three GW, the equivalent of 1.5 to 1.75 million kettles. National Grid personnel watch the show to know when closing credits begin so they can prepare for the surge, asking for additional power from France if necessary.",
"title": "Popularity and viewership"
},
{
"paragraph_id": 103,
"text": "EastEnders is the BBC's most consistent programme in terms of ratings, and as of 2021, episodes typically receive between 4 and 6 million viewers. EastEnders two biggest ratings rivals are the ITV soaps Coronation Street (produced by Granada Television in Manchester) and Emmerdale (Produced by Yorkshire Television in Leeds).",
"title": "Popularity and viewership"
},
{
"paragraph_id": 104,
"text": "The launch show in 1985 attracted 17.35 million viewers. 25 July 1985 was the first time the show's viewership rose to first position in the weekly top 10 shows for BBC One. The highest rated episode of EastEnders is the Christmas Day 1986 episode, which attracted a combined 30.15 million viewers who tuned into either the original transmission or the omnibus to see Den Watts hand over divorce papers to his wife Angie. This remains the highest rated episode of a soap in British television history.",
"title": "Popularity and viewership"
},
{
"paragraph_id": 105,
"text": "In 2001, EastEnders clashed with Coronation Street for the first time. EastEnders won the battle with 8.4 million viewers (41% share) whilst Coronation Street lagged behind with 7.3 million viewers (34% share). On 21 September 2004, Louise Berridge, the then executive producer, quit following criticism of the show. The following day the show received its lowest ever ratings at that time (6.2 million) when ITV scheduled an hour-long episode of Emmerdale against it. Emmerdale was watched by 8.1 million viewers. The poor ratings motivated the press into reporting viewers were bored with implausible and ill-thought-out storylines. Under new producers, EastEnders and Emmerdale continued to clash at times, and Emmerdale tended to come out on top, giving EastEnders lower than average ratings. In 2006, EastEnders regularly attracted between 8 and 12 million viewers in official ratings. EastEnders received its second lowest ratings on 17 May 2007, when 4.0 million viewers tuned in. This was also the lowest ever audience share, with just 19.6 per cent. This was attributed to a conflicting one-hour special episode of Emmerdale on ITV1; however, ratings for the 10 pm EastEnders repeat on BBC Three reached an all-time high of 1.4 million; however, there have been times when EastEnders had higher ratings than Emmerdale despite the two going head-to-head.",
"title": "Popularity and viewership"
},
{
"paragraph_id": 106,
"text": "The ratings increased in 2010, thanks to the \"Who Killed Archie?\" storyline and second wedding of Ricky Butcher (Sid Owen) and Bianca Jackson (Patsy Palmer), and the show's first live episode on 19 February 2010. The live-episode averaged 15.6 million viewers, peaking at 16.6 million in the final five minutes of broadcast. In January 2010, the average audience was higher than that of Coronation Street for the first time in three years. During the 30th anniversary week in which there were live elements and the climax of the Who Killed Lucy Beale? storyline, 10.84 million viewers tuned in for the 30th anniversary episode itself in an hour long special on 19 February 2015 (peaking with 11.9 million). Later on in the same evening, a special flashback episode averaged 10.3 million viewers, and peaked with 11.2 million. The following day, the anniversary week was rounded off with another fully live episode (the second after 2010) with 9.97 million viewers watching the aftermath of the reveal, the Beale family finding out the truth of Lucy's killer and deciding to keep it a secret. In 2013, the average audience share for an episode was around 30 per cent.",
"title": "Popularity and viewership"
},
{
"paragraph_id": 107,
"text": "Due to the impact of the COVID-19 pandemic on the soap, EastEnders suffered a ratings drop after 2020. Despite once being the highest-rated soap, it dropped to third in the rankings in 2021, behind Coronation Street and Emmerdale, with 4.09 million viewers. BBC's head of drama, Piers Wenger, explained that since the episode duration had been shortened and the airtime frequently suffered changes, it had led to the audience not knowing when to watch it. Digital Spy opined that the ratings drop was accredited to \"lacklustre storylines\" and thought that storylines on rival soaps were better. Later that year, EastEnders suffered its lowest rating ever, with 1.7 million viewers watching live. The Daily Mirror's Jamie Roberts felt that viewers had \"turned their back\" on the soap due to its lack of interesting stories and iconic characters. Ratings expert Stephen Price also noted that the drop is partly due to the rise of streaming services.",
"title": "Popularity and viewership"
},
{
"paragraph_id": 108,
"text": "EastEnders has received both praise and criticism for most of its storylines, which have dealt with difficult themes, such as violence, rape, murder and child abuse.",
"title": "Criticism"
},
{
"paragraph_id": 109,
"text": "Mary Whitehouse, social critic, argued at the time that EastEnders represented a violation of \"family viewing time\" and that it undermined the watershed policy. She regarded EastEnders as a fundamental assault on the family and morality itself. She made reference to representation of family life and emphasis on psychological and emotional violence within the show. She was also critical of language such as \"bleeding\", \"bloody hell\", \"bastard\" and \"for Christ's sake\"; however, Whitehouse also praised the programme, describing Michelle Fowler's decision not to have an abortion as a \"very positive storyline\". She also felt that EastEnders had been cleaned up as a result of her protests, though she later commented that EastEnders had returned to its old ways. Her criticisms were widely reported in the tabloid press as ammunition in its existing hostility towards the BBC. The stars of Coronation Street in particular aligned themselves with Mary Whitehouse, gaining headlines such as \"STREETS AHEAD! RIVALS LASH SEEDY EASTENDERS\" and \"CLEAN UP SOAP! Street Star Bill Lashes \"Steamy\" EastEnders\".",
"title": "Criticism"
},
{
"paragraph_id": 110,
"text": "EastEnders has been criticised for being too violent, most notably during a domestic violence storyline between Little Mo Morgan (Kacey Ainsworth) and her husband Trevor Morgan (Alex Ferns). As EastEnders is shown pre-watershed, there were worries that some scenes in this storyline were too graphic for its audience. Complaints against a scene in which Little Mo's face was pushed in gravy on Christmas Day were upheld by the Broadcasting Standards Council; however, a helpline after this episode attracted over 2000 calls. Erin Pizzey, who became internationally famous for having started one of the first women's refuges, said that EastEnders had done more to raise the issue of violence against women in one story than she had done in 25 years. The character of Phil Mitchell (played by Steve McFadden since early 1990) has been criticised on several occasions for glorifying violence and proving a bad role model to children. On one occasion following a scene in an episode broadcast in October 2002, where Phil brutally beat his godson, Jamie Mitchell (Jack Ryder), 31 complaints came from viewers.",
"title": "Criticism"
},
{
"paragraph_id": 111,
"text": "In 2003, cast member Shaun Williamson, who was in the final months of his role of Barry Evans, said that the programme had become much grittier over the past 10 to 15 years, and found it \"frightening\" that parents let their young children watch.",
"title": "Criticism"
},
{
"paragraph_id": 112,
"text": "In 2005, the BBC was accused of anti-religious bias by a House of Lords committee, who cited EastEnders as an example. Indarjit Singh, editor of the Sikh Messenger and patron of the World Congress of Faiths, said: \"EastEnders' Dot Cotton is an example. She quotes endlessly from the Bible and it ridicules religion to some extent.\" In July 2010, complaints were received following the storyline of Christian minister Lucas Johnson (Don Gilet) committing a number of murders that he believed was his duty to God, claiming that the storyline was offensive to Christians.",
"title": "Criticism"
},
{
"paragraph_id": 113,
"text": "In 2008, EastEnders, along with Coronation Street, was criticised by Martin McGuinness, then Northern Ireland's deputy first minister, for \"the level of concentration around the pub\" and the \"antics portrayed in The [...] Queen Vic\".",
"title": "Criticism"
},
{
"paragraph_id": 114,
"text": "In 2017, viewers complained on Twitter about scenes implying that Keanu Taylor (Danny Walters) is the father of his 15-year-old sister Bernadette Taylor's (Clair Norris) unborn baby, with the pair agreeing to keep the pregnancy secret from their mother, Karen Taylor (Lorraine Stanley); however, the baby's father is revealed as one of Bernadette's school friends.",
"title": "Criticism"
},
{
"paragraph_id": 115,
"text": "In 1997, several episodes were shot and set in Ireland, resulting in criticisms for portraying the Irish in a negatively stereotypical way. Ted Barrington, the Irish Ambassador to the UK at the time, described the portrayal of Ireland as an \"unrepresentative caricature\", stating he was worried by the negative stereotypes and the images of drunkenness, backwardness and isolation. Jana Bennett, the BBC's then director of production, later apologised for the episodes, stating on BBC1's news bulletin: \"It is clear that a significant number of viewers have been upset by the recent episodes of EastEnders, and we are very sorry, because the production team and programme makers did not mean to cause any offence.\" A year later BBC chairman Christopher Bland admitted that as result of the Irish-set EastEnders episodes, the station failed in its pledge to represent all groups accurately and avoid reinforcing prejudice.",
"title": "Criticism"
},
{
"paragraph_id": 116,
"text": "In 2008, the show was criticised for stereotyping their Asian and Black characters, by having a black single mother, Denise Fox (Diane Parish), and an Asian shopkeeper, Zainab Masood (Nina Wadia). There has been criticism that the programme does not authentically portray the ethnic diversity of the population of East London, with the programme being \"twice as white\" as the real East End.",
"title": "Criticism"
},
{
"paragraph_id": 117,
"text": "In 1992, writer David Yallop successfully sued the BBC for £68,000 after it was revealed he had been hired by producer Mike Gibbon in 1989 to pen several controversial storylines in an effort to \"slim down\" the cast; however, after Gibbon left the programme, executive producers chose not to use Yallop's storylines, which put the BBC in breach of the contract Yallop had signed with them. Unused storylines penned by Yallop, which were revealed in the press during the trial, included the death of Cindy Beale's (Michelle Collins) infant son Steven; Sufia Karim (Rani Singh) being killed during a shotgun raid at the corner shop; Pauline Fowler (Wendy Richard) dying of undiscovered cancer; and an IRA explosion at the Walford community centre, killing Pete Beale (Peter Dean) and Diane Butcher (Sophie Lawrence), and leaving Simon Wicks (Nick Berry) paralysed below the waist. A suicide was also planned, but the character this storyline was assigned to was not revealed.",
"title": "Criticism"
},
{
"paragraph_id": 118,
"text": "Some storylines have provoked high levels of viewer complaints. In August 2006, a scene involving Carly Wicks (Kellie Shirley) and Jake Moon (Joel Beckett) having sex on the floor of Scarlet nightclub, and another scene involving Owen Turner (Lee Ross) violently attacking Denise Fox (Diane Parish), prompted 129 and 128 complaints, respectively.",
"title": "Criticism"
},
{
"paragraph_id": 119,
"text": "In March 2008, scenes showing Tanya Branning (Jo Joyner) and boyfriend, Sean Slater (Robert Kazinsky), burying Tanya's husband Max (Jake Wood) alive, attracted many complaints. The UK communications regulator Ofcom later found that the episodes depicting the storyline were in breach of the 2005 Broadcasting Code. They contravened the rules regarding protection of children by appropriate scheduling, appropriate depiction of violence before the 9 p.m. watershed and appropriate depiction of potentially offensive content. In September 2008, EastEnders began a grooming and paedophilia storyline involving characters Tony King (Chris Coghill), Whitney Dean (Shona McGarty), Bianca Jackson (Patsy Palmer), Lauren Branning (Madeline Duggan) and Peter Beale (Thomas Law). The storyline attracted over 200 complaints.",
"title": "Criticism"
},
{
"paragraph_id": 120,
"text": "In December 2010, Ronnie Branning (Samantha Womack) swapped her newborn baby, who died in cot, with Kat Moon's (Jessie Wallace) living baby. Around 3,400 complaints were received, with viewers branding the storyline \"insensitive\", \"irresponsible\" and \"desperate\". Roz Laws from the Sunday Mercury called the plot \"shocking and ridiculous\" and asked \"are we really supposed to believe that Kat won't recognise that the baby looks different?\" The Foundation for the Study of Infant Deaths (FSID) praised the storyline, and its director Joyce Epstein explained, \"We are very grateful to EastEnders for their accurate depiction of the devastating effect that the sudden death of an infant can have on a family. We hope that this story will help raise the public's awareness of cot death, which claims 300 babies' lives each year.\" By 7 January, that storyline had generated the most complaints in show history: the BBC received about 8,500 complaints, and media regulator Ofcom received 374; however, despite the controversy, EastEnders pulled in rating highs of 9–10 million throughout the duration of the storyline.",
"title": "Criticism"
},
{
"paragraph_id": 121,
"text": "In October 2014, the BBC defended a storyline, after receiving 278 complaints about 6 October 2014 episode where pub landlady Linda Carter (Kellie Bright) was raped by Dean Wicks (Matt Di Angelo). On 17 November 2014 it was announced that Ofcom will investigate over the storyline. On 5 January 2015, the investigation was cleared by Ofcom. A spokesman of Ofcom said: \"After carefully investigating complaints about this scene, Ofcom found the BBC took appropriate steps to limit offence to viewers. This included a warning before the episode and implying the assault, rather than depicting it. Ofcom also took into account the programme's role in presenting sometimes challenging or distressing social issues.\"",
"title": "Criticism"
},
{
"paragraph_id": 122,
"text": "In 2022, EastEnders aired their first male rape scene which saw Lewis Butler (Aidan O'Callaghan) rape Ben Mitchell (Max Bowden). The BBC received complaints from viewers who were unhappy with the content in the episode. Viewers felt that the scenes were too violent and graphic for a pre-watershed time slot. The BBC responded by stating: \"EastEnders has been a pre-watershed BBC One staple for over 37 years and has a rich history of dealing with challenging and difficult issues and Ben's story is one of these. We have worked closely with organisations and experts in the field to tell this story which we hope will raise awareness of sexual assaults and the issues surrounding them. We are always mindful of the timeslot in which EastEnders is shown and we took great care to signpost this storyline prior to transmission, through on-air continuity and publicity as well as providing a BBC Action Line at the end of the episode which offers advice and support to those affected by the issue\".",
"title": "Criticism"
},
{
"paragraph_id": 123,
"text": "In 2010, EastEnders came under criticism from the police for the way that they were portrayed during the \"Who Killed Archie?\" storyline. During the storyline, DCI Jill Marsden (Sophie Stanton) and DC Wayne Hughes (Jamie Treacher) talk to locals about the case and Hughes accepts a bribe. The police claimed that such scenes were \"damaging\" to their reputation and added that the character DC Deanne Cunningham (Zoë Henry) was \"irritatingly inaccurate\". In response to the criticism, EastEnders apologised for offending real life detectives and confirmed that they use a police consultant for such storylines.",
"title": "Criticism"
},
{
"paragraph_id": 124,
"text": "In October 2012, a storyline involving Lola Pearce (Danielle Harold), forced to hand over her baby Lexi Pearce, was criticised by the charity The Who Cares? Trust, who called the storyline an \"unhelpful portrayal\" and said it had already received calls from members of the public who were \"distressed about the EastEnders scene where a social worker snatches a baby from its mother's arms\". The scenes were also condemned by the British Association of Social Workers (BASW), calling the BBC \"too lazy and arrogant\" to correctly portray the child protection process, and saying that the baby was taken \"without sufficient grounds to do so\". Bridget Robb, acting chief of the BASW, said the storyline provoked \"real anger among a profession well used to a less than accurate public and media perception of their jobs .. EastEnders' shabby portrayal of an entire profession has made a tough job even tougher.\"",
"title": "Criticism"
},
{
"paragraph_id": 125,
"text": "Since its premiere in 1985, EastEnders has had a large impact on British popular culture. It has frequently been referred to in many different media, including songs and television programmes.",
"title": "In popular culture"
},
{
"paragraph_id": 126,
"text": "Many books have been written about EastEnders. Notably, from 1985 to 1988, author and television writer Hugh Miller wrote 17 novels, detailing the lives of many of the show's original characters before 1985, when events on screen took place.",
"title": "Further reading"
},
{
"paragraph_id": 127,
"text": "Kate Lock also wrote four novels centred on more recent characters; Steve Owen (Martin Kemp), Grant Mitchell (Ross Kemp), Bianca Jackson (Patsy Palmer) and Tiffany Mitchell (Martine McCutcheon). Lock also wrote a character guide entitled Who's Who in EastEnders (ISBN 978-0-563-55178-2) in 2000, examining main characters from the first 15 years of the show.",
"title": "Further reading"
},
{
"paragraph_id": 128,
"text": "Show creators Julia Smith and Tony Holland also wrote a book about the show in 1987, entitled EastEnders: The Inside Story (ISBN 978-0-563-20601-9), telling the story of how the show made it to screen. Two special anniversary books have been written about the show; EastEnders: The First 10 Years: A Celebration (ISBN 978-0-563-37057-4) by Colin Brakein 1995 and EastEnders: 20 Years in Albert Square (ISBN 978-0-563-52165-5) by Rupert Smith in 2005.",
"title": "Further reading"
}
]
| EastEnders is a British television soap opera created by Julia Smith and Tony Holland which has been broadcast on BBC One since February 1985. Set in the fictional borough of Walford in the East End of London, the programme follows the stories of local residents and their families as they go about their daily lives. Within eight months of the show's original launch, it had reached the number one spot in BARB's television ratings, and has consistently remained among the top-rated series in Britain. Four EastEnders episodes are listed in the all-time top 10 most-watched programmes in the UK, including the number one spot, when over 30 million watched the 1986 Christmas Day episode. EastEnders has been important in the history of British television drama, tackling many subjects that are considered to be controversial or taboo in British culture, and portraying a social life previously unseen on UK mainstream television. Since co-creator Holland was from a large family in the East End, a theme heavily featured in EastEnders is strong families, and each character is supposed to have their own place in the fictional community. The Beales, Brannings, Mitchells, Slaters and the Watts are some of the families that have been central to the soap's notable and dramatic storylines. EastEnders has been filmed at the BBC Elstree Centre since its inception, with a set that is outdoors and open to weather. In 2014, the BBC announced plans to rebuild the set entirely. Filming commenced on the new set in January 2022, and it was first used on-screen in March 2022. Demolition on the old set commenced in November 2022. EastEnders has received both praise and criticism for many of its storylines, which have dealt with difficult themes including violence, rape, murder and abuse. It has been criticised for various storylines, including the 2010 baby swap storyline, which attracted over 6,000 complaints, as well as complaints of showing too much violence and allegations of national and racial stereotypes. However, EastEnders has also been commended for representing real-life issues and spreading awareness on social topics. The cast and crew of the show have received and been nominated for various awards. | 2001-10-28T07:33:18Z | 2023-12-29T20:00:59Z | [
"Template:Nbsp",
"Template:EastEnders characters",
"Template:Authority control",
"Template:EastEnders",
"Template:Short description",
"Template:Use British English",
"Template:Portal",
"Template:Cite book",
"Template:Cite journal",
"Template:BBC programme",
"Template:Better source needed",
"Template:'",
"Template:As of",
"Template:Cite web",
"Template:BAFTA TV Award for Best Drama Series 1998–2009",
"Template:YouTube",
"Template:Pp-semi-indef",
"Template:Reflist",
"Template:Cite news",
"Template:Commons category",
"Template:Harvnb",
"Template:Webarchive",
"Template:Cbignore",
"Template:Soap operas in the United Kingdom",
"Template:EastEnders: E20",
"Template:Cite episode",
"Template:Anchor",
"Template:Update section",
"Template:See also",
"Template:Coord",
"Template:Dead link",
"Template:Cite press release",
"Template:Cite magazine",
"Template:Use dmy dates",
"Template:Redirect",
"Template:Infobox television",
"Template:Further",
"Template:Main",
"Template:ISBN",
"Template:IMDb title"
]
| https://en.wikipedia.org/wiki/EastEnders |
9,996 | Embroidery | Embroidery is the craft of decorating fabric or other materials using a needle to apply thread or yarn. Embroidery may also incorporate other materials such as pearls, beads, quills, and sequins. In modern days, embroidery is usually seen on caps, hats, coats, overlays, blankets, dress shirts, denim, dresses, stockings, scarfs, and golf shirts. Embroidery is available in a wide variety of thread or yarn colour. It is often used to personalize gifts or clothing items.
Some of the basic techniques or stitches of the earliest embroidery are chain stitch, buttonhole or blanket stitch, running stitch, satin stitch, and cross stitch. Those stitches remain the fundamental techniques of hand embroidery today.
The process used to tailor, patch, mend and reinforce cloth fostered the development of sewing techniques, and the decorative possibilities of sewing led to the art of embroidery. Indeed, the remarkable stability of basic embroidery stitches has been noted:
It is a striking fact that in the development of embroidery ... there are no changes of materials or techniques which can be felt or interpreted as advances from a primitive to a later, more refined stage. On the other hand, we often find in early works a technical accomplishment and high standard of craftsmanship rarely attained in later times.
The art of embroidery has been practised worldwide and several early examples have been found. Works in China have been dated to the Warring States period (5th–3rd century BC). In a garment from Migration period Sweden, roughly 300–700 AD, the edges of bands of trimming are reinforced with running stitch, back stitch, stem stitch, tailor's buttonhole stitch, and whip stitch, but it is uncertain whether this work simply reinforced the seams or should be interpreted as decorative embroidery.
Depending on time, location and materials available, embroidery could be the domain of a few experts or a widespread, popular technique. This flexibility led to a variety of works, from the royal to the mundane. Examples of high-status items include elaborately embroidered clothing, religious objects, and household items, which were often seen as a mark of wealth and status.
In medieval England, Opus Anglicanum, a technique used by professional workshops and guilds, was used to embellish textiles used in church rituals. In 16th-century England, some books, usually bibles or other religious texts, had embroidered bindings. The Bodleian Library in Oxford contains one presented to Queen Elizabeth I in 1583. It also owns a copy of The Epistles of Saint Paul, whose cover was reputedly embroidered by the Queen.
In 18th-century England and its colonies, with the rise of the merchant class and the wider availability of luxury materials, rich embroideries began to appear in a secular context. These embroideries took the form of items displayed in the private homes of well-to-do citizens, as opposed to a church or royal setting. Even so, the embroideries themselves may still have had religious themes. Samplers employing fine silks were produced by the daughters of wealthy families. Embroidery was a skill marking a girl's path into womanhood as well as conveying rank and social standing.
Embroidery was an important art and signifier of social status in the Medieval Islamic world as well. The 17th-century Turkish traveler Evliya Çelebi called it the "craft of the two hands". In cities such as Damascus, Cairo and Istanbul, embroidery was visible on handkerchiefs, uniforms, flags, calligraphy, shoes, robes, tunics, horse trappings, slippers, sheaths, pouches, covers, and even on leather belts. Craftsmen embroidered items with gold and silver thread. Embroidery cottage industries, some employing over 800 people, grew to supply these items.
In the 16th century, in the reign of the Mughal Emperor Akbar, his chronicler Abu al-Fazl ibn Mubarak wrote in the famous Ain-i-Akbari:
"His majesty [Akbar] pays much attention to various stuffs; hence Irani, Ottoman, and Mongolian articles of wear are in much abundance especially textiles embroidered in the patterns of Nakshi, Saadi, Chikhan, Ari, Zardozi, Wastli, Gota and Kohra. The imperial workshops in the towns of Lahore, Agra, Fatehpur and Ahmedabad turn out many masterpieces of workmanship in fabrics, and the figures and patterns, knots and variety of fashions which now prevail astonish even the most experienced travelers. Taste for fine material has since become general, and the drapery of embroidered fabrics used at feasts surpasses every description."
Conversely, embroidery is also a folk art, using materials that were accessible to nonprofessionals. Examples include Hardanger embroidery from Norway, Merezhka from Ukraine, Mountmellick embroidery from Ireland, Nakshi kantha from Bangladesh and West Bengal, and Brazilian embroidery. Many techniques had a practical use such as Sashiko from Japan, which was used as a way to reinforce clothing.
While historically viewed as a pastime, activity, or hobby intended just for women, embroidery has often been used as a form of biography. Women who were unable to access a formal education or, at times, writing implements, were often taught embroidery and utilized it as a means of documenting their lives. In terms of documenting the histories of marginalized groups, especially women of color both within the United States and around the world, embroidery is a means of studying the everyday lives of people who largely went unstudied throughout much of history.
Embroidery can be classified by the degree to which the design takes into account the nature of the base material and by the relationship of stitch placement to the fabric. The main categories are free or surface embroidery, counted-thread embroidery, and needlepoint or canvas work.
In free or surface embroidery, designs are applied without regard to the weave of the underlying fabric. Examples include crewel and traditional Chinese and Japanese embroidery.
Counted-thread embroidery patterns are created by making stitches over a predetermined number of threads in the foundation fabric. Counted-thread embroidery is more easily worked on an even-weave foundation fabric such as embroidery canvas, aida cloth, or specially woven cotton and linen fabrics. Examples include cross-stitch and some forms of blackwork embroidery.
While similar to counted-thread embroidery in terms of technique, in canvas work or needlepoint, threads are stitched through a fabric mesh to create a dense pattern that completely covers the foundation fabric. Examples of canvas work include bargello and Berlin wool work.
Embroidery can also be classified by the similarity of its appearance. In drawn thread work and cutwork, the foundation fabric is deformed or cut away to create holes that are then embellished with embroidery, often with thread in the same color as the foundation fabric. When created with white thread on white linen or cotton, this work is collectively referred to as whitework. However, whitework can either be counted or free. Hardanger embroidery is a counted embroidery and the designs are often geometric. Conversely, styles such as Broderie anglaise are similar to free embroidery, with floral or abstract designs that are not dependent on the weave of the fabric.
A needle is the main stitching tool in embroidery, and comes in various sizes and types. The fabrics and yarns used in traditional embroidery vary from place to place. Wool, linen, and silk have been in use for thousands of years for both fabric and yarn. Today, embroidery thread is manufactured in cotton, rayon, and novelty yarns as well as in traditional wool, linen, and silk. Ribbon embroidery uses narrow ribbon in silk or a silk/organza blend, most commonly to create floral motifs.
Surface embroidery techniques such as chain stitch and couching or laid-work make the most economical use of expensive yarns; couching is generally used for goldwork. Canvas work techniques, in which large amounts of yarn are buried on the back of the work, use more materials but provide a sturdier and more substantial finished textile.
In both canvas work and surface embroidery an embroidery hoop or frame can be used to stretch the material and ensure even stitching tension that prevents pattern distortion. Modern canvas work tends to follow symmetrical counted stitching patterns with designs emerging from the repetition of one or just a few similar stitches in a variety of hues. In contrast, many forms of surface embroidery make use of a wide range of stitching patterns in a single piece of work.
The development of machine embroidery and its mass production came about in stages during the Industrial Revolution. The first embroidery machine was the hand embroidery machine, invented in France in 1832 by Josué Heilmann. The next evolutionary step was the schiffli embroidery machine, which borrowed from the sewing machine and the Jacquard loom to fully automate its operation. The manufacture of machine-made embroideries in St. Gallen in eastern Switzerland flourished in the latter half of the 19th century. Both St. Gallen, Switzerland and Plauen, Germany were important centers for machine embroidery and embroidery machine development. Many Swiss and Germans immigrated to Hudson County, New Jersey, in the early twentieth century and developed a machine embroidery industry there. Schiffli machines have continued to evolve and are still used for industrial-scale embroidery.
Contemporary embroidery is stitched with a computerized embroidery machine using patterns digitized with embroidery software. In machine embroidery, different types of "fills" add texture and design to the finished work. Machine embroidery is used to add logos and monograms to business shirts or jackets, gifts, and team apparel as well as to decorate household linens, draperies, and decorator fabrics that mimic the elaborate hand embroidery of the past.
Machine embroidery is most typically done with rayon thread, although polyester thread can also be used. Cotton thread, on the other hand, is prone to breaking and should be avoided if under 30 wt.
There has also been development in freehand machine embroidery: new machines have been designed that allow the user to create free-motion embroidery, which has its place in textile arts, quilting, dressmaking, home furnishings and more. Users can use embroidery software to digitize embroidery designs. These digitized designs are then transferred to the embroidery machine with the help of a flash drive, and the machine then embroiders the selected design onto the fabric.
Since the late 2010s, there has been exponential growth in the popularity of embroidering by hand. As a result of visual social media such as Pinterest and Instagram, artists are able to share their work more extensively, which has inspired younger generations to pick up needle and thread.
Contemporary embroidery artists believe hand embroidery has grown in popularity as a result of an increasing need for relaxation and digitally disconnecting practices.
Modern hand embroidery, as opposed to cross-stitching, is characterized by a more "liberal" approach, where stitches are more freely combined in unconventional ways to create various textures and designs.
In Greek mythology the goddess Athena is said to have passed down the art of embroidery (along with weaving) to humans, leading to the famed competition between herself and the mortal Arachne. | [
{
"paragraph_id": 0,
"text": "Embroidery is the craft of decorating fabric or other materials using a needle to apply thread or yarn. Embroidery may also incorporate other materials such as pearls, beads, quills, and sequins. In modern days, embroidery is usually seen on caps, hats, coats, overlays, blankets, dress shirts, denim, dresses, stockings, scarfs, and golf shirts. Embroidery is available in a wide variety of thread or yarn colour. It is often used to personalize gifts or clothing items.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Some of the basic techniques or stitches of the earliest embroidery are chain stitch, buttonhole or blanket stitch, running stitch, satin stitch, and cross stitch. Those stitches remain the fundamental techniques of hand embroidery today.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The process used to tailor, patch, mend and reinforce cloth fostered the development of sewing techniques, and the decorative possibilities of sewing led to the art of embroidery. Indeed, the remarkable stability of basic embroidery stitches has been noted:",
"title": "History"
},
{
"paragraph_id": 3,
"text": "It is a striking fact that in the development of embroidery ... there are no changes of materials or techniques which can be felt or interpreted as advances from a primitive to a later, more refined stage. On the other hand, we often find in early works a technical accomplishment and high standard of craftsmanship rarely attained in later times.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The art of embroidery has been found worldwide and several early examples have been found. Works in China have been dated to the Warring States period (5th–3rd century BC). In a garment from Migration period Sweden, roughly 300–700 AD, the edges of bands of trimming are reinforced with running stitch, back stitch, stem stitch, tailor's buttonhole stitch, and Whip stitch, but it is uncertain whether this work simply reinforced the seams or should be interpreted as decorative embroidery.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Depending on time, location and materials available, embroidery could be the domain of a few experts or a widespread, popular technique. This flexibility led to a variety of works, from the royal to the mundane. Examples of high status items include elaborately embroidered clothing, religious objects, and household items often were seen as a mark of wealth and status.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In medieval England, Opus Anglicanum, a technique used by professional workshops and guilds in medieval England, was used to embellish textiles used in church rituals. In 16th century England, some books, usually bibles or other religious texts, had embroidered bindings. The Bodleian Library in Oxford contains one presented to Queen Elizabeth I in 1583. It also owns a copy of The Epistles of Saint Paul, whose cover was reputedly embroidered by the Queen.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 18th-century England and its colonies, with the rise of the merchant class and the wider availability of luxury materials, rich embroideries began to appear in a secular context. These embroideries took the form of items displayed private homes of well-to-do citizens, as opposed to a church or royal setting. Even so, the embroideries themselves may still have had religious themes. Samplers employing fine silks were produced by the daughters of wealthy families. Embroidery was a skill marking a girl's path into womanhood as well as conveying rank and social standing.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Embroidery was an important art and signifier of social status in the Medieval Islamic world as well. The 17th-century Turkish traveler Evliya Çelebi called it the \"craft of the two hands\". In cities such as Damascus, Cairo and Istanbul, embroidery was visible on handkerchiefs, uniforms, flags, calligraphy, shoes, robes, tunics, horse trappings, slippers, sheaths, pouches, covers, and even on leather belts. Craftsmen embroidered items with gold and silver thread. Embroidery cottage industries, some employing over 800 people, grew to supply these items.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In the 16th century, in the reign of the Mughal Emperor Akbar, his chronicler Abu al-Fazl ibn Mubarak wrote in the famous Ain-i-Akbari:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "\"His majesty [Akbar] pays much attention to various stuffs; hence Irani, Ottoman, and Mongolian articles of wear are in much abundance especially textiles embroidered in the patterns of Nakshi, Saadi, Chikhan, Ari, Zardozi, Wastli, Gota and Kohra. The imperial workshops in the towns of Lahore, Agra, Fatehpur and Ahmedabad turn out many masterpieces of workmanship in fabrics, and the figures and patterns, knots and variety of fashions which now prevail astonish even the most experienced travelers. Taste for fine material has since become general, and the drapery of embroidered fabrics used at feasts surpasses every description.\"",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Conversely, embroidery is also a folk art, using materials that were accessible to nonprofessionals. Examples include Hardanger embroidery from Norway, Merezhka from Ukraine, Mountmellick embroidery from Ireland, Nakshi kantha from Bangladesh and West Bengal, and Brazilian embroidery. Many techniques had a practical use such as Sashiko from Japan, which was used as a way to reinforce clothing.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "While historically viewed as a pastime, activity, or hobby, intended just for women, embroidery has often been used as a form of biography. Women who were unable to access a formal education or, at times, writing implements, were often taught embroidery and utilized it as a means of documenting their lives. In terms of documenting the histories of marginalized groups, especially women of color both within the United States and around the world, embroidery is a means of studying the every day lives of those whose lives largely went unstudied throughout much of history.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Embroidery can be classified according to what degree the design takes into account the nature of the base material and by the relationship of stitch placement to the fabric. The main categories are free or surface embroidery, counted-thread embroidery, and needlepoint or canvas work.",
"title": "Classification"
},
{
"paragraph_id": 14,
"text": "In free or surface embroidery, designs are applied without regard to the weave of the underlying fabric. Examples include crewel and traditional Chinese and Japanese embroidery.",
"title": "Classification"
},
{
"paragraph_id": 15,
"text": "Counted-thread embroidery patterns are created by making stitches over a predetermined number of threads in the foundation fabric. Counted-thread embroidery is more easily worked on an even-weave foundation fabric such as embroidery canvas, aida cloth, or specially woven cotton and linen fabrics. Examples include cross-stitch and some forms of blackwork embroidery.",
"title": "Classification"
},
{
"paragraph_id": 16,
"text": "While similar to counted thread in regards to technique, in canvas work or needlepoint, threads are stitched through a fabric mesh to create a dense pattern that completely covers the foundation fabric. Examples of canvas work include bargello and Berlin wool work.",
"title": "Classification"
},
{
"paragraph_id": 17,
"text": "Embroidery can also be classified by the similarity of its appearance. In drawn thread work and cutwork, the foundation fabric is deformed or cut away to create holes that are then embellished with embroidery, often with thread in the same color as the foundation fabric. When created with white thread on white linen or cotton, this work is collectively referred to as whitework. However, whitework can either be counted or free. Hardanger embroidery is a counted embroidery and the designs are often geometric. Conversely, styles such as Broderie anglaise are similar to free embroidery, with floral or Abstract designs that are not dependent on the weave of the fabric.",
"title": "Classification"
},
{
"paragraph_id": 18,
"text": "A needle is the main stitching tool in embroidery, and comes in various sizes and types. The fabrics and yarns used in traditional embroidery vary from place to place. Wool, linen, and silk have been in use for thousands of years for both fabric and yarn. Today, embroidery thread is manufactured in cotton, rayon, and novelty yarns as well as in traditional wool, linen, and silk. Ribbon embroidery uses narrow ribbon in silk or silk/organza blend ribbon, most commonly to create floral motifs.",
"title": "Materials"
},
{
"paragraph_id": 19,
"text": "Surface embroidery techniques such as chain stitch and couching or laid-work are the most economical of expensive yarns; couching is generally used for goldwork. Canvas work techniques, in which large amounts of yarn are buried on the back of the work, use more materials but provide a sturdier and more substantial finished textile.",
"title": "Materials"
},
{
"paragraph_id": 20,
"text": "In both canvas work and surface embroidery an embroidery hoop or frame can be used to stretch the material and ensure even stitching tension that prevents pattern distortion. Modern canvas work tends to follow symmetrical counted stitching patterns with designs emerging from the repetition of one or just a few similar stitches in a variety of hues. In contrast, many forms of surface embroidery make use of a wide range of stitching patterns in a single piece of work.",
"title": "Materials"
},
{
"paragraph_id": 21,
"text": "The development of machine embroidery and its mass production came about in stages during the Industrial Revolution. The first embroidery machine was the hand embroidery machine, invented in France in 1832 by Josué Heilmann. The next evolutionary step was the schiffli embroidery machine. The latter borrowed from the sewing machine and the Jacquard loom to fully automate its operation. The manufacture of machine-made embroideries in St. Gallen in eastern Switzerland flourished in the latter half of the 19th century. Both St. Gallen, Switzerland and Plauen, Germany were important centers for machine embroidery and embroidery machine development. Many Swiss and Germans immigrated to Hudson county, New Jersey in the early twentieth century and developed a machine embroidery industry there. Shiffli machines have continued to evolve and are still used for industrial scale embroidery.",
"title": "Machine embroidery"
},
{
"paragraph_id": 22,
"text": "Contemporary embroidery is stitched with a computerized embroidery machine using patterns digitized with embroidery software. In machine embroidery, different types of \"fills\" add texture and design to the finished work. Machine embroidery is used to add logos and monograms to business shirts or jackets, gifts, and team apparel as well as to decorate household linens, draperies, and decorator fabrics that mimic the elaborate hand embroidery of the past.",
"title": "Machine embroidery"
},
{
"paragraph_id": 23,
"text": "Machine embroidery is most typically done with rayon thread, although polyester thread can also be used. Cotton thread, on the other hand, is prone to breaking and should be avoided if under 30 wt.",
"title": "Machine embroidery"
},
{
"paragraph_id": 24,
"text": "There has also been a development in free hand machine embroidery, new machines have been designed that allow for the user to create free-motion embroidery which has its place in textile arts, quilting, dressmaking, home furnishings and more. Users can use the embroidery software to digitize the digital embroidery designs. These digitized design are then transferred to the embroidery machine with the help of a flash drive and then the embroidery machine embroiders the selected design onto the fabric.",
"title": "Machine embroidery"
},
{
"paragraph_id": 25,
"text": "Since the late 2010s, there has been an exponential growth in the popularity of embroidering by hand. As a result of visual social media such as Pinterest and Instagram, artists are able to share their work more extensively, which has inspired younger generations to pick up needle and threads.",
"title": "Resurgence of hand embroidery"
},
{
"paragraph_id": 26,
"text": "Contemporary embroidery artists believe hand embroidery has grown in popularity as a result of an increasing need for relaxation and digitally disconnecting practices.",
"title": "Resurgence of hand embroidery"
},
{
"paragraph_id": 27,
"text": "Modern hand embroidery, as opposed to cross-stitching, is characterized by a more \"liberal\" approach, where stitches are more freely combined in unconventional ways to create various textures and designs.",
"title": "Resurgence of hand embroidery"
},
{
"paragraph_id": 28,
"text": "In Greek mythology the goddess Athena is said to have passed down the art of embroidery (along with weaving) to humans, leading to the famed competition between herself and the mortal Arachne.",
"title": "In literature"
}
]
| Embroidery is the craft of decorating fabric or other materials using a needle to apply thread or yarn. Embroidery may also incorporate other materials such as pearls, beads, quills, and sequins. In modern days, embroidery is usually seen on caps, hats, coats, overlays, blankets, dress shirts, denim, dresses, stockings, scarfs, and golf shirts. Embroidery is available in a wide variety of thread or yarn colour. It is often used to personalize gifts or clothing items. Some of the basic techniques or stitches of the earliest embroidery are chain stitch, buttonhole or blanket stitch, running stitch, satin stitch, and cross stitch. Those stitches remain the fundamental techniques of hand embroidery today. | 2002-02-25T15:43:11Z | 2023-12-13T01:11:58Z | [
"Template:Cite journal",
"Template:Sewing",
"Template:Blockquote",
"Template:Cite web",
"Template:Harvnb",
"Template:Cite magazine",
"Template:Commons category-inline",
"Template:Decorative arts",
"Template:Short description",
"Template:About",
"Template:Cite news",
"Template:ISBN",
"Template:Authority control",
"Template:Sfn",
"Template:Circa",
"Template:Wikt",
"Template:Embroidery",
"Template:Dynamic list",
"Template:Reflist",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/Embroidery |
9,997 | Edward Mitchell Bannister | Edward Mitchell Bannister (November 2, 1828 – January 9, 1901) was a Canadian–American oil painter of the American Barbizon school. Born in Canada, he spent his adult life in New England in the United States. There, along with his wife Christiana Carteaux, he was a prominent member of African-American cultural and political communities, such as the Boston abolition movement. Bannister received national recognition after he won a first prize in painting at the 1876 Philadelphia Centennial Exhibition. He was also a founding member of the Providence Art Club and the Rhode Island School of Design.
Bannister's style and predominantly pastoral subject matter reflected his admiration for the French artist Jean-François Millet and the French Barbizon school. A lifelong sailor, he also looked to the Rhode Island seaside for inspiration. Bannister continually experimented, and his artwork displays his Idealist philosophy and his control of color and atmosphere. He began his professional practice as a photographer and portraitist before developing his better-known landscape style.
Later in his life, Bannister's style of landscape painting fell out of favor. With decreasing painting sales, he and Christiana Carteaux moved out of College Hill in Providence to Boston and then a smaller house on Wilson Street in Providence. Bannister was overlooked in American art historical studies and exhibitions after his death in 1901, until institutions like the National Museum of African Art returned him to national attention in the 1960s and 1970s.
Bannister was born on November 2, 1828, in Saint Andrews, New Brunswick, near the St. Croix River. His father, Edward Bannister, was a black Barbadian and his mother's parentage is uncertain; Bannister himself was sometimes identified as mixed race. Bannister's father died in 1832, so Edward and his younger brother William were raised by their mother, Hannah Alexander Bannister. Early on, Bannister was apprenticed to a cobbler, but his drawing skill was already noted among his friends and family. Bannister credited his mother with igniting his early interest in art. She died in 1844, after which Bannister and his brother lived on the farm of the wealthy lawyer and merchant Harris Hatch. There, he practiced drawing by reproducing Hatch family portraits and copying British engravings in the family library.
Bannister and his brother found work aboard ships as mates and cooks for several months before immigrating to Boston, sometime in the late 1840s. In the 1850 US census, they are listed as living at the same boarding house, with the Revaleon family, and working as barbers. The brothers' role as barbers and status as mixed race gave them relatively high standing as middle-class professionals within Boston.
Although he aspired to work as a painter, Bannister had difficulty finding an apprenticeship or academic programs that would accept him, due to racial prejudice. Boston was an abolitionist stronghold, but it was also one of the most segregated cities in the US in 1860. Bannister would later express his frustration with being blocked from artistic education: "Whatever may be my success as an artist is due more to inherited potential than to instruction" and "All I would do I cannot ... simply for the want of proper training."
Bannister received his first oil painting commission, The Ship Outward Bound, in 1854 from an African American doctor, John V. DeGrasse. Jacob R. Andrews, a gilder, painter, and member of the Histrionic Club, created the commission's gilt frame. DeGrasse later commissioned Bannister to paint portraits of him and his wife. Patronage like DeGrasse's was critical to Bannister's early career, as the African American community wanted to support and highlight its contributions to high culture. African Americans found portraiture an "ideal medium" for expressing their freedom and opportunity, which is probably why most of Bannister's earliest commissions are within that genre.
Through abolitionist newspapers like The Anglo-African and The Liberator and the writings of Martin R. Delany, Bannister likely learned about other African American artists like Robert S. Duncanson, James Presley Ball, Patrick H. Reason, and David Bustill Bowser. Their work would have made Bannister's ambition seem all the more possible. Although most cultural institutions barred Black Bostonians from entrance, Bannister would have had access to several, like the Boston Athenæum library, with collections of European art sources and exhibitions of Luminist marine painters like Robert Salmon and Fitz Hugh Lane.
Bannister met Christiana Carteaux, a hairdresser and businesswoman born in Rhode Island to African American and Narragansett parents, in 1853 when he applied to be a barber in her salon. Both were members of Boston's diverse abolitionist movement, and barbershops were important meeting places for African American abolitionists. They married on June 10, 1857, and she became, in effect, his most important patron. The couple boarded for two years with Lewis Hayden and Harriet Bell Hayden at 66 Southac Street, a stop on Boston's Underground Railroad (a support network for escaped slaves).
In 1855 William Cooper Nell acknowledged Bannister's rising artistic status in The Colored Patriots of the American Revolution for his The Ship Outward Bound. Bannister also received encouragement to continue painting from artist Francis Bicknell Carpenter. By 1858, Bannister was listed as an artist in Boston's city directory. Around 1862, he spent a year training in photography in New York, likely to support his painting practice. He then found work as a photographer, taking solar plates and tinting photos. One of Bannister's earliest commissioned portraits was of Prudence Nelson Bell in 1864, which is around when he found studio space at the Studio Building in Boston. At the Studio Building, he came into contact with other prominent artists, like Elihu Vedder and John La Farge. Once Bannister was established as an artist, abolitionist William Wells Brown praised him in a 1865 book:
Mr. Bannister possesses genius, which is now showing itself in his studio in Boston; for he has long since thrown aside the scissors and the comb, and transfers the face to the canvas, instead of taking the hair from the head. [...] Mr. Bannister is spare-made, slim, with an interesting cast of countenance, quick in his walk, and easy in his manners. He is a lover of poetry and the classics, and is always hunting up some new model for his gifted pencil and brush.
Bannister was part of Boston's African American artistic community, which included Edmonia Lewis, William H. Simpson, and Nelson A. Primus. He sang as a tenor in the Crispus Attucks Choir, which performed anti-slavery songs at public events, and acted with the Histrionic Club, as well as serving as a delegate for the New England Colored Citizens Conventions in August 1859 and 1865. His name also appears on several public petitions published in The Liberator.
Bannister and Carteaux were devout members of the militant abolitionist Twelfth Baptist Church, located on Southac Street near their home at the Hayden House. In May 1859, Bannister served as the secretary for the church's meetings to respond to the Oberlin–Wellington Rescue of imprisoned fugitive slaves and, in 1863, to plan celebrations for the Emancipation Proclamation.
During the US Civil War, Carteaux lobbied for equal pay for African American soldiers and organized the 1864 soldiers’ relief fair for the Massachusetts 54th infantry regiment, 55th infantry regiment, and 5th cavalry regiment, which had gone without pay for over a year and a half. Bannister donated his full-length portrait of Robert Gould Shaw, the commander of the 54th killed in action, to raise money for the cause. Bannister's portrait of Gould Shaw was displayed with the label "Our Martyr", according to abolitionist Lydia Maria Child. The portrait was praised in the New York Weekly Anglo-African as "a fine specimen of art" and inspired a poem by Martha Perry Lowe entitled The Picture of Col. Shaw in Boston. The painting was purchased by the state of Massachusetts and installed in its state house, but its current location is unknown.
The Bannister portrait of Robert Gould Shaw was one of several memorials to Gould Shaw by members of Boston's African American artistic community such as Edmonia Lewis. These artworks, put to the practical purpose of raising money for Black soldiers, contradicted the ideals of Boston Brahmin abolitionists, such as the Gould Shaws. Although the Brahmins supported abolition, they saw it as an abstract good rather than a concrete cause in need of material support. The portrait's paternalistic praise from Lowe and Child exemplified the divide between Boston's white abolitionists and the African American community. Through art like the 1884 Robert Gould Shaw Memorial, the Boston Brahmins rejected the possessive "Our Martyr" label given to him by Black artists like Bannister and Edmonia Lewis.
Bannister's activism also took other forms: on June 17, 1865, Bannister marshaled around two hundred members of the Twelfth Baptist Sunday School at a Grand Temperance Celebration on Boston Common. They marched under a banner reading "Equal rights for all men".
Bannister eventually studied at the Lowell Institute with the artist William Rimmer, while Rimmer taught evening life drawing classes at the Institute between 1863 and 1865. Rimmer was known for his skill in artistic anatomy, an area Bannister knew was one of his weaknesses. Because of Bannister's daytime photography business, he mostly took his drawing classes at night. Through Rimmer and the community at the Studio Building, Bannister was inspired by the Barbizon School-influenced paintings of William Morris Hunt, who had studied in Europe and held public exhibitions in Boston around the 1860s. At the Lowell Institute, Bannister formed a lifelong friendship with painter John Nelson Arnold; both later became founding members of the Providence Art Club. Bannister also formed a temporary painting partnership with Asa R. Lewis that lasted from 1868 to 1869. During that partnership of "Bannister & Lewis", Bannister began to advertise himself as both a portrait and landscape painter.
Despite his early commissions, Bannister still struggled to receive wider recognition for his work due to racism in the US. Following emancipation and the end of the US Civil War, the abolitionists began to disperse and, with them, their patronage. Due to increasing competition, Bannister did little to support Primus, who had come to him seeking an apprenticeship. An article in the New York Herald belittled both Bannister and his work: "The negro has an appreciation for art while being manifestly unable to produce it." The article reportedly spurred his desire to achieve success as an artist. At the same time, Bannister had begun to receive more recognition within Boston art circles.
Supported by Carteaux, Bannister became a full-time painter in 1870, shortly after they moved to Providence, Rhode Island, at the end of 1869. He first took a studio in the Mercantile National Bank Building then moved to the Woods Building in Providence, where he shared a floor with artists like Sydney Burleigh and became friends with Providence painter George William Whitaker. He painted more landscapes over time—receiving an 1872 award at the Rhode Island Industrial Exposition for Summer Afternoon—and began submitting paintings to the Boston Art Club.
Bannister received national commendation for his work when he won first prize for his large oil Under the Oaks at the 1876 Philadelphia Centennial. Even then, the judge wanted to rescind the award after learning his identity until other exhibition artists protested; afterwards, Bannister reflected: "I was and am proud to know that the jury of award did not know anything about me, my antecedents, color or race. There was no sentimental sympathy leading to the award of the medal." Bannister had intentionally submitted his painting with only a signature attached to ensure he would be judged fairly. As his career matured, he received more commissions and accumulated many honors, several from the Massachusetts Charitable Mechanics Association (silver medals in 1881 and 1884). Collectors and local notables Isaac Comstock Bates and Joseph Ely were among his patrons.
In 1880 Bannister joined with other professional artists, amateurs, and art collectors to found the Providence Art Club to stimulate the appreciation of art in the community. Their first meeting was in Bannister's studio in the Woods Building at the bottom of College Hill. He was the second to sign the club's charter, served on its initial executive board, and taught regular Saturday art classes. He continued to show paintings at Boston Art Club exhibitions, as well as in Connecticut and at New York's National Academy of Design, and exhibited A New England Hillside at the New Orleans Cotton Exposition in 1885. There, Bannister's work was segregated and ignored by the judging committees. With that experience in mind, Bannister decided not to submit any works to the 1893 World's Columbian Exposition since they would have to be pre-judged in Boston before they could even be sent to Chicago.
In the 1880s Bannister bought a small sloop, the Fanchon, and spent summers sketching, painting watercolors, and sailing Narragansett Bay and up to Bar Harbor in Maine. He would return with his studies and use them as the basis for winter commissions. He supplemented his sailing trips with journeys to exhibitions in New York, but a planned trip to Europe fell through due to lack of money.
In 1885, with other art club members, Bannister helped found the Anne Eliza Club (or "A&E Club")—a communal men's discussion group named after the waitress at the Providence Art Club. Through his teaching there and at the Providence Art Club, he became a mentor to younger Providence artists, like Charles Walter Stetson. Stetson often mentioned Bannister in his personal diaries and once praised him by writing, "He is my only confidant in Art matters & I am his." Rhode Island engineer George Henry Corliss commissioned a painting from Bannister in 1886, as his reputation grew.
Bannister and Carteaux were consistent members of the African American community in Providence. They lived for a time in the boarding house of Ransom Parker, who had participated in the Dorr Rebellion, and were friends with merchant George Henry, Reverend Mahlon Van Horne, Brown graduate John Hope, and abolitionist George T. Downing, an ally from the Bannisters' political work in Boston. Carteaux founded the Home for Aged Colored Women, which is known as the Bannister Center today. Edward exhibited his painting Christ Healing the Sick in the home in 1892 and donated his portrait of Carteaux to it as well. Although he was a respected member of the Providence Art Club, Bannister's abolitionism likely led to conflict with its mostly white members, who exhibited art with minstrel stereotypes by E. W. Kemble and W. L. Shephard in 1887 and 1893.
Around 1890, Bannister sold the Fanchon to Judge George Newman Bliss. His largest exhibition of works was held in 1891, when he showed 33 works at the Spring Providence Art Club Exhibition. Later in the 1890s, Bannister seems to have sold fewer paintings, perhaps due to waning popularity, and exhibited less often. In 1898 Bannister closed his studio and the couple moved to Boston for a year before returning to a smaller home on Wilson Street, Providence, in 1900.
Bannister died of a heart attack on January 9, 1901, while attending an evening prayer meeting at his church, Elmwood Avenue Free Baptist Church. He had experienced heart trouble for some time but had completed two paintings only the previous day. During the service, he offered a prayer and shortly after sat down, gasping. His last words were reportedly "Jesus, help me".
After his death the Providence Art Club held a memorial exhibition in his name that focused on his artistic achievements, without mentioning his contribution to abolitionism. In the exhibition pamphlet, they wrote: "His gentle disposition, his urbanity of manner, and his generous appreciation of the work of others, made him a welcome guest in all artistic circles. [...] He painted with profound feeling, not for pecuniary results, but to leave upon the canvas his impression of natural scenery, and to express his delight in the wondrous beauty of land and sea and sky."
He is buried in the North Burial Ground in Providence, under a stone monument designed by his art club friends. The disparity between Bannister's financial difficulties at the end of his life and the support shown by Providence's artists after his death led his friend John Nelson Arnold to say about the memorial: "In the labor incident to this work I was constantly reminded of the remark attributed to the mother of Robert Burns on being shown the splendid monument erected to the memory of her gifted son: 'He asked for bread and they gave him a stone.'"
Carteaux was admitted to her Home for Aged Colored Women in September 1902; she died in 1903 in a state mental institution in Cranston. She and Bannister are buried together.
The young Bannister advertised himself as a portraitist, but later became popular for his landscapes and seascapes. Drawing on his knowledge of poetry, classics, and English literature as an autodidact, he also painted biblical, mythological, and genre scenes. Much like George Inness, his work reflected the composition, mood, and influences of French Barbizon painters Jean-Baptiste-Camille Corot, Jean-François Millet, and Charles-François Daubigny. Defending Millet in The Artist and His Critics, Bannister saw him as the most "spiritual artist of our time" who voiced "the sad, uncomplaining life he saw about him—and with which he sympathized so deeply."
Historian Joseph Skerrett has noted the influence of the Hudson River School on Bannister, while maintaining that he consistently experimented throughout his career: "Bannister managed to please a conservative New England taste in art while continuing to try new methods and styles." For their mutual affinity with the Hudson River School, Bannister has been compared to his contemporary, the Ohio-based African American painter Robert S. Duncanson. Unlike Hudson River School artists, Bannister did not create meticulous landscapes but paid more attention to creating "massive but revealing shapes of trees and mountains" and works more picturesque than sublime. Bannister also avoided the "nationalist grandeur" often found in Hudson River School paintings.
Bannister often made pencil or pastel studies in preparation for larger oil paintings. Several of his compositions refer to classical, mathematical methods like the Golden Ratio or "Harmonic Grid", and make careful use of symmetry and asymmetry. In other paintings, his contrast of darks and lights create dynamic diagonals or circles that divide the composition. His paintings are known for their delicate use of color to depict shadow and atmosphere and their loose brushwork. His later palette exhibited lighter, more muted colors: the Boston Common scene he painted late in his life is a notable example. This change in style stands in contrast to his earlier stated disapproval of Impressionist painting.
Art historian Traci Lee Costa has argued that a "reductive" emphasis on Bannister's biography has taken attention away from scholarly analysis of his artwork. In the lecture The Artist and His Critics given to the Anne Eliza Club on April 15, 1886, and published afterward, Bannister spelled out his belief that making art is a highly spiritual practice—the pinnacle of human achievement. In its nearly religious approach and focus on subjective representations of nature, Bannister's philosophy has been compared to both German Idealism and American Transcendentalism. In his lecture, Bannister referenced the works of American Transcendentalist Washington Allston. Bannister's friend George W. Whitaker referred to him as "The Idealist" in a 1914 article "Reminiscences of Providence Artists". The lecture and its idealistic view are linked to Bannister's Approaching Storm (see right), which he completed in the same year. Approaching Storm features a human figure at its center, which is nonetheless rendered small by the surrounding landscape. Despite the implied drama, Bannister used a cool color palette of blues and greens, with contrasting yellows that provide warmth against the darker, almost purple sky. The contrast of melancholy elements against more cheerful pastoral themes appears in many of Bannister's paintings.
Although committed to freedom and equal rights for African Americans, Bannister did not often directly represent those issues in his paintings. Unlike French Barbizon scenes, the farms that Bannister painted were reminders of southern Rhode Island's history of chattel slavery. In Hay Gatherers, Bannister depicts African American field laborers in a rural landscape. Unlike Bannister's idyllic pastorals, Hay Gatherers represents racial oppression and labor exploitation in Rhode Island, particularly South County, where most of the state's plantations were located. The women workers are separated from the field of wildflowers at the painting's lower left and from other field workers in the background by stands of trees, suggesting their closeness to freedom even while they remain within the grasp of plantation labor. Through the geometric composition of Hay Gatherers, which divides the figures and the landscape into triangular sections, Bannister combined his work on seemingly idealized landscapes with his earlier political art, visible in his humanist portraits such as Newspaper Boy. Bannister's Fort Dumpling, Jamestown, Rhode Island uses a similar triangular composition, whereby people relaxing are juxtaposed against but separated from sailboats in the background, a reminder of the "maritime legacy of slavery".
Bannister often conveyed political meaning in his paintings through allegory and allusion. One of his first commissions, The Ship Outward Bound, might have been a veiled reference to the forced return of Anthony Burns to slavery in Virginia in 1854 under the Fugitive Slave Act of 1850. In African American culture, an image of a ship leaving harbor was a reminder of the Transatlantic Slave Trade. Bannister's 1885 drawing The Woodsman is thought to be his response to the murder of Amasa Sprague, an event that spurred the abolition of capital punishment in Rhode Island after the dubious conviction and hanging of John Gordon. Similarly, his Governor Sprague's White Horse depicted the horse that William Sprague IV rode into the First Battle of Bull Run.
Bannister has been criticized for not often directly representing African Americans, outside of his early portraiture. He and artists like Henry Ossawa Tanner were deemed inauthentic during the Harlem Renaissance for producing works that appealed to white aesthetics. Many of Bannister's works were commissioned landscapes and portraits that reinforced European ideas, even though his art subtly dismantled racial stereotypes. In that way, Bannister has been compared to later Bostonian poet William Stanley Braithwaite, whose writing did not clearly reflect his identity. Bannister's work reflected his desire to excel and contribute to racial uplift, while still needing to depend on white patronage to reach a wider audience. Art historian Juanita Holland wrote of Bannister's dilemma: "This was a large part of the double bind that [Boston's] black artists faced: they needed to both address and represent an African American identity, while finding a way for their white viewers to look past race to a perception of the work in more universal terms."
Bannister was the only major African American artist of the late nineteenth century who developed his talents without European exposure; he was well known in the artistic community of Providence and admired within the wider East Coast art world. After his death, he was largely forgotten by art history for almost a century, principally due to racial prejudice. His art was often omitted from 20th-century art histories, and his style of melancholic, serene landscapes also fell out of fashion. Still, he and his paintings are an indelible part of a refigured relationship between African American culture and the landscapes of Reconstruction-era America.
Bannister's art continued to be supported by galleries like the Barnett-Aden Gallery and the Art Institute of Chicago. Following the civil rights movement in the 1960s, his work was again celebrated and widely collected. In collaboration with the Rhode Island School of Design and the Frederick Douglass Institute, the National Museum of African Art held an exhibition titled Edward Mitchell Bannister, 1828–1901: Providence Artist in 1973. The Rhode Island Heritage Hall of Fame inducted Bannister in 1976, and Rhode Island College created the Bannister Gallery in 1978 with an inaugural exhibition, Four from Providence: Bannister, Prophet, Alston, Jennings.
The New York-based Kenkebala Gallery held two exhibitions of Bannister's work, one in 1992 curated by Corrinne Jennings in collaboration with the Whitney and one in 2001 on the centennial of Bannister's death. From June 9 to October 8, 2018, the Gilbert Stuart Museum held an exhibition honoring Bannister and Carteaux's relationship, "My Greatest Successes Have Come Through Her": The Artistic Partnership of Edward and Christiana Bannister, as part of its Rhode Island Masters exhibition series. Bannister's portrait of Christiana Carteaux was the center of the exhibition.
In September 2017, a Providence City Council committee unanimously voted to rename Magee Street (which had been named after a Rhode Island slave trader) to Bannister Street, in honor of Edward and Christiana Bannister. The Providence Art Club unveiled a bronze bust of Bannister made by Providence artist Gage Prentiss in May 2021. As of 2018, art historian Anne Louise Avery is compiling the first catalogue raisonné of Bannister's work and a major biography of the artist.
In September 2023, a bronze sculpture of Bannister by artist Gage Prentiss was unveiled in Providence's Market Square. Bannister is depicted life-size, sitting on a bench.
In 1884 Bannister and Carteaux moved from the boarding house of Ransom Parker to 93 Benevolent Street, and lived there until 1899. The two-and-a-half-story wooden house was built circa 1854 by engineer Charles E. Paine and is now known as "The Vault" or "The Bannister House". Euchlin Reeves and Louise Herreshoff purchased the house in the late 1930s and renovated it to add a brick exterior, matching their next-door property so that both houses could hold their "little museum" of antiques. Herreshoff died in 1967 and the porcelain collection filling the Bannister House was donated to Washington and Lee University.
The house is now listed as contributing to College Hill's historical designation. Brown University bought the property in 1989 and used it to store refrigerators. Due to a lack of plans for its preservation and use, the Providence Preservation Society put the Bannister House on its 2001 list of most endangered buildings in Providence. Brown University president Ruth Simmons assured historian and former Rhode Island deputy secretary of state Ray Rickman that the house would be preserved, although the university debated whether to sell the house to a third party.
Because its disrepair and long disuse made the house unsuitable for residence, Brown renovated the property in 2015 and restored it to its original appearance. It was sold in 2016 as part of the Brown to Brown Home Ownership Program—the program specifies that if the house is ever sold, it has to be sold back to the university.
10,002 | Emil Kraepelin | Emil Wilhelm Georg Magnus Kraepelin (/ˈkrɛpəlɪn/; German: [ˈeːmiːl 'kʁɛːpəliːn]; 15 February 1856 – 7 October 1926) was a German psychiatrist. H. J. Eysenck's Encyclopedia of Psychology identifies him as the founder of modern scientific psychiatry, psychopharmacology and psychiatric genetics.
Kraepelin believed the chief origin of psychiatric disease to be biological and genetic malfunction. His theories dominated psychiatry at the start of the 20th century and, despite the later psychodynamic influence of Sigmund Freud and his disciples, enjoyed a revival at century's end. While he proclaimed his own high clinical standards of gathering information "by means of expert analysis of individual cases", he also drew on reported observations of officials not trained in psychiatry.
His textbooks do not contain detailed case histories of individuals but mosaic-like compilations of typical statements and behaviors from patients with a specific diagnosis. He has been described as "a scientific manager" and "a political operator", who developed "a large-scale, clinically oriented, epidemiological research programme".
Kraepelin, whose father, Karl Wilhelm, was a former opera singer, music teacher, and later a successful storyteller, was born in 1856 in Neustrelitz, in the Duchy of Mecklenburg-Strelitz in Germany. He was first introduced to biology by his brother Karl, ten years his senior and later the director of the Zoological Museum of Hamburg.
Kraepelin began his medical studies in 1874 at the University of Leipzig and completed them at the University of Würzburg (1877–78). At Leipzig, he studied neuropathology under Paul Flechsig and experimental psychology with Wilhelm Wundt. Kraepelin became a disciple of Wundt and had a lifelong interest in experimental psychology based on Wundt's theories. While there, Kraepelin wrote a prize-winning essay, "The Influence of Acute Illness in the Causation of Mental Disorders".
At Würzburg he completed his Rigorosum (roughly equivalent to an MBBS viva-voce examination) in March 1878, his Staatsexamen (licensing examination) in July 1878, and his Approbation (his license to practice medicine; roughly equivalent to an MBBS) on 9 August 1878. From August 1878 to 1882, he worked with Bernhard von Gudden at the University of Munich.
Returning to the University of Leipzig in February 1882, he worked in Wilhelm Heinrich Erb's neurology clinic and in Wundt's psychopharmacology laboratory. He completed his habilitation thesis at Leipzig; it was entitled "The Place of Psychology in Psychiatry". On 3 December 1883 he completed his Umhabilitation (the procedure by which his habilitation was recognized at another university) at Munich.
Kraepelin's major work, Compendium der Psychiatrie: Zum Gebrauche für Studirende und Aerzte (Compendium of Psychiatry: For the Use of Students and Physicians), was first published in 1883 and was expanded in subsequent multivolume editions to Ein Lehrbuch der Psychiatrie (A Textbook: Foundations of Psychiatry and Neuroscience). In it, he argued that psychiatry was a branch of medical science and should be investigated by observation and experimentation like the other natural sciences. He called for research into the physical causes of mental illness, and started to establish the foundations of the modern classification system for mental disorders. Kraepelin proposed that by studying case histories and identifying specific disorders, the progression of mental illness could be predicted, after taking into account individual differences in personality and patient age at the onset of disease.
In 1884, he became senior physician in the Prussian provincial town of Leubus, Silesia Province, and the following year he was appointed director of the Treatment and Nursing Institute in Dresden. On 1 July 1886, at the age of 30, Kraepelin was named Professor of Psychiatry at the University of Dorpat (today the University of Tartu) in what is today Tartu, Estonia (see Burgmair et al., vol. IV). Four years later, on 5 December 1890, he became department head at the University of Heidelberg, where he remained until 1904. While at Dorpat he became the director of the 80-bed University Clinic, where he began to study and record many clinical histories in detail and "was led to consider the importance of the course of the illness with regard to the classification of mental disorders".
In 1903, Kraepelin moved to Munich to become Professor of Clinical Psychiatry at the University of Munich.
In 1908, he was elected a member of the Royal Swedish Academy of Sciences.
In 1912, at the request of the DVP (Deutscher Verein für Psychiatrie; German Association for Psychiatry), of which he was the head from 1906 to 1920, he began plans to establish a centre for research. Following a large donation from the Jewish German-American banker James Loeb, who had at one time been a patient, and promises of support from "patrons of science", the German Institute for Psychiatric Research was founded in 1917 in Munich. Initially housed in existing hospital buildings, it was maintained by further donations from Loeb and his relatives. In 1924 it came under the auspices of the Kaiser Wilhelm Society for the Advancement of Science. The German-American Rockefeller family's Rockefeller Foundation made a large donation enabling the development of a new dedicated building for the institute along Kraepelin's guidelines, which was officially opened in 1928.
Kraepelin spoke out against the barbarous treatment that was prevalent in the psychiatric asylums of the time, and crusaded against alcohol, capital punishment and the imprisonment rather than treatment of the insane. For the sedation of agitated patients, Kraepelin recommended potassium bromide. He rejected psychoanalytical theories that posited innate or early sexuality as the cause of mental illness, and he rejected philosophical speculation as unscientific. He focused on collecting clinical data and was particularly interested in neuropathology (e.g., diseased tissue).
In the later period of his career, as a convinced champion of social Darwinism, he actively promoted a policy and research agenda in racial hygiene and eugenics.
Kraepelin retired from teaching at the age of 66, spending his remaining years establishing the institute. The ninth and final edition of his Textbook was published in 1927, shortly after his death. It comprised four volumes and was ten times larger than the first edition of 1883.
In the last years of his life, Kraepelin was preoccupied with Buddhist teachings and was planning to visit Buddhist shrines at the time of his death, according to his daughter, Antonie Schmidt-Kraepelin.
Kraepelin announced that he had found a new way of looking at mental illness, referring to the traditional view as "symptomatic" and to his view as "clinical". This turned out to be his paradigm-setting synthesis of the hundreds of mental disorders classified by the 19th century, grouping diseases together based on classification of syndrome—common patterns of symptoms over time—rather than by simple similarity of major symptoms in the manner of his predecessors.
Kraepelin described his work in the 5th edition of his textbook as a "decisive step from a symptomatic to a clinical view of insanity. . . . The importance of external clinical signs has . . . been subordinated to consideration of the conditions of origin, the course, and the terminus which result from individual disorders. Thus, all purely symptomatic categories have disappeared from the nosology".
Kraepelin is specifically credited with the classification of what was previously considered to be a unitary concept of psychosis into two distinct forms (known as the Kraepelinian dichotomy): dementia praecox and manic depression.
Drawing on his long-term research, and using the criteria of course, outcome and prognosis, he developed the concept of dementia praecox, which he defined as the "sub-acute development of a peculiar simple condition of mental weakness occurring at a youthful age". When he first introduced this concept as a diagnostic entity in the fourth German edition of his Lehrbuch der Psychiatrie in 1893, it was placed among the degenerative disorders alongside, but separate from, catatonia and dementia paranoides. At that time, the concept corresponded by and large with Ewald Hecker's hebephrenia. In the sixth edition of the Lehrbuch in 1899 all three of these clinical types are treated as different expressions of one disease, dementia praecox.
One of the cardinal principles of his method was the recognition that any given symptom may appear in virtually any one of these disorders; e.g., there is almost no single symptom occurring in dementia praecox which cannot sometimes be found in manic depression. What distinguishes each disease symptomatically (as opposed to the underlying pathology) is not any particular (pathognomonic) symptom or symptoms, but a specific pattern of symptoms. In the absence of a direct physiological or genetic test or marker for each disease, it is only possible to distinguish them by their specific pattern of symptoms. Thus, Kraepelin's system is a method for pattern recognition, not grouping by common symptoms.
It has been claimed that Kraepelin also demonstrated specific patterns in the genetics of these disorders and patterns in their course and outcome, but no specific biomarkers have yet been identified. Generally speaking, there tend to be more people with schizophrenia among the relatives of schizophrenic patients than in the general population, while manic depression is more frequent in the relatives of manic depressives. Though, of course, this does not demonstrate genetic linkage, as the familial pattern might reflect socio-environmental factors as well.
He also reported a pattern to the course and outcome of these conditions. Kraepelin believed that schizophrenia had a deteriorating course in which mental function continuously (although perhaps erratically) declines, while manic-depressive patients experienced a course of illness which was intermittent, where patients were relatively symptom-free during the intervals which separate acute episodes. This led Kraepelin to name what we now know as schizophrenia, dementia praecox (the dementia part signifying the irreversible mental decline). It later became clear that dementia praecox did not necessarily lead to mental decline and was thus renamed schizophrenia by Eugen Bleuler to correct Kraepelin's misnomer.
In addition, as Kraepelin accepted in 1920, "It is becoming increasingly obvious that we cannot satisfactorily distinguish these two diseases"; however, he maintained that "On the one hand we find those patients with irreversible dementia and severe cortical lesions. On the other are those patients whose personality remains intact". Nevertheless, overlap between the diagnoses and neurological abnormalities (when found) have continued, and in fact a diagnostic category of schizoaffective disorder would be brought in to cover the intermediate cases.
Kraepelin devoted very few pages to his speculations about the etiology of his two major insanities, dementia praecox and manic-depressive insanity. However, from 1896 to his death in 1926 he held to the speculation that these insanities (particularly dementia praecox) would one day probably be found to be caused by a gradual systemic or "whole body" disease process, probably metabolic, which affected many of the organs and nerves in the body but affected the brain in a final, decisive cascade.
In the first through sixth editions of Kraepelin's influential psychiatry textbook, there was a section on moral insanity, which then meant a disorder of the emotions or moral sense without apparent delusions or hallucinations, and which Kraepelin defined as "lack or weakness of those sentiments which counter the ruthless satisfaction of egotism". He attributed this mainly to degeneration. This has been described as a psychiatric redefinition of Cesare Lombroso's theories of the "born criminal", conceptualised as a "moral defect", though Kraepelin stressed it was not yet possible to recognise them by physical characteristics.
In fact from 1904 Kraepelin changed the section heading to "The born criminal", moving it from under "Congenital feeble-mindedness" to a new chapter on "Psychopathic personalities". They were treated under a theory of degeneration. Four types were distinguished: born criminals (inborn delinquents), pathological liars, querulous persons, and Triebmenschen (persons driven by a basic compulsion, including vagabonds, spendthrifts, and dipsomaniacs).
The concept of "psychopathic inferiorities" had been recently popularised in Germany by Julius Ludwig August Koch, who proposed congenital and acquired types. Kraepelin had no evidence or explanation suggesting a congenital cause, and his assumption therefore appears to have been simple "biologism". Others, such as Gustav Aschaffenburg, argued for a varying combination of causes. Kraepelin's assumption of a moral defect rather than a positive drive towards crime has also been questioned, as it implies that the moral sense is somehow inborn and unvarying, yet it was known to vary by time and place, and Kraepelin never considered that the moral sense might just be different.
Kurt Schneider criticized Kraepelin's nosology on topics such as Haltlose for appearing to be a list of behaviors that he considered undesirable, rather than medical conditions, though Schneider's alternative version has also been criticized on the same basis. Nevertheless, many essentials of these early schemes were carried over into modern diagnostic systems, and remarkable similarities remain in the DSM-5 and ICD-10. The issues would today mainly be considered under the category of personality disorders, or in terms of Kraepelin's focus on psychopathy.
Kraepelin had referred to psychopathic conditions (or "states") in his 1896 edition, including compulsive insanity, impulsive insanity, homosexuality, and mood disturbances. From 1904, however, he instead termed those "original disease conditions", and introduced the new alternative category of psychopathic personalities. In the eighth edition from 1909 that category would include, in addition to a separate "dissocial" type, the excitable, the unstable, the Triebmenschen (driven persons), eccentrics, the liars and swindlers, and the quarrelsome. It has been described as remarkable that Kraepelin now considered mood disturbances to be not part of the same category, but only attenuated (milder) phases of manic-depressive illness; this corresponds to current classification schemes.
Kraepelin postulated that there is a specific brain or other biological pathology underlying each of the major psychiatric disorders. As a colleague of Alois Alzheimer, he was a co-discoverer of Alzheimer's disease, and his laboratory discovered its pathological basis. Kraepelin was confident that it would someday be possible to identify the pathological basis of each of the major psychiatric disorders.
Upon moving to become Professor of Clinical Psychiatry at the University of Munich in 1903, Kraepelin increasingly wrote on social policy issues. He was a strong and influential proponent of eugenics and racial hygiene. His publications included a focus on alcoholism, crime, degeneration and hysteria.
Kraepelin was convinced that such institutions as the education system and the welfare state, because of their tendency to interrupt the processes of natural selection, undermined the Germans' biological "struggle for survival". He was concerned to preserve and enhance the German people, the Volk, in the sense of nation or race. He appears to have held Lamarckian concepts of evolution, such that cultural deterioration could be inherited. He was a strong ally and promoter of the work of fellow psychiatrist (and pupil and later successor as director of the clinic) Ernst Rüdin to clarify the mechanisms of genetic inheritance so as to make a so-called "empirical genetic prognosis".
Martin Brune has pointed out that Kraepelin and Rüdin also appear to have been ardent advocates of a self-domestication theory, a version of social Darwinism which held that modern culture was not allowing people to be weeded out, resulting in more mental disorder and deterioration of the gene pool. Kraepelin saw a number of "symptoms" of this, such as "weakening of viability and resistance, decreasing fertility, proletarianisation, and moral damage due to 'penning up people' [Zusammenpferchung]". He also wrote that "the number of idiots, epileptics, psychopaths, criminals, prostitutes, and tramps who descend from alcoholic and syphilitic parents, and who transfer their inferiority to their offspring, is incalculable". He felt that "the well-known example of the Jews, with their strong disposition towards nervous and mental disorders, teaches us that their extraordinarily advanced domestication may eventually imprint clear marks on the race". Brune states that Kraepelin's nosological system "was, to a great deal, built on the degeneration paradigm".
Kraepelin's great contribution in classifying schizophrenia and manic depression remains relatively unknown to the general public, and his work, which had neither the literary quality nor paradigmatic power of Freud's, is little read outside scholarly circles. Kraepelin's contributions were also to a large extent marginalized throughout a good part of the 20th century during the ascendancy of Freudian etiological theories. However, his views now dominate many quarters of psychiatric research and academic psychiatry. His fundamental theories on the diagnosis of psychiatric disorders form the basis of the major diagnostic systems in use today, especially the American Psychiatric Association's DSM-IV and the World Health Organization's ICD system, which built on the Research Diagnostic Criteria and the earlier Feighner Criteria developed by avowed "neo-Kraepelinians", though Robert Spitzer and others in the DSM committees were keen not to include assumptions about causation as Kraepelin had.
Kraepelin has been described as a "scientific manager" and "political operator" who developed a large-scale, clinically oriented, epidemiological research programme. In this role he took in clinical information from a wide range of sources and networks. Despite proclaiming high clinical standards for himself, gathering information "by means of expert analysis of individual cases", he would also draw on the reported observations of officials not trained in psychiatry. The various editions of his textbooks do not contain detailed case histories of individuals, however, but mosaic-like compilations of typical statements and behaviors from patients with a specific diagnosis.
Kraepelin wrote in a knapp und klar (concise and clear) style that made his books useful tools for physicians. Abridged and clumsy English translations of the sixth and seventh editions of his textbook in 1902 and 1907 (respectively) by Allan Ross Diefendorf (1871–1943), an assistant physician at the Connecticut Hospital for the Insane at Middletown, inadequately conveyed the literary quality of his writings that made them so valuable to practitioners.
Among the doctors trained by Alois Alzheimer and Emil Kraepelin at Munich at the beginning of the 20th century were the Spanish neuropathologists and neuropsychiatrists Nicolás Achúcarro and Gonzalo Rodríguez Lafora, two distinguished disciples of Santiago Ramón y Cajal and members of the Spanish Neurological School.
In the Heidelberg and early Munich years he edited Psychologische Arbeiten, a journal on experimental psychology. One of his own famous contributions to this journal also appeared in the form of a monograph (105 pp.) entitled Über Sprachstörungen im Traume (On Language Disturbances in Dreams). On the basis of the dream-psychosis analogy, Kraepelin studied language disorder in dreams for more than 20 years as an indirect way of studying schizophasia. The dreams Kraepelin collected are mainly his own, and they lack extensive comment by the dreamer. Studying them therefore requires the full range of biographical knowledge available today on Kraepelin (see, e.g., Burgmair et al., I-IX).
For biographies of Kraepelin see:
For English translations of Kraepelin's work see:
10,003 | Evoluon | The Evoluon (51°26′37″N 5°26′49″E) was built in 1966 as a science museum by the electronics and electrical company Philips. It quickly became a landmark in Eindhoven, where Philips was headquartered at the time. The museum closed in 1989 and the building reopened as a conference centre and exhibition venue in 1998.
The building is unique due to its very futuristic design, resembling a landed flying saucer. It was designed by architects Leo de Bever and Louis Christiaan Kalff, while the exhibition it housed was conceived by James Gardner. De Bever and Kalff were given only two requirements for the design of the building: it had to be "spectacular", and it had to be possible to hold exhibitions in it.
Its concrete dome is 77 metres (253 ft) in diameter and is held in place by 169 kilometres (105 mi) of reinforcing steel bars.
In the 1960s and 1970s the Evoluon attracted large numbers of visitors due to its innovative interactive exhibitions. When competing science museums opened in other cities, the number of visitors declined and the original museum closed down in 1989. The building was converted into a conference centre which opened in 1998.
In the UK the Evoluon is chiefly remembered from Bert Haanstra's wordless short film entitled simply Evoluon, commissioned by Philips to publicise the museum, and shown as a trade test colour film on BBC television from 1968 to 1972.
In October 2013 the Evoluon was used to stage four 3D-concerts by the German electronic band Kraftwerk, each before an audience of 1,200 spectators. Key band member Ralf Hütter handpicked the venue for its retro-futuristic look. Bespoke 3D-visuals of the saucer section of the building descending from space were used in the live rendition of their track Spacelab.
On September 24, 2022, the Evoluon reopened to the public with the RetroFuture exhibition.
Media related to Evoluon at Wikimedia Commons
10,004 | Educational essentialism | Educational essentialism is an educational philosophy whose adherents believe that children should learn the traditional basic subjects thoroughly. In this philosophical school of thought, the aim is to instill students with the "essentials" of academic knowledge, enacting a back-to-basics approach. Essentialism ensures that the accumulated wisdom of our civilization as taught in the traditional academic disciplines is passed on from teacher to student. Such disciplines might include Reading, Writing, Literature, Foreign Languages, History, Mathematics, Classical Languages, Science, Art, and Music. Moreover, this traditional approach is meant to train the mind, promote reasoning, and ensure a common culture.
Essentialism is a relatively conservative stance to education that strives to teach students the knowledge of a society and civilization through a core curriculum. This core curriculum involves such areas as the study of the surrounding environment, basic natural laws, and the disciplines that promote a happier, more educated living. Non-traditional areas are also integrated, in moderation, to balance the education. Essentialists' goals are to instill students with the "essentials" of academic knowledge, patriotism, and character development through traditional (or back-to-basics) approaches. This is to promote reasoning, train the mind, and ensure a common culture for all citizens.
Essentialism is the most typically enacted philosophy in American classrooms today. Traces of this can be found in the organized learning centered on teachers and textbooks, in addition to the regular assignments and evaluations.
The role of the teacher as the leader of the classroom is a very important tenet of Educational essentialism. The teacher is the center of the classroom, so they should be rigid and disciplinary. Establishing order in the classroom is crucial for student learning; effective teaching cannot take place in a loud and disorganized environment. It is the teacher's responsibility to keep order in the classroom. The teacher must interpret essentials of the learning process, take the leadership position and set the tone of the classroom. These needs require an educator who is academically well-qualified with an appreciation for learning and development. The teacher must control the students with distributions of rewards and penalties. It has been argued that recent teacher education policies in some countries extend essentialism to teacher education policy frameworks.
The Essentialist movement began in the United States in 1938. In Atlantic City, New Jersey, a group calling itself "The Essentialist's Committee for the Advancement of Education" met for the first time. Its emphasis was on reforming the educational system into a rationality-based one.
The term essentialist first appeared in the book An Introduction to the Philosophy of Education, which was written by Michael John Demiashkevich. In his book, Demiashkevich labels some specific educators (including William C. Bagley) as "essentialists". Demiashkevich compared the essentialists to the different viewpoints of the Progressive Education Association. He described how the Progressives preached a "hedonistic doctrine of change" whereas the essentialists stressed the moral responsibility of man for his actions and looked toward permanent principles of behavior (Demiashkevich likened the arguments to those between the Socratics and the Sophists in Greek philosophy). In 1938 Bagley and other educators met together, where Bagley gave a speech detailing the main points of the essentialism movement and attacking public education in the United States. One point that Bagley noted was that students in the U.S. were not getting an education on the same levels as students in Europe who were the same age.
A more recent branch has emerged within the essentialist school of thought, called "neoessentialism". Emerging in the eighties as a response to the essentialist ideals of the thirties, as well as to the criticism of the fifties and the advocates for education in the seventies, neoessentialism was created to try to address the problems facing the United States at the time. The most notable change within this school of thought is that it called for the creation of a new discipline, computer science.
William Bagley (1874–1946) was an important historical essentialist. William C. Bagley completed his undergraduate degree at Michigan Agricultural College in 1895. It was not until after finishing his undergraduate studies that he decided he truly wanted to be a teacher. Bagley did his graduate studies at the University of Chicago and at Cornell University. He acquired his Ph.D. in 1900, after which he took his first school job as a principal at an elementary school in St. Louis, Missouri. Bagley's devotion increased during his work at Montana State Normal School in Dillon, Montana. It was there that he decided to dedicate his time to the education of teachers and where he published The Educative Process, launching his name across the nation. Throughout his career Bagley argued against the conservative position that teachers were not in need of special training for their work. He believed that liberal arts material was important in teacher education. Bagley also believed the dominant theories of education of the time were weak and lacking.
In April 1938, he published the Essentialist's Platform, in which he outlined three major points of essentialism. He described the right of students to a well-educated and culturally knowledgeable teacher. Secondly, he discussed the importance of teaching the ideals of community to each group of students. Lastly, Bagley wrote of the importance of accuracy, thoroughness and effort on the part of the student in the classroom.
Another important essentialist is E. D. Hirsch (born 1928). Hirsch was Founder and Chairman of the Core Knowledge Foundation and authored several books concerning fact-based approaches to education. Now retired, he spent many years teaching at the University of Virginia while also being an advocate for the "back to basics" movement. In his most popular book, Cultural Literacy: What Every American Needs to Know, he offers lists, quotations, and information regarding what he believes is essential knowledge.
See also Arthur Bestor.
The Core Knowledge Schools were founded on the philosophy of essentialist E.D. Hirsch. Although it is difficult to maintain a pure and strict essentialist-only curriculum, these schools have the central aim of establishing a common knowledge base for all citizens. To do so, they follow a nationwide, content-specific, and teacher-centered curriculum. The Core Knowledge curriculum also allows for local variance above and beyond the core curriculum. Central curricular aims are academic excellence and the learning of knowledge, and teachers who are masters of their knowledge areas serve this aim.
Because Essentialism is largely teacher-centered, the role of the student is often called into question. Presumably, in an essentialist classroom, the teacher is the one designing the curriculum for the students based upon the core disciplines. Moreover, he or she is enacting the curriculum and setting the standards which the students must meet. The teacher's evaluative role may undermine students' interest in study. As a result, the students begin to take on more of a passive role in their education as they are forced to meet and learn such standards and information.
Furthermore, there is also speculation that an essentialist education helps promote cultural lag. This philosophy of education is very traditional in the mindset of passing on the knowledge of the culture via the academic disciplines. Thus, students are forced to think in the mindset of the larger culture, and individual creativity and subversive investigation are often not emphasized, or are even outright discouraged.
10,005 | Progressive education | Progressive education, or educational progressivism, is a pedagogical movement that began in the late 19th century and has persisted in various forms to the present. In Europe, progressive education took the form of the New Education Movement. The term progressive was used to distinguish this education from the traditional curricula of the 19th century, which were rooted in classical preparation for the early-industrial university and strongly differentiated by social class. By contrast, progressive education finds its roots in modern, post-industrial experience. Most progressive education programs have these qualities in common:
Progressive education can be traced back to the works of John Locke and Jean-Jacques Rousseau, both of whom are known as forerunners of ideas that would be developed by theorists such as John Dewey. Considered one of the first of the British empiricists, Locke believed that "truth and knowledge… arise out of observation and experience rather than manipulation of accepted or given ideas". He further discussed the need for children to have concrete experiences in order to learn. Rousseau deepened this line of thinking in Emile, or On Education, where he argued that subordination of students to teachers and memorization of facts would not lead to an education.
In Germany, Johann Bernhard Basedow (1724–1790) established the Philanthropinum at Dessau in 1774. He developed new teaching methods based on conversation and play with the child, and a program of physical development. Such was his success that he wrote a treatise on his methods, "On the best and hitherto unknown method of teaching children of noblemen".
Christian Gotthilf Salzmann (1744–1811) was the founder of the Schnepfenthal institution, a school dedicated to new modes of education (derived heavily from the ideas of Jean-Jacques Rousseau). He wrote Elements of Morality, for the Use of Children, one of the first books translated into English by Mary Wollstonecraft.
Johann Heinrich Pestalozzi (1746–1827) was a Swiss pedagogue and educational reformer who exemplified Romanticism in his approach. He founded several educational institutions in both the German- and French-speaking regions of Switzerland and wrote many works explaining his revolutionary modern principles of education. His motto was "Learning by head, hand and heart". His research and theories closely resemble those outlined by Rousseau in Emile. He is further considered by many to be the "father of modern educational science". His psychological theories pertain to education as they focus on the development of object teaching; that is, he felt that individuals best learned through experiences and through a direct manipulation and experience of objects. He further speculated that children learn through their own internal motivation rather than through compulsion (see intrinsic vs. extrinsic motivation). A teacher's task is to help guide their students as individuals through their learning and allow it to unfold naturally.
Friedrich Wilhelm August Fröbel (1782–1852) was a student of Pestalozzi who laid the foundation for modern education based on the recognition that children have unique needs and capabilities. He believed in "self-activity" and play as essential factors in child education. The teacher's role was not to indoctrinate but to encourage self-expression through play, both individually and in group activities. He created the concept of kindergarten.
Johann Friedrich Herbart (1776–1841) emphasized the connection between individual development and the resulting societal contribution. The five key ideas which composed his concept of individual maturation were Inner Freedom, Perfection, Benevolence, Justice, and Equity or Recompense. According to Herbart, abilities were not innate but could be instilled, so a thorough education could provide the framework for moral and intellectual development. In order to develop a child to lead to a consciousness of social responsibility, Herbart advocated that teachers utilize a methodology with five formal steps: "Using this structure a teacher prepared a topic of interest to the children, presented that topic, and questioned them inductively, so that they reached new knowledge based on what they had already known, looked back, and deductively summed up the lesson's achievements, then related them to moral precepts for daily living".
John Melchior Bosco (1815–1888) was concerned about the education of street children who had left their villages to find work in the rapidly industrialized city of Turin, Italy. Exploited as cheap labor or imprisoned for unruly behavior, Bosco saw the need for creating a space where they would feel at home. He called it an 'Oratory' where they could play, learn, share friendships, express themselves, develop their creative talents and pick up skills for gainful self-employment. With those who had found work, he set up a mutual-fund society (an early version of the Grameen Bank) to teach them the benefits of saving and self-reliance. The principles underlying his educational method that won over the hearts and minds of thousands of youth who flocked to his oratory were: 'be reasonable', 'be kind', 'believe' and 'be generous in service'. Today his method of education is practiced in nearly 3000 institutions set up around the world by the members of the Salesian Society he founded in 1873.
While studying for his doctorate in Göttingen in 1882–1883, Cecil Reddie was greatly impressed by the progressive educational theories being applied there. Reddie founded Abbotsholme School in Derbyshire, England, in 1889, and its curriculum enacted the ideas of progressive education. Reddie rejected rote learning, classical languages and corporal punishment. He combined studies in modern languages and the sciences and arts with a program of physical exercise, manual labour, recreation, crafts and arts. Schools modeling themselves on Abbotsholme were established throughout Europe, and the model was particularly influential in Germany. Reddie often engaged foreign teachers, who learned the school's practices before returning home to start their own schools. Hermann Lietz, an Abbotsholme teacher, founded five schools (Landerziehungsheime für Jungen) on Abbotsholme's principles. Other people Reddie influenced included Kurt Hahn, Adolphe Ferrière and Edmond Demolins. His ideas also reached Japan, where they gave rise to the "Taisho-era Free Education Movement" (Taisho Jiyu Kyoiku Undo).
Education according to John Dewey is the "participation of the individual in the social consciousness of the race" (Dewey, 1897, para. 1). As such, education should take into account that the student is a social being. The process begins at birth with the child unconsciously gaining knowledge and gradually developing their knowledge to share and partake in society.
For Dewey, education, which regulates "the process of coming to share in the social consciousness," is the "only sure" method of ensuring social progress and reform (Dewey, 1897, para. 60). In this respect, Dewey foreshadows Social Reconstructionism, whereby schools are a means to reconstruct society. As schools become a means for social reconstruction, they must be given the proper equipment to perform this task and guide their students.
The American teacher Helen Parkhurst (1886–1973) developed the Dalton Plan at the beginning of the twentieth century with the goal of reforming the then-current pedagogy and classroom management. She wanted to break with teacher-centered, lockstep teaching. During her first experiment, which she implemented in a small elementary school as a young teacher in 1904, she noticed that when students are given freedom for self-direction and self-pacing and to help one another, their motivation increases considerably and they learn more. In a later experiment in 1911 and 1912, Parkhurst re-organized the education in a large school for nine- to fourteen-year-olds. Each subject, rather than each grade, was assigned its own teacher and its own classroom. The subject teachers made assignments: they converted the subject matter for each grade into learning assignments. In this way, learning became the students' own work; they could carry out their work independently, work at their own pace and plan their work themselves. The classroom turned into a laboratory, a place where students worked, furnished and equipped as a work space tailored to meet the requirements of specific subjects. Useful and attractive learning materials, instruments and reference books were put within the students' reach. The benches were replaced by large tables to facilitate co-operation and group instruction. This second experiment formed the basis for the next experiments, those in Dalton and New York, from 1919 onwards. The only addition was the use of graphs: charts enabling students to keep track of their own progress in each subject.
In the nineteen-twenties and nineteen-thirties, Dalton education spread throughout the world. There is no certainty regarding the exact numbers of Dalton schools, but there was Dalton education in America, Australia, England, Germany, the Netherlands, the Soviet Union, India, China and Japan.
Rudolf Steiner (1869–1925) first described the principles of what was to become Waldorf education in 1907. He established a series of schools based on these principles beginning in 1919. The focus of the education is on creating a developmentally appropriate curriculum that holistically integrates practical, artistic, social, and academic experiences. There are more than a thousand schools and many more early childhood centers worldwide; it has also become a popular form of homeschooling.
Maria Montessori (1870–1952) began to develop her philosophy and methods in 1897. She based her work on her observations of children and experimentation with the environment, materials, and lessons available to them. She frequently referred to her work as "scientific pedagogy", arguing for the need to go beyond observation and measurement of students to developing new methods to transform them. Although Montessori education spread to the United States in 1911, it came into conflict with the American educational establishment and was opposed by William Heard Kilpatrick. However, Montessori education returned to the United States in 1960 and has since spread to thousands of schools there.
In 1914 the Montessori Society in England organised its first conference. It was hosted by Rev Bertram Hawker, who, in partnership with his local elementary school in the Norfolk coastal village of East Runton, had set up the first Montessori school in England. Pictures of this school and its children illustrated Montessori's Own Handbook (1914). Hawker had been impressed by his visit to Montessori's Casa dei Bambini in Rome, and he gave numerous talks on Montessori's work after 1912, helping to generate national interest in her work. He organised the Montessori Conference 1914 in partnership with Edmond Holmes, ex-Chief Inspector of Schools, who had written a government report on Montessori. The conference decided that its remit was to promote the 'liberation of the child in the school', and that, though inspired by Montessori, it would encourage, support and network teachers and educationalists who sought that aim through their schools and methods. The society changed its name the following year to New Ideals in Education. Each subsequent conference was opened with reference to its history and origin as a Montessori conference recognising her inspiration, reports italicized the members of the Montessori Society in the delegate lists, and numerous further events included Montessori methods and case studies. Montessori, through New Ideals in Education, its committee and members, events and publications, greatly influenced progressive state education in England.
In July 1906, Ernest Thompson Seton sent Robert Baden-Powell a copy of his book The Birchbark Roll of the Woodcraft Indians. Seton was a British-born Canadian-American living in the United States. They shared ideas about youth training programs. In 1907 Baden-Powell wrote a draft called Boy Patrols. In the same year, to test his ideas, he gathered 21 boys of mixed social backgrounds and held a week-long camp in August on Brownsea Island in England. His organizational method, now known as the Patrol System and a key part of Scouting training, allowed the boys to organize themselves into small groups with an elected patrol leader. Baden-Powell then wrote Scouting for Boys (London, 1908). The Brownsea camp and the publication of Scouting for Boys are generally regarded as the start of the Scout movement which spread throughout the world. Baden-Powell and his sister Agnes Baden-Powell introduced the Girl Guides in 1910.
Traditional education uses extrinsic motivation, such as grades and prizes. Progressive education is more likely to use intrinsic motivation, basing activities on the interests of the child. Praise may be discouraged as a motivator. Progressive education is a response to traditional methods of teaching. It is defined as an educational movement which gives more value to experience than to formal learning. It is based more on experiential learning that concentrates on the development of a child's talents.
21st century skills are a series of higher-order skills, abilities, and learning dispositions that have been identified as being required for success in the rapidly changing, digital society and workplaces. Many of these skills are also defining qualities of progressive education as well as being associated with deeper learning, which is based on mastering skills such as analytic reasoning, complex problem solving, and teamwork. These skills differ from traditional academic skills in that they are not primarily content knowledge-based.
Hermann Lietz founded three Landerziehungsheime (country boarding schools) in 1904 based on Reddie's model for boys of different ages. Lietz eventually succeeded in establishing five more Landerziehungsheime. Edith and Paul Geheeb founded Odenwaldschule in Heppenheim in the Odenwald in 1910 using their concept of progressive education, which integrated the work of the head and hand.
Janusz Korczak was one notable follower and developer of Pestalozzi's ideas. He wrote: "The names of Pestalozzi, Froebel and Spencer shine with no less brilliance than the names of the greatest inventors of the twentieth century. For they discovered more than the unknown forces of nature; they discovered the unknown half of humanity: children." His Orphan's Home in Warsaw became a model institution and exerted influence on the educational process in other orphanages of the same type.
The Quaker school run in Ballitore, Co Kildare in the 18th century had students from as far away as Bordeaux (where there was a substantial Irish émigré population), the Caribbean and Norway. Notable pupils included Edmund Burke and Napper Tandy. Sgoil Éanna, or in English St Enda's, was founded in 1908 by Pádraig Pearse on Montessori principles. Its former assistant headmaster Thomas MacDonagh and other teachers, including Pearse, the games master Con Colbert, Pearse's brother Willie (the art teacher), and Joseph Plunkett (an occasional lecturer in English), were executed by the British after the 1916 Rising. Pearse and MacDonagh were two of the seven leaders who signed the Irish Declaration of Independence. Pearse's book The Murder Machine was a denunciation of the English school system of the time and a declaration of his own educational principles.
In Sweden, an early proponent of progressive education was Alva Myrdal, who with her husband Gunnar co-wrote Kris i befolkningsfrågan (1934), a most influential program for the social-democratic hegemony (1932–1976) popularly known as "Folkhemmet". School reforms went through government reports in the 1940s and trials in the 1950s, resulting in the introduction in 1962 of public comprehensive schools ("grundskola") instead of the previously separated parallel schools for theoretical and non-theoretical education.
The ideas from Reddie's Abbotsholme spread to schools such as Bedales School (1893), King Alfred School, London (1898) and St Christopher School, Letchworth (1915), as well as all the Friends' schools, Steiner Waldorf schools and those belonging to the Round Square Conference. The King Alfred School was radical for its time in that it provided a secular education and that boys and girls were educated together. Alexander Sutherland Neill believed children should achieve self-determination and should be encouraged to think critically rather than blindly obeying. He implemented his ideas with the founding of Summerhill School in 1921. Neill believed that children learn better when they are not compelled to attend lessons. The school was also managed democratically, with regular meetings to determine school rules. Pupils had equal voting rights with school staff.
Fröbel's student Margarethe Schurz founded the first kindergarten in the United States at Watertown, Wisconsin, in 1856, and she also inspired Elizabeth Peabody, who went on to found the first English-speaking kindergarten in the United States – the language at Schurz's kindergarten had been German, to serve an immigrant community – in Boston in 1860. This paved the way for the concept's spread in the USA. The German émigré Adolph Douai had also founded a kindergarten in Boston in 1859, but was obliged to close it after only a year. By 1866, however, he was founding others in New York City.
William Heard Kilpatrick (1871–1965) was a pupil of Dewey and one of the most effective practitioners of the concept, as well as one of the most adept at proliferating the progressive education movement and spreading word of the works of Dewey. He is especially well known for his "project method of teaching". This developed the progressive education notion that students were to be engaged and taught so that their knowledge may be directed to society for a socially useful need. Like Dewey, he also felt that students should be actively engaged in their learning rather than disengaged by the simple reading and regurgitation of material.
The most famous early practitioner of progressive education was Francis Parker; its best-known spokesperson was the philosopher John Dewey. In 1875 Francis Parker became superintendent of schools in Quincy, Massachusetts, after spending two years in Germany studying emerging educational trends on the continent. Parker was opposed to rote learning, believing that there was no value in knowledge without understanding. He argued instead schools should encourage and respect the child's creativity. Parker's Quincy System called for child-centered and experience-based learning. He replaced the traditional curriculum with integrated learning units based on core themes related to the knowledge of different disciplines. He replaced traditional readers, spellers and grammar books with children's own writing, literature, and teacher prepared materials. In 1883 Parker left Massachusetts to become Principal of the Cook County Normal School in Chicago, a school that also served to train teachers in Parker's methods. In 1894 Parker's Talks on Pedagogics, which drew heavily on the thinking of Fröbel, Pestalozzi and Herbart, became one of the first American writings on education to gain international fame.
That same year, philosopher John Dewey moved from the University of Michigan to the newly established University of Chicago where he became chair of the department of philosophy, psychology and education. He and his wife enrolled their children in Parker's school before founding their own school two years later.
Whereas Parker started with practice and then moved to theory, Dewey began with hypotheses and then devised methods and curricula to test them. By the time Dewey moved to Chicago at the age of thirty-five, he had already published two books on psychology and applied psychology. He had become dissatisfied with philosophy as pure speculation and was seeking ways to make philosophy directly relevant to practical issues. Moving away from an early interest in Hegel, Dewey proceeded to reject all forms of dualism and dichotomy in favor of a philosophy of experience as a series of unified wholes in which everything can be ultimately related.
In 1896, John Dewey opened what he called the laboratory school to test his theories and their sociological implications. With Dewey as the director and his wife as principal, the University of Chicago Laboratory School was dedicated "to discover in administration, selection of subject-matter, methods of learning, teaching, and discipline, how a school could become a cooperative community while developing in individuals their own capacities and satisfy their own needs." (Cremin, 136) For Dewey the two key goals of developing a cooperative community and developing individuals' own capacities were not at odds; they were necessary to each other. This unity of purpose lies at the heart of the progressive education philosophy. In 1912, Dewey sent out students of his philosophy to found The Park School of Buffalo and The Park School of Baltimore to put it into practice. These schools operate to this day within a similar progressive approach.
At Columbia, Dewey worked with other educators such as Charles Eliot and Abraham Flexner to help bring progressivism into the mainstream of American education. In 1917 Columbia established the Lincoln School of Teachers College "as a laboratory for the working out of an elementary and secondary curriculum which shall eliminate obsolete material and endeavor to work up in usable form material adapted to the needs of modern living." (Cremin, 282) Based on Flexner's demand that the modern curriculum "include nothing for which an affirmative case can not be made out" (Cremin, 281) the new school organized its activities around four fundamental fields: science, industry, aesthetics and civics. The Lincoln School built its curriculum around "units of work" that reorganized traditional subject matter into forms embracing the development of children and the changing needs of adult life. The first and second grades carried on a study of community life in which they actually built a city. A third grade project growing out of the day-to-day life of the nearby Hudson River became one of the most celebrated units of the school, a unit on boats, which under the guidance of its legendary teacher Miss Curtis, became an entrée into history, geography, reading, writing, arithmetic, science, art and literature. Each of the units was broadly enough conceived so that different children could concentrate on different aspects depending on their own interests and needs. Each of the units called for widely diverse student activities, and each sought to deal in depth with some critical aspect of contemporary civilization. Finally each unit engaged children working together cooperatively and also provided opportunities for individual research and exploration.
In 1924, Agnes de Lima, the lead writer on education for The New Republic and The Nation, published a collection of her articles on progressive education as a book, titled Our Enemy the Child.
In 1918, the National Education Association, representing superintendents and administrators in smaller districts across the country, issued its report "Cardinal Principles of Secondary Education." It emphasized the education of students in terms of health, a command of fundamental processes, worthy home membership, vocation, citizenship, worthy use of leisure, and ethical character. They emphasized life adjustment and reflected the social efficiency model of progressive education.
From 1919 to 1955, the Progressive Education Association founded by Stanwood Cobb and others worked to promote a more student-centered approach to education. During the Great Depression the organization conducted the Eight-Year Study, evaluating the effects of progressive programs. More than 1500 students over four years were compared to an equal number of carefully matched students at conventional schools. When they reached college, the experimental students were found to equal or surpass traditionally educated students on all outcomes: grades, extracurricular participation, dropout rates, intellectual curiosity, and resourcefulness. Moreover, the study found that the more the school departed from the traditional college preparatory program, the better was the record of the graduates. (Kohn, Schools, 232)
By mid-century, many public school programs had also adopted elements of progressive curriculum. At mid-century Dewey believed that progressive education had "not really penetrated and permeated the foundations of the educational institution." (Kohn, Schools, 6,7) As the influence of progressive pedagogy grew broader and more diffuse, practitioners began to vary their application of progressive principles. As varying interpretations and practices made progressive reforms more difficult to evaluate, critics began to propose alternative approaches.
The seeds of the debate over progressive education can be seen in the differences between Parker and Dewey. These have to do with how much and by whom curriculum should be worked out from grade to grade, how much the child's emerging interests should determine classroom activities, the importance of child-centered vs. society-centered learning, the relationship of community building to individual growth, and especially the relationship between emotion, thought and experience.
In 1955, the publication of Rudolf Flesch's Why Johnny Can't Read leveled criticism of reading programs at the progressive emphasis on reading in context. The conservative McCarthy era raised questions about the liberal ideas at the roots of the progressive reforms. The launching of Sputnik in 1957 at the height of the Cold War gave rise to a number of intellectually competitive approaches to disciplinary knowledge, such as BSCS biology and PSSC physics, led by university professors such as Jerome Bruner and Jerrold Zacharias.
Some Cold War reforms incorporated elements of progressivism. For example, the work of Zacharias and Bruner was based in the developmental psychology of Jean Piaget and incorporated many of Dewey's ideas of experiential education. Bruner's analysis of developmental psychology became the core of a pedagogical movement known as constructivism, which argues that the child is an active participant in making meaning and must be engaged in the progress of education for learning to be effective. This psychological approach has deep connections to the work of both Parker and Dewey and led to a resurgence of their ideas in the second half of the century.
In 1965, President Johnson inaugurated the Great Society and the Elementary and Secondary Education Act suffused public school programs with funds for sweeping education reforms. At the same time the influx of federal funding also gave rise to demands for accountability and the behavioral objectives approach of Robert F. Mager and others foreshadowed the No Child Left Behind Act passed in 2002. Against these critics eloquent spokespersons stepped forward in defense of the progressive tradition. The Open Classroom movement, led by Herb Kohl and George Dennison, recalled many of Parker's child centered reforms.
The late 1960s and early 1970s saw a rise and decline in the number of progressive schools. There were several reasons for the decline:
Progressive education has been viewed as an alternative to the test-oriented instruction legislated by the No Child Left Behind educational funding act. Alfie Kohn has been an outspoken critic of the No Child Left Behind Act and a passionate defender of the progressive tradition.
Rabindranath Tagore (1861–1941) was one of the most effective practitioners of the concept of progressive education. He expanded Santiniketan, which is a small town near Bolpur in the Birbhum district of West Bengal, India, approximately 160 km north of Kolkata. He de-emphasized textbook learning in favor of varied learning resources from nature. The emphasis here was on self-motivation rather than on discipline, and on fostering intellectual curiosity rather than competitive excellence. There were courses on a great variety of cultures, and study programs devoted to China, Japan, and the Middle East. He was of the view that education should be a "joyous exercise of our inventive and constructive energies that help us to build up character."
Seikatsu tsuzurikata is a grassroots movement in Japan that has many parallels to the progressive education movement, but it developed completely independently, beginning in the late 1920s. The Japanese progressive educational movement was one of the stepping stones to the modernization of Japan and it has resonated down to the present.
| Progressive education, or educational progressivism, is a pedagogical movement that began in the late 19th century and has persisted in various forms to the present. In Europe, progressive education took the form of the New Education Movement. The term progressive was engaged to distinguish this education from the traditional curricula of the 19th century, which was rooted in classical preparation for the early-industrial university and strongly differentiated by social class. By contrast, progressive education finds its roots in modern, post-industrial experience. Most progressive education programs have these qualities in common:
Emphasis on learning by doing – hands-on projects, expeditionary learning, experiential learning
Integrated curriculum focused on thematic units
Strong emphasis on problem solving and critical thinking
Group work and development of social skills
Understanding and action as the goals of learning as opposed to rote knowledge
Collaborative and cooperative learning projects
Education for social responsibility and democracy
Integration of community service and service learning projects into the daily curriculum
Selection of subject content by looking forward to ask what skills will be needed in future society
De-emphasis on textbooks in favor of varied learning resources
Emphasis on lifelong learning and social skills
Assessment by evaluation of child's projects and productions | 2001-10-30T03:40:54Z | 2023-12-16T09:04:42Z | [
"Template:Cite news",
"Template:In lang",
"Template:No footnotes",
"Template:Category see also",
"Template:Rp",
"Template:Main",
"Template:Columns-list",
"Template:Education",
"Template:Authority control",
"Template:Short description",
"Template:Progressivism",
"Template:Cite web",
"Template:Cite book",
"Template:Cite journal",
"Template:Webarchive",
"Template:ISBN",
"Template:TOC right",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Progressive_education |
10,006 | Electronic musical instrument | An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener.
An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano where the keys are each linked mechanically to swinging string hammers - whereas with an electronic keyboard, the keyboard interface is linked to a synth module, computer or other electronic or digital sound generator, which then creates a sound. However, it is increasingly common to separate user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid state nature of electronic keyboards also offers differing "feel" and "response", offering a novel experience in playing relative to operating a mechanically linked piano keyboard.
All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear.
In the 21st century, electronic musical instruments are now widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are electronic instruments (e.g., bass synth, synthesizer, drum machine). Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, such as the International Conference on New Interfaces for Musical Expression, have organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers.
In musicology, electronic musical instruments are known as electrophones. Electrophones are the fifth category of musical instrument under the Hornbostel-Sachs system. Musicologists typically classify an instrument as an electrophone only if its sound is initially produced by electricity, excluding electronically controlled acoustic instruments such as pipe organs and amplified instruments such as electric guitars.
The category was added to the Hornbostel-Sachs musical instrument classification system by Sachs in his 1940 book The History of Musical Instruments; the original 1914 version of the system did not include it. Sachs divided electrophones into three subcategories:
The last category included instruments such as theremins or synthesizers, which he called radioelectric instruments.
Francis William Galpin provided such a group in his own classification system, which is closer to Mahillon than Sachs-Hornbostel. For example, in Galpin's 1937 book A Textbook of European Musical Instruments, he lists electrophones with three second-level divisions for sound generation ("by oscillation", "electro-magnetic", and "electro-static"), as well as third-level and fourth-level categories based on the control method.
Present-day ethnomusicologists, such as Margaret Kartomi and Terry Ellingson, suggest that, in keeping with the spirit of the original Hornbostel Sachs classification scheme, if one categorizes instruments by what first produces the initial sound in the instrument, only subcategory 53 should remain in the electrophones category. Thus, it has been more recently proposed, for example, that the pipe organ (even if it uses electric key action to control solenoid valves) remain in the aerophones category, and that the electric guitar remain in the chordophones category, and so on.
In the 18th century, musicians and composers adapted a number of acoustic instruments to exploit the novelty of electricity. Thus, in the broadest sense, the first electrified musical instrument was the Denis d'or keyboard, dating from 1753, followed shortly by the clavecin électrique by the Frenchman Jean-Baptiste de Laborde in 1761. The Denis d'or consisted of a keyboard instrument of over 700 strings, electrified temporarily to enhance sonic qualities. The clavecin électrique was a keyboard instrument with plectra (picks) activated electrically. However, neither instrument used electricity as a sound source.
The first electric synthesizer was invented in 1876 by Elisha Gray. The "Musical Telegraph" was a chance by-product of his telephone technology when Gray discovered that he could control sound from a self-vibrating electromagnetic circuit and so invented a basic oscillator. The Musical Telegraph used steel reeds oscillated by electromagnets and transmitted over a telephone line. Gray also built a simple loudspeaker device into later models, which consisted of a diaphragm vibrating in a magnetic field.
A significant invention, which later had a profound effect on electronic music, was the audion in 1906. This was the first thermionic valve, or vacuum tube, which led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, among other things. Other early synthesizers included the Telharmonium (1897), the Theremin (1919), Jörg Mager's Spharophon (1924) and Partiturophone, Taubmann's similar Electronde (1933), Maurice Martenot's ondes Martenot ("Martenot waves", 1928), Trautwein's Trautonium (1930). The Mellertion (1933) used a non-standard scale, Bertrand's Dynaphone could produce octaves and perfect fifths, while the Emicon was an American, keyboard-controlled instrument constructed in 1930 and the German Hellertion combined four instruments to produce chords. Three Russian instruments also appeared, Oubouhof's Croix Sonore (1934), Ivor Darreg's microtonal 'Electronic Keyboard Oboe' (1937) and the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models of the latter were built and the only surviving example is currently stored at the Lomonosov University in Moscow. It has been used in many Russian movies—like Solaris—to produce unusual, "cosmic" sounds.
Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s. In 1959 Daphne Oram produced a novel method of synthesis, her "Oramics" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC Radiophonic Workshop. This workshop was also responsible for the theme to the TV series Doctor Who, a piece largely created by Delia Derbyshire that, more than any other, ensured the popularity of electronic music in the UK.
In 1897 Thaddeus Cahill patented an instrument called the Telharmonium (or Teleharmonium, also known as the Dynamaphone). Using tonewheels to generate musical sounds as electrical signals by additive synthesis, it was capable of producing any combination of notes and overtones, at any dynamic level. This technology was later used to design the Hammond organ. Between 1901 and 1910 Cahill had three progressively larger and more complex versions made, the first weighing seven tons, the last in excess of 200 tons. Portability was managed only by rail and with the use of thirty boxcars. By 1912, public interest had waned, and Cahill's enterprise was bankrupt.
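The additive principle behind the Telharmonium, building a tone by summing a fundamental and its overtones, can be illustrated with a short Python sketch. This is a conceptual illustration of additive synthesis in general; the sample rate, partial amplitudes, and function names are assumptions chosen for the example, not a model of Cahill's electromechanical tonewheels.

import math

SAMPLE_RATE = 44100  # samples per second; an assumed value for this sketch

def additive_tone(fundamental_hz, partial_amplitudes, duration_s=1.0):
    """Build a tone by summing sine partials at integer multiples of the
    fundamental -- the basic idea of additive synthesis."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        samples.append(sum(amp * math.sin(2 * math.pi * fundamental_hz * (k + 1) * t)
                           for k, amp in enumerate(partial_amplitudes)))
    return samples

# Example: a 220 Hz tone with a bright, organ-like mix of overtones.
tone = additive_tone(220.0, [1.0, 0.5, 0.25, 0.125])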
Another development, which aroused the interest of many composers, occurred in 1919–1920. In Leningrad, Leon Theremin built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. The Theremin was notable for being the first musical instrument played without touching it. In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred with the Cleveland Orchestra with Leon Theremin as soloist. The next year Henry Cowell commissioned Theremin to create the first electronic rhythm machine, called the Rhythmicon. Cowell wrote some compositions for it, which he and Schillinger premiered in 1932.
The ondes Martenot is played with a keyboard or by moving a ring along a wire, creating "wavering" sounds similar to a theremin. It was invented in 1928 by the French cellist Maurice Martenot, who was inspired by the accidental overlaps of tones between military radio oscillators, and wanted to create an instrument with the expressiveness of the cello.
The French composer Olivier Messiaen used the ondes Martenot in pieces such as his 1949 symphony Turangalîla-Symphonie, and his sister-in-law Jeanne Loriod was a celebrated player. It appears in numerous film and television soundtracks, particularly science fiction and horror films. Contemporary users of the ondes Martenot include Tom Waits, Daft Punk and the Radiohead guitarist Jonny Greenwood.
The Trautonium was invented in 1928. It was based on the subharmonic scale, and the resulting sounds were often used to emulate bell or gong sounds, as in the 1950s Bayreuth productions of Parsifal. In 1942, Richard Strauss used it for the bell- and gong-part in the Dresden première of his Japanese Festival Music. This new class of instruments, microtonal by nature, was only adopted slowly by composers at first, but by the early 1930s there was a burst of new works incorporating these and other electronic instruments.
In 1929 Laurens Hammond established his company for the manufacture of electronic instruments. He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments including early reverberation units. The Hammond organ is an electromechanical instrument, as it uses both mechanical elements and electronic parts. A Hammond organ uses spinning metal tonewheels to produce different sounds. A magnetic pickup similar in design to the pickups in an electric guitar is used to transmit the pitches in the tonewheels to an amplifier and speaker enclosure. While the Hammond organ was designed to be a lower-cost alternative to a pipe organ for church music, musicians soon discovered that the Hammond was an excellent instrument for blues and jazz; indeed, an entire genre of music developed around this instrument, known as the organ trio (typically Hammond organ, drums, and a third instrument, either saxophone or guitar).
The first commercially manufactured synthesizer was the Novachord, built by the Hammond Organ Company from 1938 to 1942, which offered 72-note polyphony using 12 oscillators driving monostable-based divide-down circuits, basic envelope control and resonant low-pass filters. The instrument featured 163 vacuum tubes and weighed 500 pounds. The instrument's use of envelope control is significant, since this is perhaps the most significant distinction between the modern synthesizer and other electronic instruments.
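The divide-down architecture mentioned above can be sketched simply: one master oscillator per pitch class is divided by two for each lower octave, so twelve oscillators can cover the whole keyboard. The following Python sketch illustrates only that general principle, using standard equal-tempered top-octave frequencies; it is not a description of the Novachord's actual circuitry.

# Illustrative sketch of octave divide-down; frequencies are equal-tempered.
TOP_OCTAVE_HZ = {
    "C": 4186.0, "C#": 4434.9, "D": 4698.6, "D#": 4978.0, "E": 5274.0,
    "F": 5587.7, "F#": 5919.9, "G": 6271.9, "G#": 6644.9, "A": 7040.0,
    "A#": 7458.6, "B": 7902.1,
}

def divided_frequency(pitch_class, octaves_down):
    """Each divider stage halves the frequency, dropping exactly one octave."""
    return TOP_OCTAVE_HZ[pitch_class] / (2 ** octaves_down)

print(divided_frequency("A", 3))  # 880.0 Hz, three octaves below A at 7040 Hz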
The most commonly used electronic instruments are synthesizers, so-called because they artificially generate sound using a variety of techniques. All early circuit-based synthesis involved the use of analogue circuitry, particularly voltage controlled amplifiers, oscillators and filters. An important technological development was the invention of the Clavivox synthesizer in 1956 by Raymond Scott with subassembly by Robert Moog. French composer and engineer Edgard Varèse created a variety of compositions using electronic horns, whistles, and tape. Most notably, he wrote Poème électronique for the Philips pavilion at the Brussels World Fair in 1958.
RCA produced experimental devices to synthesize voice and music in the 1950s. The Mark II Music Synthesizer was housed at the Columbia-Princeton Electronic Music Center in New York City. Designed by Herbert Belar and Harry Olson at RCA, with contributions from Vladimir Ussachevsky and Peter Mauzey, it was installed at Columbia University in 1957. Consisting of a room-sized array of interconnected sound synthesis components, it was only capable of producing music by programming, using a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano but capable of generating a wide variety of sounds. The vacuum tube system had to be patched to create timbres.
In the 1960s synthesizers were still usually confined to studios due to their size. They were usually modular in design, their stand-alone signal sources and processors connected with patch cords or by other means and controlled by a common controlling device. Harald Bode, Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel. Robert Moog, who had been a student of Peter Mauzey and one of the RCA Mark II engineers, created a synthesizer that could reasonably be used by musicians, designing the circuits while he was at Columbia-Princeton. The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964. It required experience to set up sounds but was smaller and more intuitive than what had come before, less like a machine and more like a musical instrument. Moog established standards for control interfacing, using a logarithmic 1-volt-per-octave for pitch control and a separate triggering signal. This standardization allowed synthesizers from different manufacturers to operate simultaneously. Pitch control was usually performed either with an organ-style keyboard or a music sequencer producing a timed series of control voltages. During the late 1960s hundreds of popular recordings used Moog synthesizers. Other early commercial synthesizer manufacturers included ARP, who also started with modular synthesizers before producing all-in-one instruments, and British firm EMS.
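The 1-volt-per-octave convention is a simple exponential relationship: each additional volt doubles the pitch, so one semitone corresponds to 1/12 of a volt. The minimal Python sketch below shows the mapping; the 0 V reference frequency is an assumption made for illustration, since real instruments are calibrated individually.

import math

REFERENCE_HZ = 261.63  # frequency assigned to 0 V in this sketch (about middle C)

def cv_to_frequency(volts):
    """Each additional volt doubles the frequency (one octave up)."""
    return REFERENCE_HZ * 2.0 ** volts

def frequency_to_cv(freq_hz):
    """Inverse mapping: the control voltage is logarithmic in frequency."""
    return math.log2(freq_hz / REFERENCE_HZ)

print(cv_to_frequency(1.0))               # ~523.25 Hz, one octave above the reference
print(frequency_to_cv(REFERENCE_HZ * 4))  # 2.0 volts, two octaves up
print(1.0 / 12)                           # one semitone is 1/12 of a volt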
In 1970, Moog designed the Minimoog, a non-modular synthesizer with a built-in keyboard. The analogue circuits were interconnected with switches in a simplified arrangement called "normalization." Though less flexible than a modular design, normalization made the instrument more portable and easier to use. The Minimoog sold 12,000 units. It further standardized the design of subsequent synthesizers with its integrated keyboard, pitch and modulation wheels and VCO->VCF->VCA signal flow. It has become celebrated for its "fat" sound—and its tuning problems. Miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments that soon appeared in live performance and quickly became widely used in popular music and electronic art music.
Many early analog synthesizers were monophonic, producing only one tone at a time. Popular monophonic synthesizers include the Moog Minimoog. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3.
By 1976 affordable polyphonic synthesizers began to appear, such as the Yamaha CS-50, CS-60 and CS-80, the Sequential Circuits Prophet-5 and the Oberheim Four-Voice. These remained complex, heavy and relatively costly. The recording of settings in digital memory allowed storage and recall of sounds. The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5 introduced in late 1977. For the first time, musicians had a practical polyphonic synthesizer that could save all knob settings in computer memory and recall them at the touch of a button. The Prophet-5's design paradigm became a new standard, slowly pushing out more complex and recondite modular designs.
In 1935, another significant development was made in Germany. Allgemeine Elektricitäts Gesellschaft (AEG) demonstrated the first commercially produced magnetic tape recorder, called the Magnetophon. Audio tape, which had the advantage of being fairly light as well as having good audio fidelity, ultimately replaced the bulkier wire recorders.
The term "electronic music" (which first came into use during the 1930s) came to include the tape recorder as an essential element: "electronically produced sounds recorded on tape and arranged by the composer to form a musical composition". It was also indispensable to Musique concrète.
Tape also gave rise to the first, analogue, sample-playback keyboards, the Chamberlin and its more famous successor the Mellotron, an electro-mechanical, polyphonic keyboard originally developed and built in Birmingham, England in the early 1960s.
During the 1940s–1960s, Raymond Scott, an American composer of electronic music, invented various kinds of music sequencers for his electric compositions. Step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step being 1/16 of a measure. These patterns of notes were then chained together to form longer compositions. Software sequencers have been in continuous use since the 1950s in the context of computer music, including computer-played music (software sequencer), computer-composed music (music synthesis), and computer sound generation (sound synthesis).
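A step sequencer of the kind described can be represented as a fixed-length list of steps, one per sixteenth note, with patterns chained end to end. The sketch below is a toy illustration under those assumptions; the note names and tempo are arbitrary example values, not any particular device's format.

REST = None  # an empty step

# Two 16-step patterns (one step per sixteenth note); note names are examples.
pattern_a = ["C3", REST, "C3", REST, "Eb3", REST, "C3", REST,
             "C3", REST, "C3", REST, "G2", REST, "Bb2", REST]
pattern_b = ["F3", REST, "F3", REST, "Ab3", REST, "F3", REST] * 2

def play_chain(patterns, tempo_bpm=120):
    """Walk the chained patterns step by step; each step lasts 1/16 of a measure."""
    step_seconds = 60.0 / tempo_bpm / 4  # four sixteenth notes per beat
    for pattern in patterns:
        for step, note in enumerate(pattern):
            if note is not REST:
                print(f"step {step:2d}: {note} for {step_seconds:.3f}s")

play_chain([pattern_a, pattern_b])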
The first digital synthesizers were academic experiments in sound synthesis using digital computers. FM synthesis was developed for this purpose, as a way of generating complex sounds digitally with the smallest number of computational operations per sound sample. In 1983 Yamaha introduced the first stand-alone digital synthesizer, the DX-7. It used frequency modulation synthesis (FM synthesis), first developed by John Chowning at Stanford University during the late sixties. Chowning exclusively licensed his FM synthesis patent to Yamaha in 1975. Yamaha subsequently released their first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. There followed a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards. Yamaha's third generation of digital synthesizers was a commercial success; it consisted of the DX7 and DX9 (1983). Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass market all-digital synthesizer. It became indispensable to many music artists of the 1980s, and demand soon exceeded supply. The DX7 sold over 200,000 units within three years.
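The appeal of FM synthesis is its computational economy: one sine wave modulates the phase of another, and the modulation index controls how much energy spreads into sidebands. The Python sketch below shows the basic two-operator form of the idea only; it is not the DX7's six-operator engine, and the sample rate and parameter values are assumptions for the example.

import math

SAMPLE_RATE = 44100  # an assumed sample rate for this sketch

def fm_sample(t, carrier_hz, modulator_hz, mod_index):
    """Two-operator FM: a modulating sine shifts the carrier's phase; a larger
    modulation index produces a richer spectrum."""
    modulator = math.sin(2 * math.pi * modulator_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

# One second of a bright tone: carrier and modulator in a 1:1 ratio.
signal = [fm_sample(n / SAMPLE_RATE, 440.0, 440.0, 3.0) for n in range(SAMPLE_RATE)]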
The DX series was not easy to program but offered a detailed, percussive sound that led to the demise of the electro-mechanical Rhodes piano, which was heavier and larger than a DX synth. Following the success of FM synthesis Yamaha signed a contract with Stanford University in 1989 to develop digital waveguide synthesis, leading to the first commercial physical modeling synthesizer, Yamaha's VL-1, in 1994. The DX-7 was affordable enough for amateurs and young bands to buy, unlike the costly synthesizers of previous generations, which were mainly used by top professionals.
The Fairlight CMI (Computer Musical Instrument), the first polyphonic digital sampler, was the harbinger of sample-based synthesizers. Designed in 1978 by Peter Vogel and Kim Ryrie and based on a dual microprocessor computer designed by Tony Furse in Sydney, Australia, the Fairlight CMI gave musicians the ability to modify volume, attack, decay, and use special effects like vibrato. Sample waveforms could be displayed on-screen and modified using a light pen. The Synclavier from New England Digital was a similar system. Jon Appleton (with Jones and Alonso) invented the Dartmouth Digital Synthesizer, later to become the New England Digital Corp's Synclavier. The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer, noted for its ability to reproduce several instruments synchronously and having a velocity-sensitive keyboard.
An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of glissando for his orchestral composition Metastasis (1953–54), but later turned to the use of computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962).
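Probability-driven composition of this general kind can be illustrated with a toy example in which each successive pitch is drawn from a weighted distribution, so the overall texture is shaped statistically rather than chosen note by note. The pitches and weights below are arbitrary assumptions for the sketch and do not reproduce Xenakis's actual procedures.

import random

PITCHES = ["C4", "D4", "E4", "G4", "A4"]        # example pitch set
WEIGHTS = [0.35, 0.10, 0.25, 0.20, 0.10]        # assumed probabilities

def stochastic_line(length=16, seed=1):
    """Draw each pitch independently from the weighted distribution."""
    rng = random.Random(seed)  # seeded so the example is reproducible
    return [rng.choices(PITCHES, weights=WEIGHTS)[0] for _ in range(length)]

print(stochastic_line())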
The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition.
In 1957, Max Mathews at Bell Labs wrote the MUSIC-N series, the first family of computer programs for generating digital audio waveforms through direct synthesis. Then Barry Vercoe wrote MUSIC 11 based on MUSIC IV-BF, a next-generation music synthesis program (later evolving into Csound, which is still widely used).
In the mid-1980s, Miller Puckette at IRCAM developed graphic signal-processing software for the 4X called Max (after Max Mathews), and later ported it to the Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition within reach of most composers with a modest computer programming background.
In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions with other instruments and the prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized.
The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.
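MIDI carries this control information as compact channel messages. Under the MIDI 1.0 specification, a Note On message is a status byte (0x90 plus the channel number, 0-15) followed by two data bytes giving the note number and velocity, each in the range 0-127; Note Off uses status 0x80. The Python sketch below builds such raw messages; the helper function names are illustrative only.

def note_on(channel, note, velocity):
    """Raw MIDI 1.0 Note On: status 0x90 | channel, then note and velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note, velocity=0):
    """Raw MIDI 1.0 Note Off: status 0x80 | channel; velocity is the release velocity."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Middle C (note 60) at moderate velocity on channel 1 (index 0).
print(note_on(0, 60, 100).hex())   # "903c64"
print(note_off(0, 60).hex())       # "803c00"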
MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.
The increasing power and decreasing cost of sound-generating electronics (and especially of the personal computer), combined with the standardization of the MIDI and Open Sound Control musical performance description languages, has facilitated the separation of musical instruments into music controllers and music synthesizers.
By far the most common musical controller is the musical keyboard. Other controllers include the radiodrum, Akai's EWI and Yamaha's WX wind controllers, the guitar-like SynthAxe, the BodySynth, the Buchla Thunder, the Continuum Fingerboard, the Roland Octapad, various isomorphic keyboards including the Thummer, and Kaossilator Pro, and kits like I-CubeX.
The Reactable is a round translucent table with a backlit interactive display. By placing and manipulating blocks called tangibles on the table surface, while interacting with the visual display via finger gestures, a virtual modular synthesizer is operated, creating music or sound effects.
AudioCubes are autonomous wireless cubes powered by an internal computer system and rechargeable battery. They have internal RGB lighting, and are capable of detecting each other's location, orientation and distance. The cubes can also detect distances to the user's hands and fingers. Through interaction with the cubes, a variety of music and sound software can be operated. AudioCubes have applications in sound design, music production, DJing and live performance.
The Kaossilator and Kaossilator Pro are compact instruments where the position of a finger on the touch pad controls two note-characteristics; usually the pitch is changed with a left-right motion and the tonal property, filter or other parameter changes with an up-down motion. The touch pad can be set to different musical scales and keys. The instrument can record a repeating loop of adjustable length, set to any tempo, and new loops of sound can be layered on top of existing ones. This lends itself to electronic dance-music but is more limited for controlled sequences of notes, as the pad on a regular Kaossilator is featureless.
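A two-axis mapping of this sort can be sketched as follows: the horizontal position selects a note from a chosen scale, and the vertical position sets a filter cutoff. The code below is an illustrative sketch only, not Korg's actual implementation; the scale, root note, and cutoff range are all assumed example values.

C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within one octave (example scale)

def pad_to_parameters(x, y, root_midi_note=48, octaves=2):
    """Map normalized pad coordinates (0.0-1.0) to a MIDI note and a cutoff in Hz."""
    steps = len(C_MINOR_PENTATONIC) * octaves
    index = min(int(x * steps), steps - 1)
    octave, degree = divmod(index, len(C_MINOR_PENTATONIC))
    note = root_midi_note + 12 * octave + C_MINOR_PENTATONIC[degree]
    cutoff_hz = 200.0 + y * (8000.0 - 200.0)  # up-down motion sweeps the filter
    return note, cutoff_hz

print(pad_to_parameters(0.5, 0.75))  # a mid-pad touch -> (MIDI note, cutoff in Hz)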
The Eigenharp is a large instrument resembling a bassoon, which can be interacted with through big buttons, a drum sequencer and a mouthpiece. The sound processing is done on a separate computer.
The AlphaSphere is a spherical instrument that consists of 48 tactile pads that respond to pressure as well as touch. Custom software allows the pads to be indefinitely programmed individually or by groups in terms of function, note, and pressure parameter among many other settings. The primary concept of the AlphaSphere is to increase the level of expression available to electronic musicians, by allowing for the playing style of a musical instrument.
Chiptune, chipmusic, or chip music is music written in sound formats where many of the sound textures are synthesized or sequenced in real time by a computer or video game console sound chip, sometimes including sample-based synthesis and low bit sample playback. Many chip music devices featured synthesizers in tandem with low rate sample playback.
During the late 1970s and early 1980s, do-it-yourself designs were published in hobby electronics magazines (such as the Formant modular synth, a DIY clone of the Moog system, published by Elektor) and kits were supplied by companies such as Paia in the US and Maplin Electronics in the UK.
In 1966, Reed Ghazala discovered and began to teach "circuit bending"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage’s aleatoric music concept.
Much of this manipulation of circuits directly, especially to the point of destruction, was pioneered by Louis and Bebe Barron in the early 1950s, such as their work with John Cage on the Williams Mix and especially in the soundtrack to Forbidden Planet.
Modern circuit bending is the creative customization of the circuits within electronic devices such as low voltage, battery-powered guitar effects, children's toys and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with "bent" instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. With the revived interest in analogue synthesizers, circuit bending became a cheap solution for many experimental musicians to create their own individual analogue sound generators. Nowadays many schematics can be found to build noise generators such as the Atari Punk Console or the Dub Siren as well as simple modifications for children's toys such as the Speak & Spell that are often modified by circuit benders.
The modular synthesizer is a type of synthesizer consisting of separate interchangeable modules. These are also available as kits for hobbyist DIY constructors. Many hobbyist designers also make available bare PCB boards and front panels for sale to other hobbyists.
Technologies
Instrument families
Individual instruments (historical)
Individual instruments (modern)
In Indian and Asian traditional music | [
{
"paragraph_id": 0,
"text": "An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener.",
"title": ""
},
{
"paragraph_id": 1,
"text": "An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano where the keys are each linked mechanically to swinging string hammers - whereas with an electronic keyboard, the keyboard interface is linked to a synth module, computer or other electronic or digital sound generator, which then creates a sound. However, it is increasingly common to separate user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid state nature of electronic keyboards also offers differing \"feel\" and \"response\", offering a novel experience in playing relative to operating a mechanically linked piano keyboard.",
"title": ""
},
{
"paragraph_id": 2,
"text": "All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the 21st century, electronic musical instruments are now widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are electronic instruments (e.g., bass synth, synthesizer, drum machine). Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, such as the International Conference on New Interfaces for Musical Expression, have organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In musicology, electronic musical instruments are known as electrophones. Electrophones are the fifth category of musical instrument under the Hornbostel-Sachs system. Musicologists typically only classify music as electrophones if the sound is initially produced by electricity, excluding electronically controlled acoustic instruments such as pipe organs and amplified instruments such as electric guitars.",
"title": "Classification"
},
{
"paragraph_id": 5,
"text": "The category was added to the Hornbostel-Sachs musical instrument classification system by Sachs in 1940, in his 1940 book The History of Musical Instruments; the original 1914 version of the system did not include it. Sachs divided electrophones into three subcategories:",
"title": "Classification"
},
{
"paragraph_id": 6,
"text": "The last category included instruments such as theremins or synthesizers, which he called radioelectric instruments.",
"title": "Classification"
},
{
"paragraph_id": 7,
"text": "Francis William Galpin provided such a group in his own classification system, which is closer to Mahillon than Sachs-Hornbostel. For example, in Galpin's 1937 book A Textbook of European Musical Instruments, he lists electrophones with three second-level divisions for sound generation (\"by oscillation\", \"electro-magnetic\", and \"electro-static\"), as well as third-level and fourth-level categories based on the control method.",
"title": "Classification"
},
{
"paragraph_id": 8,
"text": "Present-day ethnomusicologists, such as Margaret Kartomi and Terry Ellingson, suggest that, in keeping with the spirit of the original Hornbostel Sachs classification scheme, if one categorizes instruments by what first produces the initial sound in the instrument, that only subcategory 53 should remain in the electrophones category. Thus, it has been more recently proposed, for example, that the pipe organ (even if it uses electric key action to control solenoid valves) remain in the aerophones category, and that the electric guitar remain in the chordophones category, and so on.",
"title": "Classification"
},
{
"paragraph_id": 9,
"text": "In the 18th-century, musicians and composers adapted a number of acoustic instruments to exploit the novelty of electricity. Thus, in the broadest sense, the first electrified musical instrument was the Denis d'or keyboard, dating from 1753, followed shortly by the clavecin électrique by the Frenchman Jean-Baptiste de Laborde in 1761. The Denis d'or consisted of a keyboard instrument of over 700 strings, electrified temporarily to enhance sonic qualities. The clavecin électrique was a keyboard instrument with plectra (picks) activated electrically. However, neither instrument used electricity as a sound source.",
"title": "Early examples"
},
{
"paragraph_id": 10,
"text": "The first electric synthesizer was invented in 1876 by Elisha Gray. The \"Musical Telegraph\" was a chance by-product of his telephone technology when Gray discovered that he could control sound from a self-vibrating electromagnetic circuit and so invented a basic oscillator. The Musical Telegraph used steel reeds oscillated by electromagnets and transmitted over a telephone line. Gray also built a simple loudspeaker device into later models, which consisted of a diaphragm vibrating in a magnetic field.",
"title": "Early examples"
},
{
"paragraph_id": 11,
"text": "A significant invention, which later had a profound effect on electronic music, was the audion in 1906. This was the first thermionic valve, or vacuum tube and which led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, among other things. Other early synthesizers included the Telharmonium (1897), the Theremin (1919), Jörg Mager's Spharophon (1924) and Partiturophone, Taubmann's similar Electronde (1933), Maurice Martenot's ondes Martenot (\"Martenot waves\", 1928), Trautwein's Trautonium (1930). The Mellertion (1933) used a non-standard scale, Bertrand's Dynaphone could produce octaves and perfect fifths, while the Emicon was an American, keyboard-controlled instrument constructed in 1930 and the German Hellertion combined four instruments to produce chords. Three Russian instruments also appeared, Oubouhof's Croix Sonore (1934), Ivor Darreg's microtonal 'Electronic Keyboard Oboe' (1937) and the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models of this latter were built and the only surviving example is currently stored at the Lomonosov University in Moscow. It has been used in many Russian movies—like Solaris—to produce unusual, \"cosmic\" sounds.",
"title": "Early examples"
},
{
"paragraph_id": 12,
"text": "Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s. In 1959 Daphne Oram produced a novel method of synthesis, her \"Oramics\" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC Radiophonic Workshop. This workshop was also responsible for the theme to the TV series Doctor Who a piece, largely created by Delia Derbyshire, that more than any other ensured the popularity of electronic music in the UK.",
"title": "Early examples"
},
{
"paragraph_id": 13,
"text": "In 1897 Thaddeus Cahill patented an instrument called the Telharmonium (or Teleharmonium, also known as the Dynamaphone). Using tonewheels to generate musical sounds as electrical signals by additive synthesis, it was capable of producing any combination of notes and overtones, at any dynamic level. This technology was later used to design the Hammond organ. Between 1901 and 1910 Cahill had three progressively larger and more complex versions made, the first weighing seven tons, the last in excess of 200 tons. Portability was managed only by rail and with the use of thirty boxcars. By 1912, public interest had waned, and Cahill's enterprise was bankrupt.",
"title": "Early examples"
},
{
"paragraph_id": 14,
"text": "Another development, which aroused the interest of many composers, occurred in 1919–1920. In Leningrad, Leon Theremin built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. The Theremin was notable for being the first musical instrument played without touching it. In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred with the Cleveland Orchestra with Leon Theremin as soloist. The next year Henry Cowell commissioned Theremin to create the first electronic rhythm machine, called the Rhythmicon. Cowell wrote some compositions for it, which he and Schillinger premiered in 1932.",
"title": "Early examples"
},
{
"paragraph_id": 15,
"text": "The ondes Martenot is played with a keyboard or by moving a ring along a wire, creating \"wavering\" sounds similar to a theremin. It was invented in 1928 by the French cellist Maurice Martenot, who was inspired by the accidental overlaps of tones between military radio oscillators, and wanted to create an instrument with the expressiveness of the cello.",
"title": "Early examples"
},
{
"paragraph_id": 16,
"text": "The French composer Olivier Messiaen used the ondes Martenot in pieces such as his 1949 symphony Turangalîla-Symphonie, and his sister-in-law Jeanne Loriod was a celebrated player. It appears in numerous film and television soundtracks, particularly science fiction and horror films. Contemporary users of the ondes Martenot include Tom Waits, Daft Punk and the Radiohead guitarist Jonny Greenwood.",
"title": "Early examples"
},
{
"paragraph_id": 17,
"text": "The Trautonium was invented in 1928. It was based on the subharmonic scale, and the resulting sounds were often used to emulate bell or gong sounds, as in the 1950s Bayreuth productions of Parsifal. In 1942, Richard Strauss used it for the bell- and gong-part in the Dresden première of his Japanese Festival Music. This new class of instruments, microtonal by nature, was only adopted slowly by composers at first, but by the early 1930s there was a burst of new works incorporating these and other electronic instruments.",
"title": "Early examples"
},
{
"paragraph_id": 18,
"text": "In 1929 Laurens Hammond established his company for the manufacture of electronic instruments. He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments including early reverberation units. The Hammond organ is an electromechanical instrument, as it used both mechanical elements and electronic parts. A Hammond organ used spinning metal tonewheels to produce different sounds. A magnetic pickup similar in design to the pickups in an electric guitar is used to transmit the pitches in the tonewheels to an amplifier and speaker enclosure. While the Hammond organ was designed to be a lower-cost alternative to a pipe organ for church music, musicians soon discovered that the Hammond was an excellent instrument for blues and jazz; indeed, an entire genre of music developed built around this instrument, known as the organ trio (typically Hammond organ, drums, and a third instrument, either saxophone or guitar).",
"title": "Early examples"
},
{
"paragraph_id": 19,
"text": "The first commercially manufactured synthesizer was the Novachord, built by the Hammond Organ Company from 1938 to 1942, which offered 72-note polyphony using 12 oscillators driving monostable-based divide-down circuits, basic envelope control and resonant low-pass filters. The instrument featured 163 vacuum tubes and weighed 500 pounds. The instrument's use of envelope control is significant, since this is perhaps the most significant distinction between the modern synthesizer and other electronic instruments.",
"title": "Early examples"
},
{
"paragraph_id": 20,
"text": "The most commonly used electronic instruments are synthesizers, so-called because they artificially generate sound using a variety of techniques. All early circuit-based synthesis involved the use of analogue circuitry, particularly voltage controlled amplifiers, oscillators and filters. An important technological development was the invention of the Clavivox synthesizer in 1956 by Raymond Scott with subassembly by Robert Moog. French composer and engineer Edgard Varèse created a variety of compositions using electronic horns, whistles, and tape. Most notably, he wrote Poème électronique for the Phillips pavilion at the Brussels World Fair in 1958.",
"title": "Analogue synthesis 1950–1980"
},
{
"paragraph_id": 21,
"text": "RCA produced experimental devices to synthesize voice and music in the 1950s. The Mark II Music Synthesizer, housed at the Columbia-Princeton Electronic Music Center in New York City. Designed by Herbert Belar and Harry Olson at RCA, with contributions from Vladimir Ussachevsky and Peter Mauzey, it was installed at Columbia University in 1957. Consisting of a room-sized array of interconnected sound synthesis components, it was only capable of producing music by programming, using a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano but capable of generating a wide variety of sounds. The vacuum tube system had to be patched to create timbres.",
"title": "Analogue synthesis 1950–1980"
},
{
"paragraph_id": 22,
"text": "In the 1960s synthesizers were still usually confined to studios due to their size. They were usually modular in design, their stand-alone signal sources and processors connected with patch cords or by other means and controlled by a common controlling device. Harald Bode, Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel. Robert Moog, who had been a student of Peter Mauzey and one of the RCA Mark II engineers, created a synthesizer that could reasonably be used by musicians, designing the circuits while he was at Columbia-Princeton. The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964. It required experience to set up sounds but was smaller and more intuitive than what had come before, less like a machine and more like a musical instrument. Moog established standards for control interfacing, using a logarithmic 1-volt-per-octave for pitch control and a separate triggering signal. This standardization allowed synthesizers from different manufacturers to operate simultaneously. Pitch control was usually performed either with an organ-style keyboard or a music sequencer producing a timed series of control voltages. During the late 1960s hundreds of popular recordings used Moog synthesizers. Other early commercial synthesizer manufacturers included ARP, who also started with modular synthesizers before producing all-in-one instruments, and British firm EMS.",
"title": "Analogue synthesis 1950–1980"
},
{
"paragraph_id": 23,
"text": "In 1970, Moog designed the Minimoog, a non-modular synthesizer with a built-in keyboard. The analogue circuits were interconnected with switches in a simplified arrangement called \"normalization.\" Though less flexible than a modular design, normalization made the instrument more portable and easier to use. The Minimoog sold 12,000 units. Further standardized the design of subsequent synthesizers with its integrated keyboard, pitch and modulation wheels and VCO->VCF->VCA signal flow. It has become celebrated for its \"fat\" sound—and its tuning problems. Miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments that soon appeared in live performance and quickly became widely used in popular music and electronic art music.",
"title": "Analogue synthesis 1950–1980"
},
{
"paragraph_id": 24,
"text": "Many early analog synthesizers were monophonic, producing only one tone at a time. Popular monophonic synthesizers include the Moog Minimoog. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3.",
"title": "Analogue synthesis 1950–1980"
},
{
"paragraph_id": 25,
"text": "By 1976 affordable polyphonic synthesizers began to appear, such as the Yamaha CS-50, CS-60 and CS-80, the Sequential Circuits Prophet-5 and the Oberheim Four-Voice. These remained complex, heavy and relatively costly. The recording of settings in digital memory allowed storage and recall of sounds. The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5 introduced in late 1977. For the first time, musicians had a practical polyphonic synthesizer that could save all knob settings in computer memory and recall them at the touch of a button. The Prophet-5's design paradigm became a new standard, slowly pushing out more complex and recondite modular designs.",
"title": "Analogue synthesis 1950–1980"
},
{
"paragraph_id": 26,
"text": "In 1935, another significant development was made in Germany. Allgemeine Elektricitäts Gesellschaft (AEG) demonstrated the first commercially produced magnetic tape recorder, called the Magnetophon. Audio tape, which had the advantage of being fairly light as well as having good audio fidelity, ultimately replaced the bulkier wire recorders.",
"title": "Tape recording"
},
{
"paragraph_id": 27,
"text": "The term \"electronic music\" (which first came into use during the 1930s) came to include the tape recorder as an essential element: \"electronically produced sounds recorded on tape and arranged by the composer to form a musical composition\". It was also indispensable to Musique concrète.",
"title": "Tape recording"
},
{
"paragraph_id": 28,
"text": "Tape also gave rise to the first, analogue, sample-playback keyboards, the Chamberlin and its more famous successor the Mellotron, an electro-mechanical, polyphonic keyboard originally developed and built in Birmingham, England in the early 1960s.",
"title": "Tape recording"
},
{
"paragraph_id": 29,
"text": "During the 1940s–1960s, Raymond Scott, an American composer of electronic music, invented various kind of music sequencers for his electric compositions. Step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step being 1/16 of a measure. These patterns of notes were then chained together to form longer compositions. Software sequencers were continuously utilized since the 1950s in the context of computer music, including computer-played music (software sequencer), computer-composed music (music synthesis), and computer sound generation (sound synthesis).",
"title": "Sound sequencer"
},
{
"paragraph_id": 30,
"text": "The first digital synthesizers were academic experiments in sound synthesis using digital computers. FM synthesis was developed for this purpose; as a way of generating complex sounds digitally with the smallest number of computational operations per sound sample. In 1983 Yamaha introduced the first stand-alone digital synthesizer, the DX-7. It used frequency modulation synthesis (FM synthesis), first developed by John Chowning at Stanford University during the late sixties. Chowning exclusively licensed his FM synthesis patent to Yamaha in 1975. Yamaha subsequently released their first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. There followed a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards. Yamaha's third generation of digital synthesizers was a commercial success; it consisted of the DX7 and DX9 (1983). Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass market all-digital synthesizer. It became indispensable to many music artists of the 1980s, and demand soon exceeded supply. The DX7 sold over 200,000 units within three years.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 31,
"text": "The DX series was not easy to program but offered a detailed, percussive sound that led to the demise of the electro-mechanical Rhodes piano, which was heavier and larger than a DX synth. Following the success of FM synthesis Yamaha signed a contract with Stanford University in 1989 to develop digital waveguide synthesis, leading to the first commercial physical modeling synthesizer, Yamaha's VL-1, in 1994. The DX-7 was affordable enough for amateurs and young bands to buy, unlike the costly synthesizers of previous generations, which were mainly used by top professionals.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 32,
"text": "The Fairlight CMI (Computer Musical Instrument), the first polyphonic digital sampler, was the harbinger of sample-based synthesizers. Designed in 1978 by Peter Vogel and Kim Ryrie and based on a dual microprocessor computer designed by Tony Furse in Sydney, Australia, the Fairlight CMI gave musicians the ability to modify volume, attack, decay, and use special effects like vibrato. Sample waveforms could be displayed on-screen and modified using a light pen. The Synclavier from New England Digital was a similar system. Jon Appleton (with Jones and Alonso) invented the Dartmouth Digital Synthesizer, later to become the New England Digital Corp's Synclavier. The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer, noted for its ability to reproduce several instruments synchronously and having a velocity-sensitive keyboard.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 33,
"text": "An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of glissando for his orchestral composition Metastasis (1953–54), but later turned to the use of computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962).",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 34,
"text": "The impact of computers continued in 1956. Lejaren Hiller and Leonard Issacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 35,
"text": "In 1957, Max Mathews at Bell Lab wrote MUSIC-N series, a first computer program family for generating digital audio waveforms through direct synthesis. Then Barry Vercoe wrote MUSIC 11 based on MUSIC IV-BF, a next-generation music synthesis program (later evolving into csound, which is still widely used).",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 36,
"text": "In mid 80s, Miller Puckette at IRCAM developed graphic signal-processing software for 4X called Max (after Max Mathews), and later ported it to Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition availability to most composers with modest computer programming background.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 37,
"text": "In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions with other instruments and the prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 38,
"text": "The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 39,
"text": "MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.",
"title": "Digital era 1980–2000"
},
{
"paragraph_id": 40,
"text": "The increasing power and decreasing cost of sound-generating electronics (and especially of the personal computer), combined with the standardization of the MIDI and Open Sound Control musical performance description languages, has facilitated the separation of musical instruments into music controllers and music synthesizers.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 41,
"text": "By far the most common musical controller is the musical keyboard. Other controllers include the radiodrum, Akai's EWI and Yamaha's WX wind controllers, the guitar-like SynthAxe, the BodySynth, the Buchla Thunder, the Continuum Fingerboard, the Roland Octapad, various isomorphic keyboards including the Thummer, and Kaossilator Pro, and kits like I-CubeX.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 42,
"text": "The Reactable is a round translucent table with a backlit interactive display. By placing and manipulating blocks called tangibles on the table surface, while interacting with the visual display via finger gestures, a virtual modular synthesizer is operated, creating music or sound effects.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 43,
"text": "AudioCubes are autonomous wireless cubes powered by an internal computer system and rechargeable battery. They have internal RGB lighting, and are capable of detecting each other's location, orientation and distance. The cubes can also detect distances to the user's hands and fingers. Through interaction with the cubes, a variety of music and sound software can be operated. AudioCubes have applications in sound design, music production, DJing and live performance.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 44,
"text": "The Kaossilator and Kaossilator Pro are compact instruments where the position of a finger on the touch pad controls two note-characteristics; usually the pitch is changed with a left-right motion and the tonal property, filter or other parameter changes with an up-down motion. The touch pad can be set to different musical scales and keys. The instrument can record a repeating loop of adjustable length, set to any tempo, and new loops of sound can be layered on top of existing ones. This lends itself to electronic dance-music but is more limited for controlled sequences of notes, as the pad on a regular Kaossilator is featureless.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 45,
"text": "The Eigenharp is a large instrument resembling a bassoon, which can be interacted with through big buttons, a drum sequencer and a mouthpiece. The sound processing is done on a separate computer.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 46,
"text": "The AlphaSphere is a spherical instrument that consists of 48 tactile pads that respond to pressure as well as touch. Custom software allows the pads to be indefinitely programmed individually or by groups in terms of function, note, and pressure parameter among many other settings. The primary concept of the AlphaSphere is to increase the level of expression available to electronic musicians, by allowing for the playing style of a musical instrument.",
"title": "Modern electronic musical instruments"
},
{
"paragraph_id": 47,
"text": "Chiptune, chipmusic, or chip music is music written in sound formats where many of the sound textures are synthesized or sequenced in real time by a computer or video game console sound chip, sometimes including sample-based synthesis and low bit sample playback. Many chip music devices featured synthesizers in tandem with low rate sample playback.",
"title": "Chip music"
},
{
"paragraph_id": 48,
"text": "During the late 1970s and early 1980s, do-it-yourself designs were published in hobby electronics magazines (such the Formant modular synth, a DIY clone of the Moog system, published by Elektor) and kits were supplied by companies such as Paia in the US, and Maplin Electronics in the UK.",
"title": "DIY culture"
},
{
"paragraph_id": 49,
"text": "In 1966, Reed Ghazala discovered and began to teach math \"circuit bending\"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage’s aleatoric music concept.",
"title": "DIY culture"
},
{
"paragraph_id": 50,
"text": "Much of this manipulation of circuits directly, especially to the point of destruction, was pioneered by Louis and Bebe Barron in the early 1950s, such as their work with John Cage on the Williams Mix and especially in the soundtrack to Forbidden Planet.",
"title": "DIY culture"
},
{
"paragraph_id": 51,
"text": "Modern circuit bending is the creative customization of the circuits within electronic devices such as low voltage, battery-powered guitar effects, children's toys and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with \"bent\" instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. With the revived interest for analogue synthesizer circuit bending became a cheap solution for many experimental musicians to create their own individual analogue sound generators. Nowadays many schematics can be found to build noise generators such as the Atari Punk Console or the Dub Siren as well as simple modifications for children toys such as the Speak & Spell that are often modified by circuit benders.",
"title": "DIY culture"
},
{
"paragraph_id": 52,
"text": "The modular synthesizer is a type of synthesizer consisting of separate interchangeable modules. These are also available as kits for hobbyist DIY constructors. Many hobbyist designers also make available bare PCB boards and front panels for sale to other hobbyists.",
"title": "DIY culture"
},
{
"paragraph_id": 53,
"text": "Technologies",
"title": "See also"
},
{
"paragraph_id": 54,
"text": "Instrument families",
"title": "See also"
},
{
"paragraph_id": 55,
"text": "Individual instruments (historical)",
"title": "See also"
},
{
"paragraph_id": 56,
"text": "Individual instruments (modern)",
"title": "See also"
},
{
"paragraph_id": 57,
"text": "In Indian and Asian traditional music",
"title": "See also"
}
]
| An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener. An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano where the keys are each linked mechanically to swinging string hammers - whereas with an electronic keyboard, the keyboard interface is linked to a synth module, computer or other electronic or digital sound generator, which then creates a sound. However, it is increasingly common to separate user interface and sound-generating functions into a music controller and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid state nature of electronic keyboards also offers differing "feel" and "response", offering a novel experience in playing relative to operating a mechanically linked piano keyboard. All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear. In the 21st century, electronic musical instruments are now widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are electronic instruments. Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, such as the International Conference on New Interfaces for Musical Expression, have organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers. | 2001-11-02T13:56:01Z | 2023-12-12T16:08:09Z | [
"Template:Column",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite web",
"Template:Webarchive",
"Template:Short description",
"Template:Redirect",
"Template:Main",
"Template:Clear",
"Template:Multiple image",
"Template:Columns-end",
"Template:Cite news",
"Template:Harvnb",
"Template:Electrophones",
"Template:Cite magazine",
"Template:Electronic music",
"Template:Authority control",
"Template:See also",
"Template:Circa",
"Template:Columns-start",
"Template:Reflist",
"Template:Citation"
]
| https://en.wikipedia.org/wiki/Electronic_musical_instrument |
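The MIDI paragraphs in the record above describe how, once the interface was standardized, a single keystroke, control-wheel motion, or command from a microcomputer can drive every device in a studio in synchrony. As an illustrative sketch only (it is not part of the source article), the following Python snippet shows what such a "command" amounts to at the protocol level. It assumes the third-party mido library and whatever MIDI output ports happen to exist on the host system; both the library choice and the port handling are assumptions rather than anything specified in the text.

```python
# Minimal sketch: broadcast one MIDI note-on/note-off pair to every available
# output port, assuming the third-party `mido` library with a working backend
# (e.g. `pip install mido python-rtmidi`).
import mido

# A note-on message is three bytes on the wire: status (0x90 | channel),
# note number, velocity. mido builds the message object for us.
note_on = mido.Message('note_on', channel=0, note=60, velocity=100)   # middle C
note_off = mido.Message('note_off', channel=0, note=60, velocity=0)

# Sending the same message to all outputs mirrors the idea of one command
# activating every device in the studio remotely and in synchrony.
for name in mido.get_output_names():
    with mido.open_output(name) as port:
        port.send(note_on)
        port.send(note_off)
```

Any MIDI library in any language would serve equally well; the point is only that the "command" described in the paragraph is, concretely, a short byte-level message that every MIDI-compliant device interprets the same way.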
10,008 | Electrode | An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries that can consist of a variety of materials depending on the type of battery.
The electrophore, invented by Johan Wilcke, was an early version of an electrode used to study static electricity.
Electrodes are an essential part of any battery. The first electrochemical battery was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuation in the voltage provided by the voltaic cell, it wasn't very practical. The first practical battery was invented in 1836 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then many more batteries have been developed using various materials. The basis of all of these is still two electrodes, an anode and a cathode.
'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνω (ano), 'upwards' and ὁδός (hodós), 'a way'. The anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. The electrons flow away from the anode and the conventional current flows towards it. From both it can be concluded that the charge of the anode is negative. The electron entering the anode comes from the oxidation reaction that takes place next to it.
The cathode is in many ways the opposite of the anode. The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards' and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place, with the electrons arriving from the wire connected to the cathode and being absorbed by the oxidizing agent.
A primary cell is a battery designed to be used once and then discarded. This is due to the electrochemical reactions taking place at the electrodes in the cell not being reversible. An example of a primary cell is the discardable alkaline battery commonly used in flashlights. It consists of a zinc anode and a manganese oxide cathode, in which ZnO is formed.
The half-reactions are:
Overall reaction:
The ZnO is prone to clumping and will give less efficient discharge if recharged again. It is possible to recharge these batteries, but manufacturers advise against it due to safety concerns. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide.
Contrary to the primary cell, a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. This type of battery is still the most widely used in, among other applications, automobiles. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and lithium-ion, the last of which is explained more thoroughly in this article due to its importance.
Marcus theory, originally developed by Nobel laureate Rudolph A. Marcus, explains the rate at which an electron can move from one chemical species to another; for this article, this can be seen as 'jumping' from the electrode to a species in the solvent or vice versa. We can represent the problem as calculating the transfer rate for the transfer of an electron from a donor to an acceptor.
The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates. The abscissa of the figure to the right represents these. From the classical electron transfer theory, the expression of the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy are assumed, by finding the point of intersection (Qx). One important thing to note, as Marcus noted when he came up with the theory, is that the electron transfer must abide by the law of conservation of energy and the Franck–Condon principle. Doing this and then rearranging leads to the expression of the free energy of activation ( Δ G † {\displaystyle \Delta G^{\dagger }} ) in terms of the overall free energy of the reaction ( Δ G 0 {\displaystyle \Delta G^{0}} ).
In which λ {\displaystyle \lambda } is the reorganisation energy. Filling this result into the classically derived Arrhenius equation
leads to
With A being the pre-exponential factor, which is usually determined experimentally, although a semi-classical derivation provides more information, as will be explained below.
This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the condition − Δ G 0 = λ {\displaystyle -\Delta G^{0}=\lambda } . For a more extensive mathematical treatment one could read the paper by Newton. For an interpretation of this result and a closer look at the physical meaning of λ {\displaystyle \lambda }, one can read the paper by Marcus.
The situation at hand can be more accurately described by using the displaced harmonic oscillator model, in which quantum tunneling is allowed. This is needed in order to explain why, even near zero kelvin, there are still electron transfers, in contradiction to the classical theory.
Without going into too much detail on how the derivation is done, it rests on using Fermi's golden rule from time-dependent perturbation theory with the full Hamiltonian of the system. It is possible to look at the overlap in the wavefunctions of both the reactants and the products (the right and the left side of the chemical reaction) and therefore determine when their energies are the same and electron transfer is allowed. As touched on before, this must happen because only then is conservation of energy abided by. Skipping over a few mathematical steps, the probability of electron transfer can be calculated (albeit with some difficulty) using the following formula
With J {\displaystyle J} being the electronic coupling constant describing the interaction between the two states (reactants and products) and g ( t ) {\displaystyle g(t)} being the line shape function. Taking the classical limit of this expression, meaning ℏ ω ≪ k T {\displaystyle \hbar \omega \ll kT} , and making some substitutions, an expression is obtained that is very similar to the classically derived formula, as expected.
The main difference is that the pre-exponential factor is now described by more physical parameters instead of the experimental factor A {\displaystyle A} . One is once again referred to the sources listed below for a more in-depth and rigorous mathematical derivation and interpretation.
The physical properties of electrodes are mainly determined by the material of the electrode and the topology of the electrode. The properties required depend on the application and therefore there are many kinds of electrodes in circulation. The defining property for a material to be used as an electrode is that it be conductive. Any conducting material such as metals, semiconductors, graphite or conductive polymers can therefore be used as an electrode. Often electrodes consist of a combination of materials, each with a specific task. Typical constituents are the active materials, which serve as the particles which oxidize or reduce; conductive agents, which improve the conductivity of the electrode; and binders, which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties; important quantities are the self-discharge time, the discharge voltage and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are: the electrical resistivity, the specific heat capacity (c_p), the electrode potential and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below.
The surface topology of the electrode plays an important role in determining the efficiency of an electrode. The efficiency of the electrode can be reduced due to contact resistance. To create an efficient electrode it is therefore important to design it such that it minimizes the contact resistance.
The production of electrodes for Li-Ion batteries is done in various steps as follows:
For a given selection of constituents of the electrode, the final efficiency is determined by the internal structure of the electrode. The important factors in the internal structure in determining the performance of the electrode are:
These properties can be influenced in the production of the electrodes in a number of manners. The most important step in the manufacturing of the electrodes is creating the electrode slurry. As can be seen above, the important properties of the electrode all have to do with the even distribution of the components of the electrode. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage and current research is still being done.
A modern application of electrodes is in Lithium-ion batteries (li-ion batteries). A Li-ion battery is a kind of flow battery which can be seen in the image on the right.
Furthermore, a Li-ion battery is an example of a secondary cell since it is rechargeable. It can both act as a galvanic or electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte which are dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. An integral part of the Li-ion batteries are their anodes and cathodes, therefore much research is being done into increasing the efficiency, safety and reducing the costs of these electrodes specifically.
In Li-ion batteries the cathode consists of an intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element which makes up part of the molecules in the compound is cobalt. Another frequently used element is manganese. The best choice of compound usually depends on the application of the battery. Advantages for cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage and high cycle durability. There are however also drawbacks in using cobalt-based compounds such as their high cost and their low thermostability. Manganese has similar advantages and a lower cost, however there are some problems associated with using manganese. The main problem is that manganese tends to dissolve into the electrolyte over time. For this reason cobalt is still the most common element which is used in the lithium compounds. There is much research being done into finding new materials which can be used to create cheaper and longer lasting Li-ion batteries.
The anodes used in mass-produced Li-ion batteries are either carbon based (usually graphite) or made out of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to its cheap price, longevity and high energy density. However, it presents issues of dendrite growth, with risks of shorting the battery and posing a safety issue. Li4Ti5O12 has the second largest market share of anodes, due to its stability and good rate capability, but with challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, becoming one of the decade's most promising candidates for future lithium ion battery anodes. Silicon has one of the highest gravimetric capacities when compared to graphite and Li4Ti5O12 as well as a high volumetric one . Furthermore, Silicon has the advantage of operating under a reasonable open circuit voltage without parasitic lithium reactions. However, Silicon anodes have a major issue of volumetric expansion during lithiation of around 360%. This expansion may pulverize the anode, resulting in poor performance. To fix this problem scientists looked into varying the dimensionality of the Si. Many studies have been developed in Si nanowires, Si tubes as well as Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. In the early 2020s, technology is reaching commercial levels with factories being built for mass production of anodes in the United States. Furthermore, metallic Lithium is another possible candidate for the anode. It boasts a higher specific capacity than Silicon, however, does come with the drawback of working with the highly unstable metallic lithium. Similarly to graphite anodes, dendrite formation is another major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge, while being the lightest.
A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system’s container, leading to poor conductivity and electrolyte leakage. However, the relevance of mechanical properties of electrodes goes beyond the resistance to collisions due to its environment. During standard operation, the incorporation of ions into electrodes leads to a change in volume. This is well exemplified by Si electrodes in Lithium-ion batteries expanding around 300% during lithiation. Such change may lead to the deformations in the lattice and, therefore stresses in the material. The origin of stresses may be due to geometric constraints in the electrode or inhomogeneous plating of the ion. This phenomenon is very concerning as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long lasting batteries. A possible strategy for measuring the mechanical behavior of electrodes during operation is by using nanoindentation. The method is able to analyze how the stresses evolve during the electrochemical reactions, being a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry.
More than just affecting the electrode’s morphology, stresses are also able to impact electrochemical reactions. While the chemical driving forces are usually higher in magnitude than the mechanical energies, this is not true for Li-Ion Batteries. A study by Dr. Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress.
μ = μ o + k ⋅ T ⋅ log ( γ ⋅ x ) + Ω ⋅ σ {\displaystyle \mu =\mu ^{o}+k\cdot T\cdot \log(\gamma \cdot x)+\Omega \cdot \sigma }
In this equation μ represents the chemical potential, with μ° being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which is dependent on chemical potential, is impacted by the added stress and therefore the battery’s performance changes. Furthermore, mechanical stresses may also impact the electrode’s solid-electrolyte-interphase layer, the interface which regulates ion and charge transfer and which can be degraded by stress. Thus, more ions in the solution will be consumed to reform it, diminishing the overall efficiency of the system.
In a vacuum tube or a semiconductor having polarity (diodes, electrolytic capacitors) the anode is the positive (+) electrode and the cathode the negative (−). The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid.
In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving.
In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode.
For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second.
Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation.
Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include: | [
{
"paragraph_id": 0,
"text": "An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries that can consist of a variety of materials depending on the type of battery.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The electrophore, invented by Johan Wilcke, was an early version of an electrode used to study static electricity.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Electrodes are an essential part of any battery. The first electrochemical battery made was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuation in the voltage provided by the voltaic cell it wasn't very practical. The first practical battery was invented in 1839 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then many more batteries have been developed using various materials. The basis of all these is still using two electrodes, anodes and cathodes.",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 3,
"text": "'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνο (ano), 'upwards' and ὁδός (hodós), 'a way'. The Anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. The electrons flow away from the anode and the conventional current towards it. From both can be concluded that the charge of the anode is negative. The electron entering the anode comes from the oxidation reaction that takes place next to it.",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 4,
"text": "The cathode is in many ways the opposite of the anode. The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards' and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place with the electrons arriving from the wire connected to the cathode and are absorbed by the oxidizing agent.",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 5,
"text": "A primary cell is a battery designed to be used once and then discarded. This is due to the electrochemical reactions taking place at the electrodes in the cell not being reversible. An example of a primary cell is the discardable alkaline battery commonly used in flashlights. Consisting of a zinc anode and a manganese oxide cathode in which ZnO is formed.",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 6,
"text": "The half-reactions are:",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 7,
"text": "Overall reaction:",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 8,
"text": "The ZnO is prone to clumping and will give less efficient discharge if recharged again. It is possible to recharge these batteries but is due to safety concerns advised against by the manufacturer. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide.",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 9,
"text": "Contrary to the primary cell a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. This type of battery is still the most widely used in among others automobiles. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and Lithium-ion. The last of which will be explained more thoroughly in this article due to its importance.",
"title": "Anode and cathode in electrochemical cells"
},
{
"paragraph_id": 10,
"text": "Marcus theory is a theory originally developed by Nobel laureate Rudolph A. Marcus and explains the rate at which an electron can move from one chemical species to another, for this article this can be seen as 'jumping' from the electrode to a species in the solvent or vice versa. We can represent the problem as calculating the transfer rate for the transfer of an electron from donor to an acceptor",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 11,
"text": "The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates. The abscissa the figure to the right represents these. From the classical electron transfer theory, the expression of the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy are assumed, by finding the point of intersection (Qx). One important thing to note, and was noted by Marcus when he came up with the theory, the electron transfer must abide by the law of conservation of energy and the Frank-Condon principle. Doing this and then rearranging this leads to the expression of the free energy activation ( Δ G † {\\displaystyle \\Delta G^{\\dagger }} ) in terms of the overall free energy of the reaction ( Δ G 0 {\\displaystyle \\Delta G^{0}} ).",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 12,
"text": "In which the λ {\\displaystyle \\lambda } is the reorganisation energy. Filling this result in the classically derived Arrhenius equation",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 13,
"text": "leads to",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 14,
"text": "With A being the pre-exponential factor which is usually experimentally determined, although a semi classical derivation provides more information as will be explained below.",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 15,
"text": "This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the conditions Δ G † = λ {\\displaystyle \\Delta G^{\\dagger }=\\lambda } . For a more extensive mathematical treatment one could read the paper by Newton. An interpretation of this result and what a closer look at the physical meaning of the λ {\\displaystyle \\lambda } one can read the paper by Marcus.",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 16,
"text": "the situation at hand can be more accurately described by using the displaced harmonic oscillator model, in this model quantum tunneling is allowed. This is needed in order to explain why even at near-zero Kelvin there still are electron transfers, in contradiction to the classical theory.",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 17,
"text": "Without going into too much detail on how the derivation is done, it rests on using Fermi's golden rule from time-dependent perturbation theory with the full Hamiltonian of the system. It is possible to look at the overlap in the wavefunctions of both the reactants and the products (the right and the left side of the chemical reaction) and therefore when their energies are the same and allow for electron transfer. As touched on before this must happen because only then conservation of energy is abided by. Skipping over a few mathematical steps the probability of electron transfer can be calculated (albeit quite difficult) using the following formula",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 18,
"text": "With J {\\displaystyle J} being the electronic coupling constant describing the interaction between the two states (reactants and products) and g ( t ) {\\displaystyle g(t)} being the line shape function. Taking the classical limit of this expression, meaning ℏ ω ≪ k T {\\displaystyle \\hbar \\omega \\ll kT} , and making some substitution an expression is obtained very similar to the classically derived formula, as expected.",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 19,
"text": "The main difference is now the pre-exponential factor has now been described by more physical parameters instead of the experimental factor A {\\displaystyle A} . One is once again revered to the sources as listed below for a more in-depth and rigorous mathematical derivation and interpretation.",
"title": "Marcus' theory of electron transfer"
},
{
"paragraph_id": 20,
"text": "The physical properties of electrodes are mainly determined by the material of the electrode and the topology of the electrode. The properties required depend on the application and therefore there are many kinds of electrodes in circulation. The defining property for a material to be used as an electrode is that it be conductive. Any conducting material such as metals, semiconductors, graphite or conductive polymers can therefore be used as an electrode. Often electrodes consist of a combination of materials, each with a specific task. Typical constituents are the active materials which serve as the particles which oxidate or reduct, conductive agents which improve the conductivity of the electrode and binders which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties, important quantities are the self-discharge time, the discharge voltage and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are: the electrical resistivity, the specific heat capacity (c_p), the electrode potential and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below.",
"title": "Efficiency"
},
{
"paragraph_id": 21,
"text": "The surface topology of the electrode plays an important role in determining the efficiency of an electrode. The efficiency of the electrode can be reduced due to contact resistance. To create an efficient electrode it is therefore important to design it such that it minimizes the contact resistance.",
"title": "Surface effects"
},
{
"paragraph_id": 22,
"text": "The production of electrodes for Li-Ion batteries is done in various steps as follows:",
"title": "Manufacturing"
},
{
"paragraph_id": 23,
"text": "For a given selection of constituents of the electrode, the final efficiency is determined by the internal structure of the electrode. The important factors in the internal structure in determining the performance of the electrode are:",
"title": "Manufacturing"
},
{
"paragraph_id": 24,
"text": "These properties can be influenced in the production of the electrodes in a number of manners. The most important step in the manufacturing of the electrodes is creating the electrode slurry. As can be seen above, the important properties of the electrode all have to do with the even distribution of the components of the electrode. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage and current research is still being done.",
"title": "Manufacturing"
},
{
"paragraph_id": 25,
"text": "A modern application of electrodes is in Lithium-ion batteries (li-ion batteries). A Li-ion battery is a kind of flow battery which can be seen in the image on the right.",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 26,
"text": "Furthermore, a Li-ion battery is an example of a secondary cell since it is rechargeable. It can both act as a galvanic or electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte which are dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. An integral part of the Li-ion batteries are their anodes and cathodes, therefore much research is being done into increasing the efficiency, safety and reducing the costs of these electrodes specifically.",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 27,
"text": "In Li-ion batteries the cathode consists of a intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element which makes up part of the molecules in the compound is cobalt. Another frequently used element is manganese. The best choice of compound usually depends on the application of the battery. Advantages for cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage and high cycle durability. There are however also drawbacks in using cobalt-based compounds such as their high cost and their low thermostability. Manganese has similar advantages and a lower cost, however there are some problems associated with using manganese. The main problem is that manganese tends to dissolve into the electrolyte over time. For this reason cobalt is still the most common element which is used in the lithium compounds. There is much research being done into finding new materials which can be used to create cheaper and longer lasting Li-ion batteries",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 28,
"text": "The anodes used in mass-produced Li-ion batteries are either carbon based (usually graphite) or made out of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to its cheap price, longevity and high energy density. However, it presents issues of dendrite growth, with risks of shorting the battery and posing a safety issue. Li4Ti5O12 has the second largest market share of anodes, due to its stability and good rate capability, but with challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, becoming one of the decade's most promising candidates for future lithium ion battery anodes. Silicon has one of the highest gravimetric capacities when compared to graphite and Li4Ti5O12 as well as a high volumetric one . Furthermore, Silicon has the advantage of operating under a reasonable open circuit voltage without parasitic lithium reactions. However, Silicon anodes have a major issue of volumetric expansion during lithiation of around 360%. This expansion may pulverize the anode, resulting in poor performance. To fix this problem scientists looked into varying the dimensionality of the Si. Many studies have been developed in Si nanowires, Si tubes as well as Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. In the early 2020s, technology is reaching commercial levels with factories being built for mass production of anodes in the United States. Furthermore, metallic Lithium is another possible candidate for the anode. It boasts a higher specific capacity than Silicon, however, does come with the drawback of working with the highly unstable metallic lithium. Similarly to graphite anodes, dendrite formation is another major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge, while being the lightest.",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 29,
"text": "A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system’s container, leading to poor conductivity and electrolyte leakage. However, the relevance of mechanical properties of electrodes goes beyond the resistance to collisions due to its environment. During standard operation, the incorporation of ions into electrodes leads to a change in volume. This is well exemplified by Si electrodes in Lithium-ion batteries expanding around 300% during lithiation. Such change may lead to the deformations in the lattice and, therefore stresses in the material. The origin of stresses may be due to geometric constraints in the electrode or inhomogeneous plating of the ion. This phenomenon is very concerning as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long lasting batteries. A possible strategy for measuring the mechanical behavior of electrodes during operation is by using nanoindentation. The method is able to analyze how the stresses evolve during the electrochemical reactions, being a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry.",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 30,
"text": "More than just affecting the electrode’s morphology, stresses are also able to impact electrochemical reactions. While the chemical driving forces are usually higher in magnitude than the mechanical energies, this is not true for Li-Ion Batteries. A study by Dr. Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress.",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 31,
"text": "μ = μ o + k ⋅ T ⋅ log ( γ ⋅ x ) + Ω ⋅ σ {\\displaystyle \\mu =\\mu ^{o}+k\\cdot T\\cdot \\log(\\gamma \\cdot x)+\\Omega \\cdot \\sigma }",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 32,
"text": "In this equation μ represents the chemical potential, with μ° being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which is dependent on chemical potential, gets impacted by the added stress and, therefore changes the battery’s performance. Furthermore, mechanical stresses may also impact the electrode’s solid-electrolyte-interphase layer. The interface which regulates the ion and charge transfer and can be degraded by stress. Thus, more ions in the solution will be consumed to reform it, diminishing the overall efficiency of the system.",
"title": "Electrodes in lithium ion batteries"
},
{
"paragraph_id": 33,
"text": "In a vacuum tube or a semiconductor having polarity (diodes, electrolytic capacitors) the anode is the positive (+) electrode and the cathode the negative (−). The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid.",
"title": "Other anodes and cathodes"
},
{
"paragraph_id": 34,
"text": "In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving.",
"title": "Other anodes and cathodes"
},
{
"paragraph_id": 35,
"text": "In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode.",
"title": "Welding electrodes"
},
{
"paragraph_id": 36,
"text": "For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second.",
"title": "Alternating current electrodes"
},
{
"paragraph_id": 37,
"text": "Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation.",
"title": "Chemically modified electrodes"
},
{
"paragraph_id": 38,
"text": "Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include:",
"title": "Uses"
}
]
| An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit. Electrodes are essential parts of batteries that can consist of a variety of materials depending on the type of battery. The electrophore, invented by Johan Wilcke, was an early version of an electrode used to study static electricity. | 2001-10-30T11:29:27Z | 2023-09-14T17:37:28Z | [
"Template:Cite journal",
"Template:Citation",
"Template:Authority control",
"Template:Short description",
"Template:Div col",
"Template:Div col end",
"Template:Cite web",
"Template:Galvanic cells",
"Template:For",
"Template:Commons category",
"Template:Cite book",
"Template:Metalworking navbox",
"Template:Eqm",
"Template:Webarchive",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Electrode |
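The Marcus-theory paragraphs in the Electrode record above refer to a displayed expression for the activation free energy and to the result of substituting it into the Arrhenius equation ("leads to"), but the displayed equations themselves did not survive extraction. The following LaTeX block is a reconstruction sketch of the standard classical Marcus expressions, written from the textbook form of the theory rather than recovered from the source, using the symbols (ΔG†, ΔG0, λ, A) that appear in the surrounding text.

```latex
% Classical Marcus activation free energy (non-adiabatic limit, parabolic potentials):
\Delta G^{\dagger} = \frac{\left(\lambda + \Delta G^{0}\right)^{2}}{4\lambda}

% Substituted into the classically derived Arrhenius form k = A e^{-\Delta G^{\dagger}/k_{B}T}:
k_{\mathrm{et}} = A \exp\!\left[-\frac{\left(\lambda + \Delta G^{0}\right)^{2}}{4\lambda k_{B} T}\right]

% The rate is maximal when \Delta G^{\dagger} = 0, i.e. when -\Delta G^{0} = \lambda.
```

If the article's own lost equations used a different notation (for example k rather than k_B for the Boltzmann constant), the source should take precedence; the expressions above are the standard classical result the paragraphs describe.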
10,011 | Epistolary novel | An epistolary novel is a novel written as a series of letters between the fictional characters of a narrative. The term is often extended to cover novels that intersperse documents of other kinds with the letters, most commonly diary entries and newspaper clippings, and sometimes considered to include novels composed of documents even if they do not include letters at all. More recently, epistolaries may include electronic documents such as recordings and radio, blog posts, and e-mails. The word epistolary is derived from Latin from the Greek word ἐπιστολή, epistolē, meaning a letter (see epistle). This type of fiction is also sometimes known by the German term Briefroman or more generally as epistolary fiction.
The epistolary form can be seen as adding greater realism to a story, due to the text existing diegetically within the lives of the characters. It is in particular able to demonstrate differing points of view without recourse to the device of an omniscient narrator. An important strategic device in the epistolary novel for creating the impression of authenticity of the letters is the fictional editor.
There are two theories on the genesis of the epistolary novel: The first claims that the genre originated from novels with inserted letters, in which the portion containing the third-person narrative in between the letters was gradually reduced. The other theory claims that the epistolary novel arose from miscellanies of letters and poetry: some of the letters were tied together into a (mostly amorous) plot. Both claims have some validity. The first truly epistolary novel, the Spanish "Prison of Love" (Cárcel de amor) (c. 1485) by Diego de San Pedro, belongs to a tradition of novels in which a large number of inserted letters already dominated the narrative. Other well-known examples of early epistolary novels are closely related to the tradition of letter-books and miscellanies of letters. Within the successive editions of Edmé Boursault's Letters of Respect, Gratitude and Love (Lettres de respect, d'obligation et d'amour) (1669), a group of letters written to a girl named Babet were expanded and became more and more distinct from the other letters, until they formed a small epistolary novel entitled Letters to Babet (Lettres à Babet). The immensely famous Letters of a Portuguese Nun (Lettres portugaises) (1669), generally attributed to Gabriel-Joseph de La Vergne, comte de Guilleragues, though a small minority still regard Marianna Alcoforado as the author, is claimed to have been intended to be part of a miscellany of Guilleragues prose and poetry. The founder of the epistolary novel in English is said by many to be James Howell (1594–1666) with "Familiar Letters" (1645–50), who writes of prison, foreign adventure, and the love of women.
Perhaps the first work to fully utilize the potential of an epistolary novel was Love-Letters Between a Nobleman and His Sister. This work was published anonymously in three volumes (1684, 1685, and 1687), and has been attributed to Aphra Behn, though its authorship remains disputed in the 21st century. The novel shows the genre's results of changing perspectives: individual points of view were presented by the individual characters, and the central voice of the author and moral evaluation disappeared (at least in the first volume; further volumes introduced a narrator). The author furthermore explored a realm of intrigue with complex scenarios such as letters that fall into the wrong hands, faked letters, or letters withheld by protagonists.
The epistolary novel as a genre became popular in the 18th century in the works of such authors as Samuel Richardson, with his immensely successful novels Pamela (1740) and Clarissa (1749). John Cleland's early erotic novel Fanny Hill (1748) is written as a series of letters from the titular character to an unnamed recipient. In France, there was Lettres persanes (1721) by Montesquieu, followed by Julie, ou la nouvelle Héloïse (1761) by Jean-Jacques Rousseau, and Choderlos de Laclos' Les Liaisons dangereuses (1782), which used the epistolary form to great dramatic effect, because the sequence of events was not always related directly or explicitly. In Germany, there was Johann Wolfgang von Goethe's The Sorrows of Young Werther (Die Leiden des jungen Werther) (1774) and Friedrich Hölderlin's Hyperion. The first Canadian novel, The History of Emily Montague (1769) by Frances Brooke, and twenty years later the first American novel, The Power of Sympathy (1789) by William Hill Brown, were both written in epistolary form.
Starting in the 18th century, the epistolary form was subject to much ridicule, resulting in a number of savage burlesques. The most notable example of these was Henry Fielding's Shamela (1741), written as a parody of Pamela. In it, the female narrator can be found wielding a pen and scribbling her diary entries under the most dramatic and unlikely of circumstances. Oliver Goldsmith used the form to satirical effect in The Citizen of the World, subtitled "Letters from a Chinese Philosopher Residing in London to his Friends in the East" (1760–61). So did the diarist Fanny Burney in a successful comic first novel, Evelina (1778).
The epistolary novel slowly became less popular after the 18th century. Although Jane Austen tried the epistolary form in juvenile writings and her novella Lady Susan (1794), she abandoned this structure for her later work. It is thought that her lost novel First Impressions, which was redrafted to become Pride and Prejudice, may have been epistolary: Pride and Prejudice contains an unusual number of letters quoted in full and some play a critical role in the plot.
The epistolary form nonetheless saw continued use, surviving in exceptions or in fragments in nineteenth-century novels. In Honoré de Balzac's novel Letters of Two Brides, two women who became friends during their education at a convent correspond over a 17-year period, exchanging letters describing their lives. Mary Shelley employs the epistolary form in her novel Frankenstein (1818). Shelley uses the letters as one of a variety of framing devices, as the story is presented through the letters of a sea captain and scientific explorer attempting to reach the north pole who encounters Victor Frankenstein and records the dying man's narrative and confessions. Published in 1848, Anne Brontë's novel The Tenant of Wildfell Hall is framed as a retrospective letter from one of the main heroes to his friend and brother-in-law with the diary of the eponymous tenant inside it. In the late 19th century, Bram Stoker released one of the most widely recognized and successful novels in the epistolary form to date, Dracula. Printed in 1897, the novel is compiled entirely of letters, diary entries, newspaper clippings, telegrams, doctor's notes, ship's logs, and the like.
Epistolary novels can be categorized based on the number of people whose letters are included. This gives three types of epistolary novels: monophonic (giving the letters of only one character, like Letters of a Portuguese Nun and The Sorrows of Young Werther), dialogic (giving the letters of two characters, like Mme Marie Jeanne Riccoboni's Letters of Fanni Butler (1757)), and polyphonic (with three or more letter-writing characters, such as in Bram Stoker's Dracula).
A crucial element in polyphonic epistolary novels like Clarissa and Dangerous Liaisons is the dramatic device of 'discrepant awareness': the simultaneous but separate correspondences of the heroines and the villains creating dramatic tension. They can also be classified according to their type and quantity of use of non-letter documents, though this has obvious correlations with the number of voices – for example, newspaper clippings are unlikely to feature heavily in a monophonic epistolary and considerably more likely in a polyphonic one.
The epistolary novel form has continued to be used after the eighteenth century. | [
{
"paragraph_id": 0,
"text": "An epistolary novel is a novel written as a series of letters between the fictional characters of a narrative. The term is often extended to cover novels that intersperse documents of other kinds with the letters, most commonly diary entries and newspaper clippings, and sometimes considered to include novels composed of documents even if they do not include letters at all. More recently, epistolaries may include electronic documents such as recordings and radio, blog posts, and e-mails. The word epistolary is derived from Latin from the Greek word ἐπιστολή, epistolē, meaning a letter (see epistle). This type of fiction is also sometimes known by the German term Briefroman or more generally as epistolary fiction.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The epistolary form can be seen as adding greater realism to a story, due to the text existing diegetically within the lives of the characters. It is in particular able to demonstrate differing points of view without recourse to the device of an omniscient narrator. An important strategic device in the epistolary novel for creating the impression of authenticity of the letters is the fictional editor.",
"title": ""
},
{
"paragraph_id": 2,
"text": "There are two theories on the genesis of the epistolary novel: The first claims that the genre is originated from novels with inserted letters, in which the portion containing the third-person narrative in between the letters was gradually reduced. The other theory claims that the epistolary novel arose from miscellanies of letters and poetry: some of the letters were tied together into a (mostly amorous) plot. Both claims have some validity. The first truly epistolary novel, the Spanish \"Prison of Love\" (Cárcel de amor) (c. 1485) by Diego de San Pedro, belongs to a tradition of novels in which a large number of inserted letters already dominated the narrative. Other well-known examples of early epistolary novels are closely related to the tradition of letter-books and miscellanies of letters. Within the successive editions of Edmé Boursault's Letters of Respect, Gratitude and Love (Lettres de respect, d'obligation et d'amour) (1669), a group of letters written to a girl named Babet were expanded and became more and more distinct from the other letters, until it formed a small epistolary novel entitled Letters to Babet (Lettres à Babet). The immensely famous Letters of a Portuguese Nun (Lettres portugaises) (1669) generally attributed to Gabriel-Joseph de La Vergne, comte de Guilleragues, though a small minority still regard Marianna Alcoforado as the author, is claimed to be intended to be part of a miscellany of Guilleragues prose and poetry. The founder of the epistolary novel in English is said by many to be James Howell (1594–1666) with \"Familiar Letters\" (1645–50), who writes of prison, foreign adventure, and the love of women.",
"title": "Early works"
},
{
"paragraph_id": 3,
"text": "Perhaps first work to fully utilize the potential of an epistolary novel was Love-Letters Between a Nobleman and His Sister. This work was published anonymously in three volumes (1684, 1685, and 1687), and has been attributed to Aphra Behn though its authorship remains disputed in the 21st century. The novel shows the genre's results of changing perspectives: individual points were presented by the individual characters, and the central voice of the author and moral evaluation disappeared (at least in the first volume; further volumes introduced a narrator). The author furthermore explored a realm of intrigue with complex scenarios such as letters that fall into the wrong hands, faked letters, or letters withheld by protagonists.",
"title": "Early works"
},
{
"paragraph_id": 4,
"text": "The epistolary novel as a genre became popular in the 18th century in the works of such authors as Samuel Richardson, with his immensely successful novels Pamela (1740) and Clarissa (1749). John Cleland's early erotic novel Fanny Hill (1748) is written as a series of letters from the titular character to an unnamed recipient. In France, there was Lettres persanes (1721) by Montesquieu, followed by Julie, ou la nouvelle Héloïse (1761) by Jean-Jacques Rousseau, and Choderlos de Laclos' Les Liaisons dangereuses (1782), which used the epistolary form to great dramatic effect, because the sequence of events was not always related directly or explicitly. In Germany, there was Johann Wolfgang von Goethe's The Sorrows of Young Werther (Die Leiden des jungen Werther) (1774) and Friedrich Hölderlin's Hyperion. The first Canadian novel, The History of Emily Montague (1769) by Frances Brooke, and twenty years later the first American novel, The Power of Sympathy (1789) by William Hill Brown, were both written in epistolary form.",
"title": "Early works"
},
{
"paragraph_id": 5,
"text": "Starting in the 18th century, the epistolary form was subject to much ridicule, resulting in a number of savage burlesques. The most notable example of these was Henry Fielding's Shamela (1741), written as a parody of Pamela. In it, the female narrator can be found wielding a pen and scribbling her diary entries under the most dramatic and unlikely of circumstances. Oliver Goldsmith used the form to satirical effect in The Citizen of the World, subtitled \"Letters from a Chinese Philosopher Residing in London to his Friends in the East\" (1760–61). So did the diarist Fanny Burney in a successful comic first novel, Evelina (1788).",
"title": "Early works"
},
{
"paragraph_id": 6,
"text": "The epistolary novel slowly became less popular after 18th century. Although Jane Austen tried the epistolary in juvenile writings and her novella Lady Susan (1794), she abandoned this structure for her later work. It is thought that her lost novel First Impressions, which was redrafted to become Pride and Prejudice, may have been epistolary: Pride and Prejudice contains an unusual number of letters quoted in full and some play a critical role in the plot.",
"title": "Early works"
},
{
"paragraph_id": 7,
"text": "The epistolary form nonetheless saw continued use, surviving in exceptions or in fragments in nineteenth-century novels. In Honoré de Balzac's novel Letters of Two Brides, two women who became friends during their education at a convent correspond over a 17-year period, exchanging letters describing their lives. Mary Shelley employs the epistolary form in her novel Frankenstein (1818). Shelley uses the letters as one of a variety of framing devices, as the story is presented through the letters of a sea captain and scientific explorer attempting to reach the north pole who encounters Victor Frankenstein and records the dying man's narrative and confessions. Published in 1848, Anne Brontë's novel The Tenant of Wildfell Hall is framed as a retrospective letter from one of the main heroes to his friend and brother-in-law with the diary of the eponymous tenant inside it. In the late 19th century, Bram Stoker released one of the most widely recognized and successful novels in the epistolary form to date, Dracula. Printed in 1897, the novel is compiled entirely of letters, diary entries, newspaper clippings, telegrams, doctor's notes, ship's logs, and the like.",
"title": "Early works"
},
{
"paragraph_id": 8,
"text": "Epistolary novels can be categorized based on the number of people whose letters are included. This gives three types of epistolary novels: monophonic (giving the letters of only one character, like Letters of a Portuguese Nun and The Sorrows of Young Werther), dialogic (giving the letters of two characters, like Mme Marie Jeanne Riccoboni's Letters of Fanni Butler (1757), and polyphonic (with three or more letter-writing characters, such as in Bram Stoker's Dracula).",
"title": "Types"
},
{
"paragraph_id": 9,
"text": "A crucial element in polyphonic epistolary novels like Clarissa and Dangerous Liaisons is the dramatic device of 'discrepant awareness': the simultaneous but separate correspondences of the heroines and the villains creating dramatic tension. They can also be classified according to their type and quantity of use of non-letter documents, though this has obvious correlations with the number of voices – for example, newspaper clippings are unlikely to feature heavily in a monophonic epistolary and considerably more likely in a polyphonic one.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "The epistolary novel form has continued to be used after the eighteenth century.",
"title": "Notable works"
}
]
| An epistolary novel is a novel written as a series of letters between the fictional characters of a narrative. The term is often extended to cover novels that intersperse documents of other kinds with the letters, most commonly diary entries and newspaper clippings, and sometimes considered to include novels composed of documents even if they do not include letters at all. More recently, epistolaries may include electronic documents such as recordings and radio, blog posts, and e-mails. The word epistolary is derived from Latin from the Greek word ἐπιστολή, epistolē, meaning a letter. This type of fiction is also sometimes known by the German term Briefroman or more generally as epistolary fiction. The epistolary form can be seen as adding greater realism to a story, due to the text existing diegetically within the lives of the characters. It is in particular able to demonstrate differing points of view without recourse to the device of an omniscient narrator. An important strategic device in the epistolary novel for creating the impression of authenticity of the letters is the fictional editor. | 2002-02-25T15:51:15Z | 2023-11-30T11:47:04Z | [
"Template:Use dmy dates",
"Template:Crossreference",
"Template:Main",
"Template:Cite web",
"Template:Cite news",
"Template:Short description",
"Template:Circa",
"Template:Fraction",
"Template:Cite thesis",
"Template:Lang-el",
"Template:According to whom",
"Template:See also",
"Template:Reflist",
"Template:Cite journal",
"Template:Narrative modes",
"Template:Portal",
"Template:Cite book",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Epistolary_novel |
10,013 | Evidence-based medicine | Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.
The EBM Pyramid is a tool that helps in visualizing the hierarchy of evidence in medicine, from least authoritative, like expert opinions, to most authoritative, like systematic reviews.
Medicine has a long history of scientific inquiry about the prevention, diagnosis, and treatment of human disease. In the 11th century AD, Avicenna, a Persian physician and philosopher, developed an approach to EBM that was mostly similar to current ideas and practices.
The concept of a controlled clinical trial was first described in 1662 by Jan Baptist van Helmont in reference to the practice of bloodletting. Wrote Van Helmont:
Let us take out of the Hospitals, out of the Camps, or from elsewhere, 200, or 500 poor People, that have fevers or Pleuritis. Let us divide them in Halfes, let us cast lots, that one halfe of them may fall to my share, and the others to yours; I will cure them without blood-letting and sensible evacuation; but you do, as ye know ... we shall see how many Funerals both of us shall have...
The first published report describing the conduct and results of a controlled clinical trial was by James Lind, a Scottish naval surgeon who conducted research on scurvy during his time aboard HMS Salisbury in the Channel Fleet, while patrolling the Bay of Biscay. Lind divided the sailors participating in his experiment into six groups, so that the effects of various treatments could be fairly compared. Lind found improvement in symptoms and signs of scurvy among the group of men treated with lemons or oranges. He published a treatise describing the results of this experiment in 1753.
An early critique of statistical methods in medicine was published in 1835.
The term 'evidence-based medicine' was introduced in 1990 by Gordon Guyatt of McMaster University.
Alvan Feinstein's publication of Clinical Judgment in 1967 focused attention on the role of clinical reasoning and identified biases that can affect it. In 1972, Archie Cochrane published Effectiveness and Efficiency, which described the lack of controlled trials supporting many practices that had previously been assumed to be effective. In 1973, John Wennberg began to document wide variations in how physicians practiced. Through the 1980s, David M. Eddy described errors in clinical reasoning and gaps in evidence. In the mid-1980s, Alvan Feinstein, David Sackett and others published textbooks on clinical epidemiology, which translated epidemiological methods to physician decision-making. Toward the end of the 1980s, a group at RAND showed that large proportions of procedures performed by physicians were considered inappropriate even by the standards of their own experts.
David M. Eddy first began to use the term 'evidence-based' in 1987 in workshops and a manual commissioned by the Council of Medical Specialty Societies to teach formal methods for designing clinical practice guidelines. The manual was eventually published by the American College of Physicians. Eddy first published the term 'evidence-based' in March 1990, in an article in the Journal of the American Medical Association that laid out the principles of evidence-based guidelines and population-level policies, which Eddy described as "explicitly describing the available evidence that pertains to a policy and tying the policy to evidence instead of standard-of-care practices or the beliefs of experts. The pertinent evidence must be identified, described, and analyzed. The policymakers must determine whether the policy is justified by the evidence. A rationale must be written." He discussed evidence-based policies in several other papers published in JAMA in the spring of 1990. Those papers were part of a series of 28 published in JAMA between 1990 and 1997 on formal methods for designing population-level guidelines and policies.
The term 'evidence-based medicine' was introduced slightly later, in the context of medical education. In the autumn of 1990, Gordon Guyatt used it in an unpublished description of a program at McMaster University for prospective or new medical students. Guyatt and others first published the term two years later (1992) to describe a new approach to teaching the practice of medicine.
In 1996, David Sackett and colleagues clarified the definition of this tributary of evidence-based medicine as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research." This branch of evidence-based medicine aims to make individual decision making more structured and objective by better reflecting the evidence from research. Population-based data are applied to the care of an individual patient, while respecting the fact that practitioners have clinical expertise reflected in effective and efficient diagnosis and thoughtful identification and compassionate use of individual patients' predicaments, rights, and preferences.
Between 1993 and 2000, the Evidence-Based Medicine Working Group at McMaster University published the methods to a broad physician audience in a series of 25 "Users' Guides to the Medical Literature" in JAMA. In 1995 Rosenberg and Donald defined individual-level, evidence-based medicine as "the process of finding, appraising, and using contemporaneous research findings as the basis for medical decisions." In 2010, Greenhalgh used a definition that emphasized quantitative methods: "the use of mathematical estimates of the risk of benefit and harm, derived from high-quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients."
The two original definitions highlight important differences in how evidence-based medicine is applied to populations versus individuals. When designing guidelines applied to large groups of people in settings with relatively little opportunity for modification by individual physicians, evidence-based policymaking emphasizes that good evidence should exist to document a test's or treatment's effectiveness. In the setting of individual decision-making, practitioners can be given greater latitude in how they interpret research and combine it with their clinical judgment. In 2005, Eddy offered an umbrella definition for the two branches of EBM: "Evidence-based medicine is a set of principles and methods intended to ensure that to the greatest extent possible, medical decisions, guidelines, and other types of policies are based on and consistent with good evidence of effectiveness and benefit."
In the area of evidence-based guidelines and policies, the explicit insistence on evidence of effectiveness was introduced by the American Cancer Society in 1980. The U.S. Preventive Services Task Force (USPSTF) began issuing guidelines for preventive interventions based on evidence-based principles in 1984. In 1985, the Blue Cross Blue Shield Association applied strict evidence-based criteria for covering new technologies. Beginning in 1987, specialty societies such as the American College of Physicians, and voluntary health organizations such as the American Heart Association, wrote many evidence-based guidelines. In 1991, Kaiser Permanente, a managed care organization in the US, began an evidence-based guidelines program. In 1991, Richard Smith wrote an editorial in the British Medical Journal and introduced the ideas of evidence-based policies in the UK. In 1993, the Cochrane Collaboration created a network of 13 countries to produce systematic reviews and guidelines. In 1997, the US Agency for Healthcare Research and Quality (AHRQ, then known as the Agency for Health Care Policy and Research, or AHCPR) established Evidence-based Practice Centers (EPCs) to produce evidence reports and technology assessments to support the development of guidelines. In the same year, a National Guideline Clearinghouse that followed the principles of evidence-based policies was created by AHRQ, the AMA, and the American Association of Health Plans (now America's Health Insurance Plans). In 1999, the National Institute for Clinical Excellence (NICE) was created in the UK.
In the area of medical education, medical schools in Canada, the US, the UK, Australia, and other countries now offer programs that teach evidence-based medicine. A 2009 study of UK programs found that more than half of UK medical schools offered some training in evidence-based medicine, although the methods and content varied considerably, and EBM teaching was restricted by lack of curriculum time, trained tutors and teaching materials. Many programs have been developed to help individual physicians gain better access to evidence. For example, UpToDate was created in the early 1990s. The Cochrane Collaboration began publishing evidence reviews in 1993. In 1995, BMJ Publishing Group launched Clinical Evidence, a 6-monthly periodical that provided brief summaries of the current state of evidence about important clinical questions for clinicians.
By 2000, use of the term evidence-based had extended to other levels of the health care system. An example is evidence-based health services, which seek to increase the competence of health service decision makers and the practice of evidence-based medicine at the organizational or institutional level.
The multiple tributaries of evidence-based medicine share an emphasis on the importance of incorporating evidence from formal research in medical policies and decisions. However, because they differ on the extent to which they require good evidence of effectiveness before promoting a guideline or payment policy, a distinction is sometimes made between evidence-based medicine and science-based medicine, which also takes into account factors such as prior plausibility and compatibility with established science (as when medical organizations promote controversial treatments such as acupuncture). Differences also exist regarding the extent to which it is feasible to incorporate individual-level information in decisions. Thus, evidence-based guidelines and policies may not readily "hybridise" with experience-based practices orientated towards ethical clinical judgement, and can lead to contradictions, contest, and unintended crises. The most effective "knowledge leaders" (managers and clinical leaders) use a broad range of management knowledge in their decision making, rather than just formal evidence. Evidence-based guidelines may provide the basis for governmentality in health care, and consequently play a central role in the governance of contemporary health care systems.
The steps for designing explicit, evidence-based guidelines were described in the late 1980s: formulate the question (population, intervention, comparison intervention, outcomes, time horizon, setting); search the literature to identify studies that inform the question; interpret each study to determine precisely what it says about the question; if several studies address the question, synthesize their results (meta-analysis); summarize the evidence in evidence tables; compare the benefits, harms and costs in a balance sheet; draw a conclusion about the preferred practice; write the guideline; write the rationale for the guideline; have others review each of the previous steps; implement the guideline.
For the purposes of medical education and individual-level decision making, five steps of EBM in practice were described in 1992 and the experience of delegates attending the 2003 Conference of Evidence-Based Health Care Teachers and Developers was summarized into five steps and published in 2005. This five-step process can broadly be categorized as follows:
Systematic reviews of published research studies are a major part of the evaluation of particular treatments. The Cochrane Collaboration is one of the best-known organisations that conducts systematic reviews. Like other producers of systematic reviews, it requires authors to provide a detailed study protocol as well as a reproducible plan of their literature search and evaluations of the evidence. After the best evidence is assessed, treatment is categorized as (1) likely to be beneficial, (2) likely to be harmful, or (3) without evidence to support either benefit or harm.
A 2007 analysis of 1,016 systematic reviews from all 50 Cochrane Collaboration Review Groups found that 44% of the reviews concluded that the intervention was likely to be beneficial, 7% concluded that the intervention was likely to be harmful, and 49% concluded that evidence did not support either benefit or harm. 96% recommended further research. In 2017, a study assessed the role of systematic reviews produced by the Cochrane Collaboration to inform US private payers' policymaking; it showed that although the medical policy documents of major US private payers were informed by Cochrane systematic reviews, there was still scope to encourage their further use.
Evidence-based medicine categorizes different types of clinical evidence and rates or grades them according to the strength of their freedom from the various biases that beset medical research. For example, the strongest evidence for therapeutic interventions is provided by systematic review of randomized, well-blinded, placebo-controlled trials with allocation concealment and complete follow-up involving a homogeneous patient population and medical condition. In contrast, patient testimonials, case reports, and even expert opinion have little value as proof because of the placebo effect, the biases inherent in observation and reporting of cases, and difficulties in ascertaining who is an expert (however, some critics have argued that expert opinion "does not belong in the rankings of the quality of empirical evidence because it does not represent a form of empirical evidence" and continue that "expert opinion would seem to be a separate, complex type of knowledge that would not fit into hierarchies otherwise limited to empirical evidence alone.").
Several organizations have developed grading systems for assessing the quality of evidence. For example, in 1989 the U.S. Preventive Services Task Force (USPSTF) put forth the following system:
Another example is the Oxford CEBM Levels of Evidence published by the Centre for Evidence-Based Medicine. First released in September 2000, the Levels of Evidence provide a way to rank evidence for claims about prognosis, diagnosis, treatment benefits, treatment harms, and screening, which most grading schemes do not address. The original CEBM Levels were developed for Evidence-Based On Call, to make the process of finding evidence feasible and its results explicit. In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence ranking schemes. The Oxford CEBM Levels of Evidence have been used by patients and clinicians, as well as by experts to develop clinical guidelines, such as recommendations for the optimal use of phototherapy and topical therapy in psoriasis and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada.
In 2000, a system was developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group. The GRADE system takes into account more dimensions than just the quality of medical research. It requires users who are performing an assessment of the quality of evidence, usually as part of a systematic review, to consider the impact of different factors on their confidence in the results. Authors of GRADE tables assign one of four levels to evaluate the quality of evidence, on the basis of their confidence that the observed effect (a numeric value) is close to the true effect. The confidence value is based on judgments assigned in five different domains in a structured manner. The GRADE working group defines 'quality of evidence' and 'strength of recommendations', the latter based in part on that quality, as two different concepts that are commonly confused with each other.
Systematic reviews may include randomized controlled trials that have low risk of bias, or observational studies that have high risk of bias. In the case of randomized controlled trials, the quality of evidence is high but can be downgraded in five different domains.
In the case of observational studies per GRADE, the quality of evidence starts off lower and may be upgraded in three domains in addition to being subject to downgrading.
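A minimal sketch of this rating logic follows; it is an illustration only, not part of the GRADE working group's own materials. The specific domain names in the comments (imprecision, large effect) are commonly cited GRADE domains assumed here for the example, since the excerpt itself does not enumerate them.

```python
# Minimal sketch (not GRADE's own tooling): evidence from randomized trials
# starts at "high" quality and can be rated down, while evidence from
# observational studies starts at "low" and can be rated up as well as down.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(study_design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Return a GRADE-style quality level after rating a body of evidence up or down."""
    start = 3 if study_design == "randomized trial" else 1  # index into LEVELS
    index = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
    return LEVELS[index]

# Randomized trials rated down once (e.g. for imprecision) -> "moderate".
print(grade_quality("randomized trial", downgrades=1))
# Observational studies rated up once (e.g. for a large effect) -> "moderate".
print(grade_quality("observational study", upgrades=1))
```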
Meaning of the levels of quality of evidence as per GRADE:
In guidelines and other publications, a recommendation for a clinical service is classified by the balance of risk versus benefit and the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses the following system:
GRADE guideline panelists may make strong or weak recommendations on the basis of further criteria. Some of the important criteria are the balance between desirable and undesirable effects (not considering cost), the quality of the evidence, values and preferences and costs (resource utilization).
Despite the differences between systems, the purposes are the same: to guide users of clinical research information on which studies are likely to be most valid. However, the individual studies still require careful critical appraisal.
Evidence-based medicine attempts to express clinical benefits of tests and treatments using mathematical methods. Tools used by practitioners of evidence-based medicine include:
Evidence-based medicine attempts to objectively evaluate the quality of clinical research by critically assessing techniques reported by researchers in their publications.
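As one concrete illustration of the mathematical methods mentioned above, the sketch below computes two measures commonly used in evidence-based practice, absolute risk reduction and number needed to treat. The excerpt does not reproduce the article's actual list of tools, so these particular measures, the function names, and the event rates are assumptions made purely for the example.

```python
def absolute_risk_reduction(control_event_rate: float, treated_event_rate: float) -> float:
    """ARR: event rate without the treatment minus event rate with it."""
    return control_event_rate - treated_event_rate

def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / ARR: patients who must be treated to prevent one additional event."""
    arr = absolute_risk_reduction(control_event_rate, treated_event_rate)
    if arr == 0:
        raise ValueError("No risk difference between groups; NNT is undefined.")
    return 1.0 / arr

# Hypothetical trial figures (not taken from the article): the outcome occurs in
# 20% of control patients and 15% of treated patients.
arr = absolute_risk_reduction(0.20, 0.15)   # 0.05, i.e. 5 percentage points
nnt = number_needed_to_treat(0.20, 0.15)    # 20 patients
print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")
```

Under these assumed rates, about twenty patients would need to be treated for one additional patient to avoid the outcome.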
There are a number of limitations and criticisms of evidence-based medicine. Two widely cited categorization schemes for the various published critiques of EBM include the three-fold division of Straus and McAlister ("limitations universal to the practice of medicine, limitations unique to evidence-based medicine and misperceptions of evidence-based-medicine") and the five-point categorization of Cohen, Stavri and Hersh (EBM is a poor philosophic basis for medicine, defines evidence too narrowly, is not evidence-based, is limited in usefulness when applied to individual patients, or reduces the autonomy of the doctor/patient relationship).
In no particular order, some published objections include:
A 2018 study, "Why all randomised controlled trials produce biased results", assessed the 10 most cited RCTs and argued that trials face a wide range of biases and constraints, from trials only being able to study a small set of questions amenable to randomisation and generally only being able to assess the average treatment effect of a sample, to limitations in extrapolating results to another context, among many others outlined in the study.
Despite the emphasis on evidence-based medicine, unsafe or ineffective medical practices continue to be applied, because of patient demand for tests or treatments, because of failure to access information about the evidence, or because of the rapid pace of change in the scientific evidence. For example, between 2003 and 2017, the evidence shifted on hundreds of medical practices, including whether hormone replacement therapy was safe, whether babies should be given certain vitamins, and whether antidepressant drugs are effective in people with Alzheimer's disease. Even when the evidence unequivocally shows that a treatment is either not safe or not effective, it may take many years for other treatments to be adopted.
There are many factors that contribute to lack of uptake or implementation of evidence-based recommendations. These include lack of awareness at the individual clinician or patient (micro) level, and lack of institutional support at the organisation (meso) level or at the policy (macro) level. In other cases, significant change can require a generation of physicians to retire or die and be replaced by physicians who were trained with more recent evidence.
Physicians may also reject evidence that conflicts with their anecdotal experience or because of cognitive biases – for example, a vivid memory of a rare but shocking outcome (the availability heuristic), such as a patient dying after refusing treatment. They may overtreat to "do something" or to address a patient's emotional needs. They may worry about malpractice charges based on a discrepancy between what the patient expects and what the evidence recommends. They may also overtreat or provide ineffective treatments because the treatment feels biologically plausible.
It is the responsibility of those developing clinical guidelines to include an implementation plan to facilitate uptake. The implementation process will include an implementation plan, analysis of the context, identifying barriers and facilitators and designing the strategies to address them.
Training in evidence-based medicine is offered across the continuum of medical education. Educational competencies have been created for the education of health care professionals.
The Berlin questionnaire and the Fresno Test are validated instruments for assessing the effectiveness of education in evidence-based medicine. These questionnaires have been used in diverse settings.
A Campbell systematic review that included 24 trials examined the effectiveness of e-learning in improving evidence-based health care knowledge and practice. It was found that e-learning, compared to no learning, improves evidence-based health care knowledge and skills but not attitudes and behaviour. No difference in outcomes is present when comparing e-learning with face-to-face learning. Combining e-learning and face-to-face learning (blended learning) has a positive impact on evidence-based knowledge, skills, attitude and behavior. As a form of e-learning, some medical school students engage in editing Wikipedia to increase their EBM skills, and some students construct EBM materials to develop their skills in communicating medical knowledge. | [
{
"paragraph_id": 0,
"text": "Evidence-based medicine (EBM) is \"the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.\" The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The EBM Pyramid is a tool that helps in visualizing the hierarchy of evidence in medicine, from least authoritative, like expert opinions, to most authoritative, like systematic reviews.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Medicine has a long history of scientific inquiry about the prevention, diagnosis, and treatment of human disease. In the 11th century AD, Avicenna, a Persian physician and philosopher, developed an approach to EBM that was mostly similar to current ideas and practises.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 3,
"text": "The concept of a controlled clinical trial was first described in 1662 by Jan Baptist van Helmont in reference to the practice of bloodletting. Wrote Van Helmont:",
"title": "Background, history, and definition"
},
{
"paragraph_id": 4,
"text": "Let us take out of the Hospitals, out of the Camps, or from elsewhere, 200, or 500 poor People, that have fevers or Pleuritis. Let us divide them in Halfes, let us cast lots, that one halfe of them may fall to my share, and the others to yours; I will cure them without blood-letting and sensible evacuation; but you do, as ye know ... we shall see how many Funerals both of us shall have...",
"title": "Background, history, and definition"
},
{
"paragraph_id": 5,
"text": "The first published report describing the conduct and results of a controlled clinical trial was by James Lind, a Scottish naval surgeon who conducted research on scurvy during his time aboard HMS Salisbury in the Channel Fleet, while patrolling the Bay of Biscay. Lind divided the sailors participating in his experiment into six groups, so that the effects of various treatments could be fairly compared. Lind found improvement in symptoms and signs of scurvy among the group of men treated with lemons or oranges. He published a treatise describing the results of this experiment in 1753.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 6,
"text": "An early critique of statistical methods in medicine was published in 1835.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 7,
"text": "The term 'evidence-based medicine' was introduced in 1990 by Gordon Guyatt of McMaster University.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 8,
"text": "Alvan Feinstein's publication of Clinical Judgment in 1967 focused attention on the role of clinical reasoning and identified biases that can affect it. In 1972, Archie Cochrane published Effectiveness and Efficiency, which described the lack of controlled trials supporting many practices that had previously been assumed to be effective. In 1973, John Wennberg began to document wide variations in how physicians practiced. Through the 1980s, David M. Eddy described errors in clinical reasoning and gaps in evidence. In the mid-1980s, Alvin Feinstein, David Sackett and others published textbooks on clinical epidemiology, which translated epidemiological methods to physician decision-making. Toward the end of the 1980s, a group at RAND showed that large proportions of procedures performed by physicians were considered inappropriate even by the standards of their own experts.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 9,
"text": "David M. Eddy first began to use the term 'evidence-based' in 1987 in workshops and a manual commissioned by the Council of Medical Specialty Societies to teach formal methods for designing clinical practice guidelines. The manual was eventually published by the American College of Physicians. Eddy first published the term 'evidence-based' in March 1990, in an article in the Journal of the American Medical Association that laid out the principles of evidence-based guidelines and population-level policies, which Eddy described as \"explicitly describing the available evidence that pertains to a policy and tying the policy to evidence instead of standard-of-care practices or the beliefs of experts. The pertinent evidence must be identified, described, and analyzed. The policymakers must determine whether the policy is justified by the evidence. A rationale must be written.\" He discussed evidence-based policies in several other papers published in JAMA in the spring of 1990. Those papers were part of a series of 28 published in JAMA between 1990 and 1997 on formal methods for designing population-level guidelines and policies.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 10,
"text": "The term 'evidence-based medicine' was introduced slightly later, in the context of medical education. In the autumn of 1990, Gordon Guyatt used it in an unpublished description of a program at McMaster University for prospective or new medical students. Guyatt and others first published the term two years later (1992) to describe a new approach to teaching the practice of medicine.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 11,
"text": "In 1996, David Sackett and colleagues clarified the definition of this tributary of evidence-based medicine as \"the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research.\" This branch of evidence-based medicine aims to make individual decision making more structured and objective by better reflecting the evidence from research. Population-based data are applied to the care of an individual patient, while respecting the fact that practitioners have clinical expertise reflected in effective and efficient diagnosis and thoughtful identification and compassionate use of individual patients' predicaments, rights, and preferences.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 12,
"text": "Between 1993 and 2000, the Evidence-Based Medicine Working Group at McMaster University published the methods to a broad physician audience in a series of 25 \"Users' Guides to the Medical Literature\" in JAMA. In 1995 Rosenberg and Donald defined individual-level, evidence-based medicine as \"the process of finding, appraising, and using contemporaneous research findings as the basis for medical decisions.\" In 2010, Greenhalgh used a definition that emphasized quantitative methods: \"the use of mathematical estimates of the risk of benefit and harm, derived from high-quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients.\"",
"title": "Background, history, and definition"
},
{
"paragraph_id": 13,
"text": "The two original definitions highlight important differences in how evidence-based medicine is applied to populations versus individuals. When designing guidelines applied to large groups of people in settings with relatively little opportunity for modification by individual physicians, evidence-based policymaking emphasizes that good evidence should exist to document a test's or treatment's effectiveness. In the setting of individual decision-making, practitioners can be given greater latitude in how they interpret research and combine it with their clinical judgment. In 2005, Eddy offered an umbrella definition for the two branches of EBM: \"Evidence-based medicine is a set of principles and methods intended to ensure that to the greatest extent possible, medical decisions, guidelines, and other types of policies are based on and consistent with good evidence of effectiveness and benefit.\"",
"title": "Background, history, and definition"
},
{
"paragraph_id": 14,
"text": "In the area of evidence-based guidelines and policies, the explicit insistence on evidence of effectiveness was introduced by the American Cancer Society in 1980. The U.S. Preventive Services Task Force (USPSTF) began issuing guidelines for preventive interventions based on evidence-based principles in 1984. In 1985, the Blue Cross Blue Shield Association applied strict evidence-based criteria for covering new technologies. Beginning in 1987, specialty societies such as the American College of Physicians, and voluntary health organizations such as the American Heart Association, wrote many evidence-based guidelines. In 1991, Kaiser Permanente, a managed care organization in the US, began an evidence-based guidelines program. In 1991, Richard Smith wrote an editorial in the British Medical Journal and introduced the ideas of evidence-based policies in the UK. In 1993, the Cochrane Collaboration created a network of 13 countries to produce systematic reviews and guidelines. In 1997, the US Agency for Healthcare Research and Quality (AHRQ, then known as the Agency for Health Care Policy and Research, or AHCPR) established Evidence-based Practice Centers (EPCs) to produce evidence reports and technology assessments to support the development of guidelines. In the same year, a National Guideline Clearinghouse that followed the principles of evidence-based policies was created by AHRQ, the AMA, and the American Association of Health Plans (now America's Health Insurance Plans). In 1999, the National Institute for Clinical Excellence (NICE) was created in the UK.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 15,
"text": "In the area of medical education, medical schools in Canada, the US, the UK, Australia, and other countries now offer programs that teach evidence-based medicine. A 2009 study of UK programs found that more than half of UK medical schools offered some training in evidence-based medicine, although the methods and content varied considerably, and EBM teaching was restricted by lack of curriculum time, trained tutors and teaching materials. Many programs have been developed to help individual physicians gain better access to evidence. For example, UpToDate was created in the early 1990s. The Cochrane Collaboration began publishing evidence reviews in 1993. In 1995, BMJ Publishing Group launched Clinical Evidence, a 6-monthly periodical that provided brief summaries of the current state of evidence about important clinical questions for clinicians.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 16,
"text": "By 2000, use of the term evidence-based had extended to other levels of the health care system. An example is evidence-based health services, which seek to increase the competence of health service decision makers and the practice of evidence-based medicine at the organizational or institutional level.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 17,
"text": "The multiple tributaries of evidence-based medicine share an emphasis on the importance of incorporating evidence from formal research in medical policies and decisions. However, because they differ on the extent to which they require good evidence of effectiveness before promoting a guideline or payment policy, a distinction is sometimes made between evidence-based medicine and science-based medicine, which also takes into account factors such as prior plausibility and compatibility with established science (as when medical organizations promote controversial treatments such as acupuncture). Differences also exist regarding the extent to which it is feasible to incorporate individual-level information in decisions. Thus, evidence-based guidelines and policies may not readily \"hybridise\" with experience-based practices orientated towards ethical clinical judgement, and can lead to contradictions, contest, and unintended crises. The most effective \"knowledge leaders\" (managers and clinical leaders) use a broad range of management knowledge in their decision making, rather than just formal evidence. Evidence-based guidelines may provide the basis for governmentality in health care, and consequently play a central role in the governance of contemporary health care systems.",
"title": "Background, history, and definition"
},
{
"paragraph_id": 18,
"text": "The steps for designing explicit, evidence-based guidelines were described in the late 1980s: formulate the question (population, intervention, comparison intervention, outcomes, time horizon, setting); search the literature to identify studies that inform the question; interpret each study to determine precisely what it says about the question; if several studies address the question, synthesize their results (meta-analysis); summarize the evidence in evidence tables; compare the benefits, harms and costs in a balance sheet; draw a conclusion about the preferred practice; write the guideline; write the rationale for the guideline; have others review each of the previous steps; implement the guideline.",
"title": "Methods"
},
{
"paragraph_id": 19,
"text": "For the purposes of medical education and individual-level decision making, five steps of EBM in practice were described in 1992 and the experience of delegates attending the 2003 Conference of Evidence-Based Health Care Teachers and Developers was summarized into five steps and published in 2005. This five-step process can broadly be categorized as follows:",
"title": "Methods"
},
{
"paragraph_id": 20,
"text": "Systematic reviews of published research studies are a major part of the evaluation of particular treatments. The Cochrane Collaboration is one of the best-known organisations that conducts systematic reviews. Like other producers of systematic reviews, it requires authors to provide a detailed study protocol as well as a reproducible plan of their literature search and evaluations of the evidence. After the best evidence is assessed, treatment is categorized as (1) likely to be beneficial, (2) likely to be harmful, or (3) without evidence to support either benefit or harm.",
"title": "Methods"
},
{
"paragraph_id": 21,
"text": "A 2007 analysis of 1,016 systematic reviews from all 50 Cochrane Collaboration Review Groups found that 44% of the reviews concluded that the intervention was likely to be beneficial, 7% concluded that the intervention was likely to be harmful, and 49% concluded that evidence did not support either benefit or harm. 96% recommended further research. In 2017, a study assessed the role of systematic reviews produced by Cochrane Collaboration to inform US private payers' policymaking; it showed that although the medical policy documents of major US private payers were informed by Cochrane systematic reviews, there was still scope to encourage the further use.",
"title": "Methods"
},
{
"paragraph_id": 22,
"text": "Evidence-based medicine categorizes different types of clinical evidence and rates or grades them according to the strength of their freedom from the various biases that beset medical research. For example, the strongest evidence for therapeutic interventions is provided by systematic review of randomized, well-blinded, placebo-controlled trials with allocation concealment and complete follow-up involving a homogeneous patient population and medical condition. In contrast, patient testimonials, case reports, and even expert opinion have little value as proof because of the placebo effect, the biases inherent in observation and reporting of cases, and difficulties in ascertaining who is an expert (however, some critics have argued that expert opinion \"does not belong in the rankings of the quality of empirical evidence because it does not represent a form of empirical evidence\" and continue that \"expert opinion would seem to be a separate, complex type of knowledge that would not fit into hierarchies otherwise limited to empirical evidence alone.\").",
"title": "Methods"
},
{
"paragraph_id": 23,
"text": "Several organizations have developed grading systems for assessing the quality of evidence. For example, in 1989 the U.S. Preventive Services Task Force (USPSTF) put forth the following system:",
"title": "Methods"
},
{
"paragraph_id": 24,
"text": "Another example are the Oxford CEBM Levels of Evidence published by the Centre for Evidence-Based Medicine. First released in September 2000, the Levels of Evidence provide a way to rank evidence for claims about prognosis, diagnosis, treatment benefits, treatment harms, and screening, which most grading schemes do not address. The original CEBM Levels were Evidence-Based On Call to make the process of finding evidence feasible and its results explicit. In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence ranking schemes. The Oxford CEBM Levels of Evidence have been used by patients and clinicians, as well as by experts to develop clinical guidelines, such as recommendations for the optimal use of phototherapy and topical therapy in psoriasis and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada.",
"title": "Methods"
},
{
"paragraph_id": 25,
"text": "In 2000, a system was developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group. The GRADE system takes into account more dimensions than just the quality of medical research. It requires users who are performing an assessment of the quality of evidence, usually as part of a systematic review, to consider the impact of different factors on their confidence in the results. Authors of GRADE tables assign one of four levels to evaluate the quality of evidence, on the basis of their confidence that the observed effect (a numeric value) is close to the true effect. The confidence value is based on judgments assigned in five different domains in a structured manner. The GRADE working group defines 'quality of evidence' and 'strength of recommendations' based on the quality as two different concepts that are commonly confused with each other.",
"title": "Methods"
},
{
"paragraph_id": 26,
"text": "Systematic reviews may include randomized controlled trials that have low risk of bias, or observational studies that have high risk of bias. In the case of randomized controlled trials, the quality of evidence is high but can be downgraded in five different domains.",
"title": "Methods"
},
{
"paragraph_id": 27,
"text": "In the case of observational studies per GRADE, the quality of evidence starts off lower and may be upgraded in three domains in addition to being subject to downgrading.",
"title": "Methods"
},
{
"paragraph_id": 28,
"text": "Meaning of the levels of quality of evidence as per GRADE:",
"title": "Methods"
},
{
"paragraph_id": 29,
"text": "In guidelines and other publications, recommendation for a clinical service is classified by the balance of risk versus benefit and the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses the following system:",
"title": "Methods"
},
{
"paragraph_id": 30,
"text": "GRADE guideline panelists may make strong or weak recommendations on the basis of further criteria. Some of the important criteria are the balance between desirable and undesirable effects (not considering cost), the quality of the evidence, values and preferences and costs (resource utilization).",
"title": "Methods"
},
{
"paragraph_id": 31,
"text": "Despite the differences between systems, the purposes are the same: to guide users of clinical research information on which studies are likely to be most valid. However, the individual studies still require careful critical appraisal.",
"title": "Methods"
},
{
"paragraph_id": 32,
"text": "Evidence-based medicine attempts to express clinical benefits of tests and treatments using mathematical methods. Tools used by practitioners of evidence-based medicine include:",
"title": "Methods"
},
{
"paragraph_id": 33,
"text": "Evidence-based medicine attempts to objectively evaluate the quality of clinical research by critically assessing techniques reported by researchers in their publications.",
"title": "Methods"
},
{
"paragraph_id": 34,
"text": "There are a number of limitations and criticisms of evidence-based medicine. Two widely cited categorization schemes for the various published critiques of EBM include the three-fold division of Straus and McAlister (\"limitations universal to the practice of medicine, limitations unique to evidence-based medicine and misperceptions of evidence-based-medicine\") and the five-point categorization of Cohen, Stavri and Hersh (EBM is a poor philosophic basis for medicine, defines evidence too narrowly, is not evidence-based, is limited in usefulness when applied to individual patients, or reduces the autonomy of the doctor/patient relationship).",
"title": "Limitations and criticism"
},
{
"paragraph_id": 35,
"text": "In no particular order, some published objections include:",
"title": "Limitations and criticism"
},
{
"paragraph_id": 36,
"text": "A 2018 study, \"Why all randomised controlled trials produce biased results\", assessed the 10 most cited RCTs and argued that trials face a wide range of biases and constraints, from trials only being able to study a small set of questions amenable to randomisation and generally only being able to assess the average treatment effect of a sample, to limitations in extrapolating results to another context, among many others outlined in the study.",
"title": "Limitations and criticism"
},
{
"paragraph_id": 37,
"text": "Despite the emphasis on evidence-based medicine, unsafe or ineffective medical practices continue to be applied, because of patient demand for tests or treatments, because of failure to access information about the evidence, or because of the rapid pace of change in the scientific evidence. For example, between 2003 and 2017, the evidence shifted on hundreds of medical practices, including whether hormone replacement therapy was safe, whether babies should be given certain vitamins, and whether antidepressant drugs are effective in people with Alzheimer's disease. Even when the evidence unequivocally shows that a treatment is either not safe or not effective, it may take many years for other treatments to be adopted.",
"title": "Application of evidence in clinical settings"
},
{
"paragraph_id": 38,
"text": "There are many factors that contribute to lack of uptake or implementation of evidence-based recommendations. These include lack of awareness at the individual clinician or patient (micro) level, lack of institutional support at the organisation level (meso) level or higher at the policy (macro) level. In other cases, significant change can require a generation of physicians to retire or die and be replaced by physicians who were trained with more recent evidence.",
"title": "Application of evidence in clinical settings"
},
{
"paragraph_id": 39,
"text": "Physicians may also reject evidence that conflicts with their anecdotal experience or because of cognitive biases – for example, a vivid memory of a rare but shocking outcome (the availability heuristic), such as a patient dying after refusing treatment. They may overtreat to \"do something\" or to address a patient's emotional needs. They may worry about malpractice charges based on a discrepancy between what the patient expects and what the evidence recommends. They may also overtreat or provide ineffective treatments because the treatment feels biologically plausible.",
"title": "Application of evidence in clinical settings"
},
{
"paragraph_id": 40,
"text": "It is the responsibility of those developing clinical guidelines to include an implementation plan to facilitate uptake. The implementation process will include an implementation plan, analysis of the context, identifying barriers and facilitators and designing the strategies to address them.",
"title": "Application of evidence in clinical settings"
},
{
"paragraph_id": 41,
"text": "Training in evidence based medicine is offered across the continuum of medical education. Educational competencies have been created for the education of health care professionals.",
"title": "Education"
},
{
"paragraph_id": 42,
"text": "The Berlin questionnaire and the Fresno Test are validated instruments for assessing the effectiveness of education in evidence-based medicine. These questionnaires have been used in diverse settings.",
"title": "Education"
},
{
"paragraph_id": 43,
"text": "A Campbell systematic review that included 24 trials examined the effectiveness of e-learning in improving evidence-based health care knowledge and practice. It was found that e-learning, compared to no learning, improves evidence-based health care knowledge and skills but not attitudes and behaviour. No difference in outcomes is present when comparing e-learning with face-to-face learning. Combining e-learning and face-to-face learning (blended learning) has a positive impact on evidence-based knowledge, skills, attitude and behavior. As a form of e-learning, some medical school students engage in editing Wikipedia to increase their EBM skills, and some students construct EBM materials to develop their skills in communicating medical knowledge.",
"title": "Education"
}
]
| Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients. The EBM Pyramid is a tool that helps in visualizing the hierarchy of evidence in medicine, from least authoritative, like expert opinions, to most authoritative, like systematic reviews. | 2001-10-31T00:33:29Z | 2023-12-09T18:21:07Z | [
"Template:Short description",
"Template:Evidence-based practices",
"Template:Blockquote",
"Template:Main",
"Template:Webarchive",
"Template:Health care quality",
"Template:Portal",
"Template:Cite journal",
"Template:Curlie",
"Template:Medical research studies",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite web",
"Template:Clear",
"Template:Medicine",
"Template:Authority control",
"Template:Redirect",
"Template:Refend",
"Template:Open access",
"Template:Use dmy dates",
"Template:Columns-list",
"Template:Refbegin",
"Template:Which",
"Template:Cite book",
"Template:Cite news",
"Template:Evidence-based practice",
"Template:Wiktionary"
]
| https://en.wikipedia.org/wiki/Evidence-based_medicine |
10,016 | End zone | The end zone is the scoring area on the field, according to gridiron-based codes of football. It is the area between the end line and goal line bounded by the sidelines. There are two end zones, each being on an opposite side of the field. It is bordered on all sides by a white line indicating its beginning and end points, with orange, square pylons placed at each of the four corners as a visual aid (however, prior to around the early 1970s, flags were used instead to denote the end zone). Canadian rule books use the terms goal area and dead line instead of end zone and end line respectively, but the latter terms are the more common in colloquial Canadian English. Unlike sports like association football and ice hockey which require the ball/puck to pass completely over the goal line to count as a score, both Canadian and American football merely need any part of the ball to break the vertical plane of the outer edge of the goal line.
A similar concept exists in both rugby football codes, where it is known as the in-goal area. The difference between rugby and gridiron-based codes is that in rugby, the ball must be touched to the ground in the in-goal area to count as a try (the rugby equivalent of a touchdown), whereas in the gridiron-based games, simply possessing the ball in or over the end zone is sufficient to count as a touchdown.
Ultimate frisbee also uses an end zone scoring area. Scores in this sport are counted when a pass is received in the end zone.
The end zones were invented as a result of the legalization of the forward pass in gridiron football. Prior to this, the goal line and end line were the same, and players scored a touchdown by leaving the field of play through that line. Goal posts were placed on the goal line, and any kicks that did not result in field goals but left the field through the end lines were simply recorded as touchbacks (or, in the Canadian game, singles; it was during the pre-end zone era that Hugh Gall set the record for most singles in a game, with eight).
In the earliest days of the forward pass, the pass had to be caught in-bounds and could not be thrown across the goal line (as the receiver would be out of bounds). This also made it difficult to pass the ball when very close to one's own goal line, since merely dropping back to pass or kick would result in a safety (rules of the forward pass at the time required the passer to be five yards behind the line of scrimmage, which would make throwing the forward pass when the ball was snapped from behind one's own five-yard line illegal in itself).
Thus, in 1912, the end zone was introduced in American football. In an era when professional football was still in its early years and college football dominated the game, the resulting enlargement of the field was constrained by the fact that many college teams were already playing in well-developed stadiums, complete with stands and other structures at the ends of the fields, thereby making any substantial enlargement of the field unfeasible at many schools. Eventually, a compromise was reached: 10 yards of end zone were added to each end of the field, but in return, the playing field was shortened from 110 yards to 100, resulting in the physical size of the field being only slightly longer than before. Goal posts were originally kept on the goal lines, but after they began to interfere with play, they were moved back to the end lines in 1927, where they have remained in college football ever since. The National Football League moved the goal posts up to the goal line again in 1933, then back again to the end line in 1974.
As with many other aspects of gridiron football, Canadian football adopted the forward pass and end zones much later than American football. The forward pass and end zones were adopted in 1929. In Canada, college football has never reached a level of prominence comparable to U.S. college football, and professional football was still in its infancy in the 1920s. As a result, Canadian football was still being played in rudimentary facilities in the late 1920s. A further consideration was that the Canadian Rugby Union (the governing body of Canadian football at the time, now known as Football Canada) wanted to reduce the prominence of single points (then called rouges) in the game. Therefore, the CRU simply appended 25-yard end zones to the ends of the existing 110-yard field, creating a much larger field of play. Since moving the goal posts back 25 yards would have made the scoring of field goals excessively difficult, and since the CRU did not want to reduce the prominence of field goals, the goal posts were left on the goal line where they remain today. However, the rules governing the scoring of singles were changed: teams were required to either kick the ball out of bounds through the end zone or force the opposition to down a kicked ball in their own end zone in order to be awarded a point. By 1986, at which point CFL stadiums were becoming bigger and comparable in development to their American counterparts in an effort to stay financially competitive, the CFL reduced the depth of the end zone to 20 yards.
A team scores a touchdown by entering its opponent's end zone while carrying the ball or catching the ball while being within the end zone. If the ball is carried by a player, it is considered a score when any part of the ball is directly above or beyond any part of the goal line between the pylons. In addition, a two-point conversion may be scored after a touchdown by the same means.
In Ultimate Frisbee, a goal is scored by completing a pass into the end zone.
The end zone in American football is 10 yards long by 53⅓ yards (160 feet) wide. Each of its four corners is marked with a pylon.
A full-sized end zone in Canadian football is 20 yards long by 65 yards wide. Prior to the 1980s, the Canadian end zone was 25 yards long. The first stadium to use the 20-yard-long end zone was B.C. Place in Vancouver, which was completed in 1983. The floor of B.C. Place was (and is) too short to accommodate a field 160 yards in length. The shorter end zone proved popular enough that the CFL adopted it league-wide in 1986. At BMO Field, home to the Toronto Argonauts, the end zones are only 18 yards. Like their American counterparts, Canadian endzones are marked with four pylons.
In Canadian football stadiums that also feature a running track, it is usually necessary to truncate the back corners of the end zones, since a rectangular field 150 yards long and 65 yards wide will not fit completely inside an oval-shaped running track. Such truncations are marked as straight diagonal lines, resulting in an end zone with six corners and six pylons. As of 2019, Montreal's Percival Molson Stadium is the only CFL stadium that has the rounded-off end zones.
During the CFL's failed American expansion in the mid-1990s, several stadiums, by necessity, used 15-yard end zones (some had end zones that were even shorter than 15 yards); only Baltimore and San Antonio had the endzones at the standard 20 yards.
Ultimate Frisbee uses an end zone 40 yards wide and 20 yards deep (37 m × 18 m).
The location and dimensions of a goal post differ from league to league, but it is usually within the boundaries of the end zone. In earlier football games (both professional and collegiate), the goal post began at the goal line, and was usually an H-shaped bar. Nowadays, for player safety reasons, almost all goal posts in the professional and collegiate levels of American football are T-shaped (resembling a slingshot), and reside just outside the rear of both end zones; these goalposts were first seen in 1966 and were invented by Jim Trimble and Joel Rottman in Montreal, Quebec, Canada.
The goal posts in Canadian football still reside on the goal line instead of the back of the end zones, partly because the number of field goal attempts would dramatically decrease if the posts were moved 20 yards back in that sport, and also because the larger end zone and wider field makes the resulting interference in play by the goal post a less serious problem.
At the high school level, it is not uncommon to see multi-purpose goal posts that include football goal posts at the top and a soccer net at the bottom; these are usually seen at smaller schools and in multi-purpose stadiums where facilities are used for multiple sports. When these or H-shaped goal posts are used in football, the lower portions of the posts are covered with several inches of heavy foam padding to protect the safety of the players.
Most professional and collegiate teams have their logo, team name, or both painted on the surface of the end zone, with team colors filling the background. Many championship and bowl games at college and professional level are commemorated by the names of the opposing teams each being painted in one of the opposite end zones. In some leagues, along with bowl games, local, national, or bowl game sponsors may also have their logos placed in the end zone. In the CFL, fully painted end zones are nonexistent, though some feature club logos or sponsors. Additionally, the Canadian end zone, being a live-ball part of the field, often features yardage dashes (usually marked every five yards), not unlike the field of play itself.
In many places, particularly in smaller high schools and colleges, end zones are undecorated, or have plain white diagonal stripes spaced several yards apart, in lieu of colors and decorations. One notable user of this design in major college football is the Notre Dame Fighting Irish, who have both end zones at Notre Dame Stadium painted with diagonal white lines. In professional football, since 2004, the Pittsburgh Steelers of the NFL have had the south end zone at Acrisure Stadium (formerly Heinz Field) painted with diagonal lines during most of the regular season, with the north end zone featuring only the city name of Pittsburgh in yellow. This is done because Acrisure Stadium, which has a natural grass playing surface, is also home to the Pittsburgh Panthers of college football, and the markings simplify field conversion between the two teams' respective field markings and logos, with both teams sharing a secondary yellow color but each having different primary colors. After the Panthers' season is over, the Steelers logo is painted in the south end zone.
Likewise, some end zones are painted in tribute to a recently deceased team figure or fan, as is done with the Steelers' AFC North rival Baltimore Ravens at M&T Bank Stadium, where the city name is painted as usual in the end zone, except for the "MO" portion, which is painted in gold or white in tribute to the late Mo Gaba, a young fan of both the Ravens and Orioles.
One of the major quirks of the American Football League was its use of unusual patterns such as argyle in its end zones, a tradition revived in 2009 by the Denver Broncos, itself a former AFL team. The original XFL standardized its playing fields so that all eight of its teams had uniform fields with the XFL logo in each end zone and no team identification. | [
{
"paragraph_id": 0,
"text": "The end zone is the scoring area on the field, according to gridiron-based codes of football. It is the area between the end line and goal line bounded by the sidelines. There are two end zones, each being on an opposite side of the field. It is bordered on all sides by a white line indicating its beginning and end points, with orange, square pylons placed at each of the four corners as a visual aid (however, prior to around the early 1970s, flags were used instead to denote the end zone). Canadian rule books use the terms goal area and dead line instead of end zone and end line respectively, but the latter terms are the more common in colloquial Canadian English. Unlike sports like association football and ice hockey which require the ball/puck to pass completely over the goal line to count as a score, both Canadian and American football merely need any part of the ball to break the vertical plane of the outer edge of the goal line.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A similar concept exists in both rugby football codes, where it is known as the in-goal area. The difference between rugby and gridiron-based codes is that in rugby, the ball must be touched to the ground in the in-goal area to count as a try (the rugby equivalent of a touchdown), whereas in the gridiron-based games, simply possessing the ball in or over the end zone is sufficient to count as a touchdown.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Ultimate frisbee also uses an end zone scoring area. Scores in this sport are counted when a pass is received in the end zone.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The end zones were invented as a result of the legalization of the forward pass in gridiron football. Prior to this, the goal line and end line were the same, and players scored a touchdown by leaving the field of play through that line. Goal posts were placed on the goal line, and any kicks that did not result in field goals but left the field through the end lines were simply recorded as touchbacks (or, in the Canadian game, singles; it was during the pre-end zone era that Hugh Gall set the record for most singles in a game, with eight).",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In the earliest days of the forward pass, the pass had to be caught in-bounds and could not be thrown across the goal line (as the receiver would be out of bounds). This also made it difficult to pass the ball when very close to one's own goal line, since merely dropping back to pass or kick would result in a safety (rules of the forward pass at the time required the passer to be five yards behind the line of scrimmage, which would make throwing the forward pass when the ball was snapped from behind one's own five-yard line illegal in itself).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Thus, in 1912, the end zone was introduced in American football. In an era when professional football was still in its early years and college football dominated the game, the resulting enlargement of the field was constrained by fact that many college teams were already playing in well-developed stadiums, complete with stands and other structures at the ends of the fields, thereby making any substantial enlargement of the field unfeasible at many schools. Eventually, a compromise was reached: 12 yards of end zone were added to each end of the field, but in return, the playing field was shortened from 110 yards to 100, resulting in the physical size of the field being only slightly longer than before. Goal posts were originally kept on the goal lines, but after they began to interfere with play, they moved back to the end lines in 1927, where they have remained in college football ever since. The National Football League moved the goal posts up to the goal line again in 1933, then back again to the end line in 1974.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "As with many other aspects of gridiron football, Canadian football adopted the forward pass and end zones much later than American football. The forward pass and end zones were adopted in 1929. In Canada, college football has never reached a level of prominence comparable to U.S. college football, and professional football was still in its infancy in the 1920s. As a result, Canadian football was still being played in rudimentary facilities in the late 1920s. A further consideration was that the Canadian Rugby Union (the governing body of Canadian football at the time, now known as Football Canada) wanted to reduce the prominence of single points (then called rouges) in the game. Therefore, the CRU simply appended 25-yard end zones to the ends of the existing 110-yard field, creating a much larger field of play. Since moving the goal posts back 25 yards would have made the scoring of field goals excessively difficult, and since the CRU did not want to reduce the prominence of field goals, the goal posts were left on the goal line where they remain today. However, the rules governing the scoring of singles were changed: teams were required to either kick the ball out of bounds through the end zone or force the opposition to down a kicked ball in their own end zone in order to be awarded a point. By 1986, at which point CFL stadiums were becoming bigger and comparable in development to their American counterparts in an effort to stay financially competitive, the CFL reduced the depth of the end zone to 20 yards.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A team scores a touchdown by entering its opponent's end zone while carrying the ball or catching the ball while being within the end zone. If the ball is carried by a player, it is considered a score when any part of the ball is directly above or beyond any part of the goal line between the pylons. In addition, a two-point conversion may be scored after a touchdown by the same means.",
"title": "Scoring"
},
{
"paragraph_id": 8,
"text": "In Ultimate Frisbee, a goal is scored by completing a pass into the end zone.",
"title": "Scoring"
},
{
"paragraph_id": 9,
"text": "The end zone in American football is 10 yards long by 53+1⁄3 yards (160 feet) wide. Each corner is marked with a pylon (four apiece).",
"title": "Size"
},
{
"paragraph_id": 10,
"text": "A full-sized end zone in Canadian football is 20 yards long by 65 yards wide. Prior to the 1980s, the Canadian end zone was 25 yards long. The first stadium to use the 20-yard-long end zone was B.C. Place in Vancouver, which was completed in 1983. The floor of B.C. Place was (and is) too short to accommodate a field 160 yards in length. The shorter end zone proved popular enough that the CFL adopted it league-wide in 1986. At BMO Field, home to the Toronto Argonauts, the end zones are only 18 yards. Like their American counterparts, Canadian endzones are marked with four pylons.",
"title": "Size"
},
{
"paragraph_id": 11,
"text": "In Canadian football stadiums that also feature a running track, it is usually necessary to truncate the back corners of the end zones, since a rectangular field 150 yards long and 65 yards wide will not fit completely inside an oval-shaped running track. Such truncations are marked as straight diagonal lines, resulting in an end zone with six corners and six pylons. As of 2019, Montreal's Percival Molson Stadium is the only CFL stadium that has the rounded-off end zones.",
"title": "Size"
},
{
"paragraph_id": 12,
"text": "During the CFL's failed American expansion in the mid-1990s, several stadiums, by necessity, used 15-yard end zones (some had end zones that were even shorter than 15 yards); only Baltimore and San Antonio had the endzones at the standard 20 yards.",
"title": "Size"
},
{
"paragraph_id": 13,
"text": "Ultimate Frisbee uses an end zone 40 yards wide and 20 yards deep (37 m × 18 m).",
"title": "Size"
},
{
"paragraph_id": 14,
"text": "The location and dimensions of a goal post differ from league to league, but it is usually within the boundaries of the end zone. In earlier football games (both professional and collegiate), the goal post began at the goal line, and was usually an H-shaped bar. Nowadays, for player safety reasons, almost all goal posts in the professional and collegiate levels of American football are T-shaped (resembling a slingshot), and reside just outside the rear of both end zones; these goalposts were first seen in 1966 and were invented by Jim Trimble and Joel Rottman in Montreal, Quebec, Canada.",
"title": "The goal post"
},
{
"paragraph_id": 15,
"text": "The goal posts in Canadian football still reside on the goal line instead of the back of the end zones, partly because the number of field goal attempts would dramatically decrease if the posts were moved 20 yards back in that sport, and also because the larger end zone and wider field makes the resulting interference in play by the goal post a less serious problem.",
"title": "The goal post"
},
{
"paragraph_id": 16,
"text": "At the high school level, it is not uncommon to see multi-purpose goal posts that include football goal posts at the top and a soccer net at the bottom; these are usually seen at smaller schools and in multi-purpose stadiums where facilities are used for multiple sports. When these or H-shaped goal posts are used in football, the lower portions of the posts are covered with several inches of heavy foam padding to protect the safety of the players.",
"title": "The goal post"
},
{
"paragraph_id": 17,
"text": "Most professional and collegiate teams have their logo, team name, or both painted on the surface of the end zone, with team colors filling the background. Many championship and bowl games at college and professional level are commemorated by the names of the opposing teams each being painted in one of the opposite end zones. In some leagues, along with bowl games, local, national, or bowl game sponsors may also have their logos placed in the end zone. In the CFL, fully painted end zones are nonexistent, though some feature club logos or sponsors. Additionally, the Canadian end zone, being a live-ball part of the field, often features yardage dashes (usually marked every five yards), not unlike the field of play itself.",
"title": "Decoration"
},
{
"paragraph_id": 18,
"text": "In many places, particularly in smaller high schools and colleges, end zones are undecorated, or have plain white diagonal stripes spaced several yards apart, in lieu of colors and decorations. One notable use of this design in major college football is the Notre Dame Fighting Irish, who have both end zones at Notre Dame Stadium painted with diagonal white lines. In professional football, since 2004, the Pittsburgh Steelers of the NFL have the south end zone at Acrisure Stadium (formerly Heinz Field) painted with diagonal-lines during most of the regular season, with the north end zone featuring only the city name of Pittsburgh in yellow. This is done because Acrisure Stadium, which has a natural grass playing surface, is also home to the Pittsburgh Panthers of college football and the markings simplify field conversion between the two teams' respective field markings and logos, with both teams sharing a secondary yellow color, but each having different primary colors. After the Panthers' season is over, the Steelers logo is painted in the south end zone.",
"title": "Decoration"
},
{
"paragraph_id": 19,
"text": "Likewise, some end zones are painted in tribute to a recently deceased team figure or fan, as is done with the Steelers' AFC North rival Baltimore Ravens at M&T Bank Stadium, where the city name is painted as usual in the end zone, except for the \"MO\" portion, which is painted in gold or white in tribute to the late Mo Gaba, a young fan of both the Ravens and Orioles.",
"title": "Decoration"
},
{
"paragraph_id": 20,
"text": "One of the major quirks of the American Football League was its use of unusual patterns such as argyle in its end zones, a tradition revived in 2009 by the Denver Broncos, itself a former AFL team. The original XFL standardized its playing fields so that all eight of its teams had uniform fields with the XFL logo in each end zone and no team identification.",
"title": "Decoration"
}
]
| The end zone is the scoring area on the field, according to gridiron-based codes of football. It is the area between the end line and goal line bounded by the sidelines. There are two end zones, each being on an opposite side of the field. It is bordered on all sides by a white line indicating its beginning and end points, with orange, square pylons placed at each of the four corners as a visual aid. Canadian rule books use the terms goal area and dead line instead of end zone and end line respectively, but the latter terms are the more common in colloquial Canadian English. Unlike sports like association football and ice hockey which require the ball/puck to pass completely over the goal line to count as a score, both Canadian and American football merely need any part of the ball to break the vertical plane of the outer edge of the goal line. A similar concept exists in both rugby football codes, where it is known as the in-goal area. The difference between rugby and gridiron-based codes is that in rugby, the ball must be touched to the ground in the in-goal area to count as a try, whereas in the gridiron-based games, simply possessing the ball in or over the end zone is sufficient to count as a touchdown. Ultimate frisbee also uses an end zone scoring area. Scores in this sport are counted when a pass is received in the end zone. | 2001-10-31T06:04:17Z | 2023-12-28T09:53:55Z | [
"Template:Commons",
"Template:Cite news",
"Template:Webarchive",
"Template:American football concepts",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite web",
"Template:Short description",
"Template:About",
"Template:More citations needed",
"Template:Frac"
]
| https://en.wikipedia.org/wiki/End_zone |
10,017 | Ettore Ximenes | Ettore Ximenes (11 April 1855 – 20 December 1926) was an Italian sculptor.
Ettore Ximenes was born 11 April 1855 in Palermo, Italy. Son of Antonio Ximenes and Giulia Tolentino, a Sicilian noble woman, Ettore Ximenes initially embarked on literary studies but then took up sculpture and attended the courses at the Palermo Academy of Fine Arts. After 1872, he continued training at the Naples Academy under Domenico Morelli and Stanislao Lista. He also established a close relationship with Vincenzo Gemito.
He returned to Palermo in 1874 and won a competition for a four-year grant, which enabled him to study and open a studio for sculpture in Florence. In 1873 at Vienna, he exhibited Work without Genius. In 1877 at Naples, he exhibited a life-size statue titled The Equilibrium about a gymnast walking on a sphere. He would make copies of this work in small marble and bronze statuettes.
He exhibited a stucco Christ and the Adulteress and Il cuore del re (Heart of the King), the latter depicting an oft-repeated story in which King Vittorio Emanuele, during one of his frequent hunts, encounters and offers charity to a peasant child. At the 1878 Paris World Exposition he displayed The Brawl and il Marmiton. In Paris, he met with Auguste Rodin and Jean-Baptiste Carpeaux.
In 1878, he also completed a life-size stucco of il Ciceruacchio, a statue of the Italian patriot Angelo Brunetti and his thirteen-year-old son, depicting them at the moment of their execution in 1849 by Austrian troops. The Ciceruacchio statue, with its tinge of revolutionary zeal, did not attract a commission to complete the work in marble.
He then completed a nude statue of Nanà based on the novel by Émile Zola; the statue was exhibited at the 1879 Paris Salon. The next year at the Paris Salon, he displayed La Pesca meravigliosa, in which a fisherman rescues a bathing maiden. Returning to Italy, he displayed a bust of the minister Giuseppe Zanardelli. At the Rome Exhibition, he displayed The Assassination of Julius Caesar; and at the Venice Exposition, Ragazzi messi in fila. Ximenes' realism gave way to Symbolist and Neo-Renaissance elements. In addition to sculpture, he also produced illustrations for the works of Edmondo De Amicis published by the Treves publishing house.
Ximenes was involved in many of the major official monumental projects in Italy from the 1880s on and devoted his energies as from 1911 primarily to commissions for important public works in São Paulo, Kyiv, New York and Buenos Aires. | [
{
"paragraph_id": 0,
"text": "Ettore Ximenes (11 April 1855 – 20 December 1926) was an Italian sculptor.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Ettore Ximenes was born 11 April 1855 in Palermo, Italy. Son of Antonio Ximenes and Giulia Tolentino, a Sicilian noble woman, Ettore Ximenes initially embarked on literary studies but then took up sculpture and attended the courses at the Palermo Academy of Fine Arts. After 1872, he continued training at the Naples Academy under Domenico Morelli and Stanislao Lista. He also established a close relationship with Vincenzo Gemito.",
"title": "Biography"
},
{
"paragraph_id": 2,
"text": "He returned to Palermo in 1874 and won a competition for a four-year grant, which enabled him to study and open a studio for sculpture in Florence. In 1873 at Vienna, he exhibited Work without Genius. In 1877 at Naples, he exhibited a life-size statue titled The Equilibrium about a gymnast walking on a sphere. He would make copies of this work in small marble and bronze statuettes.",
"title": "Biography"
},
{
"paragraph_id": 3,
"text": "He exhibited a stucco Christ and the Adultress and Il cuore del re (Heart of the King), the latter depicting an oft-repeated story of King Vittorio Emanuele during one of his frequent hunts, encountering and offering charity to a peasant child. At the 1878 Paris World Exposition he displayed: The Brawl and il Marmiton. In Paris, he met with Auguste Rodin and Jean-Baptiste Carpeaux.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "In 1878, he also completed a life-size stucco of il Ciceruacchio, a statue of the Italian patriot Angelo Brunetti and his thirteen-year-old son, depicting them at the moment of their execution in 1849 by Austrian troops. The Cicervacchio statue, with its tinge of revolutionary zeal, did not find commissions for completing the work in marble.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "He then completed a nude statue of Nanà based on the novel by Émile Zola; the statue was exhibited at the 1879 Salon di Paris. The next year at the Paris Salon, he displayed La Pesca meravigliosa, where a fisherman rescues a bathing maiden. Returning to Italy, he displayed the bust del minister Giuseppe Zanardelli. At the Mostra of Rome, he displayed The assassination of Julius Caesar; and at the Exposition of Venice, Ragazzi messi in fila. Ximenes' realism gave way to Symbolist and Neo-Renaissance elements. In addition to sculpture, he also produced illustrations for the works of Edmondo De Amicis published by the Treves publishing house.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "Ximenes was involved in many of the major official monumental projects in Italy from the 1880s on and devoted his energies as from 1911 primarily to commissions for important public works in São Paulo, Kyiv, New York and Buenos Aires.",
"title": "Biography"
}
]
| Ettore Ximenes was an Italian sculptor. | 2001-10-31T16:26:51Z | 2023-10-01T02:15:32Z | [
"Template:Commons category",
"Template:Authority control",
"Template:Short description",
"Template:Infobox artist",
"Template:Ndash",
"Template:Reflist",
"Template:Webarchive"
]
| https://en.wikipedia.org/wiki/Ettore_Ximenes |
10,018 | Edsger W. Dijkstra | Edsger Wybe Dijkstra (/ˈdaɪkstrə/ DYKE-strə; Dutch: [ˈɛtsxər ˈʋibə ˈdɛikstra] ; 11 May 1930 – 6 August 2002) was a Dutch computer scientist, programmer, software engineer, and science essayist.
Born in Rotterdam, the Netherlands, Dijkstra studied mathematics and physics and then theoretical physics at the University of Leiden. Adriaan van Wijngaarden offered him a job as the first computer programmer in the Netherlands at the Mathematical Center in Amsterdam, where he worked from 1952 until 1962. He formulated and solved the shortest path problem in 1956, and in 1960 developed the first compiler for the programming language ALGOL 60 in conjunction with colleague Jaap Zonneveld [nl]. In 1962 he moved to Eindhoven, and later to Nuenen, where he became a professor in the Mathematics Department at the Technische Hogeschool Eindhoven. In the late 1960s he built the THE multiprogramming system, which influenced the designs of subsequent systems through its use of software-based paged virtual memory. Dijkstra joined Burroughs Corporation as its sole research fellow in August 1973. The Burroughs years saw him at his most prolific in output of research articles. He wrote nearly 500 documents in the "EWD" series, most of them technical reports, for private circulation within a select group.
Dijkstra accepted the Schlumberger Centennial Chair in the Computer Science Department at the University of Texas at Austin in 1984, working in Austin, Texas until his retirement in November 1999. He and his wife returned from Austin to his original house in Nuenen, where he died on 6 August 2002 after a long struggle with cancer.
He received the 1972 Turing Award for fundamental contributions to developing structured programming languages. Shortly before his death, he received the ACM PODC Influential Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize the following year, in his honor.
Edsger W. Dijkstra was born in Rotterdam. His father was a chemist who was president of the Dutch Chemical Society; he taught chemistry at a secondary school and was later its superintendent. His mother was a mathematician, but never had a formal job.
Dijkstra had considered a career in law and had hoped to represent the Netherlands in the United Nations. However, after graduating from school in 1948, at his parents' suggestion he studied mathematics and physics and then theoretical physics at the University of Leiden.
In the early 1950s, electronic computers were a novelty. Dijkstra stumbled on his career by accident, and through his supervisor, Professor Johannes Haantjes [nl], he met Adriaan van Wijngaarden, the director of the Computation Department at the Mathematical Center in Amsterdam, who offered Dijkstra a job; he officially became the Netherlands' first "programmer" in March 1952.
For some time Dijkstra remained committed to physics, working on it in Leiden three days out of each week. With increasing exposure to computing, however, his focus began to shift. As he recalled:
After having programmed for some three years, I had a discussion with A. van Wijngaarden, who was then my boss at the Mathematical Center in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become....., yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed. Full of misgivings I knocked on Van Wijngaarden's office door, asking him whether I could "speak to him for a moment"; when I left his office a number of hours later, I was another person. For after having listened to my problems patiently, he agreed that up till that moment there was not much of a programming discipline, but then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could not I be one of the persons called to make programming a respectable discipline in the years to come? This was a turning point in my life and I completed my study of physics formally as quickly as I could.
When Dijkstra married Maria (Ria) C. Debets in 1957, he was required as a part of the marriage rites to state his profession. He stated that he was a programmer, which was unacceptable to the authorities, there being no such profession then in The Netherlands.
In 1959, he received his PhD from the University of Amsterdam for a thesis entitled 'Communication with an Automatic Computer', devoted to a description of the assembly language designed for the first commercial computer developed in the Netherlands, the Electrologica X1. His thesis supervisor was Van Wijngaarden.
From 1952 until 1962, Dijkstra worked at the Mathematisch Centrum in Amsterdam, where he worked closely with Bram Jan Loopstra and Carel S. Scholten, who had been hired to build a computer. Their mode of interaction was disciplined: They would first decide upon the interface between the hardware and the software, by writing a programming manual. Then the hardware designers would have to be faithful to their part of the contract, while Dijkstra, the programmer, would write software for the nonexistent machine. Two of the lessons he learned from this experience were the importance of clear documentation, and that program debugging can be largely avoided through careful design. Dijkstra formulated and solved the shortest path problem for a demonstration at the official inauguration of the ARMAC computer in 1956. Because of the absence of journals dedicated to automatic computing, he did not publish the result until 1959.
At the Mathematical Center, Dijkstra and his colleague Jaap Zonneveld [nl] developed the first compiler for the programming language ALGOL 60 by August 1960, more than a year before a compiler was produced by another group. ALGOL 60 is known as a key advance in the rise of structured programming.
In 1962, Dijkstra moved to Eindhoven, and later to Nuenen, in the south of the Netherlands, where he became a professor in the Mathematics Department at the Eindhoven University of Technology. The university did not have a separate computer science department and the culture of the mathematics department did not particularly suit him. Dijkstra tried to build a group of computer scientists who could collaborate on solving problems. This was an unusual model of research for the Mathematics Department. In the late 1960s, he built the THE operating system (named for the university, then known as Technische Hogeschool Eindhoven), which has influenced the designs of subsequent operating systems through its use of software-based paged virtual memory.
Dijkstra joined Burroughs Corporation, a company known then for producing computers based on an innovative hardware architecture, as its research fellow in August 1973. His duties consisted of visiting some of the firm's research centers a few times a year and carrying on his own research, which he did in the smallest Burroughs research facility, namely, his study on the second floor of his house in Nuenen. In fact, Dijkstra was the only research fellow of Burroughs and worked for it from home, occasionally travelling to its branches in the United States. As a result, he reduced his appointment at the university to one day a week. That day, Tuesday, soon became known as the day of the famous 'Tuesday Afternoon Club', a seminar during which he discussed with his colleagues scientific articles, looking at all aspects: notation, organisation, presentation, language, content, etc. Shortly after he moved in 1984 to the University of Texas at Austin (USA), a new 'branch' of the Tuesday Afternoon Club emerged in Austin, Texas.
The Burroughs years saw him at his most prolific in output of research articles. He wrote nearly 500 documents in the EWD series (described below), most of them technical reports, for private circulation within a select group.
Dijkstra accepted the Schlumberger Centennial Chair in the Computer Science Department at the University of Texas at Austin in 1984.
Dijkstra worked in Austin until his retirement in November 1999. To mark the occasion and to celebrate his forty-plus years of seminal contributions to computing science, the Department of Computer Sciences organized a symposium, which took place on his 70th birthday in May 2000.
Dijkstra and his wife returned from Austin to his original house in Nuenen (Netherlands) where he found that he had only months to live. He said that he wanted to retire in Austin, Texas, but to die in the Netherlands. Dijkstra died on 6 August 2002 after a long struggle with cancer. According to officials at the University of Texas, the cause of death was cancer. He and his wife were survived by their three children: Marcus, Femke, and the computer scientist Rutger M. Dijkstra.
You can hardly blame M.I.T. for not taking notice of an obscure computer scientist in a small town in the Netherlands.
In the world of computing science, Dijkstra is well known as a "character". In the preface of his book A Discipline of Programming (1976) he stated the following: "For the absence of a bibliography I offer neither explanation nor apology." In fact, most of his articles and books have no references at all. This approach to references was deplored by some researchers. Dijkstra chose this way of working to preserve his self-reliance.
As a university professor for much of his life, Dijkstra saw teaching not just as a required activity but as a serious research endeavour. His approach to teaching was unconventional. His lecturing style has been described as idiosyncratic. When lecturing, the long pauses between sentences have often been attributed to the fact that English is not Dijkstra's first language. However the pauses also served as a way for him to think on his feet and he was regarded as a quick and deep thinker while engaged in the act of lecturing. His courses for students in Austin had little to do with computer science but they dealt with the presentation of mathematical proofs. At the beginning of each semester, he would take a photo of each of his students in order to memorize their names. He never followed a textbook, with the possible exception of his own while it was under preparation. When lecturing, he would write proofs in chalk on a blackboard rather than using overhead foils. He invited the students to suggest ideas, which he then explored, or refused to explore because they violated some of his tenets. He assigned challenging homework problems, and would study his students' solutions thoroughly. He conducted his final examinations orally, over a whole week. Each student was examined in Dijkstra's office or home, and an exam lasted several hours.
Dijkstra was also highly original in his way of assessing people's capacity for a job. When Vladimir Lifschitz came to Austin in 1990 for a job interview, Dijkstra gave him a puzzle. Lifschitz solved it and has been working in Austin since then.
He eschewed the use of computers in his own work for many decades. Even after he succumbed to his UT colleagues' encouragement and acquired a Macintosh computer, he used it only for e-mail and for browsing the World Wide Web. Dijkstra never wrote his articles using a computer. He preferred to rely on his typewriter and later on his Montblanc pen. Dijkstra's favorite writing instrument was the Montblanc Meisterstück fountain pen.
He had no use for word processors, believing that one should be able to write a letter or article without rough drafts, rewriting, or any significant editing. He would work it all out in his head before putting pen to paper, and once mentioned that when he was a physics student he would solve his homework problems in his head while walking the streets of Leiden. Most of Dijkstra's publications were written by him alone. He never had a secretary and took care of all his correspondence alone. When colleagues prepared a Festschrift for his sixtieth birthday, published by Springer-Verlag, he took the trouble to thank each of the 61 contributors separately, in a hand-written letter.
In The Humble Programmer (1972), Dijkstra wrote: "We must not forget that it is not our [computing scientists'] business to make programs, it is our business to design classes of computations that will display a desired behaviour."
Dijkstra also opposed the inclusion of software engineering under the umbrella of academic computer science. He wrote that, "As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory." And "software engineering has accepted as its charter 'How to program if you cannot.'"
Dijkstra led a modest lifestyle, to the point of being spartan. His and his wife's house in Nuenen was simple, small and unassuming. He did not own a television, a video player, or a mobile telephone, and did not go to the movies. He played the piano, and, while in Austin, liked to go to concerts. An enthusiastic listener of classical music, Dijkstra's favorite composer was Mozart.
Throughout Dijkstra's career, his work was characterized by elegance and economy. A prolific writer (especially as an essayist), Dijkstra authored more than 1,300 papers, many written by hand in his precise script. They were essays and parables; fairy tales and warnings; comprehensive explanation and pedagogical pretext. Most were about mathematics and computer science; others were trip reports that are more revealing about their author than about the people and places visited. It was his habit to copy each paper and circulate it to a small group of colleagues who would copy and forward the papers to another limited group of scientists.
Dijkstra was well known for his habit of carefully composing manuscripts with his fountain pen. The manuscripts are called EWDs, since Dijkstra numbered them with EWD, his initials, as a prefix. According to Dijkstra himself, the EWDs started when he moved from the Mathematical Centre in Amsterdam to the Eindhoven University of Technology (then Technische Hogeschool Eindhoven). After going to Eindhoven, Dijkstra experienced a writer's block for more than a year. He distributed photocopies of a new EWD among his colleagues. Many recipients photocopied and forwarded their copies, so the EWDs spread throughout the international computer science community. The topics were computer science and mathematics, and included trip reports, letters, and speeches. These short articles span a period of 40 years. Almost all EWDs appearing after 1972 were hand-written. They are rarely longer than 15 pages and are consecutively numbered. The last one, No. 1318, is from 14 April 2002. Within computer science they are known as the EWD reports, or, simply the EWDs. More than 1300 EWDs have been scanned, with a growing number transcribed to facilitate search, and are available online at the Dijkstra archive of the University of Texas.
His interest with simplicity came at an early age and under his mother's guidance. He once said he had asked his mother whether trigonometry was a difficult topic. She replied that he must learn all the formulas and that further, if he required more than five lines to prove something, he was on the wrong track.
Dijkstra was famous for his wit and eloquence, for his rudeness, abruptness, and often cruelty toward fellow professionals, and for his way with words, such as in his remark, "The question of whether Machines Can Think (…) is about as relevant as the question of whether Submarines Can Swim." His advice to a promising researcher, who asked how to select a topic for research, was the phrase: "Do only what only you can do". Dijkstra was also known for his vocal criticism and absence of social skills when interacting with colleagues. As an outspoken and critical visionary, he strongly opposed the teaching of BASIC.
In many of his more witty essays, Dijkstra described a fictional company of which he served as chairman. The company was called Mathematics, Inc., a company that he imagined having commercialized the production of mathematical theorems in the same way that software companies had commercialized the production of computer programs. He invented a number of activities and challenges of Mathematics Inc. and documented them in several papers in the EWD series. The imaginary company had produced a proof of the Riemann Hypothesis but then had great difficulties collecting royalties from mathematicians who had proved results assuming the Riemann Hypothesis. The proof itself was a trade secret. Many of the company's proofs were rushed out the door and then much of the company's effort had to be spent on maintenance. A more successful effort was the Standard Proof for Pythagoras' Theorem, that replaced the more than 100 incompatible existing proofs. Dijkstra described Mathematics Inc. as "the most exciting and most miserable business ever conceived". EWD 443 (1974) describes his fictional company as having over 75 percent of the world's market share.
Dijkstra won the Turing award in 1972 for his advocacy of structured programming, a programming paradigm that makes use of structured control flow as opposed to unstructured jumps to different sections in a program using Goto statements. His 1968 letter to the editor of Communications of ACM, "Go To statement considered harmful", caused a major debate. Modern programmers generally adhere to the paradigm of structured programming.
Among his most famous contributions to computer science is the shortest path algorithm, known as Dijkstra's algorithm, widely taught in modern computer science undergraduate courses. His other contributions included the shunting yard algorithm; the THE multiprogramming system, an important early example of structuring a system as a set of layers; the Banker's algorithm; and the semaphore construct for coordinating multiple processors and programs. Another concept formulated by Dijkstra in the field of distributed computing is that of self-stabilization – an alternative way to ensure the reliability of a system. Dijkstra's algorithm underlies the shortest-path-first (SPF) computation used in the routing protocols OSPF and IS-IS.
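As an illustration of the preceding paragraph, the following is a minimal Python sketch of Dijkstra's shortest path algorithm. It uses a binary-heap priority queue, a later refinement rather than Dijkstra's original array-based 1959 formulation, and the graph representation and names used here are illustrative assumptions, not part of the original article.

import heapq

def dijkstra(graph, source):
    # graph maps each node to a list of (neighbor, weight) pairs; weights must be non-negative.
    dist = {source: 0}
    heap = [(0, source)]  # entries are (shortest distance found so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path to this node was already settled
        for neighbor, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return dist

# Example: shortest distances from node "a" in a small weighted directed graph.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)], "c": [("d", 1)], "d": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}

The same greedy idea of repeatedly settling the closest unsettled node is what the SPF computation in OSPF and IS-IS performs over a router's link-state database.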
Among Dijkstra's awards and honors are:
In 1969, the British Computer Society (BCS) received approval for an award and fellowship, Distinguished Fellow of the British Computer Society (DFBCS), to be awarded under bylaw 7 of their royal charter. In 1971, the first election was made, to Dijkstra.
In 1990, on occasion of Dijkstra's 60th birthday, the Department of Computer Science (UTCS) at the University of Texas at Austin organized a two-day seminar in his honor. Speakers came from all over the United States and Europe, and a group of computer scientists contributed research articles which were edited into a book.
In 2002, the C&C Foundation of Japan recognized Dijkstra "for his pioneering contributions to the establishment of the scientific basis for computer software through creative research in basic software theory, algorithm theory, structured programming, and semaphores." Dijkstra was alive to receive notice of the award, but it was accepted by his family in an award ceremony after his death.
Shortly before his death in 2002, Dijkstra received the ACM PODC Influential-Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize (Edsger W. Dijkstra Prize in Distributed Computing) the following year, in his honor.
The Dijkstra Award for Outstanding Academic Achievement in Computer Science (Loyola University Chicago, Department of Computer Science) is named for Edsger W. Dijkstra. Beginning in 2005, this award recognizes the top academic performance by a graduating computer science major. Selection is based on GPA in all major courses and election by department faculty.
The Department of Computer Science (UTCS) at the University of Texas at Austin hosted the inaugural Edsger W. Dijkstra Memorial Lecture on 12 October 2010. Tony Hoare, Emeritus Professor at Oxford and Principal Researcher at Microsoft Research, was the speaker for the event. This lecture series was made possible by a generous grant from Schlumberger to honor the memory of Dijkstra. | [
{
"paragraph_id": 0,
"text": "Edsger Wybe Dijkstra (/ˈdaɪkstrə/ DYKE-strə; Dutch: [ˈɛtsxər ˈʋibə ˈdɛikstra] ; 11 May 1930 – 6 August 2002) was a Dutch computer scientist, programmer, software engineer, and science essayist.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Born in Rotterdam, the Netherlands, Dijkstra studied mathematics and physics and then theoretical physics at the University of Leiden. Adriaan van Wijngaarden offered him a job as the first computer programmer in the Netherlands at the Mathematical Center in Amsterdam, where he worked from 1952 until 1962. He formulated and solved the shortest path problem in 1956, and in 1960 developed the first compiler for the programming language ALGOL 60 in conjunction with colleague Jaap Zonneveld [nl]. In 1962 he moved to Eindhoven, and later to Nuenen, where he became a professor in the Mathematics Department at the Technische Hogeschool Eindhoven. In the late 1960s he built the THE multiprogramming system, which influenced the designs of subsequent systems through its use of software-based paged virtual memory. Dijkstra joined Burroughs Corporation as its sole research fellow in August 1973. The Burroughs years saw him at his most prolific in output of research articles. He wrote nearly 500 documents in the \"EWD\" series, most of them technical reports, for private circulation within a select group.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Dijkstra accepted the Schlumberger Centennial Chair in the Computer Science Department at the University of Texas at Austin in 1984, working in Austin, Texas until his retirement in November 1999. He and his wife returned from Austin to his original house in Nuenen, where he died on 6 August 2002 after a long struggle with cancer.",
"title": ""
},
{
"paragraph_id": 3,
"text": "He received the 1972 Turing Award for fundamental contributions to developing structured programming languages. Shortly before his death, he received the ACM PODC Influential Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize the following year, in his honor.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Edsger W. Dijkstra was born in Rotterdam. His father was a chemist who was president of the Dutch Chemical Society; he taught chemistry at a secondary school and was later its superintendent. His mother was a mathematician, but never had a formal job.",
"title": "Life and works"
},
{
"paragraph_id": 5,
"text": "Dijkstra had considered a career in law and had hoped to represent the Netherlands in the United Nations. However, after graduating from school in 1948, at his parents' suggestion he studied mathematics and physics and then theoretical physics at the University of Leiden.",
"title": "Life and works"
},
{
"paragraph_id": 6,
"text": "In the early 1950s, electronic computers were a novelty. Dijkstra stumbled on his career by accident, and through his supervisor, Professor Johannes Haantjes [nl], he met Adriaan van Wijngaarden, the director of the Computation Department at the Mathematical Center in Amsterdam, who offered Dijkstra a job; he officially became the Netherlands' first \"programmer\" in March 1952.",
"title": "Life and works"
},
{
"paragraph_id": 7,
"text": "For some time Dijkstra remained committed to physics, working on it in Leiden three days out of each week. With increasing exposure to computing, however, his focus began to shift. As he recalled:",
"title": "Life and works"
},
{
"paragraph_id": 8,
"text": "After having programmed for some three years, I had a discussion with A. van Wijngaarden, who was then my boss at the Mathematical Center in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become....., yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed. Full of misgivings I knocked on Van Wijngaarden's office door, asking him whether I could \"speak to him for a moment\"; when I left his office a number of hours later, I was another person. For after having listened to my problems patiently, he agreed that up till that moment there was not much of a programming discipline, but then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could not I be one of the persons called to make programming a respectable discipline in the years to come? This was a turning point in my life and I completed my study of physics formally as quickly as I could.",
"title": "Life and works"
},
{
"paragraph_id": 9,
"text": "When Dijkstra married Maria (Ria) C. Debets in 1957, he was required as a part of the marriage rites to state his profession. He stated that he was a programmer, which was unacceptable to the authorities, there being no such profession then in The Netherlands.",
"title": "Life and works"
},
{
"paragraph_id": 10,
"text": "In 1959, he received his PhD from the University of Amsterdam for a thesis entitled 'Communication with an Automatic Computer', devoted to a description of the assembly language designed for the first commercial computer developed in the Netherlands, the Electrologica X1. His thesis supervisor was Van Wijngaarden.",
"title": "Life and works"
},
{
"paragraph_id": 11,
"text": "From 1952 until 1962, Dijkstra worked at the Mathematisch Centrum in Amsterdam, where he worked closely with Bram Jan Loopstra and Carel S. Scholten, who had been hired to build a computer. Their mode of interaction was disciplined: They would first decide upon the interface between the hardware and the software, by writing a programming manual. Then the hardware designers would have to be faithful to their part of the contract, while Dijkstra, the programmer, would write software for the nonexistent machine. Two of the lessons he learned from this experience were the importance of clear documentation, and that program debugging can be largely avoided through careful design. Dijkstra formulated and solved the shortest path problem for a demonstration at the official inauguration of the ARMAC computer in 1956. Because of the absence of journals dedicated to automatic computing, he did not publish the result until 1959.",
"title": "Life and works"
},
{
"paragraph_id": 12,
"text": "At the Mathematical Center, Dijkstra and his colleague Jaap Zonneveld [nl] developed the first compiler for the programming language ALGOL 60 by August 1960, more than a year before a compiler was produced by another group. ALGOL 60 is known as a key advance in the rise of structured programming.",
"title": "Life and works"
},
{
"paragraph_id": 13,
"text": "In 1962, Dijkstra moved to Eindhoven, and later to Nuenen, in the south of the Netherlands, where he became a professor in the Mathematics Department at the Eindhoven University of Technology. The university did not have a separate computer science department and the culture of the mathematics department did not particularly suit him. Dijkstra tried to build a group of computer scientists who could collaborate on solving problems. This was an unusual model of research for the Mathematics Department. In the late 1960s, he built the THE operating system (named for the university, then known as Technische Hogeschool Eindhoven), which has influenced the designs of subsequent operating systems through its use of software-based paged virtual memory.",
"title": "Life and works"
},
{
"paragraph_id": 14,
"text": "Dijkstra joined Burroughs Corporation, a company known then for producing computers based on an innovative hardware architecture, as its research fellow in August 1973. His duties consisted of visiting some of the firm's research centers a few times a year and carrying on his own research, which he did in the smallest Burroughs research facility, namely, his study on the second floor of his house in Nuenen. In fact, Dijkstra was the only research fellow of Burroughs and worked for it from home, occasionally travelling to its branches in the United States. As a result, he reduced his appointment at the university to one day a week. That day, Tuesday, soon became known as the day of the famous 'Tuesday Afternoon Club', a seminar during which he discussed with his colleagues scientific articles, looking at all aspects: notation, organisation, presentation, language, content, etc. Shortly after he moved in 1984 to the University of Texas at Austin (USA), a new 'branch' of the Tuesday Afternoon Club emerged in Austin, Texas.",
"title": "Life and works"
},
{
"paragraph_id": 15,
"text": "The Burroughs years saw him at his most prolific in output of research articles. He wrote nearly 500 documents in the EWD series (described below), most of them technical reports, for private circulation within a select group.",
"title": "Life and works"
},
{
"paragraph_id": 16,
"text": "Dijkstra accepted the Schlumberger Centennial Chair in the Computer Science Department at the University of Texas at Austin in 1984.",
"title": "Life and works"
},
{
"paragraph_id": 17,
"text": "Dijkstra worked in Austin until his retirement in November 1999. To mark the occasion and to celebrate his forty-plus years of seminal contributions to computing science, the Department of Computer Sciences organized a symposium, which took place on his 70th birthday in May 2000.",
"title": "Life and works"
},
{
"paragraph_id": 18,
"text": "Dijkstra and his wife returned from Austin to his original house in Nuenen (Netherlands) where he found that he had only months to live. He said that he wanted to retire in Austin, Texas, but to die in the Netherlands. Dijkstra died on 6 August 2002 after a long struggle with cancer. According to officials at the University of Texas, the cause of death was cancer. He and his wife were survived by their three children: Marcus, Femke, and the computer scientist Rutger M. Dijkstra.",
"title": "Life and works"
},
{
"paragraph_id": 19,
"text": "You can hardly blame M.I.T. for not taking notice of an obscure computer scientist in a small town in the Netherlands.",
"title": "Personality"
},
{
"paragraph_id": 20,
"text": "In the world of computing science, Dijkstra is well known as a \"character\". In the preface of his book A Discipline of Programming (1976) he stated the following: \"For the absence of a bibliography I offer neither explanation nor apology.\" In fact, most of his articles and books have no references at all. This approach to references was deplored by some researchers. Dijkstra chose this way of working to preserve his self-reliance.",
"title": "Personality"
},
{
"paragraph_id": 21,
"text": "As a university professor for much of his life, Dijkstra saw teaching not just as a required activity but as a serious research endeavour. His approach to teaching was unconventional. His lecturing style has been described as idiosyncratic. When lecturing, the long pauses between sentences have often been attributed to the fact that English is not Dijkstra's first language. However the pauses also served as a way for him to think on his feet and he was regarded as a quick and deep thinker while engaged in the act of lecturing. His courses for students in Austin had little to do with computer science but they dealt with the presentation of mathematical proofs. At the beginning of each semester, he would take a photo of each of his students in order to memorize their names. He never followed a textbook, with the possible exception of his own while it was under preparation. When lecturing, he would write proofs in chalk on a blackboard rather than using overhead foils. He invited the students to suggest ideas, which he then explored, or refused to explore because they violated some of his tenets. He assigned challenging homework problems, and would study his students' solutions thoroughly. He conducted his final examinations orally, over a whole week. Each student was examined in Dijkstra's office or home, and an exam lasted several hours.",
"title": "Personality"
},
{
"paragraph_id": 22,
"text": "Dijkstra was also highly original in his way of assessing people's capacity for a job. When Vladimir Lifschitz came to Austin in 1990 for a job interview, Dijkstra gave him a puzzle. Lifschitz solved it and has been working in Austin since then.",
"title": "Personality"
},
{
"paragraph_id": 23,
"text": "He eschewed the use of computers in his own work for many decades. Even after he succumbed to his UT colleagues' encouragement and acquired a Macintosh computer, he used it only for e-mail and for browsing the World Wide Web. Dijkstra never wrote his articles using a computer. He preferred to rely on his typewriter and later on his Montblanc pen. Dijkstra's favorite writing instrument was the Montblanc Meisterstück fountain pen.",
"title": "Personality"
},
{
"paragraph_id": 24,
"text": "He had no use for word processors, believing that one should be able to write a letter or article without rough drafts, rewriting, or any significant editing. He would work it all out in his head before putting pen to paper, and once mentioned that when he was a physics student he would solve his homework problems in his head while walking the streets of Leiden. Most of Dijkstra's publications were written by him alone. He never had a secretary and took care of all his correspondence alone. When colleagues prepared a Festschrift for his sixtieth birthday, published by Springer-Verlag, he took the trouble to thank each of the 61 contributors separately, in a hand-written letter.",
"title": "Personality"
},
{
"paragraph_id": 25,
"text": "In The Humble Programmer (1972), Dijkstra wrote: \"We must not forget that it is not our [computing scientists'] business to make programs, it is our business to design classes of computations that will display a desired behaviour.\"",
"title": "Personality"
},
{
"paragraph_id": 26,
"text": "Dijkstra also opposed the inclusion of software engineering under the umbrella of academic computer science. He wrote that, \"As economics is known as \"The Miserable Science\", software engineering should be known as \"The Doomed Discipline\", doomed because it cannot even approach its goal since its goal is self-contradictory.\" And \"software engineering has accepted as its charter 'How to program if you cannot.'\"",
"title": "Personality"
},
{
"paragraph_id": 27,
"text": "Dijkstra led a modest lifestyle, to the point of being spartan. His and his wife's house in Nuenen was simple, small and unassuming. He did not own a television, a video player, or a mobile telephone, and did not go to the movies. He played the piano, and, while in Austin, liked to go to concerts. An enthusiastic listener of classical music, Dijkstra's favorite composer was Mozart.",
"title": "Personality"
},
{
"paragraph_id": 28,
"text": "Throughout Dijkstra's career, his work was characterized by elegance and economy. A prolific writer (especially as an essayist), Dijkstra authored more than 1,300 papers, many written by hand in his precise script. They were essays and parables; fairy tales and warnings; comprehensive explanation and pedagogical pretext. Most were about mathematics and computer science; others were trip reports that are more revealing about their author than about the people and places visited. It was his habit to copy each paper and circulate it to a small group of colleagues who would copy and forward the papers to another limited group of scientists.",
"title": "Essays and other writing"
},
{
"paragraph_id": 29,
"text": "Dijkstra was well known for his habit of carefully composing manuscripts with his fountain pen. The manuscripts are called EWDs, since Dijkstra numbered them with EWD, his initials, as a prefix. According to Dijkstra himself, the EWDs started when he moved from the Mathematical Centre in Amsterdam to the Eindhoven University of Technology (then Technische Hogeschool Eindhoven). After going to Eindhoven, Dijkstra experienced a writer's block for more than a year. He distributed photocopies of a new EWD among his colleagues. Many recipients photocopied and forwarded their copies, so the EWDs spread throughout the international computer science community. The topics were computer science and mathematics, and included trip reports, letters, and speeches. These short articles span a period of 40 years. Almost all EWDs appearing after 1972 were hand-written. They are rarely longer than 15 pages and are consecutively numbered. The last one, No. 1318, is from 14 April 2002. Within computer science they are known as the EWD reports, or, simply the EWDs. More than 1300 EWDs have been scanned, with a growing number transcribed to facilitate search, and are available online at the Dijkstra archive of the University of Texas.",
"title": "Essays and other writing"
},
{
"paragraph_id": 30,
"text": "His interest with simplicity came at an early age and under his mother's guidance. He once said he had asked his mother whether trigonometry was a difficult topic. She replied that he must learn all the formulas and that further, if he required more than five lines to prove something, he was on the wrong track.",
"title": "Essays and other writing"
},
{
"paragraph_id": 31,
"text": "Dijkstra was famous for his wit, eloquence, rudeness, abruptness and often cruelty to fellow professionals, and way with words, such as in his remark, \"The question of whether Machines Can Think (…) is about as relevant as the question of whether Submarines Can Swim.\" His advice to a promising researcher, who asked how to select a topic for research, was the phrase: \"Do only what only you can do\". Dijkstra was also known for his vocal criticism and absence of social skills when interacting with colleagues. As an outspoken and critical visionary, he strongly opposed the teaching of BASIC.",
"title": "Essays and other writing"
},
{
"paragraph_id": 32,
"text": "In many of his more witty essays, Dijkstra described a fictional company of which he served as chairman. The company was called Mathematics, Inc., a company that he imagined having commercialized the production of mathematical theorems in the same way that software companies had commercialized the production of computer programs. He invented a number of activities and challenges of Mathematics Inc. and documented them in several papers in the EWD series. The imaginary company had produced a proof of the Riemann Hypothesis but then had great difficulties collecting royalties from mathematicians who had proved results assuming the Riemann Hypothesis. The proof itself was a trade secret. Many of the company's proofs were rushed out the door and then much of the company's effort had to be spent on maintenance. A more successful effort was the Standard Proof for Pythagoras' Theorem, that replaced the more than 100 incompatible existing proofs. Dijkstra described Mathematics Inc. as \"the most exciting and most miserable business ever conceived\". EWD 443 (1974) describes his fictional company as having over 75 percent of the world's market share.",
"title": "Essays and other writing"
},
{
"paragraph_id": 33,
"text": "Dijkstra won the Turing award in 1972 for his advocacy of structured programming, a programming paradigm that makes use of structured control flow as opposed to unstructured jumps to different sections in a program using Goto statements. His 1968 letter to the editor of Communications of ACM, \"Go To statement considered harmful\", caused a major debate. Modern programmers generally adhere to the paradigm of structured programming.",
"title": "Legacy"
},
{
"paragraph_id": 34,
"text": "Among his most famous contributions to computer science is shortest path algorithm, known as Dijkstra's algorithm, widely taught in modern computer science undergraduate courses. His other contributions included the Shunting yard algorithm; the THE multiprogramming system, an important early example of structuring a system as a set of layers; the Banker's algorithm; and the semaphore construct for coordinating multiple processors and programs. Another concept formulated by Dijkstra in the field of distributed computing is that of self-stabilization – an alternative way to ensure the reliability of the system. Dijkstra's algorithm is used in SPF, Shortest Path First, which is used in the routing protocols OSPF and IS-IS.",
"title": "Legacy"
},
{
"paragraph_id": 35,
"text": "Among Dijkstra's awards and honors are:",
"title": "Awards and honors"
},
{
"paragraph_id": 36,
"text": "In 1969, the British Computer Society (BCS) received approval for an award and fellowship, Distinguished Fellow of the British Computer Society (DFBCS), to be awarded under bylaw 7 of their royal charter. In 1971, the first election was made, to Dijkstra.",
"title": "Awards and honors"
},
{
"paragraph_id": 37,
"text": "In 1990, on occasion of Dijkstra's 60th birthday, the Department of Computer Science (UTCS) at the University of Texas at Austin organized a two-day seminar in his honor. Speakers came from all over the United States and Europe, and a group of computer scientists contributed research articles which were edited into a book.",
"title": "Awards and honors"
},
{
"paragraph_id": 38,
"text": "In 2002, the C&C Foundation of Japan recognized Dijkstra \"for his pioneering contributions to the establishment of the scientific basis for computer software through creative research in basic software theory, algorithm theory, structured programming, and semaphores.\" Dijkstra was alive to receive notice of the award, but it was accepted by his family in an award ceremony after his death.",
"title": "Awards and honors"
},
{
"paragraph_id": 39,
"text": "Shortly before his death in 2002, Dijkstra received the ACM PODC Influential-Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize (Edsger W. Dijkstra Prize in Distributed Computing) the following year, in his honor.",
"title": "Awards and honors"
},
{
"paragraph_id": 40,
"text": "The Dijkstra Award for Outstanding Academic Achievement in Computer Science (Loyola University Chicago, Department of Computer Science) is named for Edsger W. Dijkstra. Beginning in 2005, this award recognizes the top academic performance by a graduating computer science major. Selection is based on GPA in all major courses and election by department faculty.",
"title": "Awards and honors"
},
{
"paragraph_id": 41,
"text": "The Department of Computer Science (UTCS) at the University of Texas at Austin hosted the inaugural Edsger W. Dijkstra Memorial Lecture on 12 October 2010. Tony Hoare, Emeritus Professor at Oxford and Principal Researcher at Microsoft Research, was the speaker for the event. This lecture series was made possible by a generous grant from Schlumberger to honor the memory of Dijkstra.",
"title": "Awards and honors"
},
{
"paragraph_id": 42,
"text": "",
"title": "External links"
}
]
| Edsger Wybe Dijkstra was a Dutch computer scientist, programmer, software engineer, and science essayist. Born in Rotterdam, the Netherlands, Dijkstra studied mathematics and physics and then theoretical physics at the University of Leiden. Adriaan van Wijngaarden offered him a job as the first computer programmer in the Netherlands at the Mathematical Center in Amsterdam, where he worked from 1952 until 1962. He formulated and solved the shortest path problem in 1956, and in 1960 developed the first compiler for the programming language ALGOL 60 in conjunction with colleague Jaap Zonneveld. In 1962 he moved to Eindhoven, and later to Nuenen, where he became a professor in the Mathematics Department at the Technische Hogeschool Eindhoven. In the late 1960s he built the THE multiprogramming system, which influenced the designs of subsequent systems through its use of software-based paged virtual memory. Dijkstra joined Burroughs Corporation as its sole research fellow in August 1973. The Burroughs years saw him at his most prolific in output of research articles. He wrote nearly 500 documents in the "EWD" series, most of them technical reports, for private circulation within a select group. Dijkstra accepted the Schlumberger Centennial Chair in the Computer Science Department at the University of Texas at Austin in 1984, working in Austin, Texas until his retirement in November 1999. He and his wife returned from Austin to his original house in Nuenen, where he died on 6 August 2002 after a long struggle with cancer. He received the 1972 Turing Award for fundamental contributions to developing structured programming languages. Shortly before his death, he received the ACM PODC Influential Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize the following year, in his honor. | 2001-10-31T17:37:31Z | 2023-12-31T10:51:16Z | [
"Template:Computer science",
"Template:Refbegin",
"Template:Cite book",
"Template:Refend",
"Template:Concurrent computing",
"Template:Cite magazine",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Interlanguage link",
"Template:Sfnp",
"Template:Cite journal",
"Template:Full citation needed",
"Template:Infobox scientist",
"Template:Blockquote",
"Template:Cite news",
"Template:Reflist",
"Template:Commons category",
"Template:Timelines of computing",
"Template:Turing Award laureates",
"Template:Laundry",
"Template:Cite tech report",
"Template:Cite report",
"Template:Harvp",
"Template:Edsger Dijkstra",
"Template:Respell",
"Template:IPA-nl",
"Template:Who",
"Template:Wikiquote",
"Template:Citation needed",
"Template:Citation",
"Template:Cite EWD",
"Template:Short description",
"Template:Cite web",
"Template:Software engineering",
"Template:ALGOL programming",
"Template:IPAc-en"
]
| https://en.wikipedia.org/wiki/Edsger_W._Dijkstra |
10,021 | Educational perennialism | Educational perennialism is a normative educational philosophy. Perennialists believe that the priority of education should be to teach principles that have persisted for centuries, not facts. Since people are human, one should teach first about humans, rather than machines or techniques, and about liberal, rather than vocational, topics.
Perennialism appears similar to essentialism but focuses first on personal development, while essentialism focuses first on essential skills. Essentialist curricula tend to be more vocational and fact-based, and far less liberal and principle-based. Both philosophies are typically considered to be teacher-centered, as opposed to student-centered philosophies of education such as progressivism. Teachers associated with perennialism are authors of the Western masterpieces and are open to student criticism through the associated Socratic method.
The word "perennial" in secular perennialism suggests something that lasts an indefinite amount of time, recurs again and again, or is self-renewing. Robert Hutchins and Mortimer Adler promoted a universal curriculum based upon the common and essential nature of all human beings and encompassing humanist and scientific traditions. Hutchins and Adler implemented these ideas with great success at the University of Chicago, where they still strongly influence the Undergraduate Common Core. Other notable figures in the movement include Stringfellow Barr and Scott Buchanan (who together initiated the Great Books program at St. John's College in Annapolis, Maryland), Mark Van Doren, Alexander Meiklejohn, and Sir Richard Livingstone, an English classicist with an American following. Inspired by Adler's lectures, Sister Miriam Joseph wrote a textbook on the scholastic trivium and taught it as the Freshman seminar at Saint Mary's College.
Secular perennialists espouse the idea that education should focus on the historical development of a continually advancing common orienting base of human knowledge and art, the timeless value of classic thought on central human issues by landmark thinkers, and revolutionary ideas critical to historical paradigm shifts or changes in world view. A program of studies which is highly general, nonspecialized, and nonvocational is advocated. They firmly believe that exposure of all people to the development of thought by those most responsible for the evolution of the occidental oriented tradition is integral to the survival of the freedoms, human rights, and responsibilities inherent to a true democracy.
Adler states:
... our political democracy depends upon the reconstitution of our schools. Our schools are not turning out young people prepared for the high office and the duties of citizenship in a democratic republic. Our political institutions cannot thrive, they may not even survive, if we do not produce a greater number of thinking citizens, from whom some statesmen of the type we had in the 18th century might eventually emerge. We are, indeed, a nation at risk, and nothing but radical reform of our schools can save us from impending disaster... Whatever the price... the price we will pay for not doing it will be much greater.
Hutchins writes in the same vein:
The business of saying ... that people are not capable of achieving a good education is too strongly reminiscent of the opposition of every extension of democracy. This opposition has always rested on the allegation that the people were incapable of exercising the power they demanded. Always the historic statement has been verified: you cannot expect the slave to show the virtues of the free man unless you first set him free. When the slave has been set free, he has, in the passage of time, become indistinguishable from those who have always been free ... There appears to be an innate human tendency to underestimate the capacity of those who do not belong to "our" group. Those who do not share our background cannot have our ability. Foreigners, people who are in a different economic status, and the young seem invariably to be regarded as intellectually backward ...
As with the essentialists, perennialists are educationally conservative in the requirement of a curriculum focused upon fundamental subject areas, but they stress that the overall aim should be exposure to history's finest thinkers as models for discovery. The student should be taught such basic subjects as English, languages, history, mathematics, natural science, philosophy, and fine arts. Adler states: "The three R's, which always signified the formal disciplines, are the essence of liberal or general education."
Secular perennialists agree with progressivists that memorization of vast amounts of factual information and a focus on second-hand information in textbooks and lectures does not develop rational thought. They advocate learning through the development of meaningful conceptual thinking and judgement by means of a directed reading list of the profound, aesthetic, and meaningful great books of the Western canon. These books, secular perennialists argue, are written by the world's finest thinkers, and cumulatively comprise the "Great Conversation" of humanity with regard to the central human questions. Their basic argument for the use of original works (abridged translations being acceptable as well) is that these are the products of "genius". Hutchins remarks:
Great books are great teachers; they are showing us every day what ordinary people are capable of. These books come out of ignorant, inquiring humanity. They are usually the first announcements for success in learning. Most of them were written for, and addressed to, ordinary people.
The Great Conversation is not static but, along with the set of related great books, changes as the representative thought of man changes or progresses. In this way, it seeks to represent an evolution of thought not based upon the latest cultural fads. Hutchins clarifies this:
In the course of history... new books have been written that have won their place in the list. Books once thought entitled to belong to it have been superseded; and this process of change will continue as long as men can think and write. It is the task of every generation to reassess the tradition in which it lives, to discard what it cannot use, and to bring into context with the distant and intermediate past the most recent contributions to the Great Conversation. ...the West needs to recapture and reemphasize and bring to bear upon its present problems the wisdom that lies in the works of its greatest thinkers and in the name of love
Perennialism was proposed in response to what many considered a failing educational system. Again Hutchins writes:
The products of American high schools are illiterate; and a degree from a famous college or university is no guarantee that the graduate is in any better case. One of the most remarkable features of American society is that the difference between the "uneducated" and the "educated" is so slight.
In this regard, John Dewey and Hutchins were in agreement. Hutchins's book The Higher Learning in America deplored the "plight of higher learning" that had turned away from cultivation of the intellect and toward anti-intellectual practicality due, in part, to a lust for money. In a highly negative review of the book, Dewey wrote a series of articles in The Social Frontier which began by applauding Hutchins' attack on "the aimlessness of our present educational scheme."
Perennialists believe in reading being supplemented by mutual investigations involving both teacher and student and minimally-directed discussions through the Socratic method in order to develop a historically oriented understanding of concepts. They argue that accurate, independent reasoning distinguishes the developed or educated mind and stress the development of this faculty. A skilled teacher keeps discussions on topic, corrects errors in reasoning, and accurately formulates problems within the scope of texts being studied but lets the class reach their own conclusions.
Perennialists argue that many of the historical debates and the development of ideas presented by the great books are relevant to any society at any time, making them suitable for instructional use regardless of their age. They acknowledge disagreement between various great books but believe that the student must learn to recognize these disagreements, think about them, and reach a reasoned, defensible conclusion. This is a major goal of the Socratic discussions.
Perennialism was originally religious in nature, developed first by Thomas Aquinas in the thirteenth century in his work De Magistro (On the Teacher).
In the nineteenth century, John Henry Newman presented a defense of religious perennialism in The Idea of a University. Discourse 5 of that work, "Knowledge Its Own End", is a recent statement of a Christian educational perennialism.
There are several epistemological options, which affect the pedagogical options. The possibilities may be surveyed by considering four extreme positions: idealistic rationalism, idealistic fideism, realistic rationalism and realistic fideism.
Teaching pupils to think critically and rationally are the main objectives of perennialist educators. A perennialist classroom seeks to be a highly structured and disciplined setting that fosters in pupils a never-ending search for the truth. | [
{
"paragraph_id": 0,
"text": "Educational perennialism is a normative educational philosophy. Perennialists believe that the priority of education should be to teach principles that have persisted for centuries, not facts. Since people are human, one should teach first about humans, rather than machines or techniques, and about liberal, rather than vocational, topics.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Perennialism appears similar to essentialism but focuses first on personal development, while essentialism focuses first on essential skills. Essentialist curricula tend to be more vocational and fact-based, and far less liberal and principle-based. Both philosophies are typically considered to be teacher-centered, as opposed to student-centered philosophies of education such as progressivism. Teachers associated with perennialism are authors of the Western masterpieces and are open to student criticism through the associated Socratic method.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The word \"perennial\" in secular perennialism suggests something that lasts an indefinite amount of time, recurs again and again, or is self-renewing. Robert Hutchins and Mortimer Adler promoted a universal curriculum based upon the common and essential nature of all human beings and encompassing humanist and scientific traditions. Hutchins and Adler implemented these ideas with great success at the University of Chicago, where they still strongly influence the Undergraduate Common Core. Other notable figures in the movement include Stringfellow Barr and Scott Buchanan (who together initiated the Great Books program at St. John's College in Annapolis, Maryland), Mark Van Doren, Alexander Meiklejohn, and Sir Richard Livingstone, an English classicist with an American following. Inspired by Adler's lectures, Sister Miriam Joseph wrote a textbook on the scholastic trivium and taught it as the Freshman seminar at Saint Mary's College.",
"title": "Secular perennialism"
},
{
"paragraph_id": 3,
"text": "Secular perennialists espouse the idea that education should focus on the historical development of a continually advancing common orienting base of human knowledge and art, the timeless value of classic thought on central human issues by landmark thinkers, and revolutionary ideas critical to historical paradigm shifts or changes in world view. A program of studies which is highly general, nonspecialized, and nonvocational is advocated. They firmly believe that exposure of all people to the development of thought by those most responsible for the evolution of the occidental oriented tradition is integral to the survival of the freedoms, human rights, and responsibilities inherent to a true democracy.",
"title": "Secular perennialism"
},
{
"paragraph_id": 4,
"text": "Adler states:",
"title": "Secular perennialism"
},
{
"paragraph_id": 5,
"text": "... our political democracy depends upon the reconstitution of our schools. Our schools are not turning out young people prepared for the high office and the duties of citizenship in a democratic republic. Our political institutions cannot thrive, they may not even survive, if we do not produce a greater number of thinking citizens, from whom some statesmen of the type we had in the 18th century might eventually emerge. We are, indeed, a nation at risk, and nothing but radical reform of our schools can save us from impending disaster... Whatever the price... the price we will pay for not doing it will be much greater.",
"title": "Secular perennialism"
},
{
"paragraph_id": 6,
"text": "Hutchins writes in the same vein:",
"title": "Secular perennialism"
},
{
"paragraph_id": 7,
"text": "The business of saying ... that people are not capable of achieving a good education is too strongly reminiscent of the opposition of every extension of democracy. This opposition has always rested on the allegation that the people were incapable of exercising the power they demanded. Always the historic statement has been verified: you cannot expect the slave to show the virtues of the free man unless you first set him free. When the slave has been set free, he has, in the passage of time, become indistinguishable from those who have always been free ... There appears to be an innate human tendency to underestimate the capacity of those who do not belong to \"our\" group. Those who do not share our background cannot have our ability. Foreigners, people who are in a different economic status, and the young seem invariably to be regarded as intellectually backward ...",
"title": "Secular perennialism"
},
{
"paragraph_id": 8,
"text": "As with the essentialists, perennialists are educationally conservative in the requirement of a curriculum focused upon fundamental subject areas, but they stress that the overall aim should be exposure to history's finest thinkers as models for discovery. The student should be taught such basic subjects as English, languages, history, mathematics, natural science, philosophy, and fine arts. Adler states: \"The three R's, which always signified the formal disciplines, are the essence of liberal or general education.\"",
"title": "Secular perennialism"
},
{
"paragraph_id": 9,
"text": "Secular perennialists agree with progressivists that memorization of vast amounts of factual information and a focus on second-hand information in textbooks and lectures does not develop rational thought. They advocate learning through the development of meaningful conceptual thinking and judgement by means of a directed reading list of the profound, aesthetic, and meaningful great books of the Western canon. These books, secular perennialists argue, are written by the world's finest thinkers, and cumulatively comprise the \"Great Conversation\" of humanity with regard to the central human questions. Their basic argument for the use of original works (abridged translations being acceptable as well) is that these are the products of \"genius\". Hutchins remarks:",
"title": "Secular perennialism"
},
{
"paragraph_id": 10,
"text": "Great books are great teachers; they are showing us every day what ordinary people are capable of. These books come out of ignorant, inquiring humanity. They are usually the first announcements for success in learning. Most of them were written for, and addressed to, ordinary people.",
"title": "Secular perennialism"
},
{
"paragraph_id": 11,
"text": "The Great Conversation is not static but, along with the set of related great books, changes as the representative thought of man changes or progresses. In this way, it seeks to represent an evolution of thought not based upon the latest cultural fads. Hutchins clarifies this:",
"title": "Secular perennialism"
},
{
"paragraph_id": 12,
"text": "In the course of history... new books have been written that have won their place in the list. Books once thought entitled to belong to it have been superseded; and this process of change will continue as long as men can think and write. It is the task of every generation to reassess the tradition in which it lives, to discard what it cannot use, and to bring into context with the distant and intermediate past the most recent contributions to the Great Conversation. ...the West needs to recapture and reemphasize and bring to bear upon its present problems the wisdom that lies in the works of its greatest thinkers and in the name of love",
"title": "Secular perennialism"
},
{
"paragraph_id": 13,
"text": "Perennialism was proposed in response to what many considered a failing educational system. Again Hutchins writes:",
"title": "Secular perennialism"
},
{
"paragraph_id": 14,
"text": "The products of American high schools are illiterate; and a degree from a famous college or university is no guarantee that the graduate is in any better case. One of the most remarkable features of American society is that the difference between the \"uneducated\" and the \"educated\" is so slight.",
"title": "Secular perennialism"
},
{
"paragraph_id": 15,
"text": "In this regard John Dewey and Hutchins were in agreement. Hutchins's book The Higher Learning in America deplored the \"plight of higher learning\" that had turned away from cultivation of the intellect and toward anti-intellectual practicality due, in part, to a lust for money. In a highly negative review of the book, Dewey wrote a series of articles in The Social Frontier which began by applauding Hutchins' attack on \"the aimlessness of our present educational scheme.",
"title": "Secular perennialism"
},
{
"paragraph_id": 16,
"text": "Perennialists believe in reading being supplemented by mutual investigations involving both teacher and student and minimally-directed discussions through the Socratic method in order to develop a historically oriented understanding of concepts. They argue that accurate, independent reasoning distinguishes the developed or educated mind and stress the development of this faculty. A skilled teacher keeps discussions on topic, corrects errors in reasoning, and accurately formulates problems within the scope of texts being studied but lets the class reach their own conclusions.",
"title": "Secular perennialism"
},
{
"paragraph_id": 17,
"text": "Perennialists argue that many of the historical debates and the development of ideas presented by the great books are relevant to any society at any time, making them suitable for instructional use regardless of their age. They acknowledge disagreement between various great books but believe that the student must learn to recognize these disagreements, think about them, and reach a reasoned, defensible conclusion. This is a major goal of the Socratic discussions.",
"title": "Secular perennialism"
},
{
"paragraph_id": 18,
"text": "Perennialism was originally religious in nature, developed first by Thomas Aquinas in the thirteenth century in his work (On the Teacher).",
"title": "Religious perennialism"
},
{
"paragraph_id": 19,
"text": "In the nineteenth century, John Henry Newman presented a defense of religious perennialism in The Idea of a University. Discourse 5 of that work, \"Knowledge Its Own End\", is a recent statement of a Christian educational perennialism.",
"title": "Religious perennialism"
},
{
"paragraph_id": 20,
"text": "There are several epistemological options, which affect the pedagogical options. The possibilities may be surveyed by considering four extreme positions - idealistic rationalism, idealistic fideism, realistic rationalism and realistic fideism.",
"title": "Religious perennialism"
},
{
"paragraph_id": 21,
"text": "Teaching pupils to think critically and rationally are the main objectives of perennialist educators. A perennialist classroom seeks to be a highly structured and disciplined setting that fosters in pupils a never-ending search for the truth.",
"title": "Religious perennialism"
},
{
"paragraph_id": 22,
"text": "",
"title": "External links"
}
]
| Educational perennialism is a normative educational philosophy. Perennialists believe that the priority of education should be to teach principles that have persisted for centuries, not facts. Since people are human, one should teach first about humans, rather than machines or techniques, and about liberal, rather than vocational, topics. Perennialism appears similar to essentialism but focuses first on personal development, while essentialism focuses first on essential skills. Essentialist curricula tend to be more vocational and fact-based, and far less liberal and principle-based. Both philosophies are typically considered to be teacher-centered, as opposed to student-centered philosophies of education such as progressivism. Teachers associated with perennialism are authors of the Western masterpieces and are open to student criticism through the associated Socratic method. | 2001-10-31T23:34:51Z | 2023-12-27T11:12:39Z | [
"Template:Cite book",
"Template:Wikiquote",
"Template:Short description",
"Template:Tone",
"Template:Clarify",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Educational_perennialism |
10,024 | MDMA | 3,4-Methylenedioxymethamphetamine (MDMA), commonly known as ecstasy (tablet form), and molly or mandy (crystal form), is a potent empathogen–entactogen with stimulant and minor psychedelic properties primarily used for recreational purposes. The purported pharmacological effects that may be prosocial include altered sensations, increased energy, empathy, and pleasure. When taken by mouth, effects begin in 30 to 45 minutes and last three to six hours.
MDMA was first synthesized in 1912 by Merck. It was used to enhance psychotherapy beginning in the 1970s and became popular as a street drug in the 1980s. MDMA is commonly associated with dance parties, raves, and electronic dance music. Tablets sold as ecstasy may be mixed with other substances such as ephedrine, amphetamine, and methamphetamine. In 2016, about 21 million people between the ages of 15 and 64 used ecstasy (0.3% of the world population). This was broadly similar to the percentage of people who use cocaine or amphetamines, but lower than for cannabis or opioids. In the United States, as of 2017, about 7% of people have used MDMA at some point in their lives and 0.9% have used it in the last year. The lethal risk from one dose of MDMA is estimated to be from 1 death in 20,000 instances to 1 death in 50,000 instances.
Short-term adverse effects include grinding of the teeth, blurred vision, sweating and a rapid heartbeat, and extended use can also lead to addiction, memory problems, paranoia and difficulty sleeping. Deaths have been reported due to increased body temperature and dehydration. Following use, people often feel depressed and tired, although this effect does not appear in clinical use, suggesting that it is not a direct result of MDMA administration. MDMA acts primarily by increasing the release of the neurotransmitters serotonin, dopamine and noradrenaline in parts of the brain. It belongs to the substituted amphetamine class of drugs. MDMA is structurally similar to mescaline (a hallucinogen) and methamphetamine (a stimulant), as well as to endogenous monoamine neurotransmitters such as serotonin, norepinephrine, and dopamine.
MDMA is illegal in most countries and has limited approved medical uses in a small number of countries. In the United States, the Food and Drug Administration is evaluating the drug for clinical use as of 2021. Canada has allowed limited distribution of MDMA and other psychedelics such as psilocybin upon application to and approval by Health Canada.
In general, MDMA users report feeling the onset of subjective effects within 30 to 60 minutes of oral consumption and reaching peak effect at 75 to 120 minutes, which then plateaus for about 3.5 hours. The desired short-term psychoactive effects of MDMA have been reported to include altered sensations, increased energy, empathy, and pleasure.
The experience elicited by MDMA depends on the dose, setting, and user. The variability of the induced altered state is lower compared to other psychedelics. For example, MDMA used at parties is associated with high motor activity, reduced sense of identity, and poor awareness of surroundings. Use of MDMA individually or in small groups in a quiet environment and when concentrating, is associated with increased lucidity, concentration, sensitivity to aesthetic aspects of the environment, enhanced awareness of emotions, and improved capability of communication. In psychotherapeutic settings, MDMA effects have been characterized by infantile ideas, mood lability, and memories and moods connected with childhood experiences.
MDMA has been described as an "empathogenic" drug because of its empathy-producing effects. Several studies have found that it increases feelings of empathy with others. In tests at medium and high doses, MDMA produced increases along both the hedonic and arousal continua. The effect of MDMA in increasing sociability is consistent, while its effects on empathy have been more mixed.
MDMA is often considered the drug of choice within the rave culture and is also used at clubs, festivals, and house parties. In the rave environment, the sensory effects of music and lighting are often highly synergistic with the drug. The psychedelic amphetamine quality of MDMA offers multiple appealing aspects to users in the rave setting. Some users enjoy the feeling of mass communion from the inhibition-reducing effects of the drug, while others use it as party fuel because of the drug's stimulatory effects. MDMA is used less often than other stimulants, typically less than once per week.
MDMA is sometimes taken in conjunction with other psychoactive drugs such as LSD, psilocybin mushrooms, 2C-B, and ketamine. The combination with LSD is called "candy-flipping". MDMA is often co-administered with alcohol, methamphetamine, and prescription drugs such as SSRIs, with which MDMA has several drug-drug interactions. Three life-threatening cases of MDMA co-administration with ritonavir have been reported; ritonavir has severe and dangerous drug-drug interactions with a wide range of psychoactive drugs (including antipsychotics) and non-psychoactive drugs.
As of 2017, MDMA has no accepted medical indications. Before it was widely banned, it saw limited use in psychotherapy. In 2017 the United States Food and Drug Administration (FDA) approved limited research on MDMA-assisted psychotherapy for post-traumatic stress disorder (PTSD), with some preliminary evidence that MDMA may facilitate psychotherapy efficacy for PTSD.
Small doses of MDMA are used by some religious practitioners as an entheogen to enhance prayer or meditation. MDMA has been used as an adjunct to New Age spiritual practices.
MDMA has become widely known as ecstasy (shortened "E", "X", or "XTC"), usually referring to its tablet form, although this term may also include the presence of possible adulterants or diluents. The UK term "mandy" and the US term "molly" colloquially refer to MDMA in a crystalline powder form that is thought to be free of adulterants. MDMA is also sold in the form of the hydrochloride salt, either as loose crystals or in gelcaps. MDMA tablets can sometimes be found in a shaped form that may depict characters from popular culture, likely for deceptive reasons. These are sometimes collectively referred to as "fun tablets".
Partly due to the global supply shortage of sassafras oil—a problem largely assuaged by use of improved or alternative modern methods of synthesis—the purity of substances sold as molly has been found to vary widely. Some of these substances contain methylone, ethylone, MDPV, mephedrone, or any other of the group of compounds commonly known as bath salts, in addition to, or in place of, MDMA. Powdered MDMA ranges from pure MDMA to crushed tablets with 30–40% purity. MDMA tablets typically have low purity due to bulking agents that are added to dilute the drug and increase profits (notably lactose) and binding agents. Tablets sold as ecstasy sometimes contain 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxyethylamphetamine (MDEA), other amphetamine derivatives, caffeine, opiates, or painkillers. Some tablets contain little or no MDMA. The proportion of seized ecstasy tablets with MDMA-like impurities has varied annually and by country. The average content of MDMA in a preparation is 70 to 120 mg with the purity having increased since the 1990s.
MDMA is usually consumed by mouth. It is also sometimes snorted.
Acute adverse effects are usually the result of high or multiple doses, although single dose toxicity can occur in susceptible individuals. The most serious short-term physical health risks of MDMA are hyperthermia and dehydration. Cases of life-threatening or fatal hyponatremia (excessively low sodium concentration in the blood) have developed in MDMA users attempting to prevent dehydration by consuming excessive amounts of water without replenishing electrolytes.
The immediate adverse effects of MDMA use can include grinding of the teeth, blurred vision, sweating, and a rapid heartbeat.
Other adverse effects that may occur or persist for up to a week following cessation of moderate MDMA use include depressed mood and fatigue.
Administration of MDMA to mice causes DNA damage in their brain, especially when the mice are sleep deprived. Even at the very low doses that are comparable to those self-administered by humans, MDMA causes oxidative stress and both single and double-strand breaks in the DNA of the hippocampus region of the mouse brain.
As of 2015, the long-term effects of MDMA on human brain structure and function have not been fully determined. However, there is consistent evidence of structural and functional deficits in MDMA users with high lifetime exposure. There is no evidence of structural or functional changes in MDMA users with only a moderate (<50 doses used and <100 tablets consumed) lifetime exposure. Nonetheless, MDMA in moderate use may still be neurotoxic. Furthermore, it is not clear yet whether "typical" users of MDMA (1 to 2 pills of 75 to 125 mg MDMA or analogue every 1 to 4 weeks) will develop neurotoxic brain lesions. Long-term exposure to MDMA in humans has been shown to produce marked neurodegeneration in striatal, hippocampal, prefrontal, and occipital serotonergic axon terminals. Neurotoxic damage to serotonergic axon terminals has been shown to persist for more than two years. Elevations in brain temperature from MDMA use are positively correlated with MDMA-induced neurotoxicity. However, most studies on MDMA and serotonergic neurotoxicity in humans focus on the heaviest users, those who consume more than seven times the average. It is therefore possible that no serotonergic neurotoxicity is present in most casual users. Adverse neuroplastic changes to brain microvasculature and white matter also occur in humans using low doses of MDMA. Reduced gray matter density in certain brain structures has also been noted in human MDMA users. Global reductions in gray matter volume, thinning of the parietal and orbitofrontal cortices, and decreased hippocampal activity have been observed in long term users. The effects established so far for recreational use of ecstasy lie in the range of moderate to severe effects for serotonin transporter reduction.
Impairments in multiple aspects of cognition, including attention, learning, memory, visual processing, and sleep, have been found in regular MDMA users. The magnitude of these impairments is correlated with lifetime MDMA usage, and the impairments are partially reversible with abstinence. Several forms of memory are impaired by chronic ecstasy use; however, the overall effects on memory in ecstasy users are generally small. MDMA use is also associated with increased impulsivity and depression.
Serotonin depletion following MDMA use can cause depression in subsequent days. In some cases, depressive symptoms persist for longer periods. Some studies indicate repeated recreational use of ecstasy is associated with depression and anxiety, even after quitting the drug. Depression is one of the main reasons for cessation of use.
At high doses, MDMA induces a neuroimmune response that, through several mechanisms, increases the permeability of the blood–brain barrier, thereby making the brain more susceptible to environmental toxins and pathogens. In addition, MDMA has immunosuppressive effects in the peripheral nervous system and pro-inflammatory effects in the central nervous system.
MDMA may increase the risk of cardiac valvulopathy in heavy or long-term users due to activation of serotonin 5-HT2B receptors. MDMA induces cardiac epigenetic changes in DNA methylation, particularly hypermethylation changes.
Approximately 60% of MDMA users experience withdrawal symptoms when they stop taking MDMA. Some of these symptoms include fatigue, loss of appetite, depression, and trouble concentrating. Tolerance to some of the desired and adverse effects of MDMA is expected to occur with consistent MDMA use. A 2007 delphic analysis of a panel of experts in pharmacology, psychiatry, law, policing and others estimated MDMA to have a psychological dependence and physical dependence potential roughly three-fourths to four-fifths that of cannabis.
MDMA has been shown to induce ΔFosB in the nucleus accumbens. Because MDMA releases dopamine in the striatum, the mechanisms by which it induces ΔFosB in the nucleus accumbens are analogous to those of other dopaminergic psychostimulants. Therefore, chronic use of MDMA at high doses can result in altered brain structure and drug addiction as a consequence of ΔFosB overexpression in the nucleus accumbens. MDMA is nonetheless less addictive than other stimulants such as methamphetamine and cocaine, and compared with amphetamine, MDMA and its metabolite MDA are less reinforcing.
One study found approximately 15% of chronic MDMA users met the DSM-IV diagnostic criteria for substance dependence. However, there is little evidence for a specific diagnosable MDMA dependence syndrome because MDMA is typically used relatively infrequently.
There are currently no medications to treat MDMA addiction.
MDMA is a moderately teratogenic drug (i.e., it is toxic to the fetus). In utero exposure to MDMA is associated with neurotoxicity, cardiotoxicity, and impaired motor functioning. Motor delays may be temporary during infancy or long-term, and their severity increases with heavier MDMA use.
MDMA overdose symptoms vary widely due to the involvement of multiple organ systems. Some of the more overt overdose symptoms are listed in the table below. The number of instances of fatal MDMA intoxication is low relative to its usage rates. In most fatalities, MDMA was not the only drug involved. Acute toxicity is mainly caused by serotonin syndrome and sympathomimetic effects. Sympathomimetic side effects can be managed with carvedilol. MDMA's toxicity in overdose may be exacerbated by caffeine, with which it is frequently cut in order to increase volume. A scheme for management of acute MDMA toxicity has been published focusing on treatment of hyperthermia, hyponatraemia, serotonin syndrome, and multiple organ failure.
A number of drug interactions can occur between MDMA and other drugs, including serotonergic drugs. MDMA also interacts with drugs that inhibit CYP450 enzymes, particularly CYP2D6 inhibitors such as ritonavir (Norvir); life-threatening reactions and death have occurred in people who took MDMA while on ritonavir. Concurrent use of high doses of MDMA with another serotonergic drug can result in a life-threatening condition called serotonin syndrome. Severe overdose resulting in death has also been reported in people who took MDMA in combination with certain monoamine oxidase inhibitors, such as phenelzine (Nardil), tranylcypromine (Parnate), or moclobemide (Aurorix, Manerix). Serotonin reuptake inhibitors such as citalopram (Celexa), duloxetine (Cymbalta), fluoxetine (Prozac), and paroxetine (Paxil) have been shown to block most of the subjective effects of MDMA. Norepinephrine reuptake inhibitors such as reboxetine (Edronax) have been found to reduce emotional excitation and feelings of stimulation with MDMA but do not appear to influence its entactogenic or mood-elevating effects.
MDMA is a substituted amphetamine structurally, and a monoamine-releasing agent mechanistically. Like other monoamine-releasing agents, MDMA enters monoaminergic neurons through monoamine transporters. MDMA has high affinity for the dopamine, norepinephrine, and serotonin transporters, with some preference for the serotonin transporter. The methylenedioxy substitution provides the serotonergic activity, as most other substituted amphetamines show negligible affinity for the serotonin transporter.
Neurotransmitter release induced by monoamine-releasing agents differs significantly from the regular, action potential-evoked neurotransmitter release. Inside the neuron, MDMA inhibits VMAT2 and activates TAAR1. TAAR1 agonism results in the phosphorylation of monoamine transporters by PKA and PKC, which either internalizes the transporter, or reverses its flux direction. VMAT2 inhibition prevents the packaging of the cytosolic monoamines into the synaptic vesicles, which allows them to instead be pumped out of the neuron by the phosphorylated transporters. The end result is that the neuron constantly "leaks" neurotransmitters into the synapse, regardless of any signal received.
MDMA has two enantiomers, (S)-MDMA and (R)-MDMA; recreationally used MDMA is an equimolar (racemic) mixture of the two. (S)-MDMA produces the entactogenic effects of the racemate because it releases serotonin, norepinephrine, and dopamine much more efficiently via monoamine transporters; it also has higher affinity for the 5-HT2C receptor. (R)-MDMA shows notable agonism at the 5-HT2A receptor, which is thought to contribute to the mild psychedelic hallucinations induced by high doses of MDMA in humans.
The MDMA concentration in the bloodstream starts to rise after about 30 minutes and reaches its maximum between 1.5 and 3 hours after ingestion. MDMA is then slowly metabolized and excreted, with levels of MDMA and its metabolites falling to half their peak concentration over the next several hours. The duration of action of MDMA is usually four to six hours, after which serotonin levels in the brain are depleted; serotonin levels typically return to normal within 24–48 hours.
Metabolites of MDMA that have been identified in humans include 3,4-methylenedioxyamphetamine (MDA), 4-hydroxy-3-methoxymethamphetamine (HMMA), 4-hydroxy-3-methoxyamphetamine (HMA), 3,4-dihydroxyamphetamine (DHA) (also called alpha-methyldopamine (α-Me-DA)), 3,4-methylenedioxyphenylacetone (MDP2P), and 3,4-methylenedioxy-N-hydroxyamphetamine (MDOH). The contributions of these metabolites to the psychoactive and toxic effects of MDMA are an area of active research. About 80% of MDMA is metabolized in the liver, and about 20% is excreted unchanged in the urine.
MDMA is known to be metabolized by two main pathways: (1) O-demethylenation followed by catechol-O-methyltransferase (COMT)-catalyzed methylation and/or glucuronide/sulfate conjugation; and (2) N-dealkylation, deamination, and oxidation to the corresponding benzoic acid derivatives conjugated with glycine. Metabolism may be mediated primarily by the cytochrome P450 (CYP450) enzymes CYP2D6 and CYP3A4, together with COMT. Complex, nonlinear pharmacokinetics arise via autoinhibition of CYP2D6 and CYP2D8, resulting in zero-order kinetics at higher doses. It is thought that this can result in sustained and higher concentrations of MDMA if the user takes consecutive doses of the drug.
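This shift from first-order to zero-order elimination is commonly described with a Michaelis–Menten (capacity-limited) model; the following is a generic illustration of that form under standard pharmacokinetic assumptions, not a statement of MDMA-specific parameter values, which are not given here:

\[ \frac{dC}{dt} = -\,\frac{V_{\max}\,C}{K_m + C} \]

When the plasma concentration \(C\) is well below \(K_m\), the rate reduces to approximately \(-(V_{\max}/K_m)\,C\), i.e. first-order elimination (a constant fraction cleared per unit time); when \(C\) is well above \(K_m\), or the metabolizing enzyme is inhibited, the rate approaches the constant \(-V_{\max}\), i.e. zero-order elimination (a fixed amount cleared per unit time), which is why repeated doses can produce disproportionately sustained concentrations.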
MDMA and its metabolites are primarily excreted as conjugates, such as sulfates and glucuronides. MDMA is a chiral compound and has been almost exclusively administered as a racemate; however, the two enantiomers have been shown to exhibit different kinetics. The disposition of MDMA may also be stereoselective, with the (S)-enantiomer having a shorter elimination half-life and greater excretion than the (R)-enantiomer. Evidence suggests that the area under the blood plasma concentration versus time curve (AUC) was two to four times higher for the (R)-enantiomer than the (S)-enantiomer after a 40 mg oral dose in human volunteers. Likewise, the plasma half-life of (R)-MDMA was significantly longer than that of the (S)-enantiomer (5.8 ± 2.2 hours vs 3.6 ± 0.9 hours). However, because MDMA excretion and metabolism have nonlinear kinetics, the half-lives would be longer at more typical doses (100 mg is sometimes considered a typical dose).
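For orientation only, under a simple first-order approximation (which, as noted above, does not strictly hold for MDMA), the quoted half-lives translate into elimination rate constants via \(k = \ln 2 / t_{1/2}\):

\[ k_{(R)} \approx \frac{0.693}{5.8\ \text{h}} \approx 0.12\ \text{h}^{-1}, \qquad k_{(S)} \approx \frac{0.693}{3.6\ \text{h}} \approx 0.19\ \text{h}^{-1} \]

so the (S)-enantiomer is cleared roughly 1.6 times faster than the (R)-enantiomer at the studied dose.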
MDMA is in the substituted methylenedioxyphenethylamine and substituted amphetamine classes of chemicals. As a free base, MDMA is a colorless oil insoluble in water. The most common salt of MDMA is the hydrochloride salt; pure MDMA hydrochloride is water-soluble and appears as a white or off-white powder or crystal.
There are numerous methods available to synthesize MDMA via different intermediates. The original MDMA synthesis described in Merck's patent involves brominating safrole to 1-(3,4-methylenedioxyphenyl)-2-bromopropane and then reacting this adduct with methylamine. Most illicit MDMA is synthesized using MDP2P (3,4-methylenedioxyphenyl-2-propanone) as a precursor. MDP2P in turn is generally synthesized from piperonal, safrole or isosafrole. One method is to isomerize safrole to isosafrole in the presence of a strong base, and then oxidize isosafrole to MDP2P. Another method uses the Wacker process to oxidize safrole directly to the MDP2P intermediate with a palladium catalyst. Once the MDP2P intermediate has been prepared, a reductive amination leads to racemic MDMA (an equal parts mixture of (R)-MDMA and (S)-MDMA). Relatively small quantities of essential oil are required to make large amounts of MDMA. The essential oil of Ocotea cymbarum, for example, typically contains between 80 and 94% safrole. This allows 500 mL of the oil to produce between 150 and 340 grams of MDMA.
MDMA and MDA may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning or assist in the forensic investigation of a traffic or other criminal violation or a sudden death. Some drug abuse screening programs rely on hair, saliva, or sweat as specimens. Most commercial amphetamine immunoassay screening tests cross-react significantly with MDMA or its major metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. The concentrations of MDA in the blood or urine of a person who has taken only MDMA are, in general, less than 10% those of the parent drug.
MDMA was first synthesized in 1912 by Merck chemist Anton Köllisch. At the time, Merck was interested in developing substances that stopped abnormal bleeding. Merck wanted to avoid an existing patent held by Bayer for one such compound: hydrastinine. Köllisch developed a preparation of a hydrastinine analogue, methylhydrastinine, at the request of fellow lab members, Walther Beckh and Otto Wolfes. MDMA (called methylsafrylamin, safrylmethylamin or N-Methyl-a-Methylhomopiperonylamin in Merck laboratory reports) was an intermediate compound in the synthesis of methylhydrastinine. Merck was not interested in MDMA itself at the time. On 24 December 1912, Merck filed two patent applications that described the synthesis and some chemical properties of MDMA and its subsequent conversion to methylhydrastinine.
Merck records indicate its researchers returned to the compound sporadically. A 1920 Merck patent describes a chemical modification to MDMA. In 1927, Max Oberlin studied the pharmacology of MDMA while searching for substances with effects similar to adrenaline or ephedrine, the latter being structurally similar to MDMA. Compared to ephedrine, Oberlin observed that it had similar effects on vascular smooth muscle tissue, stronger effects at the uterus, and no "local effect at the eye". MDMA was also found to have effects on blood sugar levels comparable to high doses of ephedrine. Oberlin concluded that the effects of MDMA were not limited to the sympathetic nervous system. Research was stopped "particularly due to a strong price increase of safrylmethylamine", which was still used as an intermediate in methylhydrastinine synthesis. Albert van Schoor performed simple toxicological tests with the drug in 1952, most likely while researching new stimulants or circulatory medications. After pharmacological studies, research on MDMA was not continued. In 1959, Wolfgang Fruhstorfer synthesized MDMA for pharmacological testing while researching stimulants. It is unclear if Fruhstorfer investigated the effects of MDMA in humans.
Outside of Merck, other researchers began to investigate MDMA. In 1953 and 1954, the United States Army commissioned a study of toxicity and behavioral effects in animals injected with mescaline and several analogues, including MDMA. Conducted at the University of Michigan in Ann Arbor, these investigations were declassified in October 1969 and published in 1973. A 1960 Polish paper by Biniecki and Krajewski describing the synthesis of MDMA as an intermediate was the first published scientific paper on the substance.
MDMA may have been in non-medical use in the western United States in 1968. An August 1970 report at a meeting of crime laboratory chemists indicates MDMA was being used recreationally in the Chicago area by 1970. MDMA likely emerged as a substitute for its analog methylenedioxyamphetamine (MDA), a drug then popular among users of psychedelics, which was made a Schedule I substance in the United States in 1970.
American chemist and psychopharmacologist Alexander Shulgin reported he synthesized MDMA in 1965 while researching methylenedioxy compounds at Dow Chemical Company, but did not test the psychoactivity of the compound at this time. Around 1970, Shulgin sent instructions for N-methylated MDA (MDMA) synthesis to the founder of a Los Angeles chemical company who had requested them. This individual later provided these instructions to a client in the Midwest. Shulgin may have suspected he played a role in the emergence of MDMA in Chicago.
Shulgin first heard of the psychoactive effects of N-methylated MDA around 1975 from a young student who reported "amphetamine-like content". Around 30 May 1976, Shulgin again heard about the effects of N-methylated MDA, this time from a graduate student in a medicinal chemistry group he advised at San Francisco State University who directed him to the University of Michigan study. She and two close friends had consumed 100 mg of MDMA and reported positive emotional experiences. Following the self-trials of a colleague at the University of San Francisco, Shulgin synthesized MDMA and tried it himself in September and October 1976. Shulgin first reported on MDMA in a presentation at a conference in Bethesda, Maryland in December 1976. In 1978, he and David E. Nichols published a report on the drug's psychoactive effect in humans. They described MDMA as inducing "an easily controlled altered state of consciousness with emotional and sensual overtones" comparable "to marijuana, to psilocybin devoid of the hallucinatory component, or to low levels of MDA".
While not finding his own experiences with MDMA particularly powerful, Shulgin was impressed with the drug's disinhibiting effects and thought it could be useful in therapy. Believing MDMA allowed users to strip away habits and perceive the world clearly, Shulgin called the drug window. Shulgin occasionally used MDMA for relaxation, referring to it as "my low-calorie martini", and gave the drug to friends, researchers, and others who he thought could benefit from it. One such person was Leo Zeff, a psychotherapist who had been known to use psychedelic substances in his practice. When he tried the drug in 1977, Zeff was impressed with the effects of MDMA and came out of his semi-retirement to promote its use in therapy. Over the following years, Zeff traveled around the United States and occasionally to Europe, eventually training an estimated four thousand psychotherapists in the therapeutic use of MDMA. Zeff named the drug Adam, believing it put users in a state of primordial innocence.
Psychotherapists who used MDMA believed the drug eliminated the typical fear response and increased communication. Sessions were usually held in the home of the patient or the therapist. The role of the therapist was minimized in favor of patient self-discovery accompanied by MDMA-induced feelings of empathy. Depression, substance use disorders, relationship problems, premenstrual syndrome, and autism were among several psychiatric disorders that MDMA-assisted therapy was reported to treat. According to psychiatrist George Greer, therapists who used MDMA in their practice were impressed by the results. Anecdotally, MDMA was said to greatly accelerate therapy. According to David Nutt, MDMA was widely used in the western US in couples counseling, and was called empathy. Only later was the term ecstasy used for it, coinciding with rising opposition to its use.
In the late 1970s and early 1980s, "Adam" spread through personal networks of psychotherapists, psychiatrists, users of psychedelics, and yuppies. Hoping MDMA could avoid criminalization like LSD and mescaline, psychotherapists and experimenters attempted to limit the spread of MDMA and information about it while conducting informal research. Early MDMA distributors were deterred from large scale operations by the threat of possible legislation. Between the 1970s and the mid-1980s, this network of MDMA users consumed an estimated 500,000 doses.
A small recreational market for MDMA developed by the late 1970s, consuming perhaps 10,000 doses in 1976. By the early 1980s MDMA was being used in Boston and New York City nightclubs such as Studio 54 and Paradise Garage. Into the early 1980s, as the recreational market slowly expanded, production of MDMA was dominated by a small group of therapeutically minded Boston chemists. Having commenced production in 1976, this "Boston Group" did not keep up with growing demand and shortages frequently occurred.
Perceiving a business opportunity, Michael Clegg, the Southwest distributor for the Boston Group, started his own "Texas Group" backed financially by Texas friends. In 1981, Clegg coined "Ecstasy" as a slang term for MDMA to increase its marketability. Starting in 1983, the Texas Group mass-produced MDMA in a Texas lab or imported it from California and marketed tablets using pyramid sales structures and toll-free numbers. MDMA could be purchased via credit card and taxes were paid on sales. Under the brand name "Sassyfras", MDMA tablets were sold in brown bottles. The Texas Group advertised "Ecstasy parties" at bars and discos, describing MDMA as a "fun drug" and "good to dance to". MDMA was openly distributed in Austin and Dallas–Fort Worth area bars and nightclubs, becoming popular with yuppies, college students, and gays.
Recreational use also increased after several cocaine dealers switched to distributing MDMA following experiences with the drug. A California laboratory that analyzed confidentially submitted drug samples first detected MDMA in 1975. Over the following years the number of MDMA samples increased, eventually exceeding the number of MDA samples in the early 1980s. By the mid-1980s, MDMA use had spread to colleges around the United States.
In an early media report on MDMA published in 1982, a Drug Enforcement Administration (DEA) spokesman stated the agency would ban the drug if enough evidence for abuse could be found. By mid-1984, MDMA use was becoming more noticed. Bill Mandel reported on "Adam" in a 10 June San Francisco Chronicle article, but misidentified the drug as methyloxymethylenedioxyamphetamine (MMDA). In the next month, the World Health Organization identified MDMA as the only substance out of twenty phenethylamines to be seized a significant number of times.
After a year of planning and data collection, MDMA was proposed for scheduling by the DEA on 27 July 1984 with a request for comments and objections. The DEA was surprised when a number of psychiatrists, psychotherapists, and researchers objected to the proposed scheduling and requested a hearing. In a Newsweek article published the next year, a DEA pharmacologist stated that the agency had been unaware of its use among psychiatrists. An initial hearing was held on 1 February 1985 at the DEA offices in Washington, D.C., with administrative law judge Francis L. Young presiding. It was decided there to hold three more hearings that year: Los Angeles on 10 June, Kansas City, Missouri on 10–11 July, and Washington, D.C., on 8–11 October.
Sensational media attention was given to the proposed criminalization and the reaction of MDMA proponents, effectively advertising the drug. In response to the proposed scheduling, the Texas Group increased production from 1985 estimates of 30,000 tablets a month to as many as 8,000 per day, potentially making two million ecstasy tablets in the months before MDMA was made illegal. By some estimates the Texas Group distributed 500,000 tablets per month in Dallas alone. According to one participant in an ethnographic study, the Texas Group produced more MDMA in eighteen months than all other distribution networks combined across their entire histories. By May 1985, MDMA use was widespread in California, Texas, southern Florida, and the northeastern United States. According to the DEA there was evidence of use in twenty-eight states and Canada. Urged by Senator Lloyd Bentsen, the DEA announced an emergency Schedule I classification of MDMA on 31 May 1985. The agency cited increased distribution in Texas, escalating street use, and new evidence of MDA (an analog of MDMA) neurotoxicity as reasons for the emergency measure. The ban took effect one month later on 1 July 1985 in the midst of Nancy Reagan's "Just Say No" campaign.
As a result of several expert witnesses testifying that MDMA had an accepted medical use, the administrative law judge presiding over the hearings recommended that MDMA be classified as a Schedule III substance. Despite this, DEA administrator John C. Lawn overruled the recommendation and classified the drug as Schedule I. Harvard psychiatrist Lester Grinspoon then sued the DEA, claiming that it had ignored the medical uses of MDMA; the federal court sided with Grinspoon, calling Lawn's argument "strained" and "unpersuasive", and vacated MDMA's Schedule I status. Less than a month later, however, Lawn reviewed the evidence and reclassified MDMA as Schedule I again, arguing that the testimony of several psychiatrists describing over 200 cases in which MDMA had been used therapeutically with positive results could be dismissed because those cases were not published in medical journals. In 2017, the FDA granted breakthrough therapy designation for its use with psychotherapy for PTSD, although this designation has since been questioned and criticized.
While engaged in scheduling debates in the United States, the DEA also pushed for international scheduling. In 1985 the World Health Organization's Expert Committee on Drug Dependence recommended that MDMA be placed in Schedule I of the 1971 United Nations Convention on Psychotropic Substances. The committee made this recommendation on the basis of the pharmacological similarity of MDMA to previously scheduled drugs, reports of illicit trafficking in Canada, drug seizures in the United States, and lack of well-defined therapeutic use. While intrigued by reports of psychotherapeutic uses for the drug, the committee viewed the studies as lacking appropriate methodological design and encouraged further research. Committee chairman Paul Grof dissented, believing international control was not warranted at the time and a recommendation should await further therapeutic data. The Commission on Narcotic Drugs added MDMA to Schedule I of the convention on 11 February 1986.
The use of MDMA in Texas clubs declined rapidly after criminalization, although by 1991 the drug remained popular among young middle-class whites and in nightclubs. In 1985, MDMA use became associated with acid house on the Spanish island of Ibiza. Thereafter in the late 1980s, the drug spread alongside rave culture to the UK and then to other European and American cities. Illicit MDMA use became increasingly widespread among young adults in universities and later, in high schools. Since the mid-1990s, MDMA has become the most widely used amphetamine-type drug by college students and teenagers. MDMA became one of the four most widely used illicit drugs in the US, along with cocaine, heroin, and cannabis. According to some estimates as of 2004, only marijuana attracts more first time users in the US.
After MDMA was criminalized, most medical use stopped, although some therapists continued to prescribe the drug illegally. Later, Charles Grob initiated an ascending-dose safety study in healthy volunteers. Subsequent FDA-approved MDMA studies in humans have taken place in the United States in Detroit (Wayne State University), Chicago (University of Chicago), San Francisco (UCSF and California Pacific Medical Center), Baltimore (NIDA–NIH Intramural Program), and South Carolina. Studies have also been conducted in Switzerland (University Hospital of Psychiatry, Zürich), the Netherlands (Maastricht University), and Spain (Universitat Autònoma de Barcelona).
"Molly", short for 'molecule', was recognized as a slang term for crystalline or powder MDMA in the 2000s.
In 2010, the BBC reported that use of MDMA had decreased in the UK in previous years. This may be due to increased seizures of the drug and decreased production of the precursor chemicals used to manufacture MDMA. Unwitting substitution with other drugs, such as mephedrone and methamphetamine, as well as legal alternatives to MDMA, such as BZP, MDPV, and methylone, are also thought to have contributed to its decrease in popularity.
In 2017 it was found that some pills being sold as MDMA contained pentylone, which can cause very unpleasant agitation and paranoia.
According to David Nutt, when safrole was restricted by the United Nations in order to reduce the supply of MDMA, producers in China began using anethole instead, but this gives para-methoxyamphetamine (PMA, also known as "Dr Death"), which is much more toxic than MDMA and can cause overheating, muscle spasms, seizures, unconsciousness, and death. People wanting MDMA are sometimes sold PMA instead.
MDMA is legally controlled in most of the world under the UN Convention on Psychotropic Substances and other international agreements, although exceptions exist for research and limited medical use. In general, the unlicensed use, sale or manufacture of MDMA are all criminal offences.
In Australia, MDMA was rescheduled on 1 July 2023 as a schedule 8 substance (available on prescription) when used in the treatment of PTSD, while remaining a schedule 9 substance (prohibited) for all other uses. For the treatment of PTSD, MDMA can only be prescribed by psychiatrists with specific training and authorisation. MDMA had been declared an illegal substance in 1986 because of its allegedly harmful effects and potential for misuse. Any non-authorised sale, use or manufacture is strictly prohibited by law, and permits for research uses on humans must be approved by a recognized ethics committee on human research.
In Western Australia, under the Misuse of Drugs Act 1981, 4.0 g of MDMA is the quantity that determines the court of trial, 2.0 g creates a presumption of intent to sell or supply, and 28.0 g is deemed trafficking under Australian law.
The Australian Capital Territory has passed legislation to decriminalise the possession of small amounts of MDMA, due to take effect in October 2023.
In the United Kingdom, MDMA was made illegal in 1977 by a modification order to the existing Misuse of Drugs Act 1971. Although MDMA was not named explicitly in this legislation, the order extended the definition of Class A drugs to include various ring-substituted phenethylamines. The drug is therefore illegal to sell, buy, or possess without a licence in the UK. Penalties include a maximum of seven years and/or unlimited fine for possession; life and/or unlimited fine for production or trafficking.
Some researchers, such as David Nutt, have criticized the scheduling of MDMA, which Nutt considers a relatively harmless drug. An editorial he wrote in the Journal of Psychopharmacology, in which he compared the risk of harm from horse riding (1 adverse event in 350) with that of ecstasy (1 in 10,000), resulted in his dismissal from the ACMD as well as the resignation of colleagues from that body.
In the United States, MDMA is listed in Schedule I of the Controlled Substances Act. In a 2011 federal court hearing, the American Civil Liberties Union successfully argued that the sentencing guideline for MDMA/ecstasy is based on outdated science, leading to excessive prison sentences. Other courts have upheld the sentencing guidelines. The United States District Court for the Eastern District of Tennessee explained its ruling by noting that "an individual federal district court judge simply cannot marshal resources akin to those available to the Commission for tackling the manifold issues involved with determining a proper drug equivalency."
In the Netherlands, the Expert Committee on the List (Expertcommissie Lijstensystematiek Opiumwet) issued a report in June 2011 which discussed the evidence for harm and the legal status of MDMA, arguing in favor of maintaining it on List I.
In Canada, MDMA is listed in Schedule I, as it is an analogue of amphetamine. The Controlled Drugs and Substances Act was updated as a result of the Safe Streets and Communities Act, which moved amphetamines from Schedule III to Schedule I in March 2012. In 2022 the federal government granted British Columbia a three-year exemption, legalizing the possession of up to 2.5 grams (0.088 oz) of MDMA in the province from February 2023 until February 2026.
In 2014, 3.5% of 18 to 25 year-olds had used MDMA in the United States. In the European Union as of 2018, 4.1% of adults (15–64 years old) have used MDMA at least once in their life, and 0.8% had used it in the last year. Among young adults, 1.8% had used MDMA in the last year.
In Europe, an estimated 37% of regular club-goers aged 14 to 35 used MDMA in the past year according to the 2015 European Drug report. The highest one-year prevalence of MDMA use in Germany in 2012 was 1.7% among people aged 25 to 29 compared with a population average of 0.4%. Among adolescent users in the United States between 1999 and 2008, girls were more likely to use MDMA than boys.
In 2008 the European Monitoring Centre for Drugs and Drug Addiction noted that although there were some reports of tablets being sold for as little as €1, most countries in Europe then reported typical retail prices in the range of €3 to €9 per tablet, typically containing 25–65 mg of MDMA. By 2014 the EMCDDA reported that the range was more usually between €5 and €10 per tablet, typically containing 57–102 mg of MDMA, although MDMA in powder form was becoming more common.
The United Nations Office on Drugs and Crime stated in its 2014 World Drug Report that US ecstasy retail prices range from US$1 to $70 per pill, or from $15,000 to $32,000 per kilogram. A research area known as drug intelligence aims to monitor distribution networks automatically using image processing and machine learning, in which pictures of ecstasy pills are analyzed to detect correlations among different production batches; such techniques help forensic scientists track illicit distribution networks.
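The published descriptions summarized above do not specify the algorithms used; as a minimal, hypothetical sketch of the underlying idea, a perceptual image hash can flag visually similar pill photographs from different seizures (the file paths and function names below are illustrative only, not part of any cited system):

from PIL import Image
import imagehash

def pill_hash(path):
    # Perceptual hash of a pill photograph; visually similar imprints yield similar hashes.
    return imagehash.phash(Image.open(path))

def likely_same_batch(path_a, path_b, threshold=8):
    # Subtracting two ImageHash objects gives their Hamming distance;
    # a small distance suggests the pills may come from the same production batch.
    return pill_hash(path_a) - pill_hash(path_b) < threshold

# Hypothetical usage:
# likely_same_batch("seizure_01/pill.jpg", "seizure_07/pill.jpg")

A real system would combine such image features with chemical profiling and network analysis, but the hashing step illustrates how batch-level visual correlations can be detected automatically.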
As of October 2015, most of the MDMA in the United States is produced in British Columbia, Canada and imported by Canada-based Asian transnational criminal organizations. The market for MDMA in the United States is relatively small compared to methamphetamine, cocaine, and heroin. In the United States, about 0.9 million people used ecstasy in 2010.
MDMA is particularly expensive in Australia, costing A$15–A$30 per tablet. In terms of purity data for Australian MDMA, the average is around 34%, ranging from less than 1% to about 85%. The majority of tablets contain 70–85 mg of MDMA. Most MDMA enters Australia from the Netherlands, the UK, Asia, and the US.
A number of ecstasy manufacturers brand their pills with a logo, often being the logo of an unrelated corporation. Some pills depict logos of products or media popular with children, such as Shaun the Sheep.
In 2017, doctors in the UK began the first clinical study of MDMA in alcohol use disorder.
The potential for MDMA to be used as a rapid-acting antidepressant has been studied in clinical trials, but as of 2017 the evidence on efficacy and safety was insufficient to reach a conclusion. A 2014 review of the safety and efficacy of MDMA as a treatment for various disorders, particularly PTSD, indicated that MDMA has therapeutic efficacy in some patients; however, it emphasized that issues regarding the controllability of MDMA-induced experiences and neurochemical recovery must be addressed. The author noted that oxytocin and D-cycloserine are potentially safer co-drugs in PTSD treatment, albeit with limited evidence of efficacy. This review and a second corroborating review by a different author both concluded that, because of MDMA's demonstrated potential to cause lasting harm in humans (e.g., serotonergic neurotoxicity and persistent memory impairment), "considerably more research must be performed" on its efficacy in PTSD treatment to determine whether the potential treatment benefits outweigh the potential for harm to the patient.
MDMA in combination with psychotherapy has been studied as a treatment for post-traumatic stress disorder, and four clinical trials provide moderate evidence in support of this treatment. However, the lack of appropriate blinding of participants likely leads to overestimation of treatment effects due to high levels of response expectancy. In addition, there are no trials comparing MDMA-assisted psychotherapy for PTSD with existing evidence-based psychological treatments for PTSD, which appear to achieve similar or better treatment effects.
In 2018, researchers identified MDMA as a psychoplastogen, a term for a compound capable of promoting neuroplasticity. MDMA has also received the "breakthrough therapy" designation from the Food and Drug Administration for the treatment of PTSD.
"title": "History"
},
{
"paragraph_id": 45,
"text": "Shulgin first heard of the psychoactive effects of N-methylated MDA around 1975 from a young student who reported \"amphetamine-like content\". Around 30 May 1976, Shulgin again heard about the effects of N-methylated MDA, this time from a graduate student in a medicinal chemistry group he advised at San Francisco State University who directed him to the University of Michigan study. She and two close friends had consumed 100 mg of MDMA and reported positive emotional experiences. Following the self-trials of a colleague at the University of San Francisco, Shulgin synthesized MDMA and tried it himself in September and October 1976. Shulgin first reported on MDMA in a presentation at a conference in Bethesda, Maryland in December 1976. In 1978, he and David E. Nichols published a report on the drug's psychoactive effect in humans. They described MDMA as inducing \"an easily controlled altered state of consciousness with emotional and sensual overtones\" comparable \"to marijuana, to psilocybin devoid of the hallucinatory component, or to low levels of MDA\".",
"title": "History"
},
{
"paragraph_id": 46,
"text": "While not finding his own experiences with MDMA particularly powerful, Shulgin was impressed with the drug's disinhibiting effects and thought it could be useful in therapy. Believing MDMA allowed users to strip away habits and perceive the world clearly, Shulgin called the drug window. Shulgin occasionally used MDMA for relaxation, referring to it as \"my low-calorie martini\", and gave the drug to friends, researchers, and others who he thought could benefit from it. One such person was Leo Zeff, a psychotherapist who had been known to use psychedelic substances in his practice. When he tried the drug in 1977, Zeff was impressed with the effects of MDMA and came out of his semi-retirement to promote its use in therapy. Over the following years, Zeff traveled around the United States and occasionally to Europe, eventually training an estimated four thousand psychotherapists in the therapeutic use of MDMA. Zeff named the drug Adam, believing it put users in a state of primordial innocence.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "Psychotherapists who used MDMA believed the drug eliminated the typical fear response and increased communication. Sessions were usually held in the home of the patient or the therapist. The role of the therapist was minimized in favor of patient self-discovery accompanied by MDMA induced feelings of empathy. Depression, substance use disorders, relationship problems, premenstrual syndrome, and autism were among several psychiatric disorders MDMA assisted therapy was reported to treat. According to psychiatrist George Greer, therapists who used MDMA in their practice were impressed by the results. Anecdotally, MDMA was said to greatly accelerate therapy. According to David Nutt, MDMA was widely used in the western US in couples counseling, and was called empathy. Only later was the term ecstasy used for it, coinciding with rising opposition to its use.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "In the late 1970s and early 1980s, \"Adam\" spread through personal networks of psychotherapists, psychiatrists, users of psychedelics, and yuppies. Hoping MDMA could avoid criminalization like LSD and mescaline, psychotherapists and experimenters attempted to limit the spread of MDMA and information about it while conducting informal research. Early MDMA distributors were deterred from large scale operations by the threat of possible legislation. Between the 1970s and the mid-1980s, this network of MDMA users consumed an estimated 500,000 doses.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "A small recreational market for MDMA developed by the late 1970s, consuming perhaps 10,000 doses in 1976. By the early 1980s MDMA was being used in Boston and New York City nightclubs such as Studio 54 and Paradise Garage. Into the early 1980s, as the recreational market slowly expanded, production of MDMA was dominated by a small group of therapeutically minded Boston chemists. Having commenced production in 1976, this \"Boston Group\" did not keep up with growing demand and shortages frequently occurred.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "Perceiving a business opportunity, Michael Clegg, the Southwest distributor for the Boston Group, started his own \"Texas Group\" backed financially by Texas friends. In 1981, Clegg had coined \"Ecstasy\" as a slang term for MDMA to increase its marketability. Starting in 1983, the Texas Group mass-produced MDMA in a Texas lab or imported it from California and marketed tablets using pyramid sales structures and toll-free numbers. MDMA could be purchased via credit card and taxes were paid on sales. Under the brand name \"Sassyfras\", MDMA tablets were sold in brown bottles. The Texas Group advertised \"Ecstasy parties\" at bars and discos, describing MDMA as a \"fun drug\" and \"good to dance to\". MDMA was openly distributed in Austin and Dallas–Fort Worth area bars and nightclubs, becoming popular with yuppies, college students, and gays.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "Recreational use also increased after several cocaine dealers switched to distributing MDMA following experiences with the drug. A California laboratory that analyzed confidentially submitted drug samples first detected MDMA in 1975. Over the following years the number of MDMA samples increased, eventually exceeding the number of MDA samples in the early 1980s. By the mid-1980s, MDMA use had spread to colleges around the United States.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "In an early media report on MDMA published in 1982, a Drug Enforcement Administration (DEA) spokesman stated the agency would ban the drug if enough evidence for abuse could be found. By mid-1984, MDMA use was becoming more noticed. Bill Mandel reported on \"Adam\" in a 10 June San Francisco Chronicle article, but misidentified the drug as methyloxymethylenedioxyamphetamine (MMDA). In the next month, the World Health Organization identified MDMA as the only substance out of twenty phenethylamines to be seized a significant number of times.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "After a year of planning and data collection, MDMA was proposed for scheduling by the DEA on 27 July 1984 with a request for comments and objections. The DEA was surprised when a number of psychiatrists, psychotherapists, and researchers objected to the proposed scheduling and requested a hearing. In a Newsweek article published the next year, a DEA pharmacologist stated that the agency had been unaware of its use among psychiatrists. An initial hearing was held on 1 February 1985 at the DEA offices in Washington, D.C., with administrative law judge Francis L. Young presiding. It was decided there to hold three more hearings that year: Los Angeles on 10 June, Kansas City, Missouri on 10–11 July, and Washington, D.C., on 8–11 October.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "Sensational media attention was given to the proposed criminalization and the reaction of MDMA proponents, effectively advertising the drug. In response to the proposed scheduling, the Texas Group increased production from 1985 estimates of 30,000 tablets a month to as many as 8,000 per day, potentially making two million ecstasy tablets in the months before MDMA was made illegal. By some estimates the Texas Group distributed 500,000 tablets per month in Dallas alone. According to one participant in an ethnographic study, the Texas Group produced more MDMA in eighteen months than all other distribution networks combined across their entire histories. By May 1985, MDMA use was widespread in California, Texas, southern Florida, and the northeastern United States. According to the DEA there was evidence of use in twenty-eight states and Canada. Urged by Senator Lloyd Bentsen, the DEA announced an emergency Schedule I classification of MDMA on 31 May 1985. The agency cited increased distribution in Texas, escalating street use, and new evidence of MDA (an analog of MDMA) neurotoxicity as reasons for the emergency measure. The ban took effect one month later on 1 July 1985 in the midst of Nancy Reagan's \"Just Say No\" campaign.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "As a result of several expert witnesses testifying that MDMA had an accepted medical usage, the administrative law judge presiding over the hearings recommended that MDMA be classified as a Schedule III substance. Despite this, DEA administrator John C. Lawn overruled and classified the drug as Schedule I. Harvard psychiatrist Lester Grinspoon then sued the DEA, claiming that the DEA had ignored the medical uses of MDMA, and the federal court sided with Grinspoon, calling Lawn's argument \"strained\" and \"unpersuasive\", and vacated MDMA's Schedule I status. Despite this, less than a month later Lawn reviewed the evidence and reclassified MDMA as Schedule I again, claiming that the expert testimony of several psychiatrists claiming over 200 cases where MDMA had been used in a therapeutic context with positive results could be dismissed because they were not published in medical journals. In 2017, the FDA granted breakthrough therapy designation for its use with psychotherapy for PTSD. However, this designation has been questioned and problematized.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "While engaged in scheduling debates in the United States, the DEA also pushed for international scheduling. In 1985 the World Health Organization's Expert Committee on Drug Dependence recommended that MDMA be placed in Schedule I of the 1971 United Nations Convention on Psychotropic Substances. The committee made this recommendation on the basis of the pharmacological similarity of MDMA to previously scheduled drugs, reports of illicit trafficking in Canada, drug seizures in the United States, and lack of well-defined therapeutic use. While intrigued by reports of psychotherapeutic uses for the drug, the committee viewed the studies as lacking appropriate methodological design and encouraged further research. Committee chairman Paul Grof dissented, believing international control was not warranted at the time and a recommendation should await further therapeutic data. The Commission on Narcotic Drugs added MDMA to Schedule I of the convention on 11 February 1986.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "The use of MDMA in Texas clubs declined rapidly after criminalization, although by 1991 the drug remained popular among young middle-class whites and in nightclubs. In 1985, MDMA use became associated with acid house on the Spanish island of Ibiza. Thereafter in the late 1980s, the drug spread alongside rave culture to the UK and then to other European and American cities. Illicit MDMA use became increasingly widespread among young adults in universities and later, in high schools. Since the mid-1990s, MDMA has become the most widely used amphetamine-type drug by college students and teenagers. MDMA became one of the four most widely used illicit drugs in the US, along with cocaine, heroin, and cannabis. According to some estimates as of 2004, only marijuana attracts more first time users in the US.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "After MDMA was criminalized, most medical use stopped, although some therapists continued to prescribe the drug illegally. Later, Charles Grob initiated an ascending-dose safety study in healthy volunteers. Subsequent FDA-approved MDMA studies in humans have taken place in the United States in Detroit (Wayne State University), Chicago (University of Chicago), San Francisco (UCSF and California Pacific Medical Center), Baltimore (NIDA–NIH Intramural Program), and South Carolina. Studies have also been conducted in Switzerland (University Hospital of Psychiatry, Zürich), the Netherlands (Maastricht University), and Spain (Universitat Autònoma de Barcelona).",
"title": "History"
},
{
"paragraph_id": 59,
"text": "\"Molly\", short for 'molecule', was recognized as a slang term for crystalline or powder MDMA in the 2000s.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "In 2010, the BBC reported that use of MDMA had decreased in the UK in previous years. This may be due to increased seizures during use and decreased production of the precursor chemicals used to manufacture MDMA. Unwitting substitution with other drugs, such as mephedrone and methamphetamine, as well as legal alternatives to MDMA, such as BZP, MDPV, and methylone, are also thought to have contributed to its decrease in popularity.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "In 2017 it was found that some pills being sold as MDMA contained pentylone, which can cause very unpleasant agitation and paranoia.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "According to David Nutt, when safrole was restricted by the United Nations in order to reduce the supply of MDMA, producers in China began using anethole instead, but this gives para-methoxyamphetamine (PMA, also known as \"Dr Death\"), which is much more toxic than MDMA and can cause overheating, muscle spasms, seizures, unconsciousness, and death. People wanting MDMA are sometimes sold PMA instead.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "MDMA is legally controlled in most of the world under the UN Convention on Psychotropic Substances and other international agreements, although exceptions exist for research and limited medical use. In general, the unlicensed use, sale or manufacture of MDMA are all criminal offences.",
"title": "Society and culture"
},
{
"paragraph_id": 64,
"text": "In Australia, MDMA was rescheduled on 1 July 2023 as a schedule 8 substance (available on prescription) when used in the treatment of PTSD, while remaining a schedule 9 substance (prohibited) for all other uses. For the treatment of PTSD, MDMA can only be prescribed by psychiatrists with specific training and authorisation. In 1986, MDMA was declared an illegal substance because of its allegedly harmful effects and potential for misuse.. Any non-authorised sale, use or manufacture is strictly prohibited by law. Permits for research uses on humans must be approved by a recognized ethics committee on human research.",
"title": "Society and culture"
},
{
"paragraph_id": 65,
"text": "In Western Australia under the Misuse of Drugs Act 1981 4.0g of MDMA is the amount required determining a court of trial, 2.0g is considered a presumption with intent to sell or supply and 28.0g is considered trafficking under Australian law.",
"title": "Society and culture"
},
{
"paragraph_id": 66,
"text": "The Australian Capital Territory has passed legislation to decriminalise the possession of small amounts of MDMA, due to take effect in October 2023.",
"title": "Society and culture"
},
{
"paragraph_id": 67,
"text": "In the United Kingdom, MDMA was made illegal in 1977 by a modification order to the existing Misuse of Drugs Act 1971. Although MDMA was not named explicitly in this legislation, the order extended the definition of Class A drugs to include various ring-substituted phenethylamines. The drug is therefore illegal to sell, buy, or possess without a licence in the UK. Penalties include a maximum of seven years and/or unlimited fine for possession; life and/or unlimited fine for production or trafficking.",
"title": "Society and culture"
},
{
"paragraph_id": 68,
"text": "Some researchers such as David Nutt have criticized the scheduling of MDMA, which he determined to be a relatively harmless drug. An editorial he wrote in the Journal of Psychopharmacology, where he compared the risk of harm for horse riding (1 adverse event in 350) to that of ecstasy (1 in 10,000) resulted in his dismissal as well as the resignation of his colleagues from the ACMD.",
"title": "Society and culture"
},
{
"paragraph_id": 69,
"text": "In the United States, MDMA is listed in Schedule I of the Controlled Substances Act. In a 2011 federal court hearing, the American Civil Liberties Union successfully argued that the sentencing guideline for MDMA/ecstasy is based on outdated science, leading to excessive prison sentences. Other courts have upheld the sentencing guidelines. The United States District Court for the Eastern District of Tennessee explained its ruling by noting that \"an individual federal district court judge simply cannot marshal resources akin to those available to the Commission for tackling the manifold issues involved with determining a proper drug equivalency.\"",
"title": "Society and culture"
},
{
"paragraph_id": 70,
"text": "In the Netherlands, the Expert Committee on the List (Expertcommissie Lijstensystematiek Opiumwet) issued a report in June 2011 which discussed the evidence for harm and the legal status of MDMA, arguing in favor of maintaining it on List I.",
"title": "Society and culture"
},
{
"paragraph_id": 71,
"text": "In Canada, MDMA is listed as a Schedule 1 as it is an analogue of amphetamine. The Controlled Drugs and Substances Act was updated as a result of the Safe Streets and Communities Act changing amphetamines from Schedule III to Schedule I in March 2012. In 2022 the federal government granted British Columbia a 3-year exemption, legalizing the possession of up to 2.5 grams (0.088 oz) of MDMA in the province from February 2023 until February 2026.",
"title": "Society and culture"
},
{
"paragraph_id": 72,
"text": "In 2014, 3.5% of 18 to 25 year-olds had used MDMA in the United States. In the European Union as of 2018, 4.1% of adults (15–64 years old) have used MDMA at least once in their life, and 0.8% had used it in the last year. Among young adults, 1.8% had used MDMA in the last year.",
"title": "Society and culture"
},
{
"paragraph_id": 73,
"text": "In Europe, an estimated 37% of regular club-goers aged 14 to 35 used MDMA in the past year according to the 2015 European Drug report. The highest one-year prevalence of MDMA use in Germany in 2012 was 1.7% among people aged 25 to 29 compared with a population average of 0.4%. Among adolescent users in the United States between 1999 and 2008, girls were more likely to use MDMA than boys.",
"title": "Society and culture"
},
{
"paragraph_id": 74,
"text": "In 2008 the European Monitoring Centre for Drugs and Drug Addiction noted that although there were some reports of tablets being sold for as little as €1, most countries in Europe then reported typical retail prices in the range of €3 to €9 per tablet, typically containing 25–65 mg of MDMA. By 2014 the EMCDDA reported that the range was more usually between €5 and €10 per tablet, typically containing 57–102 mg of MDMA, although MDMA in powder form was becoming more common.",
"title": "Society and culture"
},
{
"paragraph_id": 75,
"text": "The United Nations Office on Drugs and Crime stated in its 2014 World Drug Report that US ecstasy retail prices range from US$1 to $70 per pill, or from $15,000 to $32,000 per kilogram. A new research area named Drug Intelligence aims to automatically monitor distribution networks based on image processing and machine learning techniques, in which an Ecstasy pill picture is analyzed to detect correlations among different production batches. These novel techniques allow police scientists to facilitate the monitoring of illicit distribution networks.",
"title": "Society and culture"
},
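The drug-intelligence approach mentioned above compares images of seized tablets to flag likely common production batches. The sketch below illustrates one generic image-similarity technique (an average-hash fingerprint compared by Hamming distance); it is not the method used in the cited research, the file names are placeholders, and the 10-bit threshold is arbitrary.

```python
import numpy as np
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> np.ndarray:
    """Compute a simple average-hash fingerprint of an image.

    The image is converted to grayscale, shrunk to hash_size x hash_size pixels,
    and each pixel is marked 1 if it is brighter than the mean. Visually similar
    images (e.g. tablets pressed with the same logo) tend to yield fingerprints
    with a small Hamming distance.
    """
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).astype(np.uint8).ravel()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(h1 != h2))

if __name__ == "__main__":
    # Placeholder file names; a threshold of 10 differing bits out of 64 is arbitrary.
    fp_a = average_hash("seized_tablet_a.jpg")
    fp_b = average_hash("seized_tablet_b.jpg")
    d = hamming_distance(fp_a, fp_b)
    verdict = "possible common batch" if d <= 10 else "likely unrelated"
    print(f"Hamming distance: {d} ({verdict})")
```

In practice, published work in this area relies on richer features (logo shape, score lines, colour) and trained models, but the batching idea is the same: images whose fingerprints are unusually close are candidates for a shared origin.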
{
"paragraph_id": 76,
"text": "As of October 2015, most of the MDMA in the United States is produced in British Columbia, Canada and imported by Canada-based Asian transnational criminal organizations. The market for MDMA in the United States is relatively small compared to methamphetamine, cocaine, and heroin. In the United States, about 0.9 million people used ecstasy in 2010.",
"title": "Society and culture"
},
{
"paragraph_id": 77,
"text": "MDMA is particularly expensive in Australia, costing A$15–A$30 per tablet. In terms of purity data for Australian MDMA, the average is around 34%, ranging from less than 1% to about 85%. The majority of tablets contain 70–85 mg of MDMA. Most MDMA enters Australia from the Netherlands, the UK, Asia, and the US.",
"title": "Society and culture"
},
{
"paragraph_id": 78,
"text": "A number of ecstasy manufacturers brand their pills with a logo, often being the logo of an unrelated corporation. Some pills depict logos of products or media popular with children, such as Shaun the Sheep.",
"title": "Society and culture"
},
{
"paragraph_id": 79,
"text": "In 2017, doctors in the UK began the first clinical study of MDMA in alcohol use disorder.",
"title": "Research"
},
{
"paragraph_id": 80,
"text": "The potential for MDMA to be used as a rapid-acting antidepressant has been studied in clinical trials, but as of 2017 the evidence on efficacy and safety were insufficient to reach a conclusion. A 2014 review of the safety and efficacy of MDMA as a treatment for various disorders, particularly PTSD, indicated that MDMA has therapeutic efficacy in some patients; however, it emphasized that issues regarding the controlability of MDMA-induced experiences and neurochemical recovery must be addressed. The author noted that oxytocin and D-cycloserine are potentially safer co-drugs in PTSD treatment, albeit with limited evidence of efficacy. This review and a second corroborating review by a different author both concluded that, because of MDMA's demonstrated potential to cause lasting harm in humans (e.g., serotonergic neurotoxicity and persistent memory impairment), \"considerably more research must be performed\" on its efficacy in PTSD treatment to determine if the potential treatment benefits outweigh its potential to harm to a patient.",
"title": "Research"
},
{
"paragraph_id": 81,
"text": "MDMA in combination with psychotherapy has been studied as a treatment for post-traumatic stress disorder, and four clinical trials provide moderate evidence in support of this treatment. However, the lack of appropriate blinding of participants likely leads to overestimation of treatments effects due to high levels of response expectancy. In addition, there are no trials comparing MDMA-assisted psychotherapy for PTSD with existent evidence-based psychological treatments for PTSD, which seems to attain similar or better treatment effects compared with that achieved by MDMA-assisted psychotherapy.",
"title": "Research"
},
{
"paragraph_id": 82,
"text": "In 2018 researchers identified MDMA as a psychoplastogen which refers to a compound capable of promoting neuroplasticity and received the “breakthrough therapy” designation by the Food and Drug Administration for treating PTSD.",
"title": "Research"
}
]
| 3,4-Methylenedioxymethamphetamine (MDMA), commonly known as ecstasy, molly, or mandy, is a potent empathogen–entactogen with stimulant and minor psychedelic properties primarily used for recreational purposes. The purported pharmacological effects that may be prosocial include altered sensations, increased energy, empathy, and pleasure. When taken by mouth, effects begin in 30 to 45 minutes and last three to six hours. MDMA was first synthesized in 1912 by Merck. It was used to enhance psychotherapy beginning in the 1970s and became popular as a street drug in the 1980s. MDMA is commonly associated with dance parties, raves, and electronic dance music. Tablets sold as ecstasy may be mixed with other substances such as ephedrine, amphetamine, and methamphetamine. In 2016, about 21 million people between the ages of 15 and 64 used ecstasy. This was broadly similar to the number of people who used cocaine or amphetamines, but lower than for cannabis or opioids. In the United States, as of 2017, about 7% of people have used MDMA at some point in their lives and 0.9% have used it in the last year. The lethal risk from one dose of MDMA is estimated to be from 1 death in 20,000 instances to 1 death in 50,000 instances. Short-term adverse effects include grinding of the teeth, blurred vision, sweating and a rapid heartbeat, and extended use can also lead to addiction, memory problems, paranoia and difficulty sleeping. Deaths have been reported due to increased body temperature and dehydration. Following use, people often feel depressed and tired, although this effect does not appear in clinical use, suggesting that it is not a direct result of MDMA administration. MDMA acts primarily by increasing the release of the neurotransmitters serotonin, dopamine and noradrenaline in parts of the brain. It belongs to the substituted amphetamine class of drugs. MDMA is structurally similar to mescaline and methamphetamine, as well as to endogenous monoamine neurotransmitters such as serotonin, norepinephrine, and dopamine. MDMA is illegal in most countries and has limited approved medical uses in a small number of countries. In the United States, the Food and Drug Administration is evaluating the drug for clinical use as of 2021. Canada has allowed limited distribution of MDMA and other psychedelics such as psilocybin upon application to and approval by Health Canada. | 2001-11-08T18:23:50Z | 2023-12-27T00:18:01Z | [
"Template:Confusing section",
"Template:Editorializing",
"Template:Primary source inline",
"Template:Annotated image 4",
"Template:Short description",
"Template:Wbr",
"Template:Multiple image",
"Template:Div col end",
"Template:Cite AV media",
"Template:Where",
"Template:Contradictory inline",
"Template:Main",
"Template:Clear",
"Template:Smallcaps all",
"Template:Cite book",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Citation needed",
"Template:Clear left",
"Template:When",
"Template:Global estimates of illicit drug users",
"Template:Reflist",
"Template:Sister project links",
"Template:Hatgrp",
"Template:Asof",
"Template:TOC limit",
"Template:As of",
"Template:Rp",
"Template:Cbignore",
"Template:See also",
"Template:Clarify",
"Template:Phenethylamines",
"Template:Pp-vandalism",
"Template:Infobox drug",
"Template:Convert",
"Template:Div col",
"Template:Page needed",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite news",
"Template:Cite magazine",
"Template:Free access",
"Template:Navboxes"
]
| https://en.wikipedia.org/wiki/MDMA |