Consilience
In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will probably not be a strong scientific consensus.

The principle is based on unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.

The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell (consilience refers to a "jumping together" of knowledge). The word comes from Latin com- "together" and -siliens "jumping" (as in resilience).

Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not.

As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion.

When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years old (resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); or current attempts to resolve theoretical differences between quantum mechanics and general relativity.
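The error-reduction argument can be put in a small probability sketch (the independence assumption and the example numbers here are illustrative, not from the article). If k independent methods have error probabilities p_1, ..., p_k, then

    P(all k methods in error) = p_1 × p_2 × ⋯ × p_k

so three methods that each err 5% of the time all err together with probability only 0.05^3 = 0.000125, and the chance that they err in the same way, agreeing on the same wrong value, is smaller still.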
Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields because the techniques are usually very different.

For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.

Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.

Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.

Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.

Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion, without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.

Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar's civil war occurred, and so forth.

Consilience has also been discussed in reference to Holocaust denial:

"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."

That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.

In addition to the sciences, consilience can be important to the arts, ethics, and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.

Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance, and found its apogee in the Age of Enlightenment.

Whewell's definition was that:

The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.

More recent descriptions include:

"Where there is a convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."

"Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion."
Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the author and biologist E. O. Wilson, as an attempt to bridge the cultural gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959). Wilson believed that "the humanities, ranging from philosophy and history to moral reasoning, comparative religion, and interpretation of the arts, will draw closer to the sciences and partly fuse with them", with the result that science and the scientific method, from within this fusion, would not only explain the physical phenomena but also provide moral guidance and be the ultimate source of all truths.

Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." An important point made by Wilson is that hereditary human nature and evolution itself profoundly affect the evolution of culture, in essence a sociobiological concept. Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.

A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.
[ { "paragraph_id": 0, "text": "In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can \"converge\" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will probably not be a strong scientific consensus.", "title": "" }, { "paragraph_id": 1, "text": "The principle is based on unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.", "title": "" }, { "paragraph_id": 2, "text": "The word consilience was originally coined as the phrase \"consilience of inductions\" by William Whewell (consilience refers to a \"jumping together\" of knowledge). The word comes from Latin com- \"together\" and -siliens \"jumping\" (as in resilience).", "title": "" }, { "paragraph_id": 3, "text": "Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not.", "title": "Description" }, { "paragraph_id": 4, "text": "As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion.", "title": "Description" }, { "paragraph_id": 5, "text": "When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. 
For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years (resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); or current attempts to resolve theoretical differences between quantum mechanics and general relativity.", "title": "Description" }, { "paragraph_id": 6, "text": "Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields because the techniques are usually very different.", "title": "Significance" }, { "paragraph_id": 7, "text": "For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.", "title": "Significance" }, { "paragraph_id": 8, "text": "Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.", "title": "Significance" }, { "paragraph_id": 9, "text": "Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. 
The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.", "title": "Significance" }, { "paragraph_id": 10, "text": "Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.", "title": "Deviations" }, { "paragraph_id": 11, "text": "Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.", "title": "Deviations" }, { "paragraph_id": 12, "text": "Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar’s civil war occurred, and so forth.", "title": "In history" }, { "paragraph_id": 13, "text": "Consilience has also been discussed in reference to Holocaust denial.", "title": "In history" }, { "paragraph_id": 14, "text": "\"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole.\"", "title": "In history" }, { "paragraph_id": 15, "text": "That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.", "title": "In history" }, { "paragraph_id": 16, "text": "In addition to the sciences, consilience can be important to the arts, ethics and religion. 
Both artists and scientists have identified the importance of biology in the process of artistic innovation.", "title": "Outside the sciences" }, { "paragraph_id": 17, "text": "Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.", "title": "History of the concept" }, { "paragraph_id": 18, "text": "Whewell's definition was that:", "title": "History of the concept" }, { "paragraph_id": 19, "text": "The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.", "title": "History of the concept" }, { "paragraph_id": 20, "text": "More recent descriptions include:", "title": "History of the concept" }, { "paragraph_id": 21, "text": "\"Where there is a convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation.\"", "title": "History of the concept" }, { "paragraph_id": 22, "text": "\"Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion.\"", "title": "History of the concept" }, { "paragraph_id": 23, "text": "Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the author and biologist E. O. Wilson, as an attempt to bridge the cultural gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959). Wilson believed that \"the humanities, ranging from philosophy and history to moral reasoning, comparative religion, and interpretation of the arts, will draw closer to the sciences and partly fuse with them\" with the result that science and the scientific method, from within this fusion, would not only explain the physical phenomenon but also provide moral guidance and be the ultimate source of all truths.", "title": "Edward O. Wilson" }, { "paragraph_id": 24, "text": "Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understand the details, to lend to all inquirers \"a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws.\" An important point made by Wilson is that hereditary human nature and evolution itself profoundly effect the evolution of culture, in essence, a sociobiological concept. 
Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.", "title": "Edward O. Wilson" }, { "paragraph_id": 25, "text": "A parallel view lies in the term universology, which literally means \"the science of the universe.\" Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.", "title": "Edward O. Wilson" } ]
In science and history, consilience is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will probably not be a strong scientific consensus. The principle is based on unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc. The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell. The word comes from Latin com- "together" and -siliens "jumping".
2023-06-29T21:58:32Z
[ "Template:Wiktionary", "Template:Philosophy of science", "Template:Quote", "Template:Annotated link", "Template:Cite web", "Template:Blockquote", "Template:Reflist", "Template:Cite book", "Template:Short description", "Template:Other uses", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Consilience
Clarence Brown
Clarence Leon Brown (May 10, 1890 – August 17, 1987) was an American film director.

Born in Clinton, Massachusetts, to Larkin Harry Brown, a cotton manufacturer, and Katherine Ann Brown (née Gaw), Brown moved to Tennessee when he was 11 years old. He attended Knoxville High School and the University of Tennessee, both in Knoxville, Tennessee, graduating from the university at the age of 19 with two degrees in engineering. An early fascination with automobiles led Brown to a job with the Stevens-Duryea Company, then to his own Brown Motor Car Company in Alabama. He later abandoned the car dealership after developing an interest in motion pictures around 1913. He was hired by the Peerless Studio at Fort Lee, New Jersey, and became an assistant to the French-born director Maurice Tourneur.

After serving as a fighter pilot and flight instructor in the United States Army Air Service during World War I, Brown was given his first co-directing credit (with Tourneur) for The Great Redeemer (1920). Later that year, he directed a major portion of The Last of the Mohicans after Tourneur was injured in a fall.

Brown moved to Universal in 1924, and then to Metro-Goldwyn-Mayer, where he remained until the mid-1950s. At MGM he was one of the main directors of their major female stars; he directed Joan Crawford six times and Greta Garbo seven.

Brown was nominated six times for an Academy Award as a director, but he never received an Oscar. However, he won Best Foreign Film for Anna Karenina, starring Garbo, at the 1935 Venice International Film Festival. Brown's films gained a total of 38 Academy Award nominations and earned nine Oscars. Brown himself received six Academy Award nominations, and in 1949 he won the British Academy Award for the film version of William Faulkner's Intruder in the Dust.

In 1957, Brown was awarded The George Eastman Award, given by George Eastman House for distinguished contribution to the art of film. Brown retired a wealthy man due to his real estate investments, but refused to watch new movies, as he feared they might cause him to restart his career.

The Clarence Brown Theater, on the campus of the University of Tennessee, is named in his honor. He holds the record for most nominations for the Academy Award for Best Director without a win, with six.

Clarence Brown was married four times. His first marriage was to Paula Herndon Pratt in 1913, which lasted until their divorce in 1920; the couple had a daughter, Adrienne Brown. His second marriage was to Ona Wilson, which lasted from 1922 until their divorce in 1927. He was engaged to Dorothy Sebastian and Mona Maris, although he did not marry either of them, with Maris later saying she ended their relationship because she had her "own ideas of marriage then." He married his third wife, Alice Joyce, in 1933, and they divorced in 1945. His last marriage was to Marian Spies in 1946, which lasted until his death in 1987.

Brown died at the Saint John's Health Center in Santa Monica, California, from kidney failure on August 17, 1987, at the age of 97. He is interred at Forest Lawn Memorial Park in Glendale, California.

On February 8, 1960, Brown received a star on the Hollywood Walk of Fame at 1752 Vine Street for his contributions to the motion picture industry.
[ { "paragraph_id": 0, "text": "Clarence Leon Brown (May 10, 1890 – August 17, 1987) was an American film director.", "title": "" }, { "paragraph_id": 1, "text": "Born in Clinton, Massachusetts, to Larkin Harry Brown, a cotton manufacturer, and Katherine Ann Brown (née Gaw), Brown moved to Tennessee when he was 11 years old. He attended Knoxville High School and the University of Tennessee, both in Knoxville, Tennessee, graduating from the university at the age of 19 with two degrees in engineering. An early fascination in automobiles led Brown to a job with the Stevens-Duryea Company, then to his own Brown Motor Car Company in Alabama. He later abandoned the car dealership after developing an interest in motion pictures around 1913. He was hired by the Peerless Studio at Fort Lee, New Jersey, and became an assistant to the French-born director Maurice Tourneur.", "title": "Early life" }, { "paragraph_id": 2, "text": "After serving as a fighter pilot and flight instructor in the United States Army Air Service during World War I, Brown was given his first co-directing credit (with Tourneur) for The Great Redeemer (1920). Later that year, he directed a major portion of The Last of the Mohicans after Tourneur was injured in a fall.", "title": "Career" }, { "paragraph_id": 3, "text": "Brown moved to Universal in 1924, and then to Metro-Goldwyn-Mayer, where he remained until the mid-1950s. At MGM he was one of the main directors of their major female stars; he directed Joan Crawford six times and Greta Garbo seven.", "title": "Career" }, { "paragraph_id": 4, "text": "Brown was nominated six times (see below) for an Academy Award as a director, but he never received an Oscar. However, he won Best Foreign Film for Anna Karenina, starring Garbo at the 1935 Venice International Film Festival.", "title": "Career" }, { "paragraph_id": 5, "text": "Brown's films gained a total of 38 Academy Award nominations and earned nine Oscars. Brown himself received six Academy Award nominations and in 1949, he won the British Academy Award for the film version of William Faulkner's Intruder in the Dust.", "title": "Career" }, { "paragraph_id": 6, "text": "In 1957, Brown was awarded The George Eastman Award, given by George Eastman House for distinguished contribution to the art of film. Brown retired a wealthy man due to his real estate investments, but refused to watch new movies, as he feared they might cause him to restart his career.", "title": "Career" }, { "paragraph_id": 7, "text": "The Clarence Brown Theater, on the campus of the University of Tennessee, is named in his honor. He holds the record for most nominations for the Academy Award for Best Director without a win, with six.", "title": "Career" }, { "paragraph_id": 8, "text": "Clarence Brown was married four times. His first marriage was to Paula Herndon Pratt in 1913, which lasted until their divorce in 1920. 
The couple produced a daughter, Adrienne Brown.", "title": "Personal life" }, { "paragraph_id": 9, "text": "His second marriage was to Ona Wilson, which lasted from 1922 until their divorce in 1927.", "title": "Personal life" }, { "paragraph_id": 10, "text": "He was engaged to Dorothy Sebastian and Mona Maris, although he did not marry either of them, with Maris later saying she ended their relationship because she had her \"own ideas of marriage then.\"", "title": "Personal life" }, { "paragraph_id": 11, "text": "He married his third wife, Alice Joyce, in 1933 and they divorced in 1945.", "title": "Personal life" }, { "paragraph_id": 12, "text": "His last marriage was to Marian Spies in 1946, which lasted until his death in 1987.", "title": "Personal life" }, { "paragraph_id": 13, "text": "Brown died at the Saint John's Health Center in Santa Monica, California from kidney failure on August 17, 1987, at the age of 97. He is interred at Forest Lawn Memorial Park in Glendale, California.", "title": "Death" }, { "paragraph_id": 14, "text": "On February 8, 1960, Brown received a star on the Hollywood Walk of Fame at 1752 Vine Street, for his contributions to the motion pictures industry.", "title": "Death" } ]
Clarence Leon Brown was an American film director.
2002-02-25T15:51:15Z
2023-09-16T22:25:00Z
[ "Template:ISBN", "Template:Clarence Brown", "Template:Short description", "Template:Similar names", "Template:Use American English", "Template:Div col", "Template:Efn", "Template:Authority control", "Template:Notelist", "Template:Cite web", "Template:Cite book", "Template:Cite news", "Template:Wikisource author", "Template:Use mdy dates", "Template:Reflist", "Template:Commons category", "Template:Webarchive", "Template:Infobox person", "Template:Div col end", "Template:IMDb name" ]
https://en.wikipedia.org/wiki/Clarence_Brown
Conciliation
Conciliation is an alternative dispute resolution (ADR) process whereby the parties to a dispute use a conciliator, who meets with the parties both separately and together in an attempt to resolve their differences. They do this by lowering tensions, improving communication, interpreting issues, encouraging parties to explore potential solutions, and assisting parties in finding a mutually acceptable outcome.

Conciliation differs from arbitration in that the conciliation process, in and of itself, has no legal standing, and the conciliator usually has no authority to seek evidence or call witnesses, usually writes no decision, and makes no award.

There is a form of "conciliation" that is more akin to negotiation. A "conciliator" assists each of the parties to independently develop a list of all of their objectives (the outcomes which they desire to obtain from the conciliation). The conciliator then has each of the parties separately prioritize their own list from most to least important. He or she then goes back and forth between the parties and encourages them to "give" on the objectives one at a time, starting with the least important and working toward the most important for each party in turn. The parties rarely place the same priorities on all objectives, and usually have some objectives that are not listed by the other party. Thus the conciliator can quickly build a string of successes and help the parties create an atmosphere of trust which the conciliator can continue to develop.

Most successful "conciliators" in this sense are highly skilled negotiators. Some conciliators operate under the auspices of any one of several non-governmental entities, and for governmental agencies such as the Federal Mediation and Conciliation Service in the United States.

There is a different form of conciliation that, instead of a linear process of bilateral negotiation, employs deep listening and witnessing. Conciliation literally means "process of bringing people together into council". In this second definition, a conciliator is not so much focused on goals and objectives preset by the parties, but more focused on assisting parties to come together to resolve conflicts on their own. Many people, in trying to resolve conflict independently, come up with solutions that turn into goals based on understanding only a portion of the whole issue. By helping parties understand deeply where all are coming from, different and new solutions emerge from this deep understanding. The conciliator is in service to this deep witnessing between all parties involved. At times when two or more parties are not ready to face each other or communicate with each other directly, the conciliator helps parties to understand their own perspective, feel more empowered to speak their truth, and represent their own needs in a future dialogue with the other parties to the conflict. The conciliator addresses any power disparities perceived by any party in a safe manner. The ensuing dialogue in this form of conciliation can, with the parties' wishes, involve the conciliator as a facilitator until the parties feel comfortable communicating on their own. This form of conciliation is non-linear and involves an informal method of reconciliation between people who do not necessarily need to negotiate legal issues such as property rights or tort injuries. It can also involve more emotional and passionate elements as tangible and historical topics emerge as the root causes of the conflict. Most successful people who work in conciliation quietly persevere and allow the progressive movements in the parties' healing to guide them. More about this process can be found at the Consulting & Conciliation Service.

Historical conciliation is an applied conflict resolution approach that utilizes historical narratives to positively transform relations between societies in conflict. Historical conciliation can utilize many different methodologies, including mediation, sustained dialogues, apologies, acknowledgement, support of public commemoration activities, and public diplomacy.

Historical conciliation is not an excavation of objective facts. The point of facilitating historical questions is not to discover all the facts in regard to who was right or wrong. Rather, the objective is to discover the complexity, ambiguity, and emotions surrounding both dominant and non-dominant cultural and individual narratives of history. It is also not a rewriting of history. The goal is not to create a combined narrative that everyone agrees upon. Instead, the aim is to create room for critical thinking and a more inclusive understanding of the past and conceptions of "the other".

Some conflicts that are addressed through historical conciliation have their roots in the conflicting identities of the people involved. Whether the identity at stake is their ethnicity, religion, or culture, it requires a comprehensive approach that takes people's needs for recognition, hopes, fears, and concerns into account. Some conflicts might be based in unmet needs for security or recognition, or thwarted development. To learn more about the theory of basic human social needs and how they give rise to conflict, see John Burton, Karen Horney, Hannah Arendt, and Johan Galtung, to name a few.

While the above historical summary speaks to some uses of conciliation, it is not the only method and by itself cannot address the entirety of a system of protracted historical conflict. A holistic approach to resolving deep-rooted violent conflict would ideally employ all methods of conflict resolution: education, negotiation, analysis, diplomacy, second-track diplomacy, mass therapy, truth and reconciliation, cultural inventory, leadership, and peer mediation/facilitation. In short, to resolve a deeply rooted, prolonged crisis, it takes all of us, coming from our strengths and positive intentions, and a willingness to allow everyone to come to the table. For examples of applied conciliation in a historical context, look to Quaker efforts in witness and peacemaking in London, New York, and South Africa.

Japanese law makes extensive use of conciliation (調停, chōtei) in civil disputes. The most common forms are civil conciliation and domestic conciliation, both of which are managed under the auspices of the court system by one judge and two non-judge "conciliators". Civil conciliation is a form of dispute resolution for small lawsuits, and provides a simpler and cheaper alternative to litigation. Depending on the nature of the case, non-judge experts (doctors, appraisers, actuaries, and so on) may be called by the court as conciliators to help decide the case. Domestic conciliation is most commonly used to handle contentious divorces, but may apply to other domestic disputes such as the annulment of a marriage or acknowledgment of paternity. Parties in such cases are required to undergo conciliation proceedings and may only bring their case to court once conciliation has failed.
[ { "paragraph_id": 0, "text": "Conciliation is an alternative dispute revolution (ADR) process whereby the parties to a dispute use a conciliator, who meets with the parties both separately and together in an attempt to resolve their differences. They do this by lowering tensions, improving communication, interpreting issues, encouraging parties to explore potential solvents and assisting parties in finding a mutually acceptable outcome.", "title": "" }, { "paragraph_id": 1, "text": "Conciliation differs from arbitration in that the conciliation process, in and of itself, has no legal standing, and the conciliator usually has no authority to seek evidence or call witnesses, usually writes no decision, and makes no award.", "title": "" }, { "paragraph_id": 2, "text": "There is a form of \"conciliation\" that is more akin to negotiation. A \"conciliator\" assists each of the parties to independently develop a list of all of their objectives (the outcomes which they desire to obtain from the conciliation). The conciliator then has each of the parties separately prioritize their own list from most to least important. He/She then goes back and forth between the parties and encourages them to \"give\" on the objectives one at a time, starting with the least important and working toward the most important for each party in turn. The parties rarely place the same priorities on all objectives, and usually have some objectives that are not listed by the other party. Thus the conciliator can quickly build a string of successes and help the parties create an atmosphere of trust which the conciliator can continue to develop.", "title": "Conciliation techniques" }, { "paragraph_id": 3, "text": "Most successful \"conciliators\" in this sense are highly skilled negotiators. Some conciliators operate under the auspices of any one of several non-governmental entities, and for governmental agencies such as the Federal Mediation and Conciliation Service in the United States.", "title": "Conciliation techniques" }, { "paragraph_id": 4, "text": "There is a different form of conciliation that, instead of a linear process of bilateral negotiation, employs deep listening and witnessing. Conciliation literally means: \"Process of bringing people together into council\". In this second definition, a conciliator is not so much focused on goals and objectives preset by the parties, but more focused on assisting parties to come together to resolve conflicts on their own. Many people in trying to resolve conflict independently come up with solutions that turn into goals based on understanding only a portion of the whole issue. By helping parties understand deeply where all are coming from, different and new solutions emerge from this deep understanding. The conciliator is in service to this deep witnessing between all parties involved. At times when two or more parties are not ready to face each other nor communicate with each other directly, the conciliator helps parties to understand their own perspective, feel more empowered to speak their truth and represent their own needs in a future dialogue with the other parties to the conflict. The conciliator addresses any power disparities perceived by any party in a safe manner. The ensuing dialogue in this form of conciliation can - with the parties' wishes - involve the conciliator as a facilitator until the parties feel comfortable to communicate on their own. 
This form of conciliation is non-linear and involves an informal method of reconciliation between people who do not necessarily need to negotiate legal issues such as property rights or tort injuries. It can also involve more emotional and passionate elements as tangible and historical topics emerge as the root causes of the conflict. Most successful people who work in conciliation quietly persevere and allow the progressive movements in the parties' healing guide them. More about this process can be found at Consulting & Conciliation Service.", "title": "Conciliation techniques" }, { "paragraph_id": 5, "text": "Historical conciliation is an applied conflict resolution approach that utilizes historical narratives to positively transform relations between societies in conflicts. Historical conciliation can utilize many different methodologies, including mediation, sustained dialogues, apologies, acknowledgement, support of public commemoration activities, and public diplomacy.", "title": "Historical conciliation" }, { "paragraph_id": 6, "text": "Historical conciliation is not an excavation of objective facts. The point of facilitating historical questions is not to discover all the facts in regard to who was right or wrong. Rather, the objective is to discover the complexity, ambiguity, and emotions surrounding both dominant and non-dominant cultural and individual narratives of history. It is also not a rewriting of history. The goal is not to create a combined narrative that everyone agrees upon. Instead, the aim is to create room for critical thinking and more inclusive understanding of the past and conceptions of “the other”.", "title": "Historical conciliation" }, { "paragraph_id": 7, "text": "Some conflicts that are addressed through historical conciliation have their roots in conflicting identities of the people involved. Whether the identity at stake is their ethnicity, religion or culture, it requires a comprehensive approach that takes people's needs for recognition, hopes, fears, and concerns into account.", "title": "Historical conciliation" }, { "paragraph_id": 8, "text": "Some conflicts might be based in unmet needs for security or recognition, or thwarted development. To learn more about the theory of basic human social needs and how they give rise to conflict, please see John Burton, Karen Horney, Hannah Arendt, and Johan Galtung to name a few.", "title": "Historical conciliation" }, { "paragraph_id": 9, "text": "While the above historical summary speaks to some uses of conciliation, it is not the only method and by itself cannot address the entirety of a system of protracted historical conflict. A holistic approach to resolving deep-rooted violent conflict would ideally employ all methods of conflict resolution - education, negotiation, analysis, diplomacy, second track diplomacy, mass therapy, truth and reconciliation, cultural inventory, leadership, peer mediation/facilitation. In short, to resolve a deeply rooted prolonged crisis, it takes all of us, coming from our strengths and positive intentions, and a willingness to allow everyone to come to the table.", "title": "Historical conciliation" }, { "paragraph_id": 10, "text": "For examples of applied conciliation from an historical context, look for Quaker efforts in witness and peacemaking in London, New York and South Africa.", "title": "Historical conciliation" }, { "paragraph_id": 11, "text": "Japanese law makes extensive use of conciliation (調停, chōtei) in civil disputes. 
The most common forms are civil conciliation and domestic conciliation, both of which are managed under the auspice of the court system by one judge and two non-judge \"conciliators\".", "title": "Japan" }, { "paragraph_id": 12, "text": "Civil conciliation is a form of dispute resolution for small lawsuits, and provides a simpler and cheaper alternative to litigation. Depending on the nature of the case, non-judge experts (doctors, appraisers, actuaries, and so on) may be called by the court as conciliators to help decide the case.", "title": "Japan" }, { "paragraph_id": 13, "text": "Domestic conciliation is most commonly used to handle contentious divorces, but may apply to other domestic disputes such as the annulment of a marriage or acknowledgment of paternity. Parties in such cases are required to undergo conciliation proceedings and may only bring their case to court once conciliation has failed.", "title": "Japan" } ]
Conciliation is an alternative dispute revolution (ADR) process whereby the parties to a dispute use a conciliator, who meets with the parties both separately and together in an attempt to resolve their differences. They do this by lowering tensions, improving communication, interpreting issues, encouraging parties to explore potential solvents and assisting parties in finding a mutually acceptable outcome. Conciliation differs from arbitration in that the conciliation process, in and of itself, has no legal standing, and the conciliator usually has no authority to seek evidence or call witnesses, usually writes no decision, and makes no award.
2002-01-04T21:19:38Z
2023-10-24T10:38:14Z
[ "Template:Use dmy dates", "Template:Reflist", "Template:Cite web", "Template:Short description", "Template:More references", "Template:Wiktionary", "Template:Alternative dispute resolution", "Template:Nihongo", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Conciliation
Cyclone (programming language)
The Cyclone programming language is intended to be a safe dialect of the C language. Cyclone is designed to avoid buffer overflows and other vulnerabilities that are possible in C programs, without losing the power and convenience of C as a tool for system programming.

Cyclone development was started as a joint project of AT&T Labs Research and Greg Morrisett's group at Cornell University in 2001. Version 1.0 was released on May 8, 2006.

Cyclone attempts to avoid some of the common pitfalls of C, while still maintaining its look and performance. To this end, Cyclone places a number of restrictions on programs and, to preserve the tool set that C programmers are used to, provides a number of extensions. For a better high-level introduction to Cyclone and the reasoning behind these restrictions and extensions, see this paper.

Cyclone looks, in general, much like C, but it should be viewed as a C-like language.

Cyclone implements three kinds of pointer: the traditional pointer (*), the never-NULL pointer (@), and the bounded "fat" pointer (?), each described below. The purpose of introducing the new pointer types is to avoid common problems when using pointers. Take for instance a function, called foo, that takes a pointer to an int; a sketch of it follows this passage.

Although the person who wrote the function foo could have inserted NULL checks, let us assume that for performance reasons they did not. Calling foo(NULL); will result in undefined behavior (typically, although not necessarily, a SIGSEGV signal being sent to the application). To avoid such problems, Cyclone introduces the @ pointer type, which can never be NULL; the "safe" version of foo, using it, is also sketched below.

This tells the Cyclone compiler that the argument to foo should never be NULL, avoiding the aforementioned undefined behavior. The simple change of * to @ saves the programmer from having to write NULL checks and the operating system from having to trap NULL pointer dereferences. This extra limit, however, can be a rather large stumbling block for most C programmers, who are used to being able to manipulate their pointers directly with arithmetic. Although this is desirable, it can lead to buffer overflows and other "off-by-one"-style mistakes. To avoid this, the ? pointer type is delimited by a known bound, the size of the array. Although this adds overhead due to the extra information stored about the pointer, it improves safety and security. Take for instance a simple (and naïve) strlen function, written in C and sketched below.

This function assumes that the string being passed in is terminated by NULL ('\0'). However, what would happen if char buf[6] = {'h','e','l','l','o','!'}; were passed to this function? This is perfectly legal in C, yet would cause strlen to iterate through memory not necessarily associated with the string s. There are functions, such as strnlen, which can be used to avoid such problems, but these functions are not standard with every implementation of ANSI C. The Cyclone version of strlen, also sketched below, is not so different from the C version.

In the Cyclone version, strlen bounds itself by the length of the array passed to it, thus not going over the actual length. Each of the kinds of pointer type can be safely cast to each of the others, and arrays and strings are automatically cast to ? by the compiler. (Casting from ? to * invokes a bounds check, and casting from ? to @ invokes both a NULL check and a bounds check. Casting from * to ? results in no checks whatsoever; the resulting ? pointer has a size of 1.)

Finally, consider the C function itoa, sketched at the end of this section. It allocates an array of chars, buf, on the stack and returns a pointer to the start of buf.
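A minimal sketch of the foo example described above (the one-line body is illustrative; only the pointer annotation matters). First the plain C form, where nothing stops a caller from passing NULL:

    /* C (or Cyclone with a traditional * pointer): p may be NULL,
       and this dereference is then undefined behavior. */
    int foo(int *p) {
        return *p + 1;
    }

And the Cyclone form using the never-NULL @ pointer, under which foo(NULL) is rejected at compile time:

    /* Cyclone: @ declares a never-NULL pointer, so the compiler
       guarantees p is valid here and no runtime NULL check is needed. */
    int foo(int @p) {
        return *p + 1;
    }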
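The strlen pair, as a sketch (both bodies follow the shape the prose describes; the exact spelling of the bound access, s.size, is an assumption based on Cyclone's fat-pointer idiom):

    /* Naïve C strlen: walks memory until it happens to find a NUL
       byte, with no way to know where the underlying buffer ends. */
    int strlen(const char *s) {
        int iter = 0;
        if (s == NULL) return 0;
        while (s[iter] != '\0') iter++;
        return iter;
    }

    /* Cyclone strlen with a ? (bounded, "fat") pointer: s carries its
       size, so the scan can never run past the end of the array. */
    int strlen(const char ?s) {
        int iter, n = s.size;
        if (s == NULL) return 0;
        for (iter = 0; iter < n; iter++)
            if (s[iter] == '\0') return iter;
        return n;
    }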
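The itoa example, as a C sketch (the buffer size is an arbitrary choice):

    #include <stdio.h>

    /* buf lives in itoa's stack frame, so the returned pointer is
       dangling as soon as itoa returns; most compilers warn here. */
    char *itoa(int i) {
        char buf[20];
        sprintf(buf, "%d", i);
        return buf;
    }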
However, the memory used on the stack for buf is deallocated when the function returns, so the returned value cannot be used safely outside of the function. While the GNU Compiler Collection and other compilers will warn about such code, a slight variant, sketched after this passage, will typically compile without warnings.

The GNU Compiler Collection can produce warnings for such code as a side-effect of option -O2 or -O3, but there are no guarantees that all such errors will be detected. Cyclone does regional analysis of each segment of code, preventing dangling pointers such as the one returned from this version of itoa. All of the local variables in a given scope are considered to be part of the same region, separate from the heap or any other local region. Thus, when analyzing itoa, the Cyclone compiler would see that z is a pointer into the local stack, and would report an error.
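A sketch of the warning-free variant just described (its exact shape is an assumption consistent with the prose): laundering buf through an intermediate pointer z defeats the simple syntactic check a C compiler applies, while Cyclone's region analysis still sees that z points into itoa's local region:

    #include <stdio.h>

    /* Same dangling pointer as before, but returned via z; C compilers
       typically stay silent, whereas Cyclone reports an error because
       z points into the function's local (stack) region. */
    char *itoa(int i) {
        char buf[20];
        char *z;
        sprintf(buf, "%d", i);
        z = buf;
        return z;
    }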
[ { "paragraph_id": 0, "text": "The Cyclone programming language is intended to be a safe dialect of the C language. Cyclone is designed to avoid buffer overflows and other vulnerabilities that are possible in C programs, without losing the power and convenience of C as a tool for system programming.", "title": "" }, { "paragraph_id": 1, "text": "Cyclone development was started as a joint project of AT&T Labs Research and Greg Morrisett's group at Cornell University in 2001. Version 1.0 was released on May 8, 2006.", "title": "" }, { "paragraph_id": 2, "text": "Cyclone attempts to avoid some of the common pitfalls of C, while still maintaining its look and performance. To this end, Cyclone places the following limits on programs:", "title": "Language features" }, { "paragraph_id": 3, "text": "To maintain the tool set that C programmers are used to, Cyclone provides the following extensions:", "title": "Language features" }, { "paragraph_id": 4, "text": "For a better high-level introduction to Cyclone, the reasoning behind Cyclone and the source of these lists, see this paper.", "title": "Language features" }, { "paragraph_id": 5, "text": "Cyclone looks, in general, much like C, but it should be viewed as a C-like language.", "title": "Language features" }, { "paragraph_id": 6, "text": "Cyclone implements three kinds of pointer:", "title": "Language features" }, { "paragraph_id": 7, "text": "The purpose of introducing these new pointer types is to avoid common problems when using pointers. Take for instance a function, called foo that takes a pointer to an int:", "title": "Language features" }, { "paragraph_id": 8, "text": "Although the person who wrote the function foo could have inserted NULL checks, let us assume that for performance reasons they did not. Calling foo(NULL); will result in undefined behavior (typically, although not necessarily, a SIGSEGV signal being sent to the application). To avoid such problems, Cyclone introduces the @ pointer type, which can never be NULL. Thus, the \"safe\" version of foo would be:", "title": "Language features" }, { "paragraph_id": 9, "text": "This tells the Cyclone compiler that the argument to foo should never be NULL, avoiding the aforementioned undefined behavior. The simple change of * to @ saves the programmer from having to write NULL checks and the operating system from having to trap NULL pointer dereferences. This extra limit, however, can be a rather large stumbling block for most C programmers, who are used to being able to manipulate their pointers directly with arithmetic. Although this is desirable, it can lead to buffer overflows and other \"off-by-one\"-style mistakes. To avoid this, the ? pointer type is delimited by a known bound, the size of the array. Although this adds overhead due to the extra information stored about the pointer, it improves safety and security. Take for instance a simple (and naïve) strlen function, written in C:", "title": "Language features" }, { "paragraph_id": 10, "text": "This function assumes that the string being passed in is terminated by NULL ('\\0'). However, what would happen if char buf[6] = {'h','e','l','l','o','!'}; were passed to this string? This is perfectly legal in C, yet would cause strlen to iterate through memory not necessarily associated with the string s. There are functions, such as strnlen which can be used to avoid such problems, but these functions are not standard with every implementation of ANSI C. 
The Cyclone version of strlen is not so different from the C version:", "title": "Language features" }, { "paragraph_id": 11, "text": "Here, strlen bounds itself by the length of the array passed to it, thus not going over the actual length. Each of the kinds of pointer type can be safely cast to each of the others, and arrays and strings are automatically cast to ? by the compiler. (Casting from ? to * invokes a bounds check, and casting from ? to @ invokes both a NULL check and a bounds check. Casting from * to ? results in no checks whatsoever; the resulting ? pointer has a size of 1.)", "title": "Language features" }, { "paragraph_id": 12, "text": "Consider the following code, in C:", "title": "Language features" }, { "paragraph_id": 13, "text": "The function itoa allocates an array of chars buf on the stack and returns a pointer to the start of buf. However, the memory used on the stack for buf is deallocated when the function returns, so the returned value cannot be used safely outside of the function. While GNU Compiler Collection and other compilers will warn about such code, the following will typically compile without warnings:", "title": "Language features" }, { "paragraph_id": 14, "text": "GNU Compiler Collection can produce warnings for such code as a side-effect of option -O2 or -O3, but there are no guarantees that all such errors will be detected. Cyclone does regional analysis of each segment of code, preventing dangling pointers, such as the one returned from this version of itoa. All of the local variables in a given scope are considered to be part of the same region, separate from the heap or any other local region. Thus, when analyzing itoa, the Cyclone compiler would see that z is a pointer into the local stack, and would report an error.", "title": "Language features" }, { "paragraph_id": 15, "text": "Presentations:", "title": "External links" } ]
The Cyclone programming language is intended to be a safe dialect of the C language. Cyclone is designed to avoid buffer overflows and other vulnerabilities that are possible in C programs, without losing the power and convenience of C as a tool for system programming. Cyclone development was started as a joint project of AT&T Labs Research and Greg Morrisett's group at Cornell University in 2001. Version 1.0 was released on May 8, 2006.
2002-01-04T22:45:46Z
2023-12-11T20:21:50Z
[ "Template:CProLang", "Template:Short description", "Template:More footnotes", "Template:Infobox programming language", "Template:Code", "Template:Reflist", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Cyclone_(programming_language)
7,646
Cognitivism
Cognitivism may refer to:
[ { "paragraph_id": 0, "text": "Cognitivism may refer to:", "title": "" } ]
Cognitivism may refer to: Cognitivism (ethics), the philosophical view that ethical sentences express propositions and are capable of being true or false Cognitivism (psychology), a psychological approach that argues that mental function can be understood as the internal manipulation of symbols Cognitivism (aesthetics), a view that cognitive psychology can help understand art and the response to it Anecdotal cognitivism, a psychological methodology for interpreting animal behavior in terms of mental states
2016-12-12T14:01:45Z
[ "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Cognitivism
7,647
Counter (digital)
In digital logic and computing, a counter is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relation to a clock. The most common type is a sequential digital logic circuit with an input line called the clock and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter. A counter circuit is usually constructed of several flip-flops connected in a cascade. Counters are very widely used components in digital circuits, and are manufactured as separate integrated circuits and also incorporated as parts of larger integrated circuits. An electronic counter is a sequential logic circuit that has a clock input signal and a group of output signals that represent an integer "counts" value. Upon each qualified clock edge, the circuit will increment (or decrement, depending on circuit design) the counts. When the counts have reached the end of the counting sequence (maximum counts when incrementing; zero counts when decrementing), the next clock will cause the counts to overflow or underflow, and the counting sequence will start over. Internally, counters use flip-flops to represent the current counts and to retain the counts between clocks. Depending on the type of counter, the output may be a direct representation of the counts (a binary number), or it may be encoded. Examples of the latter include ring counters and counters that output Gray codes. Many counters provide additional input signals to facilitate dynamic control of the counting sequence, such as count enable, up/down direction selection, parallel load, and reset inputs. Some counters provide a Terminal Count output which indicates that the next clock will cause overflow or underflow. This is commonly used to implement counter cascading (combining two or more counters to create a single, larger counter) by connecting the Terminal Count output of one counter to the Enable input of the next counter. The modulus of a counter is the number of states in its count sequence. The maximum possible modulus is determined by the number of flip-flops. For example, a four-bit counter can have a modulus of up to 16 (2^4). Counters are generally classified as either synchronous or asynchronous. In synchronous counters, all flip-flops share a common clock and change state at the same time. In asynchronous counters, each flip-flop has a unique clock, and the flip-flop states change at different times. Counters are categorized in various ways: for example, as synchronous or asynchronous, as up, down, or up/down counters, or by their counting sequence (binary, binary-coded decimal, ring, Johnson, Gray code, and so on). Counters are implemented in a variety of ways, including as dedicated MSI and LSI integrated circuits, as embedded counters within ASICs, as general-purpose counter and timer peripherals in microcontrollers, and as IP blocks in FPGAs. An asynchronous (ripple) counter is a "chain" of toggle (T) flip-flops wherein the least-significant flip-flop (bit 0) is clocked by an external signal (the counter input clock), and all other flip-flops are clocked by the output of the nearest, less significant flip-flop (e.g., bit 0 clocks the bit 1 flip-flop, bit 1 clocks the bit 2 flip-flop, etc.). The first flip-flop is clocked by rising edges; all other flip-flops in the chain are clocked by falling clock edges. Each flip-flop introduces a delay from clock edge to output toggle, thus causing the counter bits to change at different times and producing a ripple effect as the input clock propagates through the chain; the sketch below simulates this behaviour.
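As a concrete illustration of the ripple behaviour just described, here is a small C simulation written for this text (a sketch, not circuitry from the article); each simulated toggle flip-flop is clocked by the falling edge of the stage before it:

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch: a 4-bit asynchronous (ripple) counter built from toggle
     * flip-flops.  Bit 0 toggles on every input clock pulse; each later
     * stage toggles only when the previous stage's output falls from 1
     * to 0, so a carry "ripples" down the chain one stage at a time. */
    int main(void) {
        uint8_t q[4] = {0, 0, 0, 0};   /* flip-flop outputs, bit 0 first */
        for (int pulse = 1; pulse <= 16; pulse++) {
            int stage = 0;
            while (stage < 4) {
                q[stage] ^= 1;          /* toggle this stage */
                if (q[stage] == 1)
                    break;              /* rose 0 -> 1: no falling edge */
                stage++;                /* fell 1 -> 0: clock next stage */
            }
            printf("pulse %2d: %d%d%d%d\n", pulse, q[3], q[2], q[1], q[0]);
        }
        return 0;
    }

Pulse 1 prints 0001, pulse 2 prints 0010, and pulse 16 wraps back to 0000; the inner settling loop also mirrors why real ripple counters momentarily pass through invalid intermediate states.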
When built from discrete devices, ripple counters are commonly implemented with JK flip-flops, with each flip-flop configured to toggle when clocked (i.e., J and K are both connected to logic high). In the simplest case, a one-bit counter consists of a single flip-flop. This counter will increment (by toggling its output) once per clock cycle and will count from zero to one before overflowing (starting over at zero). Each output state corresponds to two clock cycles; consequently, the flip-flop output frequency is exactly half the frequency of the input clock. If this output is then used as the clock signal for a second flip-flop, the pair of flip-flops will form a two-bit ripple counter with the following state sequence: 00, 01, 10, 11, and back to 00. Additional flip-flops may be added to the chain to form counters of any arbitrary word size, with the output frequency of each bit equal to exactly half the frequency of the nearest, less significant bit. Ripple counters exhibit unstable output states while the input clock propagates through the circuit. The duration of this instability (the output settling time) is proportional to the number of flip-flops. This makes ripple counters unsuitable for use in synchronous circuits that require the counter to have a fast output settling time. Also, it is often impractical to use ripple counter output bits as clocks for external circuits because the ripple effect causes timing skew between the bits. Ripple counters are commonly used as general-purpose counters and clock frequency dividers in applications where the instantaneous count and timing skew are unimportant. In a synchronous counter, the clock inputs of the flip-flops are connected, and the common clock simultaneously triggers all flip-flops. Consequently, all of the flip-flops change state at the same time (in parallel). For example, the circuit shown to the right is an ascending (up-counting) four-bit synchronous counter implemented with JK flip-flops. Each bit of this counter is allowed to toggle when all of the less significant bits are at a logic high state. Upon a clock rising edge, bit 1 toggles if bit 0 is logic high; bit 2 toggles if bits 0 and 1 are both high; bit 3 toggles if bits 2, 1, and 0 are all high. A decade counter counts in decimal digits, rather than binary. A decade counter may have each digit represented in binary-coded decimal (as the 7490 integrated circuit did) or use other binary encodings. A decade counter is a binary counter designed to count to 1001 (decimal 9). An ordinary four-stage counter can be easily modified to a decade counter by adding a NAND gate, as in the schematic to the right, in which FF2 and FF4 provide the inputs to the NAND gate and the NAND gate output is connected to the CLR input of each of the flip-flops. The counter counts from 0 to 9 and then resets to zero. The counter output can be set to zero by pulsing the reset line low. The count then increments on each clock pulse until it reaches 1001 (decimal 9). When it increments to 1010 (decimal 10), both inputs of the NAND gate go high, the NAND output goes low, and the counter is reset to zero. D going low can serve as a CARRY OUT signal, indicating that there has been a count of ten; this reset behaviour is imitated in the sketch below.
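This is a behavioural C sketch written for this text, assuming the NAND inputs are the two counter bits that are high in 1010 (binary weights 2 and 8); a real circuit clears the flip-flops through an asynchronous CLR line rather than sequential code:

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch: decade counter behaviour.  The count advances in binary;
     * the moment it reaches 1010 (decimal 10), both NAND inputs are
     * high, the NAND output goes low, and the low CLR line forces every
     * flip-flop back to 0, so only the states 0-9 are ever held. */
    int main(void) {
        uint8_t count = 0;
        for (int pulse = 1; pulse <= 12; pulse++) {
            count = (count + 1) & 0x0F;
            int nand_out = !(((count >> 1) & 1) & ((count >> 3) & 1));
            if (nand_out == 0)          /* count hit 1010: CLR asserted */
                count = 0;
            printf("pulse %2d: %d%d%d%d\n", pulse,
                   (count >> 3) & 1, (count >> 2) & 1,
                   (count >> 1) & 1, count & 1);
        }
        return 0;
    }

Pulses 1 through 9 print 0001 through 1001; on pulse 10 the simulated CLR fires and 0000 is held, after which the cycle repeats.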
A ring counter is a circular shift register that is initialized so that only one of its flip-flops holds a one while the others hold zeros. It is a shift register (a cascade connection of flip-flops) with the output of the last flip-flop connected to the input of the first, that is, in a ring. Typically, a pattern consisting of a single bit is circulated, so the state repeats every n clock cycles if n flip-flops are used. A Johnson counter (or switch-tail ring counter, twisted ring counter, walking ring counter, or Möbius counter) is a modified ring counter, where the output from the last stage is inverted and fed back as input to the first stage. The register cycles through a sequence of bit-patterns whose length is equal to twice the length of the shift register, continuing indefinitely. These counters find specialist applications, including those similar to the decade counter (the 74x4017 decade counter is a Johnson counter) and digital-to-analog conversion. They can be implemented easily using D- or JK-type flip-flops; a short simulation of the state sequence appears below. In computability theory, a counter is considered a type of memory. A counter stores a single natural number (initially zero) and can be arbitrarily long. A counter is usually considered in conjunction with a finite-state machine (FSM), which can perform the following operations on the counter: check whether the counter is zero, increment the counter by one, and decrement the counter by one (permitted only when the counter is non-zero). The following machines are listed in order of power, with each one being strictly more powerful than the one below it: an FSM with two counters, an FSM with one stack (a pushdown automaton), an FSM with one counter, and a plain FSM. For the first and last, it doesn't matter whether the FSM is a deterministic finite automaton or a nondeterministic finite automaton: they have the same power. The first two and the last one are levels of the Chomsky hierarchy. The first machine, an FSM plus two counters, is equivalent in power to a Turing machine. See the article on counter machines for a proof. A web counter or hit counter is a computer program that indicates the number of visitors or hits a particular webpage has received. Once set up, these counters will be incremented by one every time the web page is accessed in a web browser. The number is usually displayed as an inline digital image or in plain text, or on a physical counter such as a mechanical counter. Images may be presented in a variety of fonts or styles; the classic example is the wheels of an odometer. Web counters were popular in the mid to late 1990s and early 2000s, and were later replaced by more detailed and complete web traffic measures. Many automation systems use PCs and laptops to monitor different parameters of machines and production data. Counters may count parameters such as the number of pieces produced, the production batch number, and measurements of the amounts of material used. Long before electronics became common, mechanical devices were used to count events. These are known as tally counters. They typically consist of a series of disks mounted on an axle, with the digits zero through nine marked on their edge. The right-most disk moves one increment with each event. Each disk except the left-most has a protrusion that moves the next disk to the left one increment after the completion of one revolution. Such counters were used as odometers for bicycles and cars, and in tape recorders, fuel dispensers, and production machinery, as well as in other machinery. One of the largest manufacturers was the Veeder-Root company, and their name was often used for this type of counter. Handheld tally counters are used mainly for stocktaking and for counting people attending events. Electromechanical counters were used to accumulate totals in tabulating machines that pioneered the data processing industry.
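Returning to the Johnson counter described earlier: its 2n-state sequence is easy to reproduce in a few lines of C. This is a sketch written for this text, packing the four flip-flop outputs into the low bits of a byte:

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch: a 4-bit Johnson (twisted-ring) counter.  On each clock
     * the register shifts left by one place and the INVERTED output of
     * the last stage is fed back into the first, giving 2 * 4 = 8
     * distinct states before the sequence repeats. */
    int main(void) {
        uint8_t q = 0;                      /* four flip-flops in bits 0..3 */
        for (int clk = 0; clk < 8; clk++) {
            printf("state %d: %d%d%d%d\n", clk,
                   (q >> 3) & 1, (q >> 2) & 1, (q >> 1) & 1, q & 1);
            uint8_t last = (q >> 3) & 1;    /* output of the last stage */
            q = (uint8_t)(((q << 1) | (last ^ 1)) & 0x0F);
        }
        return 0;
    }

The printed sequence is 0000, 0001, 0011, 0111, 1111, 1110, 1100, 1000 and then repeats: twice as many states as the single-bit ring pattern circulated by a plain four-stage ring counter.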
[ { "paragraph_id": 0, "text": "In digital logic and computing, a counter is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock. The most common type is a sequential digital logic circuit with an input line called the clock and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter.", "title": "" }, { "paragraph_id": 1, "text": "A counter circuit is usually constructed of several flip-flops connected in a cascade. Counters are a very widely used component in digital circuits, and are manufactured as separate integrated circuits and also incorporated as parts of larger integrated circuits.", "title": "" }, { "paragraph_id": 2, "text": "An electronic counter is a sequential logic circuit that has a clock input signal and a group of output signals that represent an integer \"counts\" value. Upon each qualified clock edge, the circuit will increment (or decrement, depending on circuit design) the counts. When the counts have reached the end of the counting sequence (maximum counts when incrementing; zero counts when decrementing), the next clock will cause the counts to overflow or underflow, and the counting sequence will start over. Internally, counters use flip-flops to represent the current counts and to retain the counts between clocks. Depending on the type of counter, the output may be a direct representation of the counts (a binary number), or it may be encoded. Examples of the latter include ring counters and counters that output Gray codes.", "title": "Electronic counters" }, { "paragraph_id": 3, "text": "Many counters provide additional input signals to facilitate dynamic control of the counting sequence, such as:", "title": "Electronic counters" }, { "paragraph_id": 4, "text": "Some counters provide a Terminal Count output which indicates that the next clock will cause overflow or underflow. This is commonly used to implement counter cascading (combining two or more counters to create a single, larger counter) by connecting the Terminal Count output of one counter to the Enable input of the next counter.", "title": "Electronic counters" }, { "paragraph_id": 5, "text": "The modulus of a counter is the number of states in its count sequence. The maximum possible modulus is determined by the number of flip-flops. For example, a four-bit counter can have a modulus of up to 16 (2^4).", "title": "Electronic counters" }, { "paragraph_id": 6, "text": "Counters are generally classified as either synchronous or asynchronous. In synchronous counters, all flip-flops share a common clock and change state at the same time. In asynchronous counters, each flip-flop has a unique clock, and the flip-flop states change at different times.", "title": "Electronic counters" }, { "paragraph_id": 7, "text": "Counters are categorized in various ways. 
For example:", "title": "Electronic counters" }, { "paragraph_id": 8, "text": "Counters are implemented in a variety of ways, including as dedicated MSI and LSI integrated circuits, as embedded counters within ASICs, as general-purpose counter and timer peripherals in microcontrollers, and as IP blocks in FPGAs.", "title": "Electronic counters" }, { "paragraph_id": 9, "text": "An asynchronous (ripple) counter is a \"chain\" of toggle (T) flip-flops wherein the least-significant flip-flop (bit 0) is clocked by an external signal (the counter input clock), and all other flip-flops are clocked by the output of the nearest, less significant flip-flop (e.g., bit 0 clocks the bit 1 flip-flop, bit 1 clocks the bit 2 flip-flop, etc.). The first flip-flop is clocked by rising edges; all other flip-flops in the chain are clocked by falling clock edges. Each flip-flop introduces a delay from clock edge to output toggle, thus causing the counter bits to change at different times and producing a ripple effect as the input clock propagates through the chain. When implemented with discrete flip-flops, ripple counters are commonly implemented with JK flip-flops, with each flip-flop configured to toggle when clocked (i.e., J and K are both connected to logic high).", "title": "Electronic counters" }, { "paragraph_id": 10, "text": "In the simplest case, a one-bit counter consists of a single flip-flop. This counter will increment (by toggling its output) once per clock cycle and will count from zero to one before overflowing (starting over at zero). Each output state corresponds to two clock cycles; consequently, the flip-flop output frequency is exactly half the frequency of the input clock. If this output is then used as the clock signal for a second flip-flop, the pair of flip-flops will form a two-bit ripple counter with the following state sequence:", "title": "Electronic counters" }, { "paragraph_id": 11, "text": "Additional flip-flops may be added to the chain to form counters of any arbitrary word size, with the output frequency of each bit equal to exactly half the frequency of the nearest, less significant bit.", "title": "Electronic counters" }, { "paragraph_id": 12, "text": "Ripple counters exhibit unstable output states while the input clock propagates through the circuit. The duration of this instability (the output settling time) is proportional to the number of flip-flops. This makes ripple counters unsuitable for use in synchronous circuits that require the counter to have a fast output settling time. Also, it is often impractical to use ripple counter output bits as clocks for external circuits because the ripple effect causes timing skew between the bits. Ripple counters are commonly used as general-purpose counters and clock frequency dividers in applications where the instantaneous count and timing skew is unimportant.", "title": "Electronic counters" }, { "paragraph_id": 13, "text": "In a synchronous counter, the clock inputs of the flip-flops are connected, and the common clock simultaneously triggers all flip-flops. Consequently, all of the flip-flops change state at the same time (in parallel).", "title": "Electronic counters" }, { "paragraph_id": 14, "text": "For example, the circuit shown to the right is an ascending (up-counting) four-bit synchronous counter implemented with JK flip-flops. Each bit of this counter is allowed to toggle when all of the less significant bits are at a logic high state. 
Upon clock rising edge, bit 1 toggles if bit 0 is logic high; bit 2 toggles if bits 0 and 1 are both high; bit 3 toggles if bits 2, 1, and 0 are all high.", "title": "Electronic counters" }, { "paragraph_id": 15, "text": "A decade counter counts in decimal digits, rather than binary. A decade counter may have each (that is, it may count in binary-coded decimal, as the 7490 integrated circuit did) or other binary encodings. A decade counter is a binary counter designed to count to 1001 (decimal 9). An ordinary four-stage counter can be easily modified to a decade counter by adding a NAND gate as in the schematic to the right. Notice that FF2 and FF4 provide the inputs to the NAND gate. The NAND gate outputs are connected to the CLR input of each of the FFs.\". It counts from 0 to 9 and then resets to zero. The counter output can be set to zero by pulsing the reset line low. The count then increments on each clock pulse until it reaches 1001 (decimal 9). When it increments to 1010 (decimal 10), both inputs of the NAND gate go high. The result is that the NAND output goes low, and resets the counter to zero. D going low can be a CARRY OUT signal, indicating that there has been a count of ten.", "title": "Electronic counters" }, { "paragraph_id": 16, "text": "A ring counter is a circular shift register that is initiated such that only one of its flip-flops is the state one while others are in their zero states.", "title": "Electronic counters" }, { "paragraph_id": 17, "text": "A ring counter is a shift register (a cascade connection of flip-flops) with the output of the last one connected to the input of the first, that is, in a ring. Typically, a pattern consisting of a single bit is circulated, so the state repeats every n clock cycles if n flip-flops are used.", "title": "Electronic counters" }, { "paragraph_id": 18, "text": "A Johnson counter (or switch-tail ring counter, twisted ring counter, walking ring counter, or Möbius counter) is a modified ring counter, where the output from the last stage is inverted and fed back as input to the first stage. The register cycles through a sequence of bit-patterns, whose length is equal to twice the length of the shift register, continuing indefinitely. These counters find specialist applications similar to the decade counter (note: the 74x4017 decade counter is a Johnson counter), digital-to-analog conversion, etc. They can be implemented easily using D- or JK-type flip-flops.", "title": "Electronic counters" }, { "paragraph_id": 19, "text": "In computability theory, a counter is considered a type of memory. A counter stores a single natural number (initially zero) and can be arbitrarily long. A counter is usually considered in conjunction with a finite-state machine (FSM), which can perform the following operations on the counter:", "title": "Computer science counters" }, { "paragraph_id": 20, "text": "The following machines are listed in order of power, with each one being strictly more powerful than the one below it:", "title": "Computer science counters" }, { "paragraph_id": 21, "text": "For the first and last, it doesn't matter whether the FSM is a deterministic finite automaton or a nondeterministic finite automaton. They have the same power. The first two and the last one are levels of the Chomsky hierarchy.", "title": "Computer science counters" }, { "paragraph_id": 22, "text": "The first machine, an FSM plus two counters, is equivalent in power to a Turing machine. 
See the article on counter machines for a proof.", "title": "Computer science counters" }, { "paragraph_id": 23, "text": "A web counter or hit counter is a computer program that indicates the number of visitors or hits a particular webpage has received. Once set up, these counters will be incremented by one every time the web page is accessed in a web browser.", "title": "Computer science counters" }, { "paragraph_id": 24, "text": "The number is usually displayed as an inline digital image or in plain text or on a physical counter such as a mechanical counter. Images may be presented in a variety of fonts, or styles; the classic example is the wheels of an odometer.", "title": "Computer science counters" }, { "paragraph_id": 25, "text": "Web counter was popular in the mid to late 1990s and early 2000s, later replaced by more detailed and complete web traffic measures.", "title": "Computer science counters" }, { "paragraph_id": 26, "text": "Many automation systems use PC and laptops to monitor different parameters of machines and production data. Counters may count parameters such as the number of pieces produced, the production batch number, and measurements of the amounts of material used.", "title": "Computer science counters" }, { "paragraph_id": 27, "text": "Long before electronics became common, mechanical devices were used to count events. These are known as tally counters. They typically consist of a series of disks mounted on an axle, with the digits zero through nine marked on their edge. The right-most disk moves one increment with each event. Each disk except the left-most has a protrusion that moves the next disk to the left one increment after the completion of one revolution. Such counters were used as odometers for bicycles and cars and in tape recorders, fuel dispensers, in production machinery as well as in other machinery. One of the largest manufacturers was the Veeder-Root company, and their name was often used for this type of counter.", "title": "Mechanical counters" }, { "paragraph_id": 28, "text": "Handheld tally counters are used mainly for stocktaking and counting people attending events.", "title": "Mechanical counters" }, { "paragraph_id": 29, "text": "Electromechanical counters were used to accumulate totals in tabulating machines that pioneered the data processing industry.", "title": "Mechanical counters" } ]
In digital logic and computing, a counter is a device which stores the number of times a particular event or process has occurred, often in relationship to a clock. The most common type is a sequential digital logic circuit with an input line called the clock and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter. A counter circuit is usually constructed of several flip-flops connected in a cascade. Counters are a very widely used component in digital circuits, and are manufactured as separate integrated circuits and also incorporated as parts of larger integrated circuits.
2002-02-25T15:43:11Z
2023-10-14T09:32:24Z
[ "Template:Short description", "Template:Vanchor", "Template:Main article", "Template:Reflist", "Template:Cite book", "Template:Commons category-inline", "Template:About", "Template:Cite web", "Template:Citation", "Template:Cite report", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Counter_(digital)
7,649
Cervical mucus method
Cervical mucus method may refer to a specific method of fertility awareness or natural family planning:
[ { "paragraph_id": 0, "text": "Cervical mucus method may refer to a specific method of fertility awareness or natural family planning:", "title": "" } ]
Cervical mucus method may refer to a specific method of fertility awareness or natural family planning: Billings ovulation method Creighton Model FertilityCare System Two Day Method
2012-08-23T18:43:01Z
[ "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Cervical_mucus_method
7,651
Coleridge (disambiguation)
Samuel Taylor Coleridge (1772–1834) was an English poet, literary critic, philosopher and theologian. Coleridge may also refer to:
[ { "paragraph_id": 0, "text": "Samuel Taylor Coleridge (1772–1834) was an English poet, literary critic, philosopher and theologian.", "title": "" }, { "paragraph_id": 1, "text": "Coleridge may also refer to:", "title": "" } ]
Samuel Taylor Coleridge (1772–1834) was an English poet, literary critic, philosopher and theologian. Coleridge may also refer to:
2019-09-19T09:53:58Z
[ "Template:Disambiguation", "Template:Wiktionary", "Template:Intitle" ]
https://en.wikipedia.org/wiki/Coleridge_(disambiguation)
7,655
Clay Mathematics Institute
The Clay Mathematics Institute (CMI) is a private, non-profit foundation dedicated to increasing and disseminating mathematical knowledge. Formerly based in Peterborough, New Hampshire, the institute now has its corporate address in Denver, Colorado. CMI's scientific activities are managed from the President's office in Oxford, United Kingdom. It gives out various awards and sponsorships to promising mathematicians. The institute was founded in 1998 through the sponsorship of Boston businessman Landon T. Clay. Harvard mathematician Arthur Jaffe was the first president of CMI. While the institute is best known for its Millennium Prize Problems, it carries out a wide range of activities, including a postdoctoral program (ten Clay Research Fellows are currently supported), conferences, workshops, and summer schools. The institute is run according to a standard structure comprising a scientific advisory committee that decides on grant-awarding and research proposals, and a board of directors that oversees and approves the committee's decisions. As of September 2021, the board is made up of members of the Clay family, whereas the advisory committee is composed of Simon Donaldson, Michael Hopkins, Andrei Okounkov, Gigliola Staffilani and Andrew Wiles. Martin R. Bridson is the current president of CMI. The institute is best known for establishing the Millennium Prize Problems on May 24, 2000. These seven problems are considered by CMI to be "important classic questions that have resisted solution over the years." For each problem, the first person to solve it will be awarded US$1,000,000 by the CMI. In announcing the prize, CMI drew a parallel to Hilbert's problems, which were proposed in 1900, and had a substantial impact on 20th century mathematics. Of the initial 23 Hilbert problems, most of which have been solved, only the Riemann hypothesis (formulated in 1859) is included in the seven Millennium Prize Problems. For each problem, the Institute had a professional mathematician write up an official statement of the problem, which serves as the main standard against which a given solution is measured. The seven problems are the Birch and Swinnerton-Dyer conjecture, the Hodge conjecture, the Navier–Stokes existence and smoothness problem, the P versus NP problem, the Poincaré conjecture, the Riemann hypothesis, and the Yang–Mills existence and mass gap problem. Some of the mathematicians who were involved in the selection and presentation of the seven problems were Michael Atiyah, Enrico Bombieri, Alain Connes, Pierre Deligne, Charles Fefferman, John Milnor, David Mumford, Andrew Wiles, and Edward Witten. In recognition of major breakthroughs in mathematical research, the institute has an annual prize, the Clay Research Award. Its recipients to date are Ian Agol, Manindra Agrawal, Yves Benoist, Manjul Bhargava, Tristan Buckmaster, Danny Calegari, Alain Connes, Nils Dencker, Alex Eskin, David Gabai, Ben Green, Mark Gross, Larry Guth, Christopher Hacon, Richard S. Hamilton, Michael Harris, Philip Isett, Jeremy Kahn, Nets Katz, Laurent Lafforgue, Gérard Laumon, Aleksandr Logunov, Eugenia Malinnikova, Vladimir Markovic, James McKernan, Jason Miller, Maryam Mirzakhani, Ngô Bảo Châu, Rahul Pandharipande, Jonathan Pila, Jean-François Quint, Peter Scholze, Oded Schramm, Scott Sheffield, Bernd Siebert, Stanislav Smirnov, Terence Tao, Clifford Taubes, Richard Taylor, Maryna Viazovska, Vlad Vicol, Claire Voisin, Jean-Loup Waldspurger, Andrew Wiles, Geordie Williamson, Edward Witten and Wei Zhang.
Besides the Millennium Prize Problems, the Clay Mathematics Institute supports mathematics via the awarding of research fellowships (which range from two to five years and are aimed at younger mathematicians), as well as shorter-term scholarships for programs, individual research, and book writing. The institute also has a yearly Clay Research Award, recognizing major breakthroughs in mathematical research. Finally, the institute organizes a number of summer schools, conferences, workshops, public lectures, and outreach activities aimed primarily at junior mathematicians (from the high-school to the postdoctoral level). CMI publications are available in PDF form no more than six months after they appear in print. This article incorporates material from Millennium Problems on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. 41°49′34.4″N 71°24′54.7″W / 41.826222°N 71.415194°W / 41.826222; -71.415194
[ { "paragraph_id": 0, "text": "The Clay Mathematics Institute (CMI) is a private, non-profit foundation dedicated to increasing and disseminating mathematical knowledge. Formerly based in Peterborough, New Hampshire, the corporate address is now in Denver, Colorado. CMI's scientific activities are managed from the President's office in Oxford, United Kingdom. It gives out various awards and sponsorships to promising mathematicians. The institute was founded in 1998 through the sponsorship of Boston businessman Landon T. Clay. Harvard mathematician Arthur Jaffe was the first president of CMI.", "title": "" }, { "paragraph_id": 1, "text": "While the institute is best known for its Millennium Prize Problems, it carries out a wide range of activities, including a postdoctoral program (ten Clay Research Fellows are supported currently), conferences, workshops, and summer schools.", "title": "" }, { "paragraph_id": 2, "text": "The institute is run according to a standard structure comprising a scientific advisory committee that decides on grant-awarding and research proposals, and a board of directors that oversees and approves the committee's decisions. As of September 2021, the board is made up of members of the Clay family, whereas the advisory committee is composed of Simon Donaldson, Michael Hopkins, Andrei Okounkov, Gigliola Staffilani and Andrew Wiles. Martin R. Bridson is the current president of CMI.", "title": "Governance" }, { "paragraph_id": 3, "text": "The institute is best known for establishing the Millennium Prize Problems on May 24, 2000. These seven problems are considered by CMI to be \"important classic questions that have resisted solution over the years.\" For each problem, the first person to solve it will be awarded US$1,000,000 by the CMI. In announcing the prize, CMI drew a parallel to Hilbert's problems, which were proposed in 1900, and had a substantial impact on 20th century mathematics. Of the initial 23 Hilbert problems, most of which have been solved, only the Riemann hypothesis (formulated in 1859) is included in the seven Millennium Prize Problems.", "title": "Millennium Prize Problems" }, { "paragraph_id": 4, "text": "For each problem, the Institute had a professional mathematician write up an official statement of the problem, which will be the main standard by which a given solution will be measured against. The seven problems are:", "title": "Millennium Prize Problems" }, { "paragraph_id": 5, "text": "Some of the mathematicians who were involved in the selection and presentation of the seven problems were Michael Atiyah, Enrico Bombieri, Alain Connes, Pierre Deligne, Charles Fefferman, John Milnor, David Mumford, Andrew Wiles, and Edward Witten.", "title": "Millennium Prize Problems" }, { "paragraph_id": 6, "text": "In recognition of major breakthroughs in mathematical research, the institute has an annual prize — the Clay Research Award. Its recipients to date are Ian Agol, Manindra Agrawal, Yves Benoist, Manjul Bhargava, Tristan Buckmaster, Danny Calegari, Alain Connes, Nils Dencker, Alex Eskin, David Gabai, Ben Green, Mark Gross, Larry Guth, Christopher Hacon, Richard S. 
Hamilton, Michael Harris, Philip Isett, Jeremy Kahn, Nets Katz, Laurent Lafforgue, Gérard Laumon, Aleksandr Logunov, Eugenia Malinnikova, Vladimir Markovic, James McKernan, Jason Miller, Maryam Mirzakhani, Ngô Bảo Châu, Rahul Pandharipande, Jonathan Pila, Jean-François Quint, Peter Scholze, Oded Schramm, Scott Sheffield, Bernd Siebert, Stanislav Smirnov, Terence Tao, Clifford Taubes, Richard Taylor, Maryna Viazovska, Vlad Vicol, Claire Voisin, Jean-Loup Waldspurger, Andrew Wiles, Geordie Williamson, Edward Witten and Wei Zhang.", "title": "Other awards" }, { "paragraph_id": 7, "text": "Besides the Millennium Prize Problems, the Clay Mathematics Institute supports mathematics via the awarding of research fellowships (which range from two to five years and are aimed at younger mathematicians), as well as shorter-term scholarships for programs, individual research, and book writing. The institute also has a yearly Clay Research Award, recognizing major breakthroughs in mathematical research. Finally, the institute organizes a number of summer schools, conferences, workshops, public lectures, and outreach activities aimed primarily at junior mathematicians (from the high school to the postdoctoral level). CMI publications are available in PDF form at most six months after they appear in print.", "title": "Other activities" }, { "paragraph_id": 8, "text": "This article incorporates material from Millennium Problems on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. 41°49′34.4″N 71°24′54.7″W / 41.826222°N 71.415194°W / 41.826222; -71.415194", "title": "External links" } ]
The Clay Mathematics Institute (CMI) is a private, non-profit foundation dedicated to increasing and disseminating mathematical knowledge. Formerly based in Peterborough, New Hampshire, the corporate address is now in Denver, Colorado. CMI's scientific activities are managed from the President's office in Oxford, United Kingdom. It gives out various awards and sponsorships to promising mathematicians. The institute was founded in 1998 through the sponsorship of Boston businessman Landon T. Clay. Harvard mathematician Arthur Jaffe was the first president of CMI. While the institute is best known for its Millennium Prize Problems, it carries out a wide range of activities, including a postdoctoral program, conferences, workshops, and summer schools.
2002-01-06T05:18:51Z
2023-12-12T18:01:06Z
[ "Template:PlanetMath attribution", "Template:Coord", "Template:Authority control", "Template:Short description", "Template:Cite news", "Template:ISBN", "Template:Main", "Template:Cite web", "Template:Cite press release", "Template:Multiple issues", "Template:Infobox organization", "Template:As of" ]
https://en.wikipedia.org/wiki/Clay_Mathematics_Institute
7,659
Cerebral arteriovenous malformation
A cerebral arteriovenous malformation (cerebral AVM, CAVM, cAVM, brain AVM, or BAVM) is an abnormal connection between the arteries and veins in the brain; specifically, it is an arteriovenous malformation in the cerebrum. The most frequently observed problems related to a cerebral arteriovenous malformation (AVM) are headaches and seizures, cranial nerve afflictions including pinched nerve and palsy, backaches, neckaches, and nausea from coagulated blood that has made its way down to be dissolved in the cerebrospinal fluid. Perhaps 15% of those affected are asymptomatic at the time of detection. Other common symptoms are a pulsing noise in the head, progressive weakness, numbness, and vision changes, as well as debilitating, excruciating pain. In serious cases, blood vessels rupture and cause bleeding within the brain (intracranial hemorrhage). In more than half of patients with AVM, this is the first symptom. Symptoms due to bleeding include loss of consciousness, sudden and severe headache, nausea, vomiting, incontinence, and blurred vision, amongst others. Impairments caused by local brain-tissue damage at the bleed site are also possible, including seizure, one-sided weakness (hemiparesis), a loss of touch sensation on one side of the body, and deficits in language processing (aphasia). Ruptured AVMs are responsible for considerable mortality and morbidity. AVMs in certain critical locations may stop the circulation of the cerebrospinal fluid, causing it to accumulate within the skull and giving rise to a clinical condition called hydrocephalus. A stiff neck can occur as the result of increased pressure within the skull and irritation of the meninges. A cerebral AVM is an abnormal anastomosis (connection) between the arteries and veins in the human brain and is most commonly of prenatal origin. In a normal brain, oxygen-enriched blood from the heart travels in sequence through smaller blood vessels, going from arteries to arterioles and then capillaries. Oxygen is removed in the capillaries to be used by the brain. After the oxygen is removed, blood reaches venules and later veins, which will take it back to the heart and lungs. A cerebral AVM causes blood to travel from arteries to veins through the abnormal connections, disrupting normal circulation. A cerebral AVM diagnosis is established by neuroimaging studies after a complete neurological and physical examination. Three main techniques are used to visualize the brain and search for an AVM: computed tomography (CT), magnetic resonance imaging (MRI), and cerebral angiography. A CT scan of the head is usually performed first when the subject is symptomatic. It can suggest the approximate site of the bleed. MRI is more sensitive than CT in the diagnosis, and provides better information about the exact location of the malformation. More detailed pictures of the tangle of blood vessels that compose an AVM can be obtained by using contrast agents injected into the blood stream. If CT is used in conjunction with an angiogram, this is called a computed tomography angiogram, while if MRI is used, it is called a magnetic resonance angiogram. The best images of a cerebral AVM are obtained through cerebral angiography. This procedure involves using a catheter, threaded through an artery up to the head, to deliver a contrast agent into the AVM. As the contrast agent flows through the AVM structure, a sequence of X-ray images is obtained. A common method of grading cerebral AVMs is the Spetzler-Martin (SM) grade.
This system was designed to assess the patient's risk of neurological deficit after open surgical resection (surgical morbidity), based on characteristics of the AVM itself. Based on this system, AVMs may be classified as grades 1–5. This system was not intended to characterize risk of hemorrhage. "Eloquent" areas are defined as areas within the brain that, if removed, will result in loss of sensory processing or linguistic ability, minor paralysis, or paralysis. These include the basal ganglia, language cortices, sensorimotor regions, and white matter tracts. Importantly, eloquent areas are often defined differently across studies, in which the deep cerebellar nuclei, cerebral peduncles, thalamus, hypothalamus, internal capsule, brainstem, and the visual cortex could be included. The risk of post-surgical neurological deficit (difficulty with language, motor weakness, vision loss) increases with increasing Spetzler-Martin grade. A limitation of the Spetzler-Martin grading system is that it does not include the following factors: patient age, hemorrhage, diffuseness of nidus, and arterial supply. In 2010, a new supplemented Spetzler-Martin system (SM-supp, Lawton-Young) was devised, adding these variables to the SM system. Under this new system, AVMs are classified from grades 1–10. It has since been determined to have greater predictive accuracy than SM grades alone. Treatment depends on the location and size of the AVM and whether there is bleeding or not. The treatment in the case of sudden bleeding is focused on restoration of vital function. Anticonvulsant medications such as phenytoin are often used to control seizures; medications or procedures may be employed to relieve intracranial pressure. Eventually, curative treatment may be required to prevent recurrent hemorrhage. However, any type of intervention may also carry a risk of creating a neurological deficit. Surgical elimination of the blood vessels involved is the preferred curative treatment for many types of AVM. Surgery is performed by a neurosurgeon who temporarily removes part of the skull (craniotomy), separates the AVM from surrounding brain tissue, and resects the abnormal vessels. While surgery can result in an immediate, complete removal of the AVM, risks exist depending on the size and the location of the malformation. The AVM must be resected en bloc, for partial resection will likely cause severe hemorrhage. The preferred treatment of Spetzler-Martin grade 1 and 2 AVMs in young, healthy patients is surgical resection, due to the relatively small risk of neurological damage compared to the high lifetime risk of hemorrhage. Grade 3 AVMs may or may not be amenable to surgery. Grade 4 and 5 AVMs are not usually surgically treated. Radiosurgery has been widely used on small AVMs with considerable success. The Gamma Knife is an apparatus used to precisely apply a controlled radiation dosage to the volume of the brain occupied by the AVM. While this treatment does not require an incision and craniotomy (with their own inherent risks), three or more years may pass before the complete effects are known, during which time patients are at risk of bleeding. Complete obliteration of the AVM may or may not occur after several years, and repeat treatment may be needed. Radiosurgery is itself not without risk. In one large study, nine percent of patients had transient neurological symptoms, including headache, after radiosurgery for AVM. However, most symptoms resolved, and the long-term rate of neurological symptoms was 3.8%.
Embolization is performed by interventional neuroradiologists, and the occlusion of blood vessels is most commonly obtained with ethylene vinyl alcohol copolymer (Onyx) or n-butyl cyanoacrylate. These substances are introduced by a radiographically guided catheter and block the vessels responsible for blood flow into the AVM. Embolization is frequently used as an adjunct to either surgery or radiation treatment: it reduces the size of the AVM, and during surgery it reduces the risk of bleeding. However, embolization alone may completely obliterate some AVMs. In high-flow intranidal fistulas, balloons can also be used to reduce the flow so that embolization can be done safely. A first-of-its-kind controlled clinical trial by the National Institutes of Health and National Institute of Neurological Disorders and Stroke focuses on the risk of stroke or death in patients with an AVM who either did or did not undergo interventional eradication. Early results suggest that the invasive treatment of unruptured AVMs tends to yield worse results than the therapeutic (medical) management of symptoms. Because of the higher-than-expected experimental event rate (e.g. stroke or death), patient enrollment was halted by May 2013, while the study intended to follow participants (over a planned 5 to 10 years) to determine which approach seems to produce better long-term results. The main risk is intracranial hemorrhage. This risk is difficult to quantify since many patients with asymptomatic AVMs will never come to medical attention. Small AVMs tend to bleed more often than do larger ones, the opposite of cerebral aneurysms. If a rupture or bleeding incident occurs, the blood may penetrate either into the brain tissue (cerebral hemorrhage) or into the subarachnoid space, which is located between the sheaths (meninges) surrounding the brain (subarachnoid hemorrhage). Bleeding may also extend into the ventricular system (intraventricular hemorrhage). Cerebral hemorrhage appears to be most common. One long-term study (mean follow-up greater than 20 years) of over 150 symptomatic AVMs (presenting with either bleeding or seizures) found the risk of cerebral hemorrhage to be approximately 4% per year, slightly higher than the 2–4% seen in other studies. The earlier an AVM appears, the more likely it is to cause hemorrhage over one's lifetime: for example, assuming a 3% annual risk, an AVM detected at 25 years of age implies a 79% lifetime chance of hemorrhage, while one detected at age 85 implies only a 17% chance (the compounding behind these figures is sketched below). Ruptured AVMs are a significant source of morbidity and mortality; following a rupture, as many as 29% of patients will die, and only 55% are able to live independently. The annual new detection rate of AVMs is approximately 1 per 100,000. The point prevalence in adults is approximately 18 per 100,000. AVMs are more common in males than females, although in females pregnancy may start or worsen symptoms due to the increase in blood flow and volume it usually brings. There is a significant preponderance (15–20%) of AVM in patients with hereditary hemorrhagic telangiectasia (Osler–Weber–Rendu syndrome).
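The lifetime figures quoted in the prognosis discussion above follow from compounding a constant annual risk. A sketch of the arithmetic, assuming a constant 3% annual hemorrhage risk, independence between years, and remaining life expectancies of roughly 52 years at age 25 and 6 years at age 85 (illustrative values; the article does not state which life tables were used):

    % Cumulative risk of at least one hemorrhage from a constant annual
    % risk p over n remaining years, assuming independence between years:
    \[
      P_{\text{lifetime}} = 1 - (1 - p)^{n}, \qquad
      1 - 0.97^{52} \approx 0.79, \qquad
      1 - 0.97^{6} \approx 0.17 .
    \]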
[ { "paragraph_id": 0, "text": "A cerebral arteriovenous malformation (cerebral AVM, CAVM, cAVM, brain AVM, or BAVM) is an abnormal connection between the arteries and veins in the brain—specifically, an arteriovenous malformation in the cerebrum.", "title": "" }, { "paragraph_id": 1, "text": "The most frequently observed problems related to a cerebral arteriovenous malformation (AVM) are headaches and seizures, cranial nerve afflictions including pinched nerve and palsy, backaches, neckaches, and nausea from coagulated blood that has made its way down to be dissolved in the cerebrospinal fluid. Perhaps 15% of the population at detection are asymptomatic. Other common symptoms are a pulsing noise in the head, progressive weakness, numbness and vision changes as well as debilitating, excruciating pain.", "title": "Signs and symptoms" }, { "paragraph_id": 2, "text": "In serious cases, blood vessels rupture and cause bleeding within the brain (intracranial hemorrhage). In more than half of patients with AVM, this is the first symptom. Symptoms due to bleeding include loss of consciousness, sudden and severe headache, nausea, vomiting, incontinence, and blurred vision, amongst others. Impairments caused by local brain-tissue damage on the bleed site are also possible, including seizure, one-sided weakness (hemiparesis), a loss of touch sensation on one side of the body and deficits in language processing (aphasia). Ruptured AVMs are responsible for considerable mortality and morbidity.", "title": "Signs and symptoms" }, { "paragraph_id": 3, "text": "AVMs in certain critical locations may stop the circulation of the cerebrospinal fluid, causing it to accumulate within the skull and giving rise to a clinical condition called hydrocephalus. A stiff neck can occur as the result of increased pressure within the skull and irritation of the meninges.", "title": "Signs and symptoms" }, { "paragraph_id": 4, "text": "A cerebral AVM is an abnormal anastomosis (connection) between the arteries and veins in the human brain and are most commonly of prenatal origin. In a normal brain, oxygen-enriched blood from the heart travels in sequence through smaller blood vessels going from arteries, to arterioles and then capillaries. Oxygen is removed in the capillaries to be used by the brain. After the oxygen is removed, blood reaches venules and later veins which will take it back to the heart and lungs. A cerebral AVM causes blood to travel from arteries to veins through the abnormal connections, disrupting normal circulation.", "title": "Pathophysiology" }, { "paragraph_id": 5, "text": "A cerebral AVM diagnosis is established by neuroimaging studies after a complete neurological and physical examination. Three main techniques are used to visualize the brain and search for an AVM: computed tomography (CT), magnetic resonance imaging (MRI), and cerebral angiography. A CT scan of the head is usually performed first when the subject is symptomatic. It can suggest the approximate site of the bleed. MRI is more sensitive than CT in the diagnosis, and provides better information about the exact location of the malformation. More detailed pictures of the tangle of blood vessels that compose an AVM can be obtained by using radioactive agents injected into the blood stream. If a CT is used in conjunctiangiogram, this is called a computerized tomography angiogram; while, if MRI is used it is called magnetic resonance angiogram. The best images of a cerebral AVM are obtained through cerebral angiography. 
This procedure involves using a catheter, threaded through an artery up to the head, to deliver a contrast agent into the AVM. As the contrast agent flows through the AVM structure, a sequence of X-ray images are obtained.", "title": "Diagnosis" }, { "paragraph_id": 6, "text": "A common method of grading cerebral AVMs is the Spetzler-Martin (SM) grade. This system was designed to assess the patient's risk of neurological deficit after open surgical resection (surgical morbidity), based on characteristics of the AVM itself. Based on this system, AVMs may be classified as grades 1–5. This system was not intended to characterize risk of hemorrhage.", "title": "Grading" }, { "paragraph_id": 7, "text": "\"Eloquent\" is defined as areas within the brain that, if removed will result in loss of sensory processing or linguistic ability, minor paralysis, or paralysis. These include the basal ganglia, language cortices, sensorimotor regions, and white matter tracts. Importantly, eloquent areas are often defined differently across studies where deep cerebellar nuclei, cerebral peduncles, thalamus, hypothalamus, internal capsule, brainstem, and the visual cortex could be included.", "title": "Grading" }, { "paragraph_id": 8, "text": "The risk of post-surgical neurological deficit (difficulty with language, motor weakness, vision loss) increases with increasing Spetzler-Martin grade.", "title": "Grading" }, { "paragraph_id": 9, "text": "A limitation of the Spetzler-Martin Grading system is that it does not include the following factors: Patient age, hemorrhage, diffuseness of nidus, and arterial supply. In 2010 a new supplemented Spetzler-Martin system (SM-supp, Lawton-Young) was devised adding these variables to the SM system. Under this new system AVMs are classified from grades 1–10. It has since been determined to have greater predictive accuracy than SM grades alone.", "title": "Grading" }, { "paragraph_id": 10, "text": "Treatment depends on the location and size of the AVM and whether there is bleeding or not.", "title": "Treatment" }, { "paragraph_id": 11, "text": "The treatment in the case of sudden bleeding is focused on restoration of vital function.", "title": "Treatment" }, { "paragraph_id": 12, "text": "Anticonvulsant medications such as phenytoin are often used to control seizure; medications or procedures may be employed to relieve intracranial pressure. Eventually, curative treatment may be required to prevent recurrent hemorrhage. However, any type of intervention may also carry a risk of creating a neurological deficit.", "title": "Treatment" }, { "paragraph_id": 13, "text": "Surgical elimination of the blood vessels involved is the preferred curative treatment for many types of AVM. Surgery is performed by a neurosurgeon who temporarily removes part of the skull (craniotomy), separates the AVM from surrounding brain tissue, and resects the abnormal vessels. While surgery can result in an immediate, complete removal of the AVM, risks exist depending on the size and the location of the malformation. The AVM must be resected en bloc, for partial resection will likely cause severe hemorrhage. The preferred treatment of Spetzler-Martin grade 1 and 2 AVMs in young, healthy patients is surgical resection due to the relatively small risk of neurological damage compared to the high lifetime risk of hemorrhage. Grade 3 AVMs may or may not be amenable to surgery. 
Grade 4 and 5 AVMs are not usually surgically treated.", "title": "Treatment" }, { "paragraph_id": 14, "text": "Radiosurgery has been widely used on small AVMs with considerable success. The Gamma Knife is an apparatus used to precisely apply a controlled radiation dosage to the volume of the brain occupied by the AVM. While this treatment does not require an incision and craniotomy (with their own inherent risks), three or more years may pass before the complete effects are known, during which time patients are at risk of bleeding. Complete obliteration of the AVM may or may not occur after several years, and repeat treatment may be needed. Radiosurgery is itself not without risk. In one large study, nine percent of patients had transient neurological symptoms, including headache, after radiosurgery for AVM. However, most symptoms resolved, and the long-term rate of neurological symptoms was 3.8%.", "title": "Treatment" }, { "paragraph_id": 15, "text": "Embolization is performed by interventional neuroradiologists and the occlusion of blood vessels most commonly is obtained with ethylene vinyl alcohol copolymer (Onyx) or n-butyl cyanoacrylate. These substances are introduced by a radiographically guided catheter, and block vessels responsible for blood flow into the AVM. Embolization is frequently used as an adjunct to either surgery or radiation treatment. Embolization reduces the size of the AVM and during surgery it reduces the risk of bleeding. However, embolization alone may completely obliterate some AVMs. In high flow intranidal fistulas balloons can also be used to reduce the flow so that embolization can be done safely.", "title": "Treatment" }, { "paragraph_id": 16, "text": "A first-of-its-kind controlled clinical trial by the National Institutes of Health and National Institute of Neurological Disorders and Stroke focuses on the risk of stroke or death in patients with an AVM who either did or did not undergo interventional eradication. Early results suggest that the invasive treatment of unruptured AVMs tends to yield worse results than the therapeutic (medical) management of symptoms. Because of the higher-than-expected experimental event rate (e.g. stroke or death), patient enrollment was halted by May 2013, while the study intended to follow participants (over a planned 5 to 10 years) to determine which approach seems to produce better long-term results.", "title": "Treatment" }, { "paragraph_id": 17, "text": "The main risk is intracranial hemorrhage. This risk is difficult to quantify since many patients with asymptomatic AVMs will never come to medical attention. Small AVMs tend to bleed more often than do larger ones, the opposite of cerebral aneurysms. If a rupture or bleeding incident occurs, the blood may penetrate either into the brain tissue (cerebral hemorrhage) or into the subarachnoid space, which is located between the sheaths (meninges) surrounding the brain (subarachnoid hemorrhage). Bleeding may also extend into the ventricular system (intraventricular hemorrhage). Cerebral hemorrhage appears to be most common. One long-term study (mean follow up greater than 20 years) of over 150 symptomatic AVMs (either presenting with bleeding or seizures) found the risk of cerebral hemorrhage to be approximately 4% per year, slightly higher than the 2–4% seen in other studies. The earlier an AVM appears, the more likely it is to cause hemorrhage over one's lifetime; e.g. 
assuming a 3% annual risk, an AVM appearing at 25 years of age carries a 79% lifetime chance of hemorrhage, while one appearing at age 85 carries only a 17% chance. Ruptured AVMs are a significant source of morbidity and mortality; following a rupture, as many as 29% of patients will die, and only 55% will be able to live independently.", "title": "Prognosis" }, { "paragraph_id": 18, "text": "The annual new detection rate of AVMs is approximately 1 per 100,000. The point prevalence in adults is approximately 18 per 100,000. AVMs are more common in males than females, although in females pregnancy may trigger or worsen symptoms due to the increase in blood flow and volume it usually brings. There is a significant preponderance (15–20%) of AVM in patients with hereditary hemorrhagic telangiectasia (Osler–Weber–Rendu syndrome).", "title": "Epidemiology" }, { "paragraph_id": 19, "text": "Footnotes", "title": "References" }, { "paragraph_id": 20, "text": "Citations", "title": "References" } ]
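The lifetime figures above come from compounding a constant annual risk over the expected remaining years of life. Below is a minimal sketch of that arithmetic in Python, assuming the standard multiplicative formula (lifetime risk = 1 − (1 − annual risk) raised to the power of the remaining years) and illustrative remaining-life-expectancy values that are not given in the text:

```python
def lifetime_hemorrhage_risk(annual_risk: float, remaining_years: float) -> float:
    """Probability of at least one hemorrhage, assuming a constant,
    independent annual risk compounded over the remaining years of life."""
    return 1 - (1 - annual_risk) ** remaining_years

# Assumed remaining life expectancies, for illustration only:
# ~52 years for a 25-year-old, ~6 years for an 85-year-old.
for age, years_left in [(25, 52), (85, 6)]:
    risk = lifetime_hemorrhage_risk(0.03, years_left)
    print(f"age {age}: {risk:.0%} lifetime risk")  # prints ~79% and ~17%
```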
A cerebral arteriovenous malformation is an abnormal connection between the arteries and veins in the brain—specifically, an arteriovenous malformation in the cerebrum.
2002-02-25T15:51:15Z
2023-10-18T20:51:35Z
[ "Template:Cite web", "Template:Congenital vascular defects", "Template:Infobox medical condition (new)", "Template:Nowrap", "Template:See also", "Template:Reflist", "Template:Cite book", "Template:Medical resources", "Template:Commons category", "Template:Use mdy dates", "Template:Efn", "Template:Notelist", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Cerebral_arteriovenous_malformation
7,660
Comparative method
In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor and then extrapolating backwards to infer the properties of that ancestor. The comparative method may be contrasted with the method of internal reconstruction in which the internal development of a single language is inferred by the analysis of features within that language. Ordinarily, both methods are used together to reconstruct prehistoric phases of languages; to fill in gaps in the historical record of a language; to discover the development of phonological, morphological and other linguistic systems and to confirm or to refute hypothesised relationships between languages. The comparative method emerged in the early 19th century with the birth of Indo-European studies, then took a definite scientific approach with the works of the Neogrammarians in the late 19th–early 20th century. Key contributions were made by the Danish scholars Rasmus Rask (1787–1832) and Karl Verner (1846–1896), and the German scholar Jacob Grimm (1785–1863). The first linguist to offer reconstructed forms from a proto-language was August Schleicher (1821–1868) in his Compendium der vergleichenden Grammatik der indogermanischen Sprachen, originally published in 1861. Here is Schleicher's explanation of why he offered reconstructed forms: In the present work an attempt is made to set forth the inferred Indo-European original language side by side with its really existent derived languages. Besides the advantages offered by such a plan, in setting immediately before the eyes of the student the final results of the investigation in a more concrete form, and thereby rendering easier his insight into the nature of particular Indo-European languages, there is, I think, another of no less importance gained by it, namely that it shows the baselessness of the assumption that the non-Indian Indo-European languages were derived from Old-Indian (Sanskrit). The aim of the comparative method is to highlight and interpret systematic phonological and semantic correspondences between two or more attested languages. If those correspondences cannot be rationally explained as the result of linguistic universals or language contact (borrowings, areal influence, etc.), and if they are sufficiently numerous, regular, and systematic that they cannot be dismissed as chance similarities, then it must be assumed that they descend from a single parent language called the 'proto-language'. A sequence of regular sound changes (along with their underlying sound laws) can then be postulated to explain the correspondences between the attested forms, which eventually allows for the reconstruction of a proto-language by the methodical comparison of "linguistic facts" within a generalized system of correspondences. Every linguistic fact is part of a whole in which everything is connected to everything else. One detail must not be linked to another detail, but one linguistic system to another. Relation is considered to be "established beyond a reasonable doubt" if a reconstruction of the common ancestor is feasible. The ultimate proof of genetic relationship, and to many linguists' minds the only real proof, lies in a successful reconstruction of the ancestral forms from which the semantically corresponding cognates can be derived. 
In some cases, this reconstruction can only be partial, generally because the compared languages are too scarcely attested, the temporal distance between them and their proto-language is too deep, or their internal evolution renders many of the sound laws obscure to researchers. In such cases, a relation is considered plausible, but uncertain. Descent is defined as transmission across the generations: children learn a language from the parents' generation and, after being influenced by their peers, transmit it to the next generation, and so on. For example, a continuous chain of speakers across the centuries links Vulgar Latin to all of its modern descendants. Two languages are genetically related if they descended from the same ancestor language. For example, Italian and French both come from Latin and therefore belong to the same family, the Romance languages. Having a large component of vocabulary from a certain origin is not sufficient to establish relatedness; for example, heavy borrowing from Arabic into Persian has caused more of the vocabulary of Modern Persian to be from Arabic than from the direct ancestor of Persian, Proto-Indo-Iranian, but Persian remains a member of the Indo-Iranian family and is not considered "related" to Arabic. However, it is possible for languages to have different degrees of relatedness. English, for example, is related to both German and Russian but is more closely related to the former than to the latter. Although all three languages share a common ancestor, Proto-Indo-European, English and German also share a more recent common ancestor, Proto-Germanic, but Russian does not. Therefore, English and German are considered to belong to a subgroup of Indo-European that Russian does not belong to, the Germanic languages. The division of related languages into subgroups is accomplished by finding shared linguistic innovations that differentiate them from the parent language. For instance, English and German both exhibit the effects of a collection of sound changes known as Grimm's Law, which Russian was not affected by. The fact that English and German share this innovation is seen as evidence of English and German's more recent common ancestor—since the innovation actually took place within that common ancestor, before English and German diverged into separate languages. On the other hand, shared retentions from the parent language are not sufficient evidence of a sub-group. For example, German and Russian both retain from Proto-Indo-European a contrast between the dative case and the accusative case, which English has lost. However, that similarity between German and Russian is not evidence that German is more closely related to Russian than to English but means only that the innovation in question, the loss of the accusative/dative distinction, happened more recently in English than the divergence of English from German. In Antiquity, Romans were aware of the similarities between Greek and Latin, but did not study them systematically. They sometimes explained them mythologically, as the result of Rome being a Greek colony speaking a debased dialect. Even though grammarians of Antiquity had access to other languages around them (Oscan, Umbrian, Etruscan, Gaulish, Egyptian, Parthian...), they showed little interest in comparing, studying, or just documenting them. Comparison between languages really began after Antiquity. 
In the 9th or 10th century AD, Yehuda Ibn Quraysh compared the phonology and morphology of Hebrew, Aramaic and Arabic but attributed the resemblance to the Biblical story of Babel, with Abraham, Isaac and Joseph retaining Adam's language, with other languages at various removes becoming more altered from the original Hebrew. In publications of 1647 and 1654, Marcus van Boxhorn first described a rigorous methodology for historical linguistic comparisons and proposed the existence of an Indo-European proto-language, which he called "Scythian", unrelated to Hebrew but ancestral to Germanic, Greek, Romance, Persian, Sanskrit, Slavic, Celtic and Baltic languages. The Scythian theory was further developed by Andreas Jäger (1686) and William Wotton (1713), who made early forays to reconstruct the primitive common language. In 1710 and 1723, Lambert ten Kate first formulated the regularity of sound laws, introducing among others the term root vowel. Another early systematic attempt to prove the relationship between two languages on the basis of similarity of grammar and lexicon was made by the Hungarian János Sajnovics in 1770, when he attempted to demonstrate the relationship between Sami and Hungarian. That work was later extended to all Finno-Ugric languages in 1799 by his countryman Samuel Gyarmathi. However, the origin of modern historical linguistics is often traced back to Sir William Jones, an English philologist living in India, who in 1786 made his famous observation: The Sanscrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists. There is a similar reason, though not quite so forcible, for supposing that both the Gothick and the Celtick, though blended with a very different idiom, had the same origin with the Sanscrit; and the old Persian might be added to the same family. The comparative method developed out of attempts to reconstruct the proto-language mentioned by Jones, which he did not name but subsequent linguists have labelled Proto-Indo-European (PIE). The first professional comparison between the Indo-European languages that were then known was made by the German linguist Franz Bopp in 1816. He did not attempt a reconstruction but demonstrated that Greek, Latin and Sanskrit shared a common structure and a common lexicon. In 1808, Friedrich Schlegel first stated the importance of using the eldest possible form of a language when trying to prove its relationships; in 1818, Rasmus Christian Rask developed the principle of regular sound-changes to explain his observations of similarities between individual words in the Germanic languages and their cognates in Greek and Latin. Jacob Grimm, better known for his Fairy Tales, used the comparative method in Deutsche Grammatik (published 1819–1837 in four volumes), which attempted to show the development of the Germanic languages from a common origin, which was the first systematic study of diachronic language change. Both Rask and Grimm were unable to explain apparent exceptions to the sound laws that they had discovered. 
Although Hermann Grassmann explained one of the anomalies with the publication of Grassmann's law in 1862, Karl Verner made a methodological breakthrough in 1875, when he identified a pattern now known as Verner's law, the first sound-law based on comparative evidence showing that a phonological change in one phoneme could depend on other factors within the same word (such as neighbouring phonemes and the position of the accent), which are now called conditioning environments. Similar discoveries made by the Junggrammatiker (usually translated as "Neogrammarians") at the University of Leipzig in the late 19th century led them to conclude that all sound changes were ultimately regular, resulting in the famous statement by Karl Brugmann and Hermann Osthoff in 1878 that "sound laws have no exceptions". That idea is fundamental to the modern comparative method since it necessarily assumes regular correspondences between sounds in related languages and thus regular sound changes from the proto-language. The Neogrammarian hypothesis led to the application of the comparative method to reconstruct Proto-Indo-European since Indo-European was then by far the most well-studied language family. Linguists working with other families soon followed suit, and the comparative method quickly became the established method for uncovering linguistic relationships. There is no fixed set of steps to be followed in the application of the comparative method, but some steps are suggested by Lyle Campbell and Terry Crowley, who are both authors of introductory texts in historical linguistics. This abbreviated summary is based on their concepts of how to proceed. This step involves making lists of words that are likely cognates among the languages being compared. If there is a regularly recurring match between the phonetic structure of basic words with similar meanings, a genetic kinship can probably then be established. For example, linguists looking at the Polynesian family might come up with a list similar to the following (their actual list would be much longer): Borrowings or false cognates can skew or obscure the correct data. For example, English taboo ([tæbu]) is like the six Polynesian forms because of borrowing from Tongan into English, not because of a genetic similarity. That problem can usually be overcome by using basic vocabulary, such as kinship terms, numbers, body parts and pronouns. Nonetheless, even basic vocabulary can sometimes be borrowed. Finnish, for example, borrowed the word for "mother", äiti, from Proto-Germanic *aiþį̄ (compare to Gothic aiþei). English borrowed the pronouns "they", "them", and "their(s)" from Norse. Thai and various other East Asian languages borrowed their numbers from Chinese. An extreme case is represented by Pirahã, a Muran language of South America, which has been controversially claimed to have borrowed all of its pronouns from Nheengatu. The next step involves determining the regular sound-correspondences exhibited by the lists of potential cognates. For example, in the Polynesian data above, it is apparent that words that contain t in most of the languages listed have cognates in Hawaiian with k in the same position. That is visible in multiple cognate sets: the words glossed as 'one', 'three', 'man' and 'taboo' all show the relationship. The situation is called a "regular correspondence" between k in Hawaiian and t in the other Polynesian languages; a toy tally of such recurring matches is sketched below. 
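The sketch below counts word-initial matches across a few Polynesian cognate sets. The forms and the first-segment simplification are illustrative assumptions; a real comparison would align every segment of much longer word lists:

```python
from collections import Counter
from itertools import combinations

# Simplified cognate sets illustrating the Hawaiian k : Polynesian t
# correspondence described above (forms given here for illustration).
cognates = {
    "one":   {"Tongan": "taha", "Samoan": "tasi", "Maori": "tahi", "Hawaiian": "kahi"},
    "three": {"Tongan": "tolu", "Samoan": "tolu", "Maori": "toru", "Hawaiian": "kolu"},
    "man":   {"Tongan": "tangata", "Samoan": "tagata", "Maori": "tangata", "Hawaiian": "kanaka"},
    "taboo": {"Tongan": "tapu", "Samoan": "tapu", "Maori": "tapu", "Hawaiian": "kapu"},
}

def initial_correspondences(sets):
    """Tally how often each pair of word-initial segments co-occurs,
    for every pair of languages, across all cognate sets."""
    counts = Counter()
    for forms in sets.values():
        for (la, wa), (lb, wb) in combinations(sorted(forms.items()), 2):
            counts[(la, lb, wa[0], wb[0])] += 1
    return counts

for (la, lb, sa, sb), n in sorted(initial_correspondences(cognates).items()):
    print(f"{la} {sa}- : {lb} {sb}-  ({n} sets)")
# A pair that recurs across many sets (here Hawaiian k- : Tongan t-) is a
# candidate regular correspondence; one-off matches are treated as noise.
```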
Similarly, a regular correspondence can be seen between Hawaiian and Rapanui h, Tongan and Samoan f, Maori ɸ, and Rarotongan ʔ. Mere phonetic similarity, as between English day and Latin dies (both with the same meaning), has no probative value. English initial d- does not regularly match Latin d- since a large set of English and Latin non-borrowed cognates cannot be assembled such that English d repeatedly and consistently corresponds to Latin d at the beginning of a word, and whatever sporadic matches can be observed are due either to chance (as in the above example) or to borrowing (for example, Latin diabolus and English devil, both ultimately of Greek origin). However, English and Latin exhibit a regular correspondence of t- : d- (in which "A : B" means "A corresponds to B"), as in the following examples: If there are many regular correspondence sets of this kind (the more, the better), a common origin becomes a virtual certainty, particularly if some of the correspondences are non-trivial or unusual. During the late 18th to late 19th century, two major developments improved the method's effectiveness. First, it was found that many sound changes are conditioned by a specific context. For example, in both Greek and Sanskrit, an aspirated stop evolved into an unaspirated one, but only if a second aspirate occurred later in the same word; this is Grassmann's law, first described for Sanskrit by Sanskrit grammarian Pāṇini and promulgated by Hermann Grassmann in 1863. Second, it was found that sometimes sound changes occurred in contexts that were later lost. For instance, in Sanskrit velars (k-like sounds) were replaced by palatals (ch-like sounds) whenever the following vowel was *i or *e. Subsequent to this change, all instances of *e were replaced by a. The situation could be reconstructed only because the original distribution of e and a could be recovered from the evidence of other Indo-European languages. For instance, the Latin suffix que, "and", preserves the original *e vowel that caused the consonant shift in Sanskrit: Verner's Law, discovered by Karl Verner c. 1875, provides a similar case: the voicing of consonants in Germanic languages underwent a change that was determined by the position of the old Indo-European accent. Following the change, the accent shifted to initial position. Verner solved the puzzle by comparing the Germanic voicing pattern with Greek and Sanskrit accent patterns. This stage of the comparative method, therefore, involves examining the correspondence sets discovered in step 2 and seeing which of them apply only in certain contexts. If two (or more) sets apply in complementary distribution, they can be assumed to reflect a single original phoneme: "some sound changes, particularly conditioned sound changes, can result in a proto-sound being associated with more than one correspondence set". For example, the following potential cognate list can be established for Romance languages, which descend from Latin: They evidence two correspondence sets, k : k and k : ʃ: Since French ʃ occurs only before a where the other languages also have a, and French k occurs elsewhere, the difference is caused by different environments (being before a conditions the change), and the sets are complementary. They can, therefore, be assumed to reflect a single proto-phoneme (in this case *k, spelled ⟨c⟩ in Latin). The original Latin words are corpus, crudus, catena and captiare, all with an initial k. 
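The conditioning check itself can also be made mechanical: group each reflex by the segment that followed the original consonant and see whether the environments overlap. A minimal sketch under the Romance data above, with rough, assumed transcriptions of the French reflexes (illustrative, not exact):

```python
from collections import defaultdict

# (Latin etymon, French reflex) pairs from the example above,
# in rough IPA-like transcription (illustrative spellings).
pairs = [
    ("korpus", "kɔʁ"),     # corpus > corps
    ("krudus", "kʁy"),     # crudus > cru
    ("katena", "ʃɛn"),     # catena > chaîne
    ("kaptiare", "ʃase"),  # captiare > chasser
]

# Map each French reflex of Latin initial k to the set of segments
# that followed k in Latin (its conditioning environment).
envs = defaultdict(set)
for latin, french in pairs:
    envs[french[0]].add(latin[1])

print(dict(envs))  # {'k': {'o', 'r'}, 'ʃ': {'a'}}
# The environments are disjoint (ʃ only before a, k elsewhere): the two
# correspondence sets are complementary and can reflect one proto-phoneme *k.
```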
If more evidence along those lines were given, one might conclude that an alteration of the original k took place because of a different environment. A more complex case involves consonant clusters in Proto-Algonquian. The Algonquianist Leonard Bloomfield used the reflexes of the clusters in four of the daughter languages to reconstruct the following correspondence sets: Although all five correspondence sets overlap with one another in various places, they are not in complementary distribution and so Bloomfield recognised that a different cluster must be reconstructed for each set. His reconstructions were, respectively, *hk, *xk, *čk (=[t͡ʃk]), *šk (=[ʃk]), and *çk (in which 'x' and 'ç' are arbitrary symbols, rather than attempts to guess the phonetic value of the proto-phonemes). Typology assists in deciding what reconstruction best fits the data. For example, the voicing of voiceless stops between vowels is common, but the devoicing of voiced stops in that environment is rare. If a correspondence -t- : -d- between vowels is found in two languages, the proto-phoneme is more likely to be *-t-, with a development to the voiced form in the second language. The opposite reconstruction would represent a rare type. However, unusual sound changes occur. The Proto-Indo-European word for two, for example, is reconstructed as *dwō, which is reflected in Classical Armenian as erku. Several other cognates demonstrate a regular change *dw- → erk- in Armenian. Similarly, in Bearlake, a dialect of the Athabaskan language Slavey, there has been a sound change of Proto-Athabaskan *ts → Bearlake kʷ. It is very unlikely that *dw- changed directly into erk- and *ts into kʷ, but they probably instead went through several intermediate steps before they arrived at the later forms. It is not phonetic similarity that matters for the comparative method but rather regular sound correspondences. By the principle of economy, the reconstruction of a proto-phoneme should require as few sound changes as possible to arrive at the modern reflexes in the daughter languages. For example, Algonquian languages exhibit the following correspondence set: The simplest reconstruction for this set would be either *m or *b. Both *m → b and *b → m are likely. Because m occurs in five of the languages and b in only one of them, if *b is reconstructed, it is necessary to assume five separate changes of *b → m, but if *m is reconstructed, it is necessary to assume only one change of *m → b, and so *m would be most economical (a toy tally of this count is sketched at the end of this passage). That argument assumes the languages other than Arapaho to be at least partly independent of one another. If they all formed a common subgroup, the development *b → m would have to be assumed to have occurred only once. In the final step, the linguist checks to see how the proto-phonemes fit the known typological constraints. For example, a hypothetical system has only one voiced stop, *b, and although it has an alveolar and a velar nasal, *n and *ŋ, there is no corresponding labial nasal. However, languages generally maintain symmetry in their phonemic inventories. In this case, a linguist might attempt to investigate the possibilities that either what was earlier reconstructed as *b is in fact *m or that the *n and *ŋ are in fact *d and *g. Even a symmetrical system can be typologically suspicious. For example, the traditional Proto-Indo-European stop inventory comprises three series (plain voiceless, plain voiced, and voiced aspirated stops); an earlier voiceless aspirated row was removed on grounds of insufficient evidence. 
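As flagged above, the economy count for the m : b correspondence set can be made explicit. A minimal sketch, with hypothetical daughter-language labels standing in for the table that is not reproduced in this text:

```python
# Hypothetical correspondence set: five daughters reflect m, one (Arapaho,
# as in the text) reflects b. The other language labels are placeholders.
reflexes = {
    "Daughter1": "m", "Daughter2": "m", "Daughter3": "m",
    "Daughter4": "m", "Daughter5": "m", "Arapaho": "b",
}

def changes_needed(proto: str) -> int:
    """Assuming independent daughters, each reflex that differs from the
    proto-phoneme costs one separate sound change."""
    return sum(1 for r in reflexes.values() if r != proto)

for candidate in sorted(set(reflexes.values())):
    print(f"*{candidate}: {changes_needed(candidate)} independent change(s)")
# *b requires five changes (*b -> m, five times); *m requires one (*m -> b),
# so *m is the more economical reconstruction.
```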
Since the mid-20th century, a number of linguists have argued that this traditional Proto-Indo-European stop inventory is implausible and that it is extremely unlikely for a language to have a voiced aspirated (breathy voice) series without a corresponding voiceless aspirated series. Thomas Gamkrelidze and Vyacheslav Ivanov provided a potential solution and argued that the series that are traditionally reconstructed as plain voiced should be reconstructed as glottalized: either implosive (ɓ, ɗ, ɠ) or ejective (pʼ, tʼ, kʼ). The plain voiceless and voiced aspirated series would thus be replaced by just voiceless and voiced, with aspiration being a non-distinctive quality of both. That example of the application of linguistic typology to linguistic reconstruction has become known as the glottalic theory. It has a large number of proponents but is not generally accepted. The reconstruction of proto-sounds logically precedes the reconstruction of grammatical morphemes (word-forming affixes and inflectional endings), patterns of declension and conjugation and so on. The full reconstruction of an unrecorded protolanguage is an open-ended task. The limitations of the comparative method were recognized by the very linguists who developed it, but it is still seen as a valuable tool. In the case of Indo-European, the method seemed at least a partial validation of the centuries-old search for an Ursprache, the original language. The other languages were presumed to be ordered in a family tree, the tree model of the Neogrammarians. The archaeologists followed suit and attempted to find archaeological evidence of a culture or cultures that could be presumed to have spoken a proto-language, such as Vere Gordon Childe's The Aryans: a study of Indo-European origins, 1926. Childe was a philologist turned archaeologist. Those views culminated in the Siedlungsarchäologie, or "settlement-archaeology", of Gustaf Kossinna, becoming known as "Kossinna's Law". Kossinna asserted that cultures represent ethnic groups, including their languages, but his law was rejected after World War II. The fall of Kossinna's Law removed the temporal and spatial framework previously applied to many proto-languages. Fox concludes: The Comparative Method as such is not, in fact, historical; it provides evidence of linguistic relationships to which we may give a historical interpretation.... [Our increased knowledge about the historical processes involved] has probably made historical linguists less prone to equate the idealizations required by the method with historical reality.... Provided we keep [the interpretation of the results and the method itself] apart, the Comparative Method can continue to be used in the reconstruction of earlier stages of languages. Proto-languages can be verified in many historical instances, such as Latin. Although no longer a law, settlement-archaeology is known to be essentially valid for some cultures that straddle history and prehistory, such as the Celtic Iron Age (mainly Celtic) and Mycenaean civilization (mainly Greek). None of those models can be or have been completely rejected, but none is sufficient alone. The foundation of the comparative method, and of comparative linguistics in general, is the Neogrammarians' fundamental assumption that "sound laws have no exceptions". When it was initially proposed, critics of the Neogrammarians proposed an alternative position, summarised by the maxim "each word has its own history". Several types of change actually alter words in irregular ways. 
Unless identified, they may hide or distort laws and cause false perceptions of relationship. All languages borrow words from other languages in various contexts. Loanwords imitate the form of the donor language, as in Finnic kuningas, from Proto-Germanic *kuningaz ('king'), with possible adaptations to the local phonology, as in Japanese sakkā, from English soccer. At first sight, borrowed words may mislead the investigator into seeing a genetic relationship, although they can more easily be identified with information on the historical stages of both the donor and receiver languages. Inherently, words that were borrowed from a common source (such as English coffee and Basque kafe, ultimately from Arabic qahwah) do share a genetic relationship, although limited to the history of this word. Borrowing on a larger scale occurs in areal diffusion, when features are adopted by contiguous languages over a geographical area. The borrowing may be phonological, morphological or lexical. A false proto-language may be reconstructed for the area, or the diffused features may be attributed to a third language serving as their source. Several areal features and other influences may converge to form a Sprachbund, a wider region sharing features that appear to be related but are diffusional. For instance, the Mainland Southeast Asia linguistic area, before it was recognised, suggested several false classifications of such languages as Chinese, Thai and Vietnamese. Sporadic changes, such as irregular inflections, compounding and abbreviation, do not follow any laws. For example, the Spanish words palabra ('word'), peligro ('danger') and milagro ('miracle') would have been parabla, periglo, miraglo by regular sound changes from the Latin parabŏla, perīcŭlum and mīrācŭlum, but the r and l changed places by sporadic metathesis. Analogy is the sporadic change of a feature to be like another feature in the same or a different language. It may affect a single word or be generalized to an entire class of features, such as a verb paradigm. An example is the Russian word for nine. The word, by regular sound changes from Proto-Slavic, should have been /nʲevʲatʲ/, but it is in fact /dʲevʲatʲ/. It is believed that the initial nʲ- changed to dʲ- under the influence of the word for "ten" in Russian, /dʲesʲatʲ/. Those who study contemporary language changes, such as William Labov, acknowledge that even a systematic sound change is applied at first inconsistently, with the percentage of its occurrence in a person's speech dependent on various social factors. The sound change seems to gradually spread in a process known as lexical diffusion. While it does not invalidate the Neogrammarians' axiom that "sound laws have no exceptions", the gradual application of those very sound laws shows that they do not always apply to all lexical items at the same time. Hock notes, "While it probably is true in the long run every word has its own history, it is not justified to conclude as some linguists have, that therefore the Neogrammarian position on the nature of linguistic change is falsified". The comparative method cannot recover aspects of a language that were not inherited in its daughter idioms. For instance, the Latin declension pattern was lost in Romance languages, making it impossible to fully reconstruct such features via systematic comparison. 
The comparative method is used to construct a tree model (German Stammbaum) of language evolution, in which daughter languages are seen as branching from the proto-language, gradually growing more distant from it through accumulated phonological, morpho-syntactic, and lexical changes. The tree model features nodes that are presumed to be distinct proto-languages existing independently in distinct regions during distinct historical times. The reconstruction of unattested proto-languages lends itself to the illusion that such distinct, independently existing proto-languages were real, since the reconstructions cannot be verified, and the linguist is free to select whatever definite times and places seem best. Right from the outset of Indo-European studies, however, Thomas Young said: It is not, however, very easy to say what the definition should be that should constitute a separate language, but it seems most natural to call those languages distinct, of which the one cannot be understood by common persons in the habit of speaking the other.... Still, however, it may remain doubtfull whether the Danes and the Swedes could not, in general, understand each other tolerably well... nor is it possible to say if the twenty ways of pronouncing the sounds, belonging to the Chinese characters, ought or ought not to be considered as so many languages or dialects.... But,... the languages so nearly allied must stand next to each other in a systematic order… The assumption of uniformity in a proto-language, implicit in the comparative method, is problematic. Even small language communities always have differences in dialect, whether they are based on area, gender, class or other factors. The Pirahã language of Brazil is spoken by only several hundred people but has at least two different dialects, one spoken by men and one by women. Campbell points out: It is not so much that the comparative method 'assumes' no variation; rather, it is just that there is nothing built into the comparative method which would allow it to address variation directly.... This assumption of uniformity is a reasonable idealization; it does no more damage to the understanding of the language than, say, modern reference grammars do which concentrate on a language's general structure, typically leaving out consideration of regional or social variation. Different dialects, as they evolve into separate languages, remain in contact with and influence one another. Even after they are considered distinct, languages near one another continue to influence one another and often share grammatical, phonological, and lexical innovations. A change in one language of a family may spread to neighboring languages, and multiple waves of change are communicated like waves across language and dialect boundaries, each with its own randomly delimited range. If a language is divided into an inventory of features, each with its own time and range (isoglosses), they do not all coincide. History and prehistory may not offer a time and place for a distinct coincidence, as may be the case for Proto-Italic, for which the proto-language is only a concept. However, Hock observes: The discovery in the late nineteenth century that isoglosses can cut across well-established linguistic boundaries at first created considerable attention and controversy. And it became fashionable to oppose a wave theory to a tree theory.... Today, however, it is quite evident that the phenomena referred to by these two terms are complementary aspects of linguistic change.... The reconstruction of unknown proto-languages is inherently subjective. 
In the Proto-Algonquian example above, the choice of *m as the parent phoneme is only likely, not certain. It is conceivable that a Proto-Algonquian language with *b in those positions split into two branches, one that preserved *b and one that changed it to *m instead, and while the first branch developed only into Arapaho, the second spread out more widely and developed into all the other Algonquian languages. It is also possible that the nearest common ancestor of the Algonquian languages used some other sound instead, such as *p, which eventually mutated to *b in one branch and to *m in the other. Examples of strikingly complicated and even circular developments are indeed known to have occurred (such as Proto-Indo-European *t > Pre-Proto-Germanic *þ > Proto-Germanic *ð > Proto-West-Germanic *d > Old High German t in fater > Modern German Vater), but in the absence of any evidence or other reason to postulate a more complicated development, the preference of a simpler explanation is justified by the principle of parsimony, also known as Occam's razor. Since reconstruction involves many such choices, some linguists prefer to view the reconstructed features as abstract representations of sound correspondences, rather than as objects with a historical time and place. The existence of proto-languages and the validity of the comparative method are verifiable if the reconstruction can be matched to a known language, which may be known only as a shadow in the loanwords of another language. For example, Finnic languages such as Finnish have borrowed many words from an early stage of Germanic, and the shape of the loans matches the forms that have been reconstructed for Proto-Germanic. Finnish kuningas 'king' and kaunis 'beautiful' match the Germanic reconstructions *kuningaz and *skauniz (> German König 'king', schön 'beautiful'). The wave model was developed in the 1870s as an alternative to the tree model to represent the historical patterns of language diversification. Both the tree-based and the wave-based representations are compatible with the comparative method. By contrast, some approaches are incompatible with the comparative method, including the contentious glottochronology and the even more controversial mass lexical comparison, which most historical linguists consider flawed and unreliable.
[ { "paragraph_id": 0, "text": "In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor and then extrapolating backwards to infer the properties of that ancestor. The comparative method may be contrasted with the method of internal reconstruction in which the internal development of a single language is inferred by the analysis of features within that language. Ordinarily, both methods are used together to reconstruct prehistoric phases of languages; to fill in gaps in the historical record of a language; to discover the development of phonological, morphological and other linguistic systems and to confirm or to refute hypothesised relationships between languages.", "title": "" }, { "paragraph_id": 1, "text": "The comparative method emerged in the early 19th century with the birth of Indo-European studies, then took a definite scientific approach with the works of the Neogrammarians in the late 19th–early 20th century. Key contributions were made by the Danish scholars Rasmus Rask (1787–1832) and Karl Verner (1846–1896), and the German scholar Jacob Grimm (1785–1863). The first linguist to offer reconstructed forms from a proto-language was August Schleicher (1821–1868) in his Compendium der vergleichenden Grammatik der indogermanischen Sprachen, originally published in 1861. Here is Schleicher's explanation of why he offered reconstructed forms:", "title": "" }, { "paragraph_id": 2, "text": "In the present work an attempt is made to set forth the inferred Indo-European original language side by side with its really existent derived languages. Besides the advantages offered by such a plan, in setting immediately before the eyes of the student the final results of the investigation in a more concrete form, and thereby rendering easier his insight into the nature of particular Indo-European languages, there is, I think, another of no less importance gained by it, namely that it shows the baselessness of the assumption that the non-Indian Indo-European languages were derived from Old-Indian (Sanskrit).", "title": "" }, { "paragraph_id": 3, "text": "The aim of the comparative method is to highlight and interpret systematic phonological and semantic correspondences between two or more attested languages. If those correspondences cannot be rationally explained as the result of linguistic universals or language contact (borrowings, areal influence, etc.), and if they are sufficiently numerous, regular, and systematic that they cannot be dismissed as chance similarities, then it must be assumed that they descend from a single parent language called the 'proto-language'.", "title": "Definition" }, { "paragraph_id": 4, "text": "A sequence of regular sound changes (along with their underlying sound laws) can then be postulated to explain the correspondences between the attested forms, which eventually allows for the reconstruction of a proto-language by the methodical comparison of \"linguistic facts\" within a generalized system of correspondences.", "title": "Definition" }, { "paragraph_id": 5, "text": "Every linguistic fact is part of a whole in which everything is connected to everything else. 
One detail must not be linked to another detail, but one linguistic system to another.", "title": "Definition" }, { "paragraph_id": 6, "text": "Relation is considered to be \"established beyond a reasonable doubt\" if a reconstruction of the common ancestor is feasible.", "title": "Definition" }, { "paragraph_id": 7, "text": "The ultimate proof of genetic relationship, and to many linguists' minds the only real proof, lies in a successful reconstruction of the ancestral forms from which the semantically corresponding cognates can be derived.", "title": "Definition" }, { "paragraph_id": 8, "text": "In some cases, this reconstruction can only be partial, generally because the compared languages are too scarcely attested, the temporal distance between them and their proto-language is too deep, or their internal evolution renders many of the sound laws obscure to researchers. In such cases, a relation is considered plausible, but uncertain.", "title": "Definition" }, { "paragraph_id": 9, "text": "Descent is defined as transmission across the generations: children learn a language from the parents' generation and, after being influenced by their peers, transmit it to the next generation, and so on. For example, a continuous chain of speakers across the centuries links Vulgar Latin to all of its modern descendants.", "title": "Definition" }, { "paragraph_id": 10, "text": "Two languages are genetically related if they descended from the same ancestor language. For example, Italian and French both come from Latin and therefore belong to the same family, the Romance languages. Having a large component of vocabulary from a certain origin is not sufficient to establish relatedness; for example, heavy borrowing from Arabic into Persian has caused more of the vocabulary of Modern Persian to be from Arabic than from the direct ancestor of Persian, Proto-Indo-Iranian, but Persian remains a member of the Indo-Iranian family and is not considered \"related\" to Arabic.", "title": "Definition" }, { "paragraph_id": 11, "text": "However, it is possible for languages to have different degrees of relatedness. English, for example, is related to both German and Russian but is more closely related to the former than to the latter. Although all three languages share a common ancestor, Proto-Indo-European, English and German also share a more recent common ancestor, Proto-Germanic, but Russian does not. Therefore, English and German are considered to belong to a subgroup of Indo-European that Russian does not belong to, the Germanic languages.", "title": "Definition" }, { "paragraph_id": 12, "text": "The division of related languages into subgroups is accomplished by finding shared linguistic innovations that differentiate them from the parent language. For instance, English and German both exhibit the effects of a collection of sound changes known as Grimm's Law, which Russian was not affected by. The fact that English and German share this innovation is seen as evidence of English and German's more recent common ancestor—since the innovation actually took place within that common ancestor, before English and German diverged into separate languages. On the other hand, shared retentions from the parent language are not sufficient evidence of a sub-group. For example, German and Russian both retain from Proto-Indo-European a contrast between the dative case and the accusative case, which English has lost. 
However, that similarity between German and Russian is not evidence that German is more closely related to Russian than to English but means only that the innovation in question, the loss of the accusative/dative distinction, happened more recently in English than the divergence of English from German.", "title": "Definition" }, { "paragraph_id": 13, "text": "In Antiquity, Romans were aware of the similarities between Greek and Latin, but did not study them systematically. They sometimes explained them mythologically, as the result of Rome being a Greek colony speaking a debased dialect.", "title": "Origin and development" }, { "paragraph_id": 14, "text": "Even though grammarians of Antiquity had access to other languages around them (Oscan, Umbrian, Etruscan, Gaulish, Egyptian, Parthian...), they showed little interest in comparing, studying, or just documenting them. Comparison between languages really began after Antiquity.", "title": "Origin and development" }, { "paragraph_id": 15, "text": "In the 9th or 10th century AD, Yehuda Ibn Quraysh compared the phonology and morphology of Hebrew, Aramaic and Arabic but attributed the resemblance to the Biblical story of Babel, with Abraham, Isaac and Joseph retaining Adam's language, with other languages at various removes becoming more altered from the original Hebrew.", "title": "Origin and development" }, { "paragraph_id": 16, "text": "In publications of 1647 and 1654, Marcus van Boxhorn first described a rigorous methodology for historical linguistic comparisons and proposed the existence of an Indo-European proto-language, which he called \"Scythian\", unrelated to Hebrew but ancestral to Germanic, Greek, Romance, Persian, Sanskrit, Slavic, Celtic and Baltic languages. The Scythian theory was further developed by Andreas Jäger (1686) and William Wotton (1713), who made early forays to reconstruct the primitive common language. In 1710 and 1723, Lambert ten Kate first formulated the regularity of sound laws, introducing among others the term root vowel.", "title": "Origin and development" }, { "paragraph_id": 17, "text": "Another early systematic attempt to prove the relationship between two languages on the basis of similarity of grammar and lexicon was made by the Hungarian János Sajnovics in 1770, when he attempted to demonstrate the relationship between Sami and Hungarian. That work was later extended to all Finno-Ugric languages in 1799 by his countryman Samuel Gyarmathi. However, the origin of modern historical linguistics is often traced back to Sir William Jones, an English philologist living in India, who in 1786 made his famous observation:", "title": "Origin and development" }, { "paragraph_id": 18, "text": "The Sanscrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists. 
There is a similar reason, though not quite so forcible, for supposing that both the Gothick and the Celtick, though blended with a very different idiom, had the same origin with the Sanscrit; and the old Persian might be added to the same family.", "title": "Origin and development" }, { "paragraph_id": 19, "text": "The comparative method developed out of attempts to reconstruct the proto-language mentioned by Jones, which he did not name but subsequent linguists have labelled Proto-Indo-European (PIE). The first professional comparison between the Indo-European languages that were then known was made by the German linguist Franz Bopp in 1816. He did not attempt a reconstruction but demonstrated that Greek, Latin and Sanskrit shared a common structure and a common lexicon. In 1808, Friedrich Schlegel first stated the importance of using the eldest possible form of a language when trying to prove its relationships; in 1818, Rasmus Christian Rask developed the principle of regular sound-changes to explain his observations of similarities between individual words in the Germanic languages and their cognates in Greek and Latin. Jacob Grimm, better known for his Fairy Tales, used the comparative method in Deutsche Grammatik (published 1819–1837 in four volumes), which attempted to show the development of the Germanic languages from a common origin, which was the first systematic study of diachronic language change.", "title": "Origin and development" }, { "paragraph_id": 20, "text": "Both Rask and Grimm were unable to explain apparent exceptions to the sound laws that they had discovered. Although Hermann Grassmann explained one of the anomalies with the publication of Grassmann's law in 1862, Karl Verner made a methodological breakthrough in 1875, when he identified a pattern now known as Verner's law, the first sound-law based on comparative evidence showing that a phonological change in one phoneme could depend on other factors within the same word (such as neighbouring phonemes and the position of the accent), which are now called conditioning environments.", "title": "Origin and development" }, { "paragraph_id": 21, "text": "Similar discoveries made by the Junggrammatiker (usually translated as \"Neogrammarians\") at the University of Leipzig in the late 19th century led them to conclude that all sound changes were ultimately regular, resulting in the famous statement by Karl Brugmann and Hermann Osthoff in 1878 that \"sound laws have no exceptions\". That idea is fundamental to the modern comparative method since it necessarily assumes regular correspondences between sounds in related languages and thus regular sound changes from the proto-language. The Neogrammarian hypothesis led to the application of the comparative method to reconstruct Proto-Indo-European since Indo-European was then by far the most well-studied language family. Linguists working with other families soon followed suit, and the comparative method quickly became the established method for uncovering linguistic relationships.", "title": "Origin and development" }, { "paragraph_id": 22, "text": "There is no fixed set of steps to be followed in the application of the comparative method, but some steps are suggested by Lyle Campbell and Terry Crowley, who are both authors of introductory texts in historical linguistics. 
This abbreviated summary is based on their concepts of how to proceed.", "title": "Application" }, { "paragraph_id": 23, "text": "This step involves making lists of words that are likely cognates among the languages being compared. If there is a regularly recurring match between the phonetic structure of basic words with similar meanings, a genetic kinship can probably then be established. For example, linguists looking at the Polynesian family might come up with a list similar to the following (their actual list would be much longer):", "title": "Application" }, { "paragraph_id": 24, "text": "Borrowings or false cognates can skew or obscure the correct data. For example, English taboo ([tæbu]) is like the six Polynesian forms because of borrowing from Tongan into English, not because of a genetic similarity. That problem can usually be overcome by using basic vocabulary, such as kinship terms, numbers, body parts and pronouns. Nonetheless, even basic vocabulary can sometimes be borrowed. Finnish, for example, borrowed the word for \"mother\", äiti, from Proto-Germanic *aiþį̄ (compare to Gothic aiþei). English borrowed the pronouns \"they\", \"them\", and \"their(s)\" from Norse. Thai and various other East Asian languages borrowed their numbers from Chinese. An extreme case is represented by Pirahã, a Muran language of South America, which has been controversially claimed to have borrowed all of its pronouns from Nheengatu.", "title": "Application" }, { "paragraph_id": 25, "text": "The next step involves determining the regular sound-correspondences exhibited by the lists of potential cognates. For example, in the Polynesian data above, it is apparent that words that contain t in most of the languages listed have cognates in Hawaiian with k in the same position. That is visible in multiple cognate sets: the words glossed as 'one', 'three', 'man' and 'taboo' all show the relationship. The situation is called a \"regular correspondence\" between k in Hawaiian and t in the other Polynesian languages. Similarly, a regular correspondence can be seen between Hawaiian and Rapanui h, Tongan and Samoan f, Maori ɸ, and Rarotongan ʔ.", "title": "Application" }, { "paragraph_id": 26, "text": "Mere phonetic similarity, as between English day and Latin dies (both with the same meaning), has no probative value. English initial d- does not regularly match Latin d- since a large set of English and Latin non-borrowed cognates cannot be assembled such that English d repeatedly and consistently corresponds to Latin d at the beginning of a word, and whatever sporadic matches can be observed are due either to chance (as in the above example) or to borrowing (for example, Latin diabolus and English devil, both ultimately of Greek origin). However, English and Latin exhibit a regular correspondence of t- : d- (in which \"A : B\" means \"A corresponds to B\"), as in the following examples:", "title": "Application" }, { "paragraph_id": 27, "text": "If there are many regular correspondence sets of this kind (the more, the better), a common origin becomes a virtual certainty, particularly if some of the correspondences are non-trivial or unusual.", "title": "Application" }, { "paragraph_id": 28, "text": "During the late 18th to late 19th century, two major developments improved the method's effectiveness.", "title": "Application" }, { "paragraph_id": 29, "text": "First, it was found that many sound changes are conditioned by a specific context. 
For example, in both Greek and Sanskrit, an aspirated stop evolved into an unaspirated one, but only if a second aspirate occurred later in the same word; this is Grassmann's law, first described for Sanskrit by Sanskrit grammarian Pāṇini and promulgated by Hermann Grassmann in 1863.", "title": "Application" }, { "paragraph_id": 30, "text": "Second, it was found that sometimes sound changes occurred in contexts that were later lost. For instance, in Sanskrit velars (k-like sounds) were replaced by palatals (ch-like sounds) whenever the following vowel was *i or *e. Subsequent to this change, all instances of *e were replaced by a. The situation could be reconstructed only because the original distribution of e and a could be recovered from the evidence of other Indo-European languages. For instance, the Latin suffix que, \"and\", preserves the original *e vowel that caused the consonant shift in Sanskrit:", "title": "Application" }, { "paragraph_id": 31, "text": "Verner's Law, discovered by Karl Verner c. 1875, provides a similar case: the voicing of consonants in Germanic languages underwent a change that was determined by the position of the old Indo-European accent. Following the change, the accent shifted to initial position. Verner solved the puzzle by comparing the Germanic voicing pattern with Greek and Sanskrit accent patterns.", "title": "Application" }, { "paragraph_id": 32, "text": "This stage of the comparative method, therefore, involves examining the correspondence sets discovered in step 2 and seeing which of them apply only in certain contexts. If two (or more) sets apply in complementary distribution, they can be assumed to reflect a single original phoneme: \"some sound changes, particularly conditioned sound changes, can result in a proto-sound being associated with more than one correspondence set\".", "title": "Application" }, { "paragraph_id": 33, "text": "For example, the following potential cognate list can be established for Romance languages, which descend from Latin:", "title": "Application" }, { "paragraph_id": 34, "text": "They evidence two correspondence sets, k : k and k : ʃ:", "title": "Application" }, { "paragraph_id": 35, "text": "Since French ʃ occurs only before a where the other languages also have a, and French k occurs elsewhere, the difference is caused by different environments (being before a conditions the change), and the sets are complementary. They can, therefore, be assumed to reflect a single proto-phoneme (in this case *k, spelled ⟨c⟩ in Latin). The original Latin words are corpus, crudus, catena and captiare, all with an initial k. If more evidence along those lines were given, one might conclude that an alteration of the original k took place because of a different environment.", "title": "Application" }, { "paragraph_id": 36, "text": "A more complex case involves consonant clusters in Proto-Algonquian. The Algonquianist Leonard Bloomfield used the reflexes of the clusters in four of the daughter languages to reconstruct the following correspondence sets:", "title": "Application" }, { "paragraph_id": 37, "text": "Although all five correspondence sets overlap with one another in various places, they are not in complementary distribution and so Bloomfield recognised that a different cluster must be reconstructed for each set. 
His reconstructions were, respectively, *hk, *xk, *čk (=[t͡ʃk]), *šk (=[ʃk]), and *çk (in which 'x' and 'ç' are arbitrary symbols, rather than attempts to guess the phonetic value of the proto-phonemes).", "title": "Application" }, { "paragraph_id": 38, "text": "Typology assists in deciding what reconstruction best fits the data. For example, the voicing of voiceless stops between vowels is common, but the devoicing of voiced stops in that environment is rare. If a correspondence -t- : -d- between vowels is found in two languages, the proto-phoneme is more likely to be *-t-, with a development to the voiced form in the second language. The opposite reconstruction would represent a rare type.", "title": "Application" }, { "paragraph_id": 39, "text": "However, unusual sound changes occur. The Proto-Indo-European word for two, for example, is reconstructed as *dwō, which is reflected in Classical Armenian as erku. Several other cognates demonstrate a regular change *dw- → erk- in Armenian. Similarly, in Bearlake, a dialect of the Athabaskan language Slavey, there has been a sound change of Proto-Athabaskan *ts → Bearlake kʷ. It is very unlikely that *dw- changed directly into erk- and *ts into kʷ, but they probably instead went through several intermediate steps before they arrived at the later forms. It is not phonetic similarity that matters for the comparative method but rather regular sound correspondences.", "title": "Application" }, { "paragraph_id": 40, "text": "By the principle of economy, the reconstruction of a proto-phoneme should require as few sound changes as possible to arrive at the modern reflexes in the daughter languages. For example, Algonquian languages exhibit the following correspondence set:", "title": "Application" }, { "paragraph_id": 41, "text": "The simplest reconstruction for this set would be either *m or *b. Both *m → b and *b → m are likely. Because m occurs in five of the languages and b in only one of them, if *b is reconstructed, it is necessary to assume five separate changes of *b → m, but if *m is reconstructed, it is necessary to assume only one change of *m → b and so *m would be most economical.", "title": "Application" }, { "paragraph_id": 42, "text": "That argument assumes the languages other than Arapaho to be at least partly independent of one another. If they all formed a common subgroup, the development *b → m would have to be assumed to have occurred only once.", "title": "Application" }, { "paragraph_id": 43, "text": "In the final step, the linguist checks to see how the proto-phonemes fit the known typological constraints. For example, a hypothetical system,", "title": "Application" }, { "paragraph_id": 44, "text": "has only one voiced stop, *b, and although it has an alveolar and a velar nasal, *n and *ŋ, there is no corresponding labial nasal. However, languages generally maintain symmetry in their phonemic inventories. In this case, a linguist might attempt to investigate the possibilities that either what was earlier reconstructed as *b is in fact *m or that the *n and *ŋ are in fact *d and *g.", "title": "Application" }, { "paragraph_id": 45, "text": "Even a symmetrical system can be typologically suspicious. For example, here is the traditional Proto-Indo-European stop inventory:", "title": "Application" }, { "paragraph_id": 46, "text": "An earlier voiceless aspirated row was removed on grounds of insufficient evidence. 
Since the mid-20th century, a number of linguists have argued that this phonology is implausible and that it is extremely unlikely for a language to have a voiced aspirated (breathy voice) series without a corresponding voiceless aspirated series.", "title": "Application" }, { "paragraph_id": 47, "text": "Thomas Gamkrelidze and Vyacheslav Ivanov provided a potential solution and argued that the series that are traditionally reconstructed as plain voiced should be reconstructed as glottalized: either implosive (ɓ, ɗ, ɠ) or ejective (pʼ, tʼ, kʼ). The plain voiceless and voiced aspirated series would thus be replaced by just voiceless and voiced, with aspiration being a non-distinctive quality of both. That example of the application of linguistic typology to linguistic reconstruction has become known as the glottalic theory. It has a large number of proponents but is not generally accepted.", "title": "Application" }, { "paragraph_id": 48, "text": "The reconstruction of proto-sounds logically precedes the reconstruction of grammatical morphemes (word-forming affixes and inflectional endings), patterns of declension and conjugation and so on. The full reconstruction of an unrecorded proto-language is an open-ended task.", "title": "Application" }, { "paragraph_id": 49, "text": "The limitations of the comparative method were recognized by the very linguists who developed it, but it is still seen as a valuable tool. In the case of Indo-European, the method seemed at least a partial validation of the centuries-old search for an Ursprache, the original language. The other languages were presumed to be ordered in a family tree, which became the tree model of the Neogrammarians.", "title": "Complications" }, { "paragraph_id": 50, "text": "Archaeologists followed suit and attempted to find archaeological evidence of a culture or cultures that could be presumed to have spoken a proto-language, such as Vere Gordon Childe's The Aryans: a study of Indo-European origins, 1926. Childe was a philologist turned archaeologist. Those views culminated in the Siedlungsarchäologie, or \"settlement-archaeology\", of Gustaf Kossinna, which became known as \"Kossinna's Law\". Kossinna asserted that cultures represent ethnic groups, including their languages, but his law was rejected after World War II. The fall of Kossinna's Law removed the temporal and spatial framework previously applied to many proto-languages. Fox concludes:", "title": "Complications" }, { "paragraph_id": 51, "text": "The Comparative Method as such is not, in fact, historical; it provides evidence of linguistic relationships to which we may give a historical interpretation.... [Our increased knowledge about the historical processes involved] has probably made historical linguists less prone to equate the idealizations required by the method with historical reality.... Provided we keep [the interpretation of the results and the method itself] apart, the Comparative Method can continue to be used in the reconstruction of earlier stages of languages.", "title": "Complications" }, { "paragraph_id": 52, "text": "Proto-languages can be verified in many historical instances, such as Latin. Although no longer a law, settlement-archaeology is known to be essentially valid for some cultures that straddle history and prehistory, such as the Iron Age (mainly Celtic) and Mycenaean civilization (mainly Greek).
None of those models can be or have been completely rejected, but none is sufficient alone.", "title": "Complications" }, { "paragraph_id": 53, "text": "The foundation of the comparative method, and of comparative linguistics in general, is the Neogrammarians' fundamental assumption that \"sound laws have no exceptions\". When it was initially proposed, critics of the Neogrammarians advanced an alternative position, summarised by the maxim \"each word has its own history\". Several types of change actually alter words in irregular ways. Unless identified, they may hide or distort laws and cause false perceptions of relationship.", "title": "Complications" }, { "paragraph_id": 54, "text": "All languages borrow words from other languages in various contexts. Loanwords imitate the form of the donor language, as in Finnic kuningas, from Proto-Germanic *kuningaz ('king'), with possible adaptations to the local phonology, as in Japanese sakkā, from English soccer. At first sight, borrowed words may mislead the investigator into seeing a genetic relationship, although they can more easily be identified with information on the historical stages of both the donor and receiver languages. Inherently, words that were borrowed from a common source (such as English coffee and Basque kafe, ultimately from Arabic qahwah) do share a genetic relationship, although it is limited to the history of the word itself.", "title": "Complications" }, { "paragraph_id": 55, "text": "Borrowing on a larger scale occurs in areal diffusion, when features are adopted by contiguous languages over a geographical area. The borrowing may be phonological, morphological or lexical. A false proto-language may be reconstructed for the area, or the diffused features may be mistakenly attributed to a third language serving as their source.", "title": "Complications" }, { "paragraph_id": 56, "text": "Several areal features and other influences may converge to form a Sprachbund, a wider region sharing features that appear to be related but are diffusional. For instance, the Mainland Southeast Asia linguistic area, before it was recognised, suggested several false classifications of such languages as Chinese, Thai and Vietnamese.", "title": "Complications" }, { "paragraph_id": 57, "text": "Sporadic changes, such as irregular inflections, compounding and abbreviation, do not follow any laws. For example, the Spanish words palabra ('word'), peligro ('danger') and milagro ('miracle') would have been parabla, periglo, miraglo by regular sound changes from the Latin parabŏla, perīcŭlum and mīrācŭlum, but the r and l changed places by sporadic metathesis.", "title": "Complications" }, { "paragraph_id": 58, "text": "Analogy is the sporadic change of a feature to be like another feature in the same or a different language. It may affect a single word or be generalized to an entire class of features, such as a verb paradigm. An example is the Russian word for nine. The word, by regular sound changes from Proto-Slavic, should have been /nʲevʲatʲ/, but it is in fact /dʲevʲatʲ/. It is believed that the initial nʲ- changed to dʲ- under the influence of the word for \"ten\" in Russian, /dʲesʲatʲ/.", "title": "Complications" }, { "paragraph_id": 59, "text": "Those who study contemporary language changes, such as William Labov, acknowledge that even a systematic sound change is applied at first inconsistently, with the percentage of its occurrence in a person's speech dependent on various social factors.
The sound change seems to gradually spread in a process known as lexical diffusion. While it does not invalidate the Neogrammarians' axiom that \"sound laws have no exceptions\", the gradual application of those same sound laws shows that they do not always apply to all lexical items at the same time. Hock notes, \"While it probably is true that in the long run every word has its own history, it is not justified to conclude, as some linguists have, that therefore the Neogrammarian position on the nature of linguistic change is falsified\".", "title": "Complications" }, { "paragraph_id": 60, "text": "The comparative method cannot recover aspects of a language that were not inherited in its daughter idioms. For instance, the Latin declension pattern was lost in the Romance languages, making it impossible to fully reconstruct such a feature through systematic comparison.", "title": "Complications" }, { "paragraph_id": 61, "text": "The comparative method is used to construct a tree model (German Stammbaum) of language evolution, in which daughter languages are seen as branching from the proto-language, gradually growing more distant from it through accumulated phonological, morpho-syntactic, and lexical changes.", "title": "Complications" }, { "paragraph_id": 62, "text": "The tree model features nodes that are presumed to be distinct proto-languages existing independently in distinct regions during distinct historical times. The reconstruction of unattested proto-languages lends itself to that illusion since they cannot be verified, and the linguist is free to select whatever definite times and places seem best. Right from the outset of Indo-European studies, however, Thomas Young said:", "title": "Complications" }, { "paragraph_id": 63, "text": "It is not, however, very easy to say what the definition should be that should constitute a separate language, but it seems most natural to call those languages distinct, of which the one cannot be understood by common persons in the habit of speaking the other.... Still, however, it may remain doubtfull whether the Danes and the Swedes could not, in general, understand each other tolerably well... nor is it possible to say if the twenty ways of pronouncing the sounds, belonging to the Chinese characters, ought or ought not to be considered as so many languages or dialects.... But,... the languages so nearly allied must stand next to each other in a systematic order…", "title": "Complications" }, { "paragraph_id": 64, "text": "The assumption of uniformity in a proto-language, implicit in the comparative method, is problematic. Even small language communities always have differences in dialect, whether they are based on area, gender, class or other factors. The Pirahã language of Brazil is spoken by only several hundred people but has at least two different dialects, one spoken by men and one by women. Campbell points out:", "title": "Complications" }, { "paragraph_id": 65, "text": "It is not so much that the comparative method 'assumes' no variation; rather, it is just that there is nothing built into the comparative method which would allow it to address variation directly....
This assumption of uniformity is a reasonable idealization; it does no more damage to the understanding of the language than, say, modern reference grammars do which concentrate on a language's general structure, typically leaving out consideration of regional or social variation.", "title": "Complications" }, { "paragraph_id": 66, "text": "Different dialects, as they evolve into separate languages, remain in contact with and influence one another. Even after they are considered distinct, languages near one another continue to influence one another and often share grammatical, phonological, and lexical innovations. A change in one language of a family may spread to neighboring languages, and multiple changes are communicated like waves across language and dialect boundaries, each with its own randomly delimited range. If a language is divided into an inventory of features, each with its own time and range (isoglosses), they do not all coincide. History and prehistory may not offer a time and place for a distinct coincidence, as may be the case for Proto-Italic, for which the proto-language is only a concept. However, Hock observes:", "title": "Complications" }, { "paragraph_id": 67, "text": "The discovery in the late nineteenth century that isoglosses can cut across well-established linguistic boundaries at first created considerable attention and controversy. And it became fashionable to oppose a wave theory to a tree theory.... Today, however, it is quite evident that the phenomena referred to by these two terms are complementary aspects of linguistic change....", "title": "Complications" }, { "paragraph_id": 68, "text": "The reconstruction of unknown proto-languages is inherently subjective. In the Proto-Algonquian example above, the choice of *m as the parent phoneme is only likely, not certain. It is conceivable that a Proto-Algonquian language with *b in those positions split into two branches, one that preserved *b and one that changed it to *m instead, and while the first branch developed only into Arapaho, the second spread out more widely and developed into all the other Algonquian languages. It is also possible that the nearest common ancestor of the Algonquian languages used some other sound instead, such as *p, which eventually mutated to *b in one branch and to *m in the other.", "title": "Complications" }, { "paragraph_id": 69, "text": "Examples of strikingly complicated and even circular developments are indeed known to have occurred (such as Proto-Indo-European *t > Pre-Proto-Germanic *þ > Proto-Germanic *ð > Proto-West-Germanic *d > Old High German t in fater > Modern German Vater), but in the absence of any evidence or other reason to postulate a more complicated development, the preference for a simpler explanation is justified by the principle of parsimony, also known as Occam's razor. Since reconstruction involves many such choices, some linguists prefer to view the reconstructed features as abstract representations of sound correspondences, rather than as objects with a historical time and place.", "title": "Complications" }, { "paragraph_id": 70, "text": "The existence of proto-languages and the validity of the comparative method are verifiable if the reconstruction can be matched to a known language, which may be known only as a shadow in the loanwords of another language.
For example, Finnic languages such as Finnish have borrowed many words from an early stage of Germanic, and the shape of the loans matches the forms that have been reconstructed for Proto-Germanic. Finnish kuningas 'king' and kaunis 'beautiful' match the Germanic reconstructions *kuningaz and *skauniz (> German König 'king', schön 'beautiful').", "title": "Complications" }, { "paragraph_id": 71, "text": "The wave model was developed in the 1870s as an alternative to the tree model to represent the historical patterns of language diversification. Both the tree-based and the wave-based representations are compatible with the comparative method.", "title": "Complications" }, { "paragraph_id": 72, "text": "By contrast, some approaches are incompatible with the comparative method, including the contentious technique of glottochronology and the even more controversial method of mass lexical comparison, which most historical linguists consider flawed and unreliable.", "title": "Complications" } ]
In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor and then extrapolating backwards to infer the properties of that ancestor. The comparative method may be contrasted with the method of internal reconstruction in which the internal development of a single language is inferred by the analysis of features within that language. Ordinarily, both methods are used together to reconstruct prehistoric phases of languages; to fill in gaps in the historical record of a language; to discover the development of phonological, morphological and other linguistic systems and to confirm or to refute hypothesised relationships between languages. The comparative method emerged in the early 19th century with the birth of Indo-European studies, then took a definite scientific approach with the works of the Neogrammarians in the late 19th–early 20th century. Key contributions were made by the Danish scholars Rasmus Rask (1787–1832) and Karl Verner (1846–1896), and the German scholar Jacob Grimm (1785–1863). The first linguist to offer reconstructed forms from a proto-language was August Schleicher (1821–1868) in his Compendium der vergleichenden Grammatik der indogermanischen Sprachen, originally published in 1861. Here is Schleicher's explanation of why he offered reconstructed forms:
2002-01-07T00:44:36Z
2023-12-08T21:26:15Z
[ "Template:Blockquote", "Template:Quote", "Template:Cite book", "Template:Refbegin", "Template:Use dmy dates", "Template:Reflist", "Template:Harvnb", "Template:Cite journal", "Template:IPA notice", "Template:Circa", "Template:IPA link", "Template:Citation needed", "Template:Refend", "Template:Sfn", "Template:By whom", "Template:Who", "Template:Angle bracket", "Template:Other uses", "Template:Lang", "Template:Citation", "Template:Nowrap", "Template:IPA", "Template:Webarchive", "Template:'", "Template:Short description", "Template:Cite encyclopedia", "Template:Cite web", "Template:Long-range comparative linguistics" ]
https://en.wikipedia.org/wiki/Comparative_method
7,661
Council of Constance
The Council of Constance (Latin: Concilium Constantiense; German: Konzil von Konstanz) was an ecumenical council of the Catholic Church that was held from 1414 to 1418 in the Bishopric of Constance (Konstanz) in present-day Germany. The council ended the Western Schism by deposing or accepting the resignation of the remaining papal claimants and by electing Pope Martin V. It was the last papal election to take place outside of Italy. The council also condemned Jan Hus as a heretic and facilitated his execution by the civil authority, and ruled on issues of national sovereignty, the rights of pagans and just war, in response to a conflict between the Grand Duchy of Lithuania, the Kingdom of Poland and the Order of the Teutonic Knights. The council is also important for its role in the debates over ecclesial conciliarism and papal supremacy. Constance issued two particularly significant decrees regarding the constitution of the Catholic Church: Haec sancta (1415), which asserted the superiority of ecumenical councils over popes in at least certain situations, and Frequens (1417), which provided for councils to be held automatically every ten years. The status of these decrees proved controversial in the centuries after the council, and Frequens was never put into practice. Though Haec sancta, at least, continued to be accepted as binding by much of the church up to the 19th century, present-day Catholic theologians generally regard these decrees as either invalid or as practical responses to a particular situation without wider implications. The council's main purpose was to end the Papal schism which had resulted from the confusion following the Avignon Papacy. Pope Gregory XI's return to Rome in 1377, followed by his death (in 1378) and the controversial election of his successor, Pope Urban VI, resulted in the defection of a number of cardinals and the election of a rival pope based at Avignon in 1378. After thirty years of schism, the rival courts convened the Council of Pisa seeking to resolve the situation by deposing the two claimant popes and electing a new one. The council claimed that in such a situation, a council of bishops had greater authority than just one bishop, even if he were the bishop of Rome. Though the elected Antipope Alexander V and his successor, Antipope John XXIII (not to be confused with the 20th-century Pope John XXIII), gained widespread support, especially at the cost of the Avignon antipope, the schism remained, now involving not two but three claimants: Gregory XII at Rome, Benedict XIII at Avignon, and John XXIII. Therefore, many voices, including Sigismund, King of the Romans and of Hungary (and later Holy Roman Emperor), pressed for another council to resolve the issue. That council was called by John XXIII and was held from 16 November 1414 to 22 April 1418 in Constance, Germany. The council was attended by roughly 29 cardinals, 100 "learned doctors of law and divinity", 134 abbots, and 183 bishops and archbishops. Sigismund arrived on Christmas Eve 1414 and exercised a profound and continuous influence on the course of the council in his capacity as imperial protector of the church. An innovation at the council was that instead of voting as individuals, the bishops voted in national blocs. The vote by nations was in great measure the initiative of the English, German, and French members.
The legality of this measure, in imitation of the "nations" of the universities, was more than questionable, but during February 1415 it carried and thenceforth was accepted in practice, though never authorized by any formal decree of the council. The four "nations" consisted of England, France, Italy, and Germany, with Poles, Hungarians, Danes, and Scandinavians counted with the Germans. While the Italian representatives made up half of those in attendance, they were equal in influence to the English, who sent twenty deputies and three bishops. The Spanish deputies (from Portugal, Castile, Navarre and Aragon), initially absent, joined the council at the twenty-first session, constituting upon arrival the fifth nation. Many members of the new assembly (comparatively few bishops but many doctors of theology and of canon and civil law, as well as procurators of bishops, deputies of universities, cathedral chapters, provosts, and agents and representatives of princes) strongly favored the voluntary abdication of all three popes, as did King Sigismund. Although the Italian bishops who had accompanied John XXIII in large numbers supported his legitimacy, he grew increasingly suspicious of the council. Partly in response to a fierce anonymous attack on his character from an Italian source, on 2 March 1415 he promised to resign. However, on 20 March he secretly fled the city and took refuge at Schaffhausen in the territory of his friend Frederick, Duke of Austria-Tyrol. The famous decree Haec sancta synodus, which gave primacy to the authority of the council and thus became a source for ecclesial conciliarism, was promulgated in the fifth session, 6 April 1415: Legitimately assembled in the holy Spirit, constituting a general council and representing the Catholic church militant, it has power immediately from Christ; and everyone of whatever state or dignity, even papal, is bound to obey it in those matters which pertain to the faith, the eradication of the said schism, and the general reform of the said church of God in head and members. Haec sancta synodus marks the high-water mark of the Conciliar movement of reform. The acts of the council were not made public until 1442, at the behest of the Council of Basel; they were printed in 1500. The council ordered the creation of a book on how to die, which was written in 1415 under the title Ars moriendi. Haec sancta is today generally considered invalid by the Catholic Church, on the basis that Gregory XII was the legitimate pope at the time and the decree was passed by the council in a session before his confirmation. On this reading, the first sessions of the Council of Constance represented an invalid and illicit assembly of bishops, gathered under the authority of an antipope. This historiography is of much later provenance than the council itself, however: the Pisan line represented by John XXIII had been considered legitimate not just by most of the Latin church at the time of the council, but also subsequently by Pope Martin V, who referred to John as "our predecessor" in contrast to the other two claimants, who were merely "popes so-called in their obediences". The specific argument distinguishing two parts in the council was seemingly first made by the 17th-century Sorbonne theologian André Duval, and remained a fringe view for some time before its vindication within the Catholic Church under the influence of 19th-century ultramontanism.
With the support of King Sigismund, enthroned before the high altar of the cathedral of Constance, the Council of Constance recommended that all three papal claimants abdicate, and that another be chosen. In part because of the constant presence of the King, other rulers demanded that they have a say in who would be pope. Gregory XII then sent representatives to Constance, whom he granted full powers to summon, open, and preside over an Ecumenical Council; he also empowered them to present his resignation of the papacy. This would pave the way for the end of the Western Schism. The legates were received by King Sigismund and by the assembled Bishops, and the King yielded the presidency of the proceedings to the papal legates, Cardinal Giovanni Dominici of Ragusa and Prince Carlo Malatesta. On 4 July 1415 the Bull of Gregory XII which appointed Dominici and Malatesta as his proxies at the council was formally read before the assembled Bishops. The cardinal then read a decree of Gregory XII which convoked the council and authorized its succeeding acts. Thereupon, the Bishops voted to accept the summons. Prince Malatesta immediately informed the council that he was empowered by a commission from Pope Gregory XII to resign the Papal Throne on the Pontiff's behalf. He asked the council whether they would prefer to receive the abdication at that point or at a later date. The Bishops voted to receive the Papal abdication immediately. Thereupon the commission by Gregory XII authorizing his proxy to resign the Papacy on his behalf was read and Malatesta, acting in the name of Gregory XII, pronounced the resignation of the papacy by Gregory XII and handed a written copy of the resignation to the assembly. Former Pope Gregory XII was then created titular Cardinal Bishop of Porto and Santa Rufina by the council, with rank immediately below the Pope (which made him the highest-ranking person in the church, since, due to his abdication, the See of Peter in Rome was vacant). Gregory XII's cardinals were accepted as true cardinals by the council, but the members of the council delayed electing a new pope for fear that a new pope would restrict further discussion of pressing issues in the church. By the time the anti-popes were all deposed and the new Pope, Martin V, was elected, two years had passed since Gregory XII's abdication, and Gregory was already dead. The council took great care to protect the legitimacy of the succession, ratifying all his acts, before a new pontiff was chosen. The new pope, Martin V, elected in November 1417, soon asserted the absolute authority of the papal office. A second goal of the council was to continue the reforms begun at the Council of Pisa (1409). The reforms were largely directed against John Wycliffe, mentioned in the opening session and condemned in the eighth on 4 May 1415, and Jan Hus, along with their followers. Hus, summoned to Constance under a letter of safe conduct, was found guilty of heresy by the council and turned over to the secular court. "This holy synod of Constance, seeing that God's church has nothing more that it can do, relinquishes Jan Hus to the judgment of the secular authority and decrees that he is to be relinquished to the secular court." (Council of Constance Session 15 – 6 July 1415). The secular court sentenced him to be burned to death at the stake.
Jerome of Prague, a supporter of Hus, came to Constance to offer assistance but was similarly arrested, judged, found guilty of heresy and turned over to the same secular court, with the same outcome as Hus. Poggio Bracciolini attended the council and related the unfairness of the process against Jerome. Paweł Włodkowic and the other Polish representatives to the Council of Constance publicly defended Hus. In 1411, the First Peace of Thorn ended the Polish–Lithuanian–Teutonic War, in which the Teutonic Knights fought the Kingdom of Poland and Grand Duchy of Lithuania. However, the peace was not stable and further conflicts arose regarding demarcation of the Samogitian borders. The tensions erupted into the brief Hunger War in summer 1414. It was agreed that the disputes would be mediated by the Council of Constance. The Polish-Lithuanian position was defended by Paulus Vladimiri, rector of the Jagiellonian University, who challenged the legality of the Teutonic crusade against Lithuania. He argued that a forced conversion was incompatible with free will, which was an essential component of a genuine conversion. Therefore, the Knights could only wage a defensive war if pagans violated the natural rights of Christians. Vladimiri further stipulated that infidels had rights which had to be respected, and neither the Pope nor the Holy Roman Emperor had the authority to violate them. The Lithuanians also brought a group of Samogitian representatives to testify to atrocities committed by the Knights. The Dominican theologian John of Falkenberg proved to be the fiercest opponent of the Poles. In his Liber de doctrina, Falkenberg argued that the Emperor has the right to slay even peaceful infidels simply because they are pagans. ... The Poles deserve death for defending infidels, and should be exterminated even more than the infidels; they should be deprived of their sovereignty and reduced to slavery. In Satira, he attacked Polish-Lithuanian King Jogaila, calling him a "mad dog" unworthy to be king. Falkenberg was condemned and imprisoned for such libel. Other opponents included the Grand Master's proctor Peter Wormditt, Dominic of San Gimignano, John Urbach, Ardecino de Porta of Novara, and Andrew Escobar, Bishop of Ciudad Rodrigo. They argued that the Knights were perfectly justified in their crusade as it was a sacred duty of Christians to spread the true faith. Cardinal Pierre d'Ailly published an independent opinion that attempted to strike some balance between the Polish and Teutonic positions. The council established the Diocese of Samogitia, with its seat in Medininkai, subordinated it to the Lithuanian dioceses, and appointed Matthias of Trakai as its first bishop. Pope Martin V appointed the Lithuanians Jogaila and Vytautas, who were respectively King of Poland and Grand Duke of Lithuania, as vicars general in Pskov and Veliky Novgorod in recognition of their Catholicism. After another round of futile negotiations, the Gollub War broke out in 1422. It ended with the Treaty of Melno. Polish-Lithuanian-Teutonic wars continued for another hundred years. Although Pope Martin V did not directly challenge the decrees of the council, his successor Eugenius IV repudiated an attempt by a faction at the Council of Basel to declare the provisions of Haec sancta and Frequens a matter of faith. His 1439 bull on the matter, Moyses vir Dei, was underwritten by the Council of Florence.
In convening the Fifth Lateran Council (1512–17), Pope Julius II further pronounced that Frequens had lost its force; Lateran V is sometimes seen as having itself abrogated Haec sancta, though the reading is controversial. Either way, while Rome itself came to reject the provisions made by the council, significant parts of the Church, notably in France, continued to uphold the validity of its decisions long after the event: Haec sancta was reaffirmed in the Gallican Articles of 1682, and even during the First Vatican Council of 1869–70 the French-American bishop of St. Augustine, Florida, Augustin Vérot, attempted to read Haec sancta into the record of deliberations. Despite the apparently definitive rejection of conciliarism at the First Vatican Council, the debate over the status of Constance was renewed in the 20th century. In the 1960s, in the context of the Second Vatican Council, the reformist Catholic theologian Hans Küng and the historian Paul de Vooght argued in defense of the dogmatic character of Haec sancta, suggesting that its terms could be reconciled with the definition of papal supremacy at Vatican I. Küng's argument received support from prelates such as Cardinal Franz König. Other Catholic historians adopted different views: Hubert Jedin considered Haec sancta to be an emergency measure with no binding validity beyond its immediate context, while Joseph Gill rejected the validity of the session that passed the decree altogether. The debate over Haec sancta subsided in the 1970s, however, without resolution. 47°39′48″N 9°10′37″E
[ { "paragraph_id": 0, "text": "The Council of Constance (Latin: Concilium Constantiense; German: Konzil von Konstanz) was an ecumenical council of the Catholic Church that was held from 1414 to 1418 in the Bishopric of Constance (Konstanz) in present-day Germany. The council ended the Western Schism by deposing or accepting the resignation of the remaining papal claimants and by electing Pope Martin V. It was the last papal election to take place outside of Italy.", "title": "" }, { "paragraph_id": 1, "text": "The council also condemned Jan Hus as a heretic and facilitated his execution by the civil authority, and ruled on issues of national sovereignty, the rights of pagans and just war, in response to a conflict between the Grand Duchy of Lithuania, the Kingdom of Poland and the Order of the Teutonic Knights.", "title": "" }, { "paragraph_id": 2, "text": "The council is also important for its role in the debates over ecclesial conciliarism and papal supremacy. Constance issued two particularly significant decrees regarding the constitution of the Catholic Church: Haec sancta (1415), which asserted the superiority of ecumenical councils over popes in at least certain situations, and Frequens (1417), which provided for councils to be held automatically every ten years. The status of these decrees proved controversial in the centuries after the council, and Frequens was never put into practice. Though Haec sancta, at least, continued to be accepted as binding by much of the church up to the 19th century, present-day Catholic theologians generally regard these decrees as either invalid or as practical responses to a particular situation without wider implications.", "title": "" }, { "paragraph_id": 3, "text": "The council's main purpose was to end the Papal schism which had resulted from the confusion following the Avignon Papacy. Pope Gregory XI's return to Rome in 1377, followed by his death (in 1378) and the controversial election of his successor, Pope Urban VI, resulted in the defection of a number of cardinals and the election of a rival pope based at Avignon in 1378. After thirty years of schism, the rival courts convened the Council of Pisa seeking to resolve the situation by deposing the two claimant popes and electing a new one. The council claimed that in such a situation, a council of bishops had greater authority than just one bishop, even if he were the bishop of Rome. Though the elected Antipope Alexander V and his successor, Antipope John XXIII (not to be confused with the 20th-century Pope John XXIII), gained widespread support, especially at the cost of the Avignon antipope, the schism remained, now involving not two but three claimants: Gregory XII at Rome, Benedict XIII at Avignon, and John XXIII.", "title": "Origin and background" }, { "paragraph_id": 4, "text": "Therefore, many voices, including Sigismund, King of the Romans and of Hungary (and later Holy Roman Emperor), pressed for another council to resolve the issue. That council was called by John XXIII and was held from 16 November 1414 to 22 April 1418 in Constance, Germany. The council was attended by roughly 29 cardinals, 100 \"learned doctors of law and divinity\", 134 abbots, and 183 bishops and archbishops.", "title": "Origin and background" }, { "paragraph_id": 5, "text": "Sigismund arrived on Christmas Eve 1414 and exercised a profound and continuous influence on the course of the council in his capacity of imperial protector of the church. 
An innovation at the council was that instead of voting as individuals, the bishops voted in national blocs. The vote by nations was in great measure the initiative of the English, German, and French members. The legality of this measure, in imitation of the \"nations\" of the universities, was more than questionable, but during February 1415 it carried and thenceforth was accepted in practice, though never authorized by any formal decree of the council. The four \"nations\" consisted of England, France, Italy, and Germany, with Poles, Hungarians, Danes, and Scandinavians counted with the Germans. While the Italian representatives made up half of those in attendance, they were equal in influence to the English, who sent twenty deputies and three bishops. The Spanish deputies (from Portugal, Castile, Navarre and Aragon), initially absent, joined the council at the twenty-first session, constituting upon arrival the fifth nation.", "title": "Participants" }, { "paragraph_id": 6, "text": "Many members of the new assembly (comparatively few bishops but many doctors of theology and of canon and civil law, as well as procurators of bishops, deputies of universities, cathedral chapters, provosts, and agents and representatives of princes) strongly favored the voluntary abdication of all three popes, as did King Sigismund.", "title": "Decrees and doctrinal status" }, { "paragraph_id": 7, "text": "Although the Italian bishops who had accompanied John XXIII in large numbers supported his legitimacy, he grew increasingly suspicious of the council. Partly in response to a fierce anonymous attack on his character from an Italian source, on 2 March 1415 he promised to resign. However, on 20 March he secretly fled the city and took refuge at Schaffhausen in the territory of his friend Frederick, Duke of Austria-Tyrol.", "title": "Decrees and doctrinal status" }, { "paragraph_id": 8, "text": "The famous decree Haec sancta synodus, which gave primacy to the authority of the council and thus became a source for ecclesial conciliarism, was promulgated in the fifth session, 6 April 1415:", "title": "Decrees and doctrinal status" }, { "paragraph_id": 9, "text": "Legitimately assembled in the holy Spirit, constituting a general council and representing the Catholic church militant, it has power immediately from Christ; and everyone of whatever state or dignity, even papal, is bound to obey it in those matters which pertain to the faith, the eradication of the said schism, and the general reform of the said church of God in head and members.", "title": "Decrees and doctrinal status" }, { "paragraph_id": 10, "text": "Haec sancta synodus marks the high-water mark of the Conciliar movement of reform.", "title": "Decrees and doctrinal status" }, { "paragraph_id": 11, "text": "The acts of the council were not made public until 1442, at the behest of the Council of Basel; they were printed in 1500. The council ordered the creation of a book on how to die, which was written in 1415 under the title Ars moriendi.", "title": "Decrees and doctrinal status" }, { "paragraph_id": 12, "text": "Haec sancta is today generally considered invalid by the Catholic Church, on the basis that Gregory XII was the legitimate pope at the time and the decree was passed by the council in a session before his confirmation. On this reading, the first sessions of the Council of Constance represented an invalid and illicit assembly of bishops, gathered under the authority of an antipope.
This historiography is of much later provenance than the council itself, however: the Pisan line represented by John XXIII had been considered legitimate not just by most of the Latin church at the time of the council, but also subsequently by Pope Martin V, who referred to John as \"our predecessor\" in contrast to the other two claimants, who were merely \"popes so-called in their obediences\". The specific argument distinguishing two parts in the council was seemingly first made by the 17th-century Sorbonne theologian André Duval, and remained a fringe view for some time before its vindication within the Catholic Church under the influence of 19th-century ultramontanism.", "title": "Decrees and doctrinal status" }, { "paragraph_id": 13, "text": "With the support of King Sigismund, enthroned before the high altar of the cathedral of Constance, the Council of Constance recommended that all three papal claimants abdicate, and that another be chosen. In part because of the constant presence of the King, other rulers demanded that they have a say in who would be pope.", "title": "Ending the Western Schism" }, { "paragraph_id": 14, "text": "Gregory XII then sent representatives to Constance, whom he granted full powers to summon, open, and preside over an Ecumenical Council; he also empowered them to present his resignation of the papacy. This would pave the way for the end of the Western Schism.", "title": "Ending the Western Schism" }, { "paragraph_id": 15, "text": "The legates were received by King Sigismund and by the assembled Bishops, and the King yielded the presidency of the proceedings to the papal legates, Cardinal Giovanni Dominici of Ragusa and Prince Carlo Malatesta. On 4 July 1415 the Bull of Gregory XII which appointed Dominici and Malatesta as his proxies at the council was formally read before the assembled Bishops. The cardinal then read a decree of Gregory XII which convoked the council and authorized its succeeding acts. Thereupon, the Bishops voted to accept the summons. Prince Malatesta immediately informed the council that he was empowered by a commission from Pope Gregory XII to resign the Papal Throne on the Pontiff's behalf. He asked the council whether they would prefer to receive the abdication at that point or at a later date. The Bishops voted to receive the Papal abdication immediately. Thereupon the commission by Gregory XII authorizing his proxy to resign the Papacy on his behalf was read and Malatesta, acting in the name of Gregory XII, pronounced the resignation of the papacy by Gregory XII and handed a written copy of the resignation to the assembly.", "title": "Ending the Western Schism" }, { "paragraph_id": 16, "text": "Former Pope Gregory XII was then created titular Cardinal Bishop of Porto and Santa Rufina by the council, with rank immediately below the Pope (which made him the highest-ranking person in the church, since, due to his abdication, the See of Peter in Rome was vacant). Gregory XII's cardinals were accepted as true cardinals by the council, but the members of the council delayed electing a new pope for fear that a new pope would restrict further discussion of pressing issues in the church.", "title": "Ending the Western Schism" }, { "paragraph_id": 17, "text": "By the time the anti-popes were all deposed and the new Pope, Martin V, was elected, two years had passed since Gregory XII's abdication, and Gregory was already dead.
The council took great care to protect the legitimacy of the succession, ratifying all his acts, before a new pontiff was chosen. The new pope, Martin V, elected in November 1417, soon asserted the absolute authority of the papal office.", "title": "Ending the Western Schism" }, { "paragraph_id": 18, "text": "A second goal of the council was to continue the reforms begun at the Council of Pisa (1409). The reforms were largely directed against John Wycliffe, mentioned in the opening session and condemned in the eighth on 4 May 1415, and Jan Hus, along with their followers. Hus, summoned to Constance under a letter of safe conduct, was found guilty of heresy by the council and turned over to the secular court. \"This holy synod of Constance, seeing that God's church has nothing more that it can do, relinquishes Jan Hus to the judgment of the secular authority and decrees that he is to be relinquished to the secular court.\" (Council of Constance Session 15 – 6 July 1415). The secular court sentenced him to be burned to death at the stake.", "title": "Condemnation of Jan Hus" }, { "paragraph_id": 19, "text": "Jerome of Prague, a supporter of Hus, came to Constance to offer assistance but was similarly arrested, judged, found guilty of heresy and turned over to the same secular court, with the same outcome as Hus. Poggio Bracciolini attended the council and related the unfairness of the process against Jerome.", "title": "Condemnation of Jan Hus" }, { "paragraph_id": 20, "text": "Paweł Włodkowic and the other Polish representatives to the Council of Constance publicly defended Hus.", "title": "Condemnation of Jan Hus" }, { "paragraph_id": 21, "text": "In 1411, the First Peace of Thorn ended the Polish–Lithuanian–Teutonic War, in which the Teutonic Knights fought the Kingdom of Poland and Grand Duchy of Lithuania. However, the peace was not stable and further conflicts arose regarding demarcation of the Samogitian borders. The tensions erupted into the brief Hunger War in summer 1414. It was agreed that the disputes would be mediated by the Council of Constance.", "title": "Polish–Lithuanian–Teutonic conflict" }, { "paragraph_id": 22, "text": "The Polish-Lithuanian position was defended by Paulus Vladimiri, rector of the Jagiellonian University, who challenged the legality of the Teutonic crusade against Lithuania. He argued that a forced conversion was incompatible with free will, which was an essential component of a genuine conversion. Therefore, the Knights could only wage a defensive war if pagans violated the natural rights of Christians. Vladimiri further stipulated that infidels had rights which had to be respected, and neither the Pope nor the Holy Roman Emperor had the authority to violate them. The Lithuanians also brought a group of Samogitian representatives to testify to atrocities committed by the Knights.", "title": "Polish–Lithuanian–Teutonic conflict" }, { "paragraph_id": 23, "text": "The Dominican theologian John of Falkenberg proved to be the fiercest opponent of the Poles. In his Liber de doctrina, Falkenberg argued that", "title": "Polish–Lithuanian–Teutonic conflict" }, { "paragraph_id": 24, "text": "the Emperor has the right to slay even peaceful infidels simply because they are pagans. ...
The Poles deserve death for defending infidels, and should be exterminated even more than the infidels; they should be deprived of their sovereignty and reduced to slavery.", "title": "Polish–Lithuanian–Teutonic conflict" }, { "paragraph_id": 25, "text": "In Satira, he attacked Polish-Lithuanian King Jogaila, calling him a \"mad dog\" unworthy to be king. Falkenberg was condemned and imprisoned for such libel. Other opponents included the Grand Master's proctor Peter Wormditt, Dominic of San Gimignano, John Urbach, Ardecino de Porta of Novara, and Andrew Escobar, Bishop of Ciudad Rodrigo. They argued that the Knights were perfectly justified in their crusade as it was a sacred duty of Christians to spread the true faith. Cardinal Pierre d'Ailly published an independent opinion that attempted to strike some balance between the Polish and Teutonic positions.", "title": "Polish–Lithuanian–Teutonic conflict" }, { "paragraph_id": 26, "text": "The council established the Diocese of Samogitia, with its seat in Medininkai, subordinated it to the Lithuanian dioceses, and appointed Matthias of Trakai as its first bishop. Pope Martin V appointed the Lithuanians Jogaila and Vytautas, who were respectively King of Poland and Grand Duke of Lithuania, as vicars general in Pskov and Veliky Novgorod in recognition of their Catholicism. After another round of futile negotiations, the Gollub War broke out in 1422. It ended with the Treaty of Melno. Polish-Lithuanian-Teutonic wars continued for another hundred years.", "title": "Polish–Lithuanian–Teutonic conflict" }, { "paragraph_id": 27, "text": "Although Pope Martin V did not directly challenge the decrees of the council, his successor Eugenius IV repudiated an attempt by a faction at the Council of Basel to declare the provisions of Haec sancta and Frequens a matter of faith. His 1439 bull on the matter, Moyses vir Dei, was underwritten by the Council of Florence. In convening the Fifth Lateran Council (1512–17), Pope Julius II further pronounced that Frequens had lost its force; Lateran V is sometimes seen as having itself abrogated Haec sancta, though the reading is controversial. Either way, while Rome itself came to reject the provisions made by the council, significant parts of the Church, notably in France, continued to uphold the validity of its decisions long after the event: Haec sancta was reaffirmed in the Gallican Articles of 1682, and even during the First Vatican Council of 1869–70 the French-American bishop of St. Augustine, Florida, Augustin Vérot, attempted to read Haec sancta into the record of deliberations.", "title": "Later status" }, { "paragraph_id": 28, "text": "Despite the apparently definitive rejection of conciliarism at the First Vatican Council, the debate over the status of Constance was renewed in the 20th century. In the 1960s, in the context of the Second Vatican Council, the reformist Catholic theologian Hans Küng and the historian Paul de Vooght argued in defense of the dogmatic character of Haec sancta, suggesting that its terms could be reconciled with the definition of papal supremacy at Vatican I. Küng's argument received support from prelates such as Cardinal Franz König. Other Catholic historians adopted different views: Hubert Jedin considered Haec sancta to be an emergency measure with no binding validity beyond its immediate context, while Joseph Gill rejected the validity of the session that passed the decree altogether.
The debate over Haec sancta subsided in the 1970s, however, without resolution.", "title": "Later status" }, { "paragraph_id": 29, "text": "47°39′48″N 9°10′37″E", "title": "External links" } ]
The Council of Constance was an ecumenical council of the Catholic Church that was held from 1414 to 1418 in the Bishopric of Constance (Konstanz) in present-day Germany. The council ended the Western Schism by deposing or accepting the resignation of the remaining papal claimants and by electing Pope Martin V. It was the last papal election to take place outside of Italy. The council also condemned Jan Hus as a heretic and facilitated his execution by the civil authority, and ruled on issues of national sovereignty, the rights of pagans and just war, in response to a conflict between the Grand Duchy of Lithuania, the Kingdom of Poland and the Order of the Teutonic Knights. The council is also important for its role in the debates over ecclesial conciliarism and papal supremacy. Constance issued two particularly significant decrees regarding the constitution of the Catholic Church: Haec sancta (1415), which asserted the superiority of ecumenical councils over popes in at least certain situations, and Frequens (1417), which provided for councils to be held automatically every ten years. The status of these decrees proved controversial in the centuries after the council, and Frequens was never put into practice. Though Haec sancta, at least, continued to be accepted as binding by much of the church up to the 19th century, present-day Catholic theologians generally regard these decrees as either invalid or as practical responses to a particular situation without wider implications.
2023-07-12T05:28:19Z
[ "Template:Reflist", "Template:Papal elections and conclaves from 1061", "Template:Samogitian dispute", "Template:Coord", "Template:Lang-de", "Template:Cite CE1913", "Template:Cite journal", "Template:Refend", "Template:Western Schism timeline", "Template:Cite web", "Template:Citation", "Template:According to whom", "Template:ISBN?", "Template:Ecumenical councils of the Catholic Church", "Template:Lang-la", "Template:Full citation needed", "Template:Cite magazine", "Template:WesternSchism", "Template:Ecumenical councils", "Template:Authority control", "Template:More citations needed", "Template:More citations needed section", "Template:Blockquote", "Template:Efn", "Template:Ill", "Template:Notelist", "Template:Cite book", "Template:Commons category", "Template:Infobox ecumenical council", "Template:Sfn", "Template:Short description", "Template:Refbegin" ]
https://en.wikipedia.org/wiki/Council_of_Constance
7,662
Churches Uniting in Christ
Churches Uniting in Christ (CUIC) is an ecumenical organization that brings together mainline American denominations (including both predominantly white and predominantly black churches), and was inaugurated on January 20, 2002, in Memphis, Tennessee, on the balcony of the Lorraine Motel. It is the successor organization to the Consultation on Church Union. CUIC is the successor organization to the Consultation on Church Union (COCU), which had been founded in 1962. The original task of COCU was to negotiate a consensus between its nine (originally four) member communions (it also included three "advisory participant" churches). However, it never succeeded in this goal, despite making progress on several ecumenical fronts. At COCU's 18th plenary meeting in St. Louis, Missouri (January 1999), CUIC was proposed as a new relationship among the nine member communions. Each member communion voted to join CUIC over the next few years. Heads of communion from each member of COCU (as well as the ELCA, a partner in mission and dialogue) inaugurated the group on the day before Martin Luther King Jr. Day in 2002 at the motel where King was killed. This particular location highlighted the group's focus on racism as a major dividing factor between and among churches. The Coordinating Council of CUIC created several task forces: Racial and Social Justice, Ministry, Young Adult, and Local and Regional Ecumenism. Each task force represented an important part of early CUIC work. Local ecumenical liturgies were encouraged, and excitement initially built around "pilot programs" in Denver, Los Angeles, and Memphis. The Racial and Social Justice task force created gatherings and discussions on racial justice. The Ministry task force received much of the attention from church structures, however. The group had been given a mandate to complete work on reconciliation by 2007, and in 2003 began working on a document entitled "Mutual Recognition and Mutual Reconciliation of Ministries." One of the most difficult issues concerning recognition and reconciliation of ministries was that of the historic episcopate. This was one of the issues that defeated proposals for union by COCU as well. The group approached this problem through dialogue, soliciting information from each member communion on the particularities of their theology and ecclesiology in order to come to a mutually acceptable conclusion. CUIC released the seventh and final draft of the MRMRM document in June 2005. Much work was done in 2006 on this document, which focused on "Episkope," the oversight of ministry. The work culminated in a consultation on episkope in St. Louis in October 2006 involving the heads of communion of the members of CUIC. At this consultation, the MRMRM document was met with resistance, and concern was raised in particular that CUIC was focusing too narrowly on reconciliation of ministries and "not taking seriously our commitment to working on those issues of systemic racism that remain at the heart of our continuing and separated life as churches here in the United States." The nine churches which inaugurated CUIC in 2002 were joined by the Moravian Church, Northern Province. The Moravians had been partners in mission and dialogue since 2002, but joined as a member communion after the October 2006 consultation on episkope. In 2007, the African Methodist Episcopal Zion Church and the African Methodist Episcopal Church withdrew from CUIC.
Neither body sent representatives to the CUIC plenary on January 11–14, 2008, though the AME Council of Bishops never voted to suspend membership officially. They felt the other churches were not doing enough to counter the history of racial injustice between black and white churches. In response to this, the remaining churches in CUIC decided in 2008 to suspend their work while they sought reconciliation with these churches. This work began with a group of representatives who revisited the 1999 document "Call to Christian Commitment and Action to Combat Racism," which is available on the current CUIC website. This also meant eliminating the position of Director and suspending the work of the CUIC task forces. As of 2012, CUIC no longer has physical offices, opting instead for a virtual office and storing the archives of both CUIC and COCU at Princeton Seminary's Henry Luce III Library. The African Methodist Episcopal Church resumed its participation by the February 2010 plenary meeting, where CUIC moved to refocus on its eight marks of commitment and a shared concern for racial justice as a major dividing factor facing ecumenism. Although the African Methodist Episcopal Zion Church has not rejoined the group, efforts have continued to bring this communion back into membership. The Rev. Staccato Powell, an AMEZ pastor, preached at the 2011 CUIC plenary in Ft. Lauderdale, Florida, as a part of these reconciliation efforts. Combating racism has again become a priority of CUIC. Concerns over the historic episcopate have been sidelined since 2008, though they may re-emerge. The group's focus on mutual reconciliation of ministries has been revisited in the light of racism and the impact that racism may have on exchanging ministers between denominations. Therefore, the coordinating council of CUIC created a consultation on race and ministry while also choosing to partner with the Samuel Dewitt Proctor Conference, a social justice organization involved in African American faith communities. The purpose of CUIC has always been unity (as reflected in their current slogan, "reconciling the baptized, seeking unity with justice"). This reflects one of the core scripture passages in the ecumenical movement, Jesus' prayer in John 17:21, "That they all may be one". CUIC has approached this goal of unity in various ways throughout its history. Racism has been a primary focus of CUIC since 2002 (and, indeed, a primary focus of COCU alongside other forms of exclusion and prejudice, such as sexism and ableism). According to Dan Krutz, former president of CUIC, "Overcoming racism has been a focal point of CUIC since its beginning... Racism may be the biggest sin that divides churches." Even before the absence of the AME and AMEZ churches at the January 2008 plenary, some in CUIC had noticed the lack of commitment to racial reconciliation. Since 2008, however, racism has become an even more pressing concern. This has led CUIC to address issues of racism in the public sphere, including the killing of Trayvon Martin and the recovery from the 2010 Haiti earthquake. According to their website, one of the reasons for transitioning from COCU to CUIC is so that member churches "stop 'consulting' and start living their unity in Christ more fully." This means that each member communion in CUIC agrees to abide by the eight Marks of Commitment, which are summarized as follows:
[ { "paragraph_id": 0, "text": "Churches Uniting in Christ (CUIC) is an ecumenical organization that brings together mainline American denominations (including both predominantly white and predominantly black churches), and was inaugurated on January 20, 2002, in Memphis, Tennessee on the balcony of the Lorraine Motel. It is the successor organization to the Consultation on Church Union.", "title": "" }, { "paragraph_id": 1, "text": "CUIC is the successor organization to the Consultation on Church Union (COCU), which had been founded in 1962. The original task of COCU was to negotiate a consensus between its nine (originally four) member communions (it also included three \"advisory participant\" churches). However, it never succeeded in this goal, despite making progress on several ecumenical fronts. At COCU's 18th plenary meeting in St. Louis, Missouri (January 1999), CUIC was proposed as a new relationship among the nine member communions. Each member communion voted to join CUIC over the next few years.", "title": "History" }, { "paragraph_id": 2, "text": "Heads of communion from each member of COCU (as well as the ELCA, a partner in mission and dialogue) inaugurated the group on the day before Martin Luther King Jr. Day in 2002 at the motel where he was killed. This particular location highlighted the group's focus on racism as a major dividing factor between and among churches.", "title": "History" }, { "paragraph_id": 3, "text": "The Coordinating Council of CUIC created several task forces: Racial and Social Justice, Ministry, Young Adult and Local and Regional Ecumenism. Each task force represented an important part of early CUIC work. Local ecumenical liturgies were encouraged, and excitement initially built around \"pilot programs\" in Denver, Los Angeles, and Memphis. The Racial and Social Justice task force created gatherings and discussions on racial justice. The Ministry task force received much of the attention from church structures, however. The group had been given a mandate to complete work on reconciliation by 2007, and in 2003 began working on a document entitled \"Mutual Recognition and Mutual Reconciliation of Ministries.\"", "title": "History" }, { "paragraph_id": 4, "text": "One of the most difficult issues concerning recognition and reconciliation of ministries was that of the historic episcopate. This was one of the issues that defeated proposals for union by COCU as well. The group approached this problem through dialogue, soliciting information from each member communion on the particularities of their theology and ecclesiology in order to come to a mutually acceptable conclusion.", "title": "History" }, { "paragraph_id": 5, "text": "CUIC released the seventh and final draft of the MRMRM document in June 2005. Much work was done in 2006 on this document, which focused on \"Episkope,\" the oversight of ministry. The work culminated in a consultation on episkope in St. Louis in October 2006 involving the heads of communion of the members of CUIC. At this consultation, the MRMRM document was met with resistance, and concern was raised in particular that CUIC was focusing too narrowly on reconciliation of ministries and \"not taking seriously our commitment to working on those issues of systemic racism that remain at the heart of our continuing and separated life as churches here in the United States.\"", "title": "History" }, { "paragraph_id": 6, "text": "The nine churches which inaugurated CUIC in 2002 were joined by the Moravian Church, Northern Province. 
The Moravians had been partners in mission and dialogue since 2002, but joined as a member communion after the October 2006 consultation on episcope.", "title": "History" }, { "paragraph_id": 7, "text": "In 2007, the African Methodist Episcopal Zion Church and the African Methodist Episcopal Church withdrew from CUIC. Neither body sent representatives to the CUIC plenary on January 11–14, 2008, though the AME Council of Bishops never voted to suspend membership officially. They felt the other churches were not doing enough to counter the history of racial injustice between black and white churches. In response to this, the remaining churches in CUIC decided in 2008 to suspend their work while they seek reconciliation with these churches. This work began with a group of representatives who revisited the 1999 document \"Call to Christian Commitment and Action to Combat Racism,\" which is available on the current CUIC website. This also meant eliminating the position of Director as well as the suspension of the work of the CUIC task forces. As of 2012, CUIC no longer has physical offices, opting instead for a virtual office and storing the archives of both CUIC and COCU at Princeton Seminary's Henry Luce III Library.", "title": "History" }, { "paragraph_id": 8, "text": "The African Methodist Episcopal Church resumed its participation by the February 2010 plenary meeting, where CUIC moved to refocus on its eight marks of commitment and a shared concern for racial justice as a major dividing factor facing ecumenism. Although the African Methodist Episcopal Zion Church has not rejoined the group, efforts have continued to bring this communion back into membership. The Rev. Staccato Powell, an AMEZ pastor, preached at the 2011 CUIC plenary in Ft. Lauderdale, Florida as a part of these reconciliation efforts. Combating racism has again become a priority of CUIC. Concerns over the historic episcopate have been sidelined since 2008, though they may re-emerge. The group's focus on mutual reconciliation of ministries has been revisited in the light of racism and the impact that racism may have on exchanging ministers between denominations. Therefore, the coordinating council of CUIC created a consultation on race and ministry while also choosing to partner with the Samuel Dewitt Proctor Conference, a social justice organization involved in African American faith communities.", "title": "History" }, { "paragraph_id": 9, "text": "The purpose of CUIC has always been unity (as reflected in their current slogan, \"reconciling the baptized, seeking unity with justice\"). This reflects one of the core scripture passages in the ecumenical movement, Jesus' prayer in John 17:21, \"That they all may be one\". CUIC has approached this goal of unity in various ways throughout its history.", "title": "Purpose" }, { "paragraph_id": 10, "text": "Racism has been a primary focus of CUIC since 2002 (and, indeed, a primary focus of COCU alongside other forms of exclusion and prejudice, such as sexism and ableism). According to Dan Krutz, former president of CUIC, \"Overcoming racism has been a focal point of CUIC since its beginning... Racism may be the biggest sin that divides churches.\" Even before the absence of the AME and AMEZ churches at the January 2011 plenary, some in CUIC had noticed the lack of commitment to racial reconciliation. Since 2008, however, racism has become an even more pressing concern. 
This has led CUIC to address issues of racism in the public sphere, including the killing of Trayvon Martin and the recovery from the 2010 Haiti earthquake.", "title": "Purpose" }, { "paragraph_id": 11, "text": "According to their website, one of the reasons for transitioning from COCU to CUIC is so that member churches \"stop 'consulting' and start living their unity in Christ more fully.\" This means that each member communion in CUIC agrees to abide by the eight Marks of Commitment, which are summarized as follows:", "title": "Purpose" } ]
Churches Uniting in Christ (CUIC) is an ecumenical organization that brings together mainline American denominations, and was inaugurated on January 20, 2002, in Memphis, Tennessee, on the balcony of the Lorraine Motel. It is the successor organization to the Consultation on Church Union.
2002-01-07T05:46:14Z
2023-12-21T13:49:50Z
[ "Template:Reflist", "Template:Citation", "Template:Cite web", "Template:Cite journal", "Template:Cite news", "Template:Short description", "Template:Infobox organization", "Template:Cbignore", "Template:Cite magazine", "Template:Authority control", "Template:Christian denominations in the United States", "Template:Dead link" ]
https://en.wikipedia.org/wiki/Churches_Uniting_in_Christ
7,663
Canadian Unitarian Council
The Canadian Unitarian Council (French: Conseil unitarien du Canada) (CUC) is a liberal religious association of Unitarian and Unitarian Universalist congregations in Canada. It was formed on May 14, 1961, initially to be the national organization for Canadians belonging to the Unitarian Universalist Association (UUA), which formed a day later on May 15, 1961. Between 1961 and 2002, almost all member congregations of the CUC were also members of the UUA, and most services to congregations in Canada were provided by the UUA. However, in 2002, the CUC formally became a separate entity from the UUA, although the UUA continues to provide ministerial settlement services. Some Canadian congregations have continued to be members of both the CUC and the UUA, while most congregations are only members of the CUC. The Canadian Unitarian Council is the only national body for Unitarian and Unitarian Universalist congregations in Canada and is a member of the International Council of Unitarians and Universalists. The CUC is made up of 46 member congregations and emerging groups, who are the legal owners of the organization, and who are, for governance and service delivery, divided into four regions: "BC" (British Columbia), "Western" (Alberta to Thunder Bay), "Central" (between Thunder Bay and Kingston), and "Eastern" (Kingston, Ottawa, and everything east of that). However, for youth ministry, the "Central" and "Eastern" regions are combined to form a youth region known as "QuOM" (Quebec, Ontario and the Maritimes), giving the youth only three regions for their activities. The organization as a whole is governed by the CUC Board of Trustees (Board), whose mandate is to govern in the best interests of the CUC's owners. The Board is made up of eight members who are elected by congregational delegates at the CUC's Annual General Meeting: two Trustees from each region, each eligible to serve a maximum of two three-year terms. Board meetings also include Official Observers to the Board, who participate without a vote and represent UU Youth and Ministers. As members of the CUC, congregations and emerging groups are served by volunteer Service Consultants, Congregational Networks, and a series of other committees. There are two directors of regional services, one for the two Western regions and one for the two Eastern regions. The Director of Lifespan Learning oversees development of religious exploration programming, and youth and young adults are served by a Youth and Young Adult Ministry Development staff person. Policies and business of the CUC are determined at the Annual Conference and Meeting (ACM), consisting of the Annual Conference, in which workshops are held, and the Annual General Meeting, in which business matters are handled in plenary sessions. The ACM features two addresses, a Keynote and a Confluence Lecture. The Confluence Lecture is comparable to the UUA's Ware Lecture in prestige. In its early days, the event consisted only of the Annual General Meeting; the Annual Conference component was not added until much later. Starting in 2017, the conference portion takes place only every second year.
Past ACMs have been held in the following locations: The CUC does not have a central creed in which members are required to believe, but its members have found it useful to articulate their common values in what has become known as The Principles and Sources of our Religious Faith, which are currently based on the UUA's Principles and Purposes with the addition of an 8th principle adopted by CUC members at a special meeting on November 27, 2021. The CUC had a task force whose mandate was to consider revising them. As published in church literature and on the CUC website, the statement is titled The Principles and Sources of our Religious Faith. The Principles begin: "We, the member congregations of the Canadian Unitarian Council, covenant to affirm and promote:". The Sources begin: "The living tradition which we share draws from many sources:". The statement closes: "Grateful for the religious pluralism which enriches and ennobles our faith, we are inspired to deepen our understanding and expand our vision. As free congregations we enter into this covenant, promising to one another our mutual trust and support." The CUC formed on May 14, 1961, to be the national organization for Canadians within the about-to-form UUA (which formed a day later, on May 15, 1961). Until 2002, almost all member congregations of the CUC were also members of the UUA, and most services to CUC member congregations were provided by the UUA. However, after an agreement between the UUA and the CUC, since 2002 most services have been provided by the CUC to its own member congregations, with the UUA continuing to provide ministerial settlement services. Also since 2002, some Canadian congregations have continued to be members of both the UUA and CUC while others are members of only the CUC. The Canadian Unitarian Universalist youth of the day disapproved of the 2002 change in relationship between the CUC and UUA, as is evident in the words of this statement, adopted by the attendees of the 2001 youth conference held at the Unitarian Church of Montreal: We the youth of Canada are deeply concerned about the direction the CUC seems to be taking. As stewards of our faith, adults have a responsibility to take into consideration the concerns of youth. We are opposed to making this massive jump in our evolutionary progress. The Canadian Unitarian Universalist Women's Association (CUUWA), established in May 2011, is a women's rights organization associated with the CUC. The CUUWA gained initial support from the Prairie Women's Gathering and the Vancouver Island Women's retreat, and has since become a nationally recognized organization. Originally called the Canadian Unitarian Universalist Women's Federation, the organization aims to raise awareness of women's education, rights, and equality of income. The association also aims to change societal attitudes about women and inform society of the issues women have faced locally and internationally. As part of its mission, the CUUWA circulates educational materials that highlight women's contributions to society. The organization hosts an annual general meeting during the Canadian Unitarian Council Annual Conference. While the name of the organization is the Canadian Unitarian Council, the CUC includes congregations with Unitarian, Universalist, Unitarian Universalist, and Universalist Unitarian in their names. Changing the name of the CUC has occasionally been debated, but there have been no successful motions.
To recognize this diversity, some members of the CUC abbreviate Unitarian Universalist as U*U (playfully read as "You star, you"). Not all CUC members like this playful reading, however, and those who do not leave out the star, writing simply UU instead.
[ { "paragraph_id": 0, "text": "The Canadian Unitarian Council (French: Conseil unitarien du Canada) (CUC) is a liberal religious association of Unitarian and Unitarian Universalist congregations in Canada. It was formed on May 14, 1961, initially to be the national organization for Canadians belonging to the Unitarian Universalist Association (UUA) which formed a day later on May 15, 1961. Between 1961 and 2002, almost all member congregations of the CUC were also members of the UUA and most services to congregations in Canada were provided by the UUA. However, in 2002, the CUC formally became a separate entity from the UUA, although the UUA continues to provide ministerial settlement services. Some Canadian congregations have continued to be members of both the CUC and the UUA, while most congregations are only members of the CUC.", "title": "" }, { "paragraph_id": 1, "text": "The Canadian Unitarian Council is the only national body for Unitarian and Unitarian Universalist congregations in Canada and is a member of the International Council of Unitarians and Universalists.", "title": "" }, { "paragraph_id": 2, "text": "The CUC is made up of 46 member congregations and emerging groups, who are the legal owners of the organization, and who are, for governance and service delivery, divided into four regions: \"BC\" (British Columbia), \"Western\" (Alberta to Thunder Bay), \"Central\" (between Thunder Bay and Kingston), and \"Eastern\" (Kingston, Ottawa and everything east of that). However, for youth ministry, the \"Central\" and \"Eastern\" regions are combined to form a youth region known as \"QuOM\" (Quebec, Ontario and the Maritimes), giving the youth only three regions for their activities. The organization as a whole is governed by the CUC Board of Trustees (Board), whose mandate it is to govern in the best interests of the CUC's owners. The Board is made up of 8 members who are elected by congregational delegates at the CUC's Annual General Meeting. This consists of two Trustees from each region, who are eligible to serve a maximum of two three-year terms. Board meetings also include Official Observers to the Board, who participate without a vote and represent UU Youth and Ministers.", "title": "Organization" }, { "paragraph_id": 3, "text": "As members of the CUC, congregations and emerging groups are served by volunteer Service Consultants, Congregational Networks, and a series of other committees. There are two directors of regional services, one for the Western two regions, and one for the Eastern two regions. The Director of Lifespan Learning oversees development of religious exploration programming and youth and young adults are served by a Youth and Young Adult Ministry Development staff person.", "title": "Organization" }, { "paragraph_id": 4, "text": "Policies and business of the CUC are determined at the Annual Conference and Meeting (ACM), consisting of the Annual Conference, in which workshops are held, and the Annual General Meeting, in which business matters and plenary meetings are performed. The ACM features two addresses, a Keynote and a Confluence Lecture. The Confluence Lecture is comparable to the UUA's Ware Lecture in prestige. In early days this event simply consisted of the Annual General Meeting component as the Annual Conference component was not added to much later. And starting in 2017 the conference portion will only take place every second year. 
Past ACMs have been held in the following locations:", "title": "Organization" }, { "paragraph_id": 5, "text": "The CUC does not have a central creed in which members are required to believe, but they have found it useful to articulate their common values in what has become known as The Principles and Sources of our Religious Faith, which are currently based on the UUA's Principles and Purposes with the addition of an 8th principle adopted by CUC members at a special meeting on November 27, 2021. The CUC had a task force whose mandate was to consider revising them.", "title": "Organization" }, { "paragraph_id": 6, "text": "The principles and sources as published in church literature and on the CUC website", "title": "Organization" }, { "paragraph_id": 7, "text": "The Principles and Sources of our Religious Faith", "title": "Organization" }, { "paragraph_id": 8, "text": "Principles", "title": "Organization" }, { "paragraph_id": 9, "text": "We, the member congregations of the Canadian Unitarian Council, covenant to affirm and promote:", "title": "Organization" }, { "paragraph_id": 10, "text": "Sources", "title": "Organization" }, { "paragraph_id": 11, "text": "The living tradition which we share draws from many sources:", "title": "Organization" }, { "paragraph_id": 12, "text": "Grateful for the religious pluralism which enriches and ennobles our faith, we are inspired to deepen our understanding and expand our vision. As free congregations we enter into this covenant, promising to one another our mutual trust and support.", "title": "Organization" }, { "paragraph_id": 13, "text": "The CUC formed on May 14, 1961 to be the national organization for Canadians within the about-to-form UUA (it formed a day later on May 15, 1961). And until 2002, almost all member congregations of the CUC were also members of the UUA and most services to CUC member congregations were provided by the UUA. However, after an agreement between the UUA and the CUC, since 2002 most services have been provided by the CUC to its own member congregations, with the UUA continuing to provide ministerial settlement services. And also since 2002, some Canadian congregations have continued to be members of both the UUA and CUC while others are members of only the CUC.", "title": "Organization" }, { "paragraph_id": 14, "text": "The Canadian Unitarian Universalist youth of the day disapproved of the 2002 change in relationship between the CUC and UUA. It is quite evident in the words of this statement, which was adopted by the attendees of the 2001 youth conference held at the Unitarian Church of Montreal:", "title": "Organization" }, { "paragraph_id": 15, "text": "We the youth of Canada are deeply concerned about the direction the CUC seems to be taking. As stewards of our faith, adults have a responsibility to take into consideration the concerns of youth. We are opposed to making this massive jump in our evolutionary progress.", "title": "Organization" }, { "paragraph_id": 16, "text": "The Canadian Unitarian Universalist Women's Association (CUUWA), established in May 2011, is a women's rights organization associated with the CUC. The CUUWA gained initial support from Prairie Women's Gathering and the Vancouver Island Women's retreat, and has since become a nationally-recognized organization.", "title": "Organization" }, { "paragraph_id": 17, "text": "Originally called the Canadian Unitarian Universalist Women's Federation, the organization aims to raise awareness for women's education, rights, and equality of income. 
The association also aims to change societal attitudes about women and inform society of the issues women have faced locally and internationally. As a part of their mission, the CUUWA circulates educational materials that highlight women's contributions to society. The organization hosts an annual general meeting during the Canadian Unitarian Council Annual Conference.", "title": "Organization" }, { "paragraph_id": 18, "text": "While the name of the organization is the Canadian Unitarian Council, the CUC includes congregations with Unitarian, Universalist, Unitarian Universalist, and Universalist Unitarian in their names. Changing the name of the CUC has occasionally been debated, but there have been no successful motions. To recognize this diversity, some members of the CUC abbreviate Unitarian Universalist as U*U (and playfully read it as \"You star, you\"). Note, not all CUC members like this playful reading and so when these people write the abbreviation they leave out the star (*), just writing UU instead.", "title": "Organization" } ]
The Canadian Unitarian Council (CUC) is a liberal religious association of Unitarian and Unitarian Universalist congregations in Canada. It was formed on May 14, 1961, initially to be the national organization for Canadians belonging to the Unitarian Universalist Association (UUA), which formed a day later on May 15, 1961. Between 1961 and 2002, almost all member congregations of the CUC were also members of the UUA and most services to congregations in Canada were provided by the UUA. However, in 2002, the CUC formally became a separate entity from the UUA, although the UUA continues to provide ministerial settlement services. Some Canadian congregations have continued to be members of both the CUC and the UUA, while most congregations are only members of the CUC. The Canadian Unitarian Council is the only national body for Unitarian and Unitarian Universalist congregations in Canada and is a member of the International Council of Unitarians and Universalists.
2002-01-07T05:50:26Z
2023-10-15T11:50:50Z
[ "Template:Infobox religion", "Template:Portal", "Template:Webarchive", "Template:Unitarian, Universalist, and Unitarian Universalist topics", "Template:Citation needed", "Template:Reflist", "Template:Cite web", "Template:Short description", "Template:Distinguish", "Template:Thirdpartysources", "Template:Lang-fr", "Template:Quote" ]
https://en.wikipedia.org/wiki/Canadian_Unitarian_Council
7,668
Charles Mingus
Charles Mingus Jr. (April 22, 1922 – January 5, 1979) was an American jazz upright bassist, composer, bandleader, pianist, and author. A major proponent of collective improvisation, he is considered to be one of the greatest jazz musicians and composers in history, with a career spanning three decades and collaborations with other jazz greats such as Duke Ellington, Charlie Parker, Max Roach, and Eric Dolphy. Mingus's work ranged from advanced bebop and avant-garde jazz with small and midsize ensembles, to pioneering the post-bop style on seminal recordings like Pithecanthropus Erectus (1956) and Mingus Ah Um (1959), and progressive big band experiments such as The Black Saint and the Sinner Lady (1963). Mingus's compositions continue to be played by contemporary musicians ranging from the repertory bands Mingus Big Band, Mingus Dynasty, and Mingus Orchestra, to the high school students who play the charts and compete in the Charles Mingus High School Competition. In 1993, the Library of Congress acquired Mingus's collected papers—including scores, sound recordings, correspondence, and photos—in what it described as "the most important acquisition of a manuscript collection relating to jazz in the Library's history". Charles Mingus was born in Nogales, Arizona. His father, Charles Mingus Sr., was a sergeant in the U.S. Army. Mingus Jr. was largely raised in the Watts area of Los Angeles. Mingus's ethnic background was complex. His ancestry included German American, African American, and Native American heritage. His maternal grandfather was a Chinese British subject from Hong Kong, and his maternal grandmother was an African American from the southern United States. Mingus was the great-great-great-grandson of the family's founding patriarch, who was, by most accounts, a German immigrant. In Mingus's autobiography Beneath the Underdog, his mother was described as "the daughter of an English/Chinese man and a South-American woman", and his father as the son "of a black farm worker and a Swedish woman". Charles Mingus Sr. claimed to have been raised by his mother and her husband as a white person until he was fourteen, when his mother revealed to her family that the child's true father was a black slave, after which he had to run away from his family and live on his own. The autobiography does not confirm whether Charles Mingus Sr. or Mingus himself believed this story was true, or whether it was merely an embellished version of the Mingus family's lineage. According to new information used to educate visitors to Mingus Mill in the Great Smoky Mountains National Park, included in signs unveiled May 23, 2023, the father of Mingus Sr. was the former slave Daniel Mingus, who had been owned by the family of Mingus Sr.'s mother, Clarinda Mingus, a white woman. When Clarinda married a white man, Mingus Sr. was left with his white grandfather and great-grandparents. Daniel, who later changed his name to West, apparently did not have a relationship with Mingus Sr. Mingus's mother allowed only church-related music in their home, but Mingus developed an early love for other music, especially that of Duke Ellington. He studied trombone, and later cello, although he was unable to pursue the cello professionally because, at the time, it was nearly impossible for a black musician to make a career in classical music, and the cello was not accepted as a jazz instrument.
Despite this, Mingus remained attached to the cello; when he studied bass with Red Callender in the late 1930s, Callender even commented that the cello was still Mingus's main instrument. In Beneath the Underdog, Mingus states that he did not actually start learning bass until Buddy Collette accepted him into his swing band under the stipulation that he be the band's bass player. Due to a poor education, the young Mingus could not read musical notation quickly enough to join the local youth orchestra. This had a serious impact on his early musical experiences, leaving him feeling ostracized from the classical music world. These early experiences, in addition to his lifelong confrontations with racism, were reflected in his music, which often focused on themes of racism, discrimination and (in)justice. Much of the cello technique he learned was applicable to double bass when he took up the instrument in high school. He studied double bass for five years with Herman Reinshagen, principal bassist of the New York Philharmonic, and compositional techniques with Lloyd Reese. Throughout much of his career, he played a bass made in 1927 by the German maker Ernst Heinrich Roth. Beginning in his teen years, Mingus was writing quite advanced pieces; many are similar to Third Stream in that they incorporate elements of classical music. A number of them were recorded in 1960 with conductor Gunther Schuller, and released as Pre-Bird, referring to Charlie "Bird" Parker; Mingus was one of many musicians whose perspectives on music were altered by Parker into "pre- and post-Bird" eras. Mingus gained a reputation as a bass prodigy. His first major professional job was playing with former Ellington clarinetist Barney Bigard. He toured with Louis Armstrong in 1943, and by early 1945 was recording in Los Angeles in a band led by Russell Jacquet, which also included Teddy Edwards, Maurice James Simon, Wild Bill Davis, and Chico Hamilton, and in May that year, in Hollywood, again with Edwards, in a band led by Howard McGhee. He then played with Lionel Hampton's band in the late 1940s; Hampton performed and recorded several Mingus pieces. A popular trio of Mingus, Red Norvo, and Tal Farlow in 1950 and 1951 received considerable acclaim, but Mingus's race caused problems with some club owners and he left the group. Mingus was briefly a member of Ellington's band in 1953, as a substitute for bassist Wendell Marshall; however, Mingus's notorious temper led to his being one of the few musicians personally fired by Ellington (Bubber Miley and drummer Bobby Durham are among the others), after a backstage fight between Mingus and Juan Tizol. Also in the early 1950s, before attaining commercial recognition as a bandleader, Mingus played gigs with Charlie Parker, whose compositions and improvisations greatly inspired and influenced him. Mingus considered Parker the greatest genius and innovator in jazz history, but he had a love-hate relationship with Parker's legacy. Mingus blamed the Parker mythology for a derivative crop of pretenders to Parker's throne. He was also conflicted and sometimes disgusted by Parker's self-destructive habits and the romanticized lure of drug addiction they offered to other jazz musicians. In response to the many sax players who imitated Parker, Mingus titled a song "If Charlie Parker Were a Gunslinger, There'd Be a Whole Lot of Dead Copycats" (released on Mingus Dynasty as "Gunslinging Bird"). Mingus was married four times.
His wives were Jeanne Gross, Lucille (Celia) Germanis, Judy Starkey, and Susan Graham Ungaro. In 1952, Mingus co-founded Debut Records with Max Roach so he could conduct his recording career as he saw fit. The name originated from his desire to document unrecorded young musicians. Despite this, the best-known recording the company issued was of the most prominent figures in bebop. On May 15, 1953, Mingus joined Dizzy Gillespie, Parker, Bud Powell, and Roach for a concert at Massey Hall in Toronto; the recording of that concert is the last documentation of Gillespie and Parker playing together. After the event, Mingus chose to overdub his barely audible bass part back in New York; the original version was issued later. The two 10" albums of the Massey Hall concert (one featured the trio of Powell, Mingus and Roach) were among Debut Records' earliest releases. Mingus may have objected to the way the major record companies treated musicians, but Gillespie once commented that he did not receive any royalties "for years and years" for his Massey Hall appearance. The records, however, are often regarded as among the finest live jazz recordings. One story has it that Mingus was involved in a notorious incident while playing a 1955 club date billed as a "reunion" with Parker, Powell, and Roach. Powell, who suffered from alcoholism and mental illness (possibly exacerbated by a severe police beating and electroshock treatments), had to be helped from the stage, unable to play or speak coherently. As Powell's incapacitation became apparent, Parker stood in one spot at a microphone, chanting "Bud Powell ... Bud Powell ..." as if beseeching Powell's return. Allegedly, Parker continued this incantation for several minutes after Powell's departure, to his own amusement and Mingus's exasperation. Mingus took another microphone and announced to the crowd, "Ladies and Gentlemen, please don't associate me with any of this. This is not jazz. These are sick people." This was Parker's last public performance; about a week later he died after years of substance abuse. Mingus often worked with a mid-sized ensemble (around 8–10 members) of rotating musicians known as the Jazz Workshop. Mingus broke new ground, constantly demanding that his musicians be able to explore and develop their perceptions on the spot. Those who joined the Workshop (or "Sweatshops", as they were colorfully dubbed by the musicians) included Pepper Adams, Jaki Byard, Booker Ervin, John Handy, Jimmy Knepper, Charles McPherson, and Horace Parlan. Mingus shaped these musicians into a cohesive improvisational machine that in many ways anticipated free jazz. Some musicians dubbed the workshop a "university" for jazz. The 1950s are generally regarded as Mingus's most productive and fertile period. Over a ten-year period, he made 30 records for a number of labels (Atlantic, Candid, Columbia, Impulse, and others). Mingus had already recorded around ten albums as a bandleader, but 1956 was a breakthrough year for him, with the release of Pithecanthropus Erectus, arguably his first major work as both a bandleader and composer. Like Ellington, Mingus wrote songs with specific musicians in mind, and his band for Erectus included adventurous musicians: piano player Mal Waldron, alto saxophonist Jackie McLean, and the Sonny Rollins-influenced tenor of J. R. Monterose. The title song is a ten-minute tone poem, depicting the rise of man from his hominid roots (Pithecanthropus erectus) to an eventual downfall.
A section of the piece was free improvisation, free of structure or theme. Another album from this period, The Clown (1957, also on Atlantic Records), the title track of which features narration by humorist Jean Shepherd, was the first to feature drummer Dannie Richmond, who remained his preferred drummer until Mingus's death in 1979. The two men formed one of the most impressive and versatile rhythm sections in jazz. Both were accomplished performers seeking to stretch the boundaries of their music while staying true to its roots. When joined by pianist Jaki Byard, they were dubbed "The Almighty Three". In 1959, Mingus and his jazz workshop musicians recorded one of his best-known albums, Mingus Ah Um. Even in a year of standout masterpieces, including Dave Brubeck's Time Out, Miles Davis's Kind of Blue, John Coltrane's Giant Steps, and Ornette Coleman's The Shape of Jazz to Come, this was a major achievement, featuring such classic Mingus compositions as "Goodbye Pork Pie Hat" (an elegy to Lester Young) and the vocal-less version of "Fables of Faubus" (a protest against segregationist Arkansas governor Orval Faubus that features double-time sections). In 2003 the album's legacy was cemented when it was inducted into the National Recording Registry. Also during 1959, Mingus recorded the album Blues & Roots, which was released the following year. Mingus said in his liner notes: "I was born swinging and clapped my hands in church as a little boy, but I've grown up and I like to do things other than just swing. But blues can do more than just swing." Mingus witnessed Ornette Coleman's legendary—and controversial—1960 appearances at New York City's Five Spot jazz club. He initially expressed rather mixed feelings about Coleman's innovative music: "... if the free-form guys could play the same tune twice, then I would say they were playing something ... Most of the time they use their fingers on the saxophone and they don't even know what's going to come out. They're experimenting." That same year, however, Mingus formed a quartet with Richmond, trumpeter Ted Curson and multi-instrumentalist Eric Dolphy. This ensemble featured the same instruments as Coleman's quartet, and is often regarded as Mingus rising to the challenging new standard established by Coleman. The quartet recorded on both Charles Mingus Presents Charles Mingus and Mingus. The former also features the version of "Fables of Faubus" with lyrics, aptly titled "Original Faubus Fables". In 1961, Mingus spent time staying at the house of his mother's sister, Louise, and her husband, Fess Williams, a clarinetist and saxophonist, in Jamaica, Queens. Subsequently, Mingus invited Williams to play at the 1962 Town Hall Concert. Only one misstep occurred in this era: the Town Hall Concert of October 1962, a "live workshop"/recording session with an ambitious program that was plagued by troubles from its inception. Mingus's vision, now known as Epitaph, was finally realized by conductor Gunther Schuller in a concert in 1989, a decade after Mingus died. Outside of music, Mingus published a mail-order how-to guide in 1954 called The Charles Mingus CAT-alog for Toilet Training Your Cat. The guide explained in detail how to get a cat to use a human toilet. Sixty years later, in 2014, the late American character actor Reg E. Cathey performed a voice recording of the complete guide for Studio 360.
In 1963, Mingus released The Black Saint and the Sinner Lady, described as "one of the greatest achievements in orchestration by any composer in jazz history." The album was also unique in that Mingus asked his psychotherapist, Dr. Edmund Pollock, to provide notes for the record. Mingus also released Mingus Plays Piano, an unaccompanied album featuring some fully improvised pieces, in 1963. In addition, 1963 saw the release of Mingus Mingus Mingus Mingus Mingus, an album praised by critic Nat Hentoff. In 1964, Mingus put together one of his best-known groups, a sextet including Dannie Richmond, Jaki Byard, Eric Dolphy, trumpeter Johnny Coles, and tenor saxophonist Clifford Jordan. The group was recorded frequently during its short existence. Mosaic Records has released a 7-CD set, Charles Mingus – The Jazz Workshop Concerts 1964–65, featuring concerts from Town Hall, Amsterdam, Monterey '64, Monterey '65, and Minneapolis. Coles fell ill and left during a European tour. Dolphy stayed in Europe after the tour ended, and died suddenly in Berlin on June 28, 1964. 1964 was also the year that Mingus met his future wife, Sue Graham Ungaro. The couple were married in 1966 by Allen Ginsberg. Facing financial hardship, Mingus was evicted from his New York home in 1966. Mingus's pace slowed somewhat in the late 1960s and early 1970s. In 1974, after his 1970 sextet with Charles McPherson, Eddie Preston and Bobby Jones disbanded, he formed a quintet with Richmond, pianist Don Pullen, trumpeter Jack Walrath and saxophonist George Adams. They recorded two well-received albums, Changes One and Changes Two. Mingus also played with Charles McPherson in many of his groups during this time. Cumbia and Jazz Fusion in 1976 sought to blend Colombian music (the "Cumbia" of the title) with more traditional jazz forms. In 1971, Mingus taught for a semester at the University at Buffalo, The State University of New York, as the Slee Professor of Music. By the mid-1970s, Mingus was suffering from amyotrophic lateral sclerosis (ALS). His once formidable bass technique declined until he could no longer play the instrument. He continued composing, however, and supervised a number of recordings before his death. At the time of his death, he was working with Joni Mitchell on an album eventually titled Mingus, which included lyrics added by Mitchell to his compositions, including "Goodbye Pork Pie Hat". The album featured the talents of Wayne Shorter, Herbie Hancock, and another influential bassist and composer, Jaco Pastorius. Mingus died on January 5, 1979, aged 56, in Cuernavaca, Mexico, where he had traveled for treatment and convalescence. His ashes were scattered in the Ganges River. His compositions retained the hot and soulful feel of hard bop, drawing heavily from black gospel music and blues, while sometimes containing elements of third stream, free jazz, and classical music. He once cited Duke Ellington and church as his main influences. Mingus espoused collective improvisation, similar to the old New Orleans jazz parades, paying particular attention to how each band member interacted with the group as a whole. In creating his bands, he looked not only at the skills of the available musicians, but also their personalities. Many musicians passed through his bands and later went on to impressive careers. He recruited talented and sometimes little-known artists, whom he used to assemble unconventional instrumental configurations.
As a performer, Mingus was a pioneer in double bass technique, widely recognized as one of the instrument's most proficient players. Because of his brilliant writing for midsize ensembles, and his catering to and emphasizing the strengths of the musicians in his groups, Mingus is often considered the heir of Duke Ellington, whom he greatly admired and with whom he collaborated on the record Money Jungle. Dizzy Gillespie once said Mingus reminded him "of a young Duke", citing their shared "organizational genius". Nearly as well known as his ambitious music was Mingus's often fearsome temperament, which earned him the nickname "the Angry Man of Jazz". His refusal to compromise his musical integrity led to many onstage eruptions, exhortations to musicians, and dismissals. Although respected for his musical talents, Mingus was sometimes feared for his occasionally violent onstage temper, which was at times directed at members of his band and at other times aimed at the audience. He was physically large, prone to obesity (especially in his later years), and was by all accounts often intimidating and frightening when expressing anger or displeasure. When confronted with a nightclub audience talking and clinking ice in their glasses while he performed, Mingus stopped his band and loudly chastised the audience, stating: "Isaac Stern doesn't have to put up with this shit." Mingus destroyed a $20,000 bass in response to audience heckling at the Five Spot in New York City. Guitarist and singer Jackie Paris was a witness to Mingus's irascibility. Paris recalled his time in the Jazz Workshop: "He chased everybody off the stand except [drummer] Paul Motian and me ... The three of us just wailed on the blues for about an hour and a half before he called the other cats back." On October 12, 1962, Mingus punched Jimmy Knepper in the mouth after Knepper refused to take on more work while the two men were working together at Mingus's apartment on a score for his upcoming concert at the Town Hall in New York. Mingus's blow broke off a crowned tooth and its underlying stub. According to Knepper, this ruined his embouchure and resulted in the permanent loss of the top octave of his range on the trombone – a significant handicap for any professional trombonist. The attack temporarily ended their working relationship, and Knepper was unable to perform at the concert. Charged with assault, Mingus appeared in court in January 1963 and was given a suspended sentence. Knepper did work with Mingus again in 1977 and played extensively with the Mingus Dynasty, formed after Mingus's death in 1979. In addition to bouts of ill temper, Mingus was prone to clinical depression and tended to have brief periods of extreme creative activity intermixed with fairly long stretches of greatly decreased output, such as the five-year period following the death of Eric Dolphy. In 1966, Mingus was evicted from his apartment at 5 Great Jones Street in New York City for nonpayment of rent, an eviction captured in the 1968 documentary film Mingus: Charlie Mingus 1968, directed by Thomas Reichman. The film also features Mingus performing in clubs and in the apartment, firing a .410 shotgun indoors, composing at the piano, playing with and taking care of his young daughter Caroline, and discussing love, art, politics, and the music school he had hoped to create.
Charles Mingus's music is currently being performed and reinterpreted by the Mingus Big Band, which in October 2008 began playing every Monday at Jazz Standard in New York City, and often tours the rest of the U.S. and Europe. The Mingus Big Band, the Mingus Orchestra, and the Mingus Dynasty band are managed by Jazz Workshop, Inc. and run by Mingus's widow, Sue Graham Mingus. Elvis Costello has written lyrics for a few Mingus pieces. He once sang lyrics for one piece, "Invisible Lady", backed by the Mingus Big Band on the album Tonight at Noon: Three or Four Shades of Love. Epitaph is considered one of Charles Mingus's masterpieces. The composition is 4,235 measures long, requires two hours to perform, and is one of the longest jazz pieces ever written. Epitaph was only fully discovered by musicologist Andrew Homzy during the cataloging process after Mingus's death. With the help of a grant from the Ford Foundation, the score and instrumental parts were copied, and the piece itself was premiered by a 30-piece orchestra conducted by Gunther Schuller. This concert was produced by Mingus's widow, Sue Graham Mingus, at Alice Tully Hall on June 3, 1989, 10 years after Mingus's death. It was performed again at several concerts in 2007. The performance at Walt Disney Concert Hall is available on NPR. Hal Leonard published the complete score in 2008. Mingus wrote the sprawling, exaggerated quasi-autobiography Beneath the Underdog: His World as Composed by Mingus throughout the 1960s, and it was published in 1971. Its "stream of consciousness" style covered several aspects of his life that had previously been off the record. In addition to his musical and intellectual proliferation, Mingus goes into great detail about his perhaps overstated sexual exploits. He claims to have had more than 31 affairs in the course of his life (including 26 prostitutes in one sitting). This does not include any of his five wives (he claims to have been married to two of them simultaneously). In addition, he asserts that he had a brief career as a pimp. This has never been confirmed. Mingus's autobiography also offers insight into his psyche, as well as his attitudes about race and society. It includes accounts of abuse at the hands of his father from an early age, being bullied as a child, his removal from a white musicians' union, grappling with disapproval while married to white women, and other examples of hardship and prejudice. The work of Charles Mingus has also received attention in academia. According to Ashon Crawley, the musicianship of Charles Mingus provides a salient example of the power of music to unsettle the dualistic, categorical distinction of sacred from profane through otherwise epistemologies. Crawley offers a reading of Mingus that examines the deep imbrication uniting Holiness–Pentecostal aesthetic practices and jazz. Mingus recognized the importance and impact of the midweek gathering of black folks at the Holiness–Pentecostal Church at 79th and Watts in Los Angeles that he would attend with his stepmother or his friend Britt Woodman. Crawley goes on to argue that these visits were the impetus for the song "Wednesday Night Prayer Meeting". Emphasis is placed on the felt and experienced ethical demand of the prayer meeting, which, according to Crawley, Mingus attempts to capture. In many ways, "Wednesday Night Prayer Meeting" was Mingus's homage to black sociality.
By exploring Mingus's homage to black Pentecostal aesthetics, Crawley expounds on how Mingus figured out that those Holiness–Pentecostal gatherings were the constant repetition of an ongoing, deep, intense mode of study, a kind of study wherein the aesthetic forms created could not be severed from the intellectual practice because they were one and also, but not, the same. Gunther Schuller has suggested that Mingus should be ranked among the most important American composers, jazz or otherwise. In 1988, a grant from the National Endowment for the Arts made possible the cataloging of Mingus compositions, which were then donated to the Music Division of the New York Public Library for public use. In 1993, the Library of Congress acquired Mingus's collected papers—including scores, sound recordings, correspondence, and photos—in what it described as "the most important acquisition of a manuscript collection relating to jazz in the Library's history". Considering the number of compositions that Charles Mingus wrote, his works have not been recorded as often as those of comparable jazz composers. The only Mingus tribute albums recorded during his lifetime were baritone saxophonist Pepper Adams's album, Pepper Adams Plays the Compositions of Charlie Mingus, in 1963, and Joni Mitchell's album Mingus, in 1979. Of all his works, his elegy for Lester Young, "Goodbye Pork Pie Hat" (from Mingus Ah Um), has probably had the most recordings. The song has been covered by both jazz and non-jazz artists, such as Jeff Beck, Andy Summers, Eugene Chadbourne, and Bert Jansch and John Renbourn, with and without Pentangle. Joni Mitchell sang a version with lyrics that she wrote for it. Elvis Costello has recorded "Hora Decubitus" (from Mingus Mingus Mingus Mingus Mingus) on My Flame Burns Blue (2006). "Better Git It in Your Soul" was covered by Davey Graham on his album "Folk, Blues, and Beyond". Trumpeter Ron Miles performs a version of "Pithecanthropus Erectus" on his CD "Witness". New York Ska Jazz Ensemble has done a cover of Mingus's "Haitian Fight Song", as have the British folk rock group Pentangle and others. Hal Willner's 1992 tribute album Weird Nightmare: Meditations on Mingus (Columbia Records) contains idiosyncratic renditions of Mingus's works involving numerous popular musicians, including Chuck D, Keith Richards, Henry Rollins, and Dr. John. The Italian band Quintorigo recorded an entire album devoted to Mingus's music, titled Play Mingus. Gunther Schuller's edition of Mingus's "Epitaph", which premiered at Lincoln Center in 1989, was subsequently released on Columbia/Sony Records. One of the most elaborate tributes to Mingus came on September 29, 1969, at a festival honoring him, when Duke Ellington performed The Clown, with Ellington himself reading Jean Shepherd's narration. It was long believed that no recording of this performance existed; however, one was discovered and premiered on July 11, 2013, by Dry River Jazz host Trevor Hodgkins for NPR member station KRWG-FM, with re-airings on July 13, 2013, and July 26, 2014. Mingus's elegy for Duke, "Duke Ellington's Sound of Love", was recorded by Kevin Mahogany on Double Rainbow (1993) and Anita Wardell on Why Do You Cry? (1995).
[ { "paragraph_id": 0, "text": "Charles Mingus Jr. (April 22, 1922 – January 5, 1979) was an American jazz upright bassist, composer, bandleader, pianist, and author. A major proponent of collective improvisation, he is considered to be one of the greatest jazz musicians and composers in history, with a career spanning three decades and collaborations with other jazz greats such as Duke Ellington, Charlie Parker, Max Roach, and Eric Dolphy. Mingus's work ranged from advanced bebop and avant-garde jazz with small and midsize ensembles, to pioneering the post-bop style on seminal recordings like Pithecanthropus Erectus (1956) and Mingus Ah Um (1959), and progressive big band experiments such as The Black Saint and the Sinner Lady (1963).", "title": "" }, { "paragraph_id": 1, "text": "Mingus's compositions continue to be played by contemporary musicians ranging from the repertory bands Mingus Big Band, Mingus Dynasty, and Mingus Orchestra, to the high school students who play the charts and compete in the Charles Mingus High School Competition. In 1993, the Library of Congress acquired Mingus's collected papers— including scores, sound recordings, correspondence and photos— in what they described as \"the most important acquisition of a manuscript collection relating to jazz in the Library's history\".", "title": "" }, { "paragraph_id": 2, "text": "Charles Mingus was born in Nogales, Arizona. His father, Charles Mingus Sr., was a sergeant in the U.S. Army. Mingus Jr. was largely raised in the Watts area of Los Angeles.", "title": "Biography" }, { "paragraph_id": 3, "text": "Mingus's ethnic background was complex. His ancestry included German American, African American, and Native American heritage. His maternal grandfather was a Chinese British subject from Hong Kong, and his maternal grandmother was an African American from the southern United States. Mingus was the great-great-great-grandson of the family's founding patriarch who was, by most accounts, a German immigrant. In Mingus's autobiography Beneath the Underdog, his mother was described as \"the daughter of an English/Chinese man and a South-American woman\", and his father was the son \"of a black farm worker and a Swedish woman\". Charles Mingus Sr. claims to have been raised by his mother and her husband as a white person until he was fourteen, when his mother revealed to her family that the child's true father was a black slave, after which he had to run away from his family and live on his own. The autobiography does not confirm whether Charles Mingus Sr. or Mingus himself believed this story was true, or whether it was merely an embellished version of the Mingus family's lineage. According to new information used to educate visitors to Mingus Mill in the Great Smoky Mountains National Park, included in signs unveiled May 23, 2023, the father of Mingus Sr. was former slave Daniel Mingus, owned by the family of his mother Clarinda Mingus, a white woman. When Clarinda married a white man, Mingus Sr. was left with his white grandfather and great-grandparents. His father, who later changed his name to West, apparently did not have a relationship with Mingus Sr.", "title": "Biography" }, { "paragraph_id": 4, "text": "His mother allowed only church-related music in their home, but Mingus developed an early love for other music, especially that of Duke Ellington. 
He studied trombone, and later cello, although he was unable to follow the cello professionally because, at the time, it was nearly impossible for a black musician to make a career of classical music, and the cello was not accepted as a jazz instrument. Despite this, Mingus was still attached to the cello; as he studied bass with Red Callender in the late 1930s, Callender even commented that the cello was still Mingus's main instrument. In Beneath the Underdog, Mingus states that he did not actually start learning bass until Buddy Collette accepted him into his swing band under the stipulation that he be the band's bass player.", "title": "Biography" }, { "paragraph_id": 5, "text": "Due to a poor education, the young Mingus could not read musical notation quickly enough to join the local youth orchestra. This had a serious impact on his early musical experiences, leaving him feeling ostracized from the classical music world. These early experiences, in addition to his lifelong confrontations with racism, were reflected in his music, which often focused on themes of racism, discrimination and (in)justice.", "title": "Biography" }, { "paragraph_id": 6, "text": "Much of the cello technique he learned was applicable to double bass when he took up the instrument in high school. He studied for five years with Herman Reinshagen, principal bassist of the New York Philharmonic, and compositional techniques with Lloyd Reese. Throughout much of his career, he played a bass made in 1927 by the German maker Ernst Heinrich Roth.", "title": "Biography" }, { "paragraph_id": 7, "text": "Beginning in his teen years, Mingus was writing quite advanced pieces; many are similar to Third Stream because they incorporate elements of classical music. A number of them were recorded in 1960 with conductor Gunther Schuller, and released as Pre-Bird, referring to Charlie \"Bird\" Parker; Mingus was one of many musicians whose perspectives on music were altered by Parker into \"pre- and post-Bird\" eras.", "title": "Biography" }, { "paragraph_id": 8, "text": "Mingus gained a reputation as a bass prodigy. His first major professional job was playing with former Ellington clarinetist Barney Bigard. He toured with Louis Armstrong in 1943, and by early 1945 was recording in Los Angeles in a band led by Russell Jacquet, which also included Teddy Edwards, Maurice James Simon, Wild Bill Davis, and Chico Hamilton, and in May that year, in Hollywood, again with Edwards, in a band led by Howard McGhee.", "title": "Biography" }, { "paragraph_id": 9, "text": "He then played with Lionel Hampton's band in the late 1940s; Hampton performed and recorded several Mingus pieces. A popular trio of Mingus, Red Norvo, and Tal Farlow in 1950 and 1951 received considerable acclaim, but Mingus's race caused problems with some club owners and he left the group. Mingus was briefly a member of Ellington's band in 1953, as a substitute for bassist Wendell Marshall; however, Mingus's notorious temper led to his being one of the few musicians personally fired by Ellington (Bubber Miley and drummer Bobby Durham are among the others), after a backstage fight between Mingus and Juan Tizol.", "title": "Biography" }, { "paragraph_id": 10, "text": "Also in the early 1950s, before attaining commercial recognition as a bandleader, Mingus played gigs with Charlie Parker, whose compositions and improvisations greatly inspired and influenced him. 
Mingus considered Parker the greatest genius and innovator in jazz history, but he had a love-hate relationship with Parker's legacy. Mingus blamed the Parker mythology for a derivative crop of pretenders to Parker's throne. He was also conflicted and sometimes disgusted by Parker's self-destructive habits and the romanticized lure of drug addiction they offered to other jazz musicians. In response to the many sax players who imitated Parker, Mingus titled a song \"If Charlie Parker Were a Gunslinger, There'd Be a Whole Lot of Dead Copycats\" (released on Mingus Dynasty as \"Gunslinging Bird\").", "title": "Biography" }, { "paragraph_id": 11, "text": "Mingus was married four times. His wives were Jeanne Gross, Lucille (Celia) Germanis, Judy Starkey, and Susan Graham Ungaro.", "title": "Biography" }, { "paragraph_id": 12, "text": "In 1952, Mingus co-founded Debut Records with Max Roach so he could conduct his recording career as he saw fit. The name originated from his desire to document unrecorded young musicians. Despite this, the best-known recording the company issued was of the most prominent figures in bebop. On May 15, 1953, Mingus joined Dizzy Gillespie, Parker, Bud Powell, and Roach for a concert at Massey Hall in Toronto, which is the last recorded documentation of Gillespie and Parker playing together. After the event, Mingus chose to overdub his barely audible bass part back in New York; the original version was issued later. The two 10\" albums of the Massey Hall concert (one featured the trio of Powell, Mingus and Roach) were among Debut Records' earliest releases. Mingus may have objected to the way the major record companies treated musicians, but Gillespie once commented that he did not receive any royalties \"for years and years\" for his Massey Hall appearance. The records, however, are often regarded as among the finest live jazz recordings.", "title": "Biography" }, { "paragraph_id": 13, "text": "One story has it that Mingus was involved in a notorious incident while playing a 1955 club date billed as a \"reunion\" with Parker, Powell, and Roach. Powell, who suffered from alcoholism and mental illness (possibly exacerbated by a severe police beating and electroshock treatments), had to be helped from the stage, unable to play or speak coherently. As Powell's incapacitation became apparent, Parker stood in one spot at a microphone, chanting \"Bud Powell ... Bud Powell ...\" as if beseeching Powell's return. Allegedly, Parker continued this incantation for several minutes after Powell's departure, to his own amusement and Mingus's exasperation. Mingus took another microphone and announced to the crowd, \"Ladies and Gentlemen, please don't associate me with any of this. This is not jazz. These are sick people.\" This was Parker's last public performance; about a week later he died after years of substance abuse.", "title": "Biography" }, { "paragraph_id": 14, "text": "Mingus often worked with a mid-sized ensemble (around 8–10 members) of rotating musicians known as the Jazz Workshop. Mingus broke new ground, constantly demanding that his musicians be able to explore and develop their perceptions on the spot. Those who joined the Workshop (or Sweatshops as they were colorfully dubbed by the musicians) included Pepper Adams, Jaki Byard, Booker Ervin, John Handy, Jimmy Knepper, Charles McPherson, and Horace Parlan. Mingus shaped these musicians into a cohesive improvisational machine that in many ways anticipated free jazz. 
Some musicians dubbed the workshop a \"university\" for jazz.", "title": "Biography" }, { "paragraph_id": 15, "text": "The 1950s are generally regarded as Mingus's most productive and fertile period. Over a ten-year period, he made 30 records for a number of labels (Atlantic, Candid, Columbia, Impulse and others). Mingus had already recorded around ten albums as a bandleader, but 1956 was a breakthrough year for him, with the release of Pithecanthropus Erectus, arguably his first major work as both a bandleader and composer. Like Ellington, Mingus wrote songs with specific musicians in mind, and his band for Erectus included adventurous musicians: piano player Mal Waldron, alto saxophonist Jackie McLean and the Sonny Rollins-influenced tenor of J. R. Monterose. The title song is a ten-minute tone poem, depicting the rise of man from his hominid roots (Pithecanthropus erectus) to an eventual downfall. A section of the piece was free improvisation, free of structure or theme.", "title": "Biography" }, { "paragraph_id": 16, "text": "Another album from this period, The Clown (1957, also on Atlantic Records), the title track of which features narration by humorist Jean Shepherd, was the first to feature drummer Dannie Richmond, who remained his preferred drummer until Mingus's death in 1979. The two men formed one of the most impressive and versatile rhythm sections in jazz. Both were accomplished performers seeking to stretch the boundaries of their music while staying true to its roots. When joined by pianist Jaki Byard, they were dubbed \"The Almighty Three\".", "title": "Biography" }, { "paragraph_id": 17, "text": "In 1959, Mingus and his jazz workshop musicians recorded one of his best-known albums, Mingus Ah Um. Even in a year of standout masterpieces, including Dave Brubeck's Time Out, Miles Davis's Kind of Blue, John Coltrane's Giant Steps, and Ornette Coleman's The Shape of Jazz to Come, this was a major achievement, featuring such classic Mingus compositions as \"Goodbye Pork Pie Hat\" (an elegy to Lester Young) and the vocal-less version of \"Fables of Faubus\" (a protest against segregationist Arkansas governor Orval Faubus that features double-time sections). In 2003 the album's legacy was cemented when it was inducted into the National Recording Registry. Also during 1959, Mingus recorded the album Blues & Roots, which was released the following year. Mingus said in his liner notes: \"I was born swinging and clapped my hands in church as a little boy, but I've grown up and I like to do things other than just swing. But blues can do more than just swing.\"", "title": "Biography" }, { "paragraph_id": 18, "text": "Mingus witnessed Ornette Coleman's legendary—and controversial—1960 appearances at New York City's Five Spot jazz club. He initially expressed rather mixed feelings for Coleman's innovative music: \"... if the free-form guys could play the same tune twice, then I would say they were playing something ... Most of the time they use their fingers on the saxophone and they don't even know what's going to come out. They're experimenting.\" That same year, however, Mingus formed a quartet with Richmond, trumpeter Ted Curson and multi-instrumentalist Eric Dolphy. This ensemble featured the same instruments as Coleman's quartet, and is often regarded as Mingus rising to the challenging new standard established by Coleman. The quartet recorded on both Charles Mingus Presents Charles Mingus and Mingus. 
The former also features the version of \"Fables of Faubus\" with lyrics, aptly titled \"Original Faubus Fables\".", "title": "Biography" }, { "paragraph_id": 19, "text": "In 1961, Mingus spent time staying at the house of his mother's sister (Louise) and her husband, Fess Williams, a clarinetist and saxophonist, in Jamaica, Queens. Subsequently, Mingus invited Williams to play at the 1962 Town Hall Concert.", "title": "Biography" }, { "paragraph_id": 20, "text": "Only one misstep occurred in this era: The Town Hall Concert in October 1962, a \"live workshop\"/recording session. With an ambitious program, the event was plagued with troubles from its inception. Mingus's vision, now known as Epitaph, was finally realized by conductor Gunther Schuller in a concert in 1989, a decade after Mingus died.", "title": "Biography" }, { "paragraph_id": 21, "text": "Outside of music, Mingus published a mail-order how-to guide in 1954 called The Charles Mingus CAT-alog for Toilet Training Your Cat. The guide explained in detail how to get a cat to use a human toilet. Sixty years later, in 2014, the late American character actor Reg E. Cathey performed a voice recording of the complete guide for Studio 360.", "title": "Biography" }, { "paragraph_id": 22, "text": "In 1963, Mingus released The Black Saint and the Sinner Lady, described as \"one of the greatest achievements in orchestration by any composer in jazz history.\" The album was also unique in that Mingus asked his psychotherapist, Dr. Edmund Pollock, to provide notes for the record.", "title": "Biography" }, { "paragraph_id": 23, "text": "Mingus also released Mingus Plays Piano, an unaccompanied album featuring some fully improvised pieces, in 1963.", "title": "Biography" }, { "paragraph_id": 24, "text": "In addition, 1963 saw the release of Mingus Mingus Mingus Mingus Mingus, an album praised by critic Nat Hentoff.", "title": "Biography" }, { "paragraph_id": 25, "text": "In 1964 Mingus put together one of his best-known groups, a sextet including Dannie Richmond, Jaki Byard, Eric Dolphy, trumpeter Johnny Coles, and tenor saxophonist Clifford Jordan. The group was recorded frequently during its short existence. Mosaic Records has released a 7-CD set, Charles Mingus – The Jazz Workshop Concerts 1964–65, featuring concerts from Town Hall, Amsterdam, Monterey ’64, Monterey ’65, and Minneapolis. Coles fell ill and left during a European tour. Dolphy stayed in Europe after the tour ended, and died suddenly in Berlin on June 28, 1964. 1964 was also the year that Mingus met his future wife, Sue Graham Ungaro. The couple were married in 1966 by Allen Ginsberg. Facing financial hardship, Mingus was evicted from his New York home in 1966.", "title": "Biography" }, { "paragraph_id": 26, "text": "Mingus's pace slowed somewhat in the late 1960s and early 1970s. In 1974, after his 1970 sextet with Charles McPherson, Eddie Preston and Bobby Jones disbanded, he formed a quintet with Richmond, pianist Don Pullen, trumpeter Jack Walrath and saxophonist George Adams. They recorded two well-received albums, Changes One and Changes Two. Mingus also played with Charles McPherson in many of his groups during this time. Cumbia and Jazz Fusion in 1976 sought to blend Colombian music (the \"Cumbia\" of the title) with more traditional jazz forms. 
In 1971, Mingus taught for a semester at the University at Buffalo, The State University of New York as the Slee Professor of Music.", "title": "Biography" }, { "paragraph_id": 27, "text": "By the mid-1970s, Mingus was suffering from amyotrophic lateral sclerosis (ALS). His once formidable bass technique declined until he could no longer play the instrument. He continued composing, however, and supervised a number of recordings before his death. At the time of his death, he was working with Joni Mitchell on an album eventually titled Mingus, which included lyrics added by Mitchell to his compositions, including \"Goodbye Pork Pie Hat\". The album featured the talents of Wayne Shorter, Herbie Hancock, and another influential bassist and composer, Jaco Pastorius.", "title": "Biography" }, { "paragraph_id": 28, "text": "Mingus died on January 5, 1979, aged 56, in Cuernavaca, Mexico, where he had traveled for treatment and convalescence. His ashes were scattered in the Ganges River.", "title": "Biography" }, { "paragraph_id": 29, "text": "His compositions retained the hot and soulful feel of hard bop, drawing heavily from black gospel music and blues, while sometimes containing elements of third stream, free jazz, and classical music. He once cited Duke Ellington and church as his main influences.", "title": "Musical style" }, { "paragraph_id": 30, "text": "Mingus espoused collective improvisation, similar to the old New Orleans jazz parades, paying particular attention to how each band member interacted with the group as a whole. In creating his bands, he looked not only at the skills of the available musicians, but also their personalities. Many musicians passed through his bands and later went on to impressive careers. He recruited talented and sometimes little-known artists, whom he utilized to assemble unconventional instrumental configurations. As a performer, Mingus was a pioneer in double bass technique, widely recognized as one of the instrument's most proficient players.", "title": "Musical style" }, { "paragraph_id": 31, "text": "Because of his brilliant writing for midsize ensembles, and his catering to and emphasizing the strengths of the musicians in his groups, Mingus is often considered the heir of Duke Ellington, for whom he expressed great admiration and with whom he collaborated on the record Money Jungle. Dizzy Gillespie had once said Mingus reminded him \"of a young Duke\", citing their shared \"organizational genius\".", "title": "Musical style" }, { "paragraph_id": 32, "text": "Nearly as well known as his ambitious music was Mingus's often fearsome temperament, which earned him the nickname \"the Angry Man of Jazz\". His refusal to compromise his musical integrity led to many onstage eruptions, exhortations to musicians, and dismissals. Although respected for his musical talents, Mingus was sometimes feared for his occasionally violent onstage temper, which was at times directed at members of his band and other times aimed at the audience. He was physically large, prone to obesity (especially in his later years), and was by all accounts often intimidating and frightening when expressing anger or displeasure. 
When confronted with a nightclub audience talking and clinking ice in their glasses while he performed, Mingus stopped his band and loudly chastised the audience, stating: \"Isaac Stern doesn't have to put up with this shit.\" Mingus destroyed a $20,000 bass in response to audience heckling at the Five Spot in New York City.", "title": "Personality and temper" }, { "paragraph_id": 33, "text": "Guitarist and singer Jackie Paris was a witness to Mingus's irascibility. Paris recalls his time in the Jazz Workshop: \"He chased everybody off the stand except [drummer] Paul Motian and me ... The three of us just wailed on the blues for about an hour and a half before he called the other cats back.\"", "title": "Personality and temper" }, { "paragraph_id": 34, "text": "On October 12, 1962, while the two men were working together at Mingus's apartment on a score for his upcoming concert at the Town Hall in New York, Mingus punched Jimmy Knepper in the mouth after Knepper refused to take on more work. Mingus's blow broke off a crowned tooth and its underlying stub. According to Knepper, this ruined his embouchure and resulted in the permanent loss of the top octave of his range on the trombone – a significant handicap for any professional trombonist. This attack temporarily ended their working relationship, and Knepper was unable to perform at the concert. Charged with assault, Mingus appeared in court in January 1963 and was given a suspended sentence. Knepper worked with Mingus again in 1977 and played extensively with the Mingus Dynasty, formed after Mingus's death in 1979.", "title": "Personality and temper" }, { "paragraph_id": 35, "text": "In addition to bouts of ill temper, Mingus was prone to clinical depression and tended to have brief periods of extreme creative activity intermixed with fairly long stretches of greatly decreased output, such as the five-year period following the death of Eric Dolphy.", "title": "Personality and temper" }, { "paragraph_id": 36, "text": "In 1966, Mingus was evicted from his apartment at 5 Great Jones Street in New York City for nonpayment of rent, captured in the 1968 documentary film Mingus: Charlie Mingus 1968, directed by Thomas Reichman. The film also features Mingus performing in clubs and in the apartment, firing a .410 shotgun indoors, composing at the piano, playing with and taking care of his young daughter Caroline, and discussing love, art, politics, and the music school he had hoped to create.", "title": "Personality and temper" }, { "paragraph_id": 37, "text": "Charles Mingus's music is currently being performed and reinterpreted by the Mingus Big Band, which in October 2008 began playing every Monday at Jazz Standard in New York City, and often tours the rest of the U.S. and Europe. The Mingus Big Band, the Mingus Orchestra, and the Mingus Dynasty band are managed by Jazz Workshop, Inc. and run by Mingus's widow, Sue Graham Mingus.", "title": "Legacy" }, { "paragraph_id": 38, "text": "Elvis Costello has written lyrics for a few Mingus pieces. He once sang lyrics for one piece, \"Invisible Lady\", backed by the Mingus Big Band on the album Tonight at Noon: Three or Four Shades of Love.", "title": "Legacy" }, { "paragraph_id": 39, "text": "Epitaph is considered one of Charles Mingus's masterpieces. The composition is 4,235 measures long, requires two hours to perform, and is one of the longest jazz pieces ever written. Epitaph was only discovered in full by musicologist Andrew Homzy during the cataloging process after Mingus's death. 
With the help of a grant from the Ford Foundation, the score and instrumental parts were copied, and the piece itself was premiered by a 30-piece orchestra, conducted by Gunther Schuller. This concert was produced by Mingus's widow, Sue Graham Mingus, at Alice Tully Hall on June 3, 1989, 10 years after Mingus's death. It was performed again at several concerts in 2007. The performance at Walt Disney Concert Hall is available on NPR. Hal Leonard published the complete score in 2008.", "title": "Legacy" }, { "paragraph_id": 40, "text": "Mingus wrote the sprawling, exaggerated quasi-autobiography Beneath the Underdog: His World as Composed by Mingus throughout the 1960s, and it was published in 1971. Its \"stream of consciousness\" style covered several aspects of his life that had previously been off-record. In addition to his musical and intellectual proliferation, Mingus goes into great detail about his perhaps overstated sexual exploits. He claims to have had more than 31 affairs in the course of his life (including 26 prostitutes in one sitting). This does not include any of his five wives (he claims to have been married to two of them simultaneously). In addition, he asserts that he held a brief career as a pimp. This has never been confirmed.", "title": "Legacy" }, { "paragraph_id": 41, "text": "Mingus's autobiography also serves as an insight into his psyche, as well as his attitudes about race and society. It includes accounts of abuse at the hands of his father from an early age, being bullied as a child, his removal from a white musician's union, and grappling with disapproval while married to white women and other examples of hardship and prejudice.", "title": "Legacy" }, { "paragraph_id": 42, "text": "The work of Charles Mingus has also received attention in academia. According to Ashon Crawley, the musicianship of Charles Mingus provides a salient example of the power of music to unsettle the dualistic, categorical distinction of sacred from profane through otherwise epistemologies. Crawley offers a reading of Mingus that examines the deep imbrication uniting Holiness–Pentecostal aesthetic practices and jazz. Mingus recognized the importance and impact of the midweek gathering of black folks at the Holiness–Pentecostal Church at 79th and Watts in Los Angeles that he would attend with his stepmother or his friend Britt Woodman. Crawley goes on to argue that these visits were the impetus for the song \"Wednesday Prayer Meeting\". Emphasis is placed on the ethical demand of the prayer meeting felt and experienced that, according to Crawley, Mingus attempts to capture. In many ways, \"Wednesday Night Prayer Meeting\" was Mingus's homage to black sociality. By exploring Mingus's homage to black Pentecostal aesthetics, Crawley expounds on how Mingus figured out that those Holiness–Pentecostal gatherings were the constant repetition of the ongoing, deep, intense mode of study, a kind of study wherein the aesthetic forms created could not be severed from the intellectual practice because they were one and also, but not, the same.", "title": "Legacy" }, { "paragraph_id": 43, "text": "Gunther Schuller has suggested that Mingus should be ranked among the most important American composers, jazz or otherwise. In 1988, a grant from the National Endowment for the Arts made possible the cataloging of Mingus compositions, which were then donated to the Music Division of the New York Public Library for public use. 
In 1993, the Library of Congress acquired Mingus's collected papers—including scores, sound recordings, correspondence and photos—in what they described as \"the most important acquisition of a manuscript collection relating to jazz in the Library's history\".", "title": "Legacy" }, { "paragraph_id": 44, "text": "Considering the number of compositions that Charles Mingus wrote, his works have not been recorded as often as those of comparable jazz composers. The only Mingus tribute albums recorded during his lifetime were baritone saxophonist Pepper Adams's album, Pepper Adams Plays the Compositions of Charlie Mingus, in 1963, and Joni Mitchell's album Mingus, in 1979. Of all his works, his elegy for Lester Young, \"Goodbye Pork Pie Hat\" (from Mingus Ah Um), has probably had the most recordings. The song has been covered by both jazz and non-jazz artists, such as Jeff Beck, Andy Summers, Eugene Chadbourne, and Bert Jansch and John Renbourn with and without Pentangle. Joni Mitchell sang a version with lyrics that she wrote for it.", "title": "Legacy" }, { "paragraph_id": 45, "text": "Elvis Costello has recorded \"Hora Decubitus\" (from Mingus Mingus Mingus Mingus Mingus) on My Flame Burns Blue (2006). \"Better Git It in Your Soul\" was covered by Davey Graham on his album \"Folk, Blues, and Beyond\". Trumpeter Ron Miles performs a version of \"Pithecanthropus Erectus\" on his CD \"Witness\". New York Ska Jazz Ensemble has done a cover of Mingus's \"Haitian Fight Song\", as have the British folk rock group Pentangle and others. Hal Willner's 1992 tribute album Weird Nightmare: Meditations on Mingus (Columbia Records) contains idiosyncratic renditions of Mingus's works involving numerous popular musicians including Chuck D, Keith Richards, Henry Rollins and Dr. John. The Italian band Quintorigo recorded an entire album devoted to Mingus's music, titled Play Mingus.", "title": "Legacy" }, { "paragraph_id": 46, "text": "Gunther Schuller's edition of Mingus's \"Epitaph\", which premiered at Lincoln Center in 1989, was subsequently released on Columbia/Sony Records.", "title": "Legacy" }, { "paragraph_id": 47, "text": "One of the most elaborate tributes to Mingus came on September 29, 1969, at a festival honoring him. Duke Ellington performed The Clown, with Ellington reading Jean Shepherd's narration. It was long believed that no recording of this performance existed; however, one was discovered and premiered on July 11, 2013, by Dry River Jazz host Trevor Hodgkins for NPR member station KRWG-FM, with re-airings on July 13, 2013, and July 26, 2014. Mingus's elegy for Duke, \"Duke Ellington's Sound Of Love\", was recorded by Kevin Mahogany on Double Rainbow (1993) and Anita Wardell on Why Do You Cry? (1995).", "title": "Legacy" } ]
Charles Mingus Jr. was an American jazz upright bassist, composer, bandleader, pianist, and author. A major proponent of collective improvisation, he is considered to be one of the greatest jazz musicians and composers in history, with a career spanning three decades and collaborations with other jazz greats such as Duke Ellington, Charlie Parker, Max Roach, and Eric Dolphy. Mingus's work ranged from advanced bebop and avant-garde jazz with small and midsize ensembles, to pioneering the post-bop style on seminal recordings like Pithecanthropus Erectus (1956) and Mingus Ah Um (1959), and progressive big band experiments such as The Black Saint and the Sinner Lady (1963). Mingus's compositions continue to be played by contemporary musicians ranging from the repertory bands Mingus Big Band, Mingus Dynasty, and Mingus Orchestra, to the high school students who play the charts and compete in the Charles Mingus High School Competition. In 1993, the Library of Congress acquired Mingus's collected papers— including scores, sound recordings, correspondence and photos— in what they described as "the most important acquisition of a manuscript collection relating to jazz in the Library's history".
2002-01-07T22:34:30Z
2023-11-19T12:42:35Z
[ "Template:Short description", "Template:Main", "Template:Reflist", "Template:Cbignore", "Template:Archival records", "Template:Curlie", "Template:Use mdy dates", "Template:Infobox musical artist", "Template:ISBN", "Template:Sisterlinks", "Template:IMDb name", "Template:Official website", "Template:Cn", "Template:Citation needed", "Template:Weasel inline", "Template:Cite news", "Template:Citation", "Template:Cite magazine", "Template:Cite web", "Template:Cite book", "Template:Webarchive", "Template:Charles Mingus", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Charles_Mingus
7,669
Centimetre
A centimetre (international spelling) or centimeter (American spelling) (SI symbol cm) is a unit of length in the International System of Units (SI), equal to one hundredth of a metre, centi being the SI prefix for a factor of 1/100. Equivalently, there are 100 centimetres in 1 metre. The centimetre was the base unit of length in the now deprecated centimetre–gram–second (CGS) system of units. Though for many physical quantities, SI prefixes for factors of 10³—like milli- and kilo-—are often preferred by technicians, the centimetre remains a practical unit of length for many everyday measurements; for instance, human height is most commonly measured in centimetres. A centimetre is approximately the width of the fingernail of an average adult person. One millilitre is defined as one cubic centimetre, under the SI system of units. In addition to its use in the measurement of length, the centimetre is used: For the purposes of compatibility with Chinese, Japanese and Korean (CJK) characters, Unicode has symbols for: They are mostly used only with East Asian fixed-width CJK fonts, because they are equal in size to one Chinese character.
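The unit relations above are simple enough to check mechanically. A minimal Python sketch encoding the two equivalences stated in the text, 1 m = 100 cm and 1 mL = 1 cm³ (the function names are illustrative, not from any standard library):

    CM_PER_M = 100  # there are 100 centimetres in 1 metre

    def metres_to_centimetres(m: float) -> float:
        # centi- is the SI prefix for a factor of 1/100, so multiply by 100
        return m * CM_PER_M

    def centimetres_to_metres(cm: float) -> float:
        return cm / CM_PER_M

    def cubic_cm_to_millilitres(cc: float) -> float:
        # 1 millilitre is defined as exactly 1 cubic centimetre
        return cc

    assert metres_to_centimetres(1.75) == 175.0   # e.g. a human height in cm
    assert centimetres_to_metres(250.0) == 2.5
    assert cubic_cm_to_millilitres(330.0) == 330.0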
[ { "paragraph_id": 0, "text": "A centimetre (international spelling) or centimeter (American spelling) (SI symbol cm) is a unit of length in the International System of Units (SI), equal to one hundredth of a metre, centi being the SI prefix for a factor of 1/100. Equivalently, there are 100 centimetres in 1 metre. The centimetre was the base unit of length in the now deprecated centimetre–gram–second (CGS) system of units.", "title": "" }, { "paragraph_id": 1, "text": "Though for many physical quantities, SI prefixes for factors of 10—like milli- and kilo-—are often preferred by technicians, the centimetre remains a practical unit of length for many everyday measurements; for instance, human height is most commonly measured in centimetres. A centimetre is approximately the width of the fingernail of an average adult person.", "title": "" }, { "paragraph_id": 2, "text": "One millilitre is defined as one cubic centimetre, under the SI system of units.", "title": "Equivalence to other units of length" }, { "paragraph_id": 3, "text": "In addition to its use in the measurement of length, the centimetre is used:", "title": "Other uses" }, { "paragraph_id": 4, "text": "For the purposes of compatibility with Chinese, Japanese and Korean (CJK) characters, Unicode has symbols for:", "title": "Unicode symbols" }, { "paragraph_id": 5, "text": "They are mostly used only with East Asian fixed-width CJK fonts, because they are equal in size to one Chinese character.", "title": "Unicode symbols" } ]
A centimetre (international spelling) or centimeter (American spelling) (SI symbol cm) is a unit of length in the International System of Units (SI), equal to one hundredth of a metre, centi being the SI prefix for a factor of 1/100. Equivalently, there are 100 centimetres in 1 metre. The centimetre was the base unit of length in the now deprecated centimetre–gram–second (CGS) system of units. Though for many physical quantities, SI prefixes for factors of 10³—like milli- and kilo-—are often preferred by technicians, the centimetre remains a practical unit of length for many everyday measurements; for instance, human height is most commonly measured in centimetres. A centimetre is approximately the width of the fingernail of an average adult person.
2002-02-25T15:43:11Z
2023-11-07T16:06:49Z
[ "Template:Reflist", "Template:Cite web", "Template:Distinguish", "Template:Unichar", "Template:Authority control", "Template:Wiktionary", "Template:Sfrac", "Template:SI units of length", "Template:CGS units", "Template:Short description", "Template:Use British English", "Template:Infobox unit", "Template:Broader", "Template:Val", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Centimetre
7,670
Central Coast
Central Coast may refer to:
[ { "paragraph_id": 0, "text": "Central Coast may refer to:", "title": "" } ]
Central Coast may refer to:
2021-11-22T14:11:45Z
[ "Template:Place name disambiguation" ]
https://en.wikipedia.org/wiki/Central_Coast
7,671
Committee on Data of the International Science Council
The Committee on Data of the International Science Council (CODATA) was established in 1966 as the Committee on Data for Science and Technology, originally part of the International Council of Scientific Unions, now part of the International Science Council (ISC). Since November 2023 its president has been the Catalan researcher Mercè Crosas. CODATA exists to promote global collaboration to advance open science and to improve the availability and usability of data for all areas of research. CODATA supports the principle that data produced by research and susceptible to being used for research should be as open as possible and as closed as necessary. CODATA also works to advance the interoperability and the usability of such data; research data should be FAIR (findable, accessible, interoperable and reusable). By promoting the policy, technological, and cultural changes that are essential to promote open science, CODATA helps advance ISC's vision and mission of advancing science as a global public good. The CODATA Strategic Plan 2015 and Prospectus of Strategy and Achievement 2016 identify three priority areas: promoting principles, policies and practices for open data and open science; advancing the frontiers of data science; and building capacity for open science by improving data skills and the functions of national science systems needed to support open data. CODATA achieves these objectives through a number of standing committees and strategic executive-led initiatives, and through its task groups and working groups. CODATA also works closely with member unions and associations of ISC to promote the efforts on open data and open science. CODATA supports the Data Science Journal and collaborates on major data conferences like SciDataCon and International Data Week. In October 2020 CODATA co-organised an International FAIR Symposium together with the GO FAIR initiative to provide a forum for advancing international and cross-domain convergence around FAIR. The event was intended to bring together a global data community with an interest in combining data across domains for a host of research issues – including major global challenges, such as those relating to the Sustainable Development Goals. Outcomes were to link directly to the CODATA Decadal Programme Data for the Planet: making data work for cross-domain grand challenges and to the developments of the GO FAIR community towards the Internet of FAIR data and services. One of the CODATA strategic initiatives and task groups concentrates on the fundamental physical constants. Established in 1969, its purpose is to periodically provide the international scientific and technological communities with an internationally accepted set of values of the fundamental physical constants and closely related conversion factors for use worldwide. The first such CODATA set was published in 1973. Later versions are named based on the year of the data incorporated; the 1986 CODATA (published April 1987) used data up to 1 January 1986. All subsequent releases use data up to the end of the stated year, and are necessarily published a year or two later: 1998 (April 2000), 2002 (January 2005), 2006 (June 2008), 2010 (November 2012), and 2014 (June 2015). The CODATA recommended values of fundamental physical constants are published at the National Institute of Standards and Technology Reference on Constants, Units, and Uncertainty. Since 1998, the task group has produced a new version every four years, incorporating results published up to the end of the specified year. In order to support the redefinition of the SI base units, adopted at the 26th General Conference on Weights and Measures on 16 November 2018, CODATA made a special release that was published in October 2017. 
It incorporates all data up to 1 July 2017, and determines the final numerical values of h, e, k, and N_A that are used for the new SI definitions. The last regular version, with a closing date of 31 December 2018, was used to produce the new 2018 CODATA values that were made available by the time the revised SI came into force on 20 May 2019. This was necessary because the redefinitions have a significant (mostly beneficial) effect on the uncertainties and correlation coefficients reported by CODATA.
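Because those four constants became exact defining values in the revised SI, they can be written down with no uncertainty. A short Python sketch using the published values, with the molar gas constant R = N_A·k derived as a consistency check (the variable names are illustrative):

    # Exact defining constants of the revised SI (CODATA 2017 special adjustment)
    h = 6.62607015e-34     # Planck constant, J·s
    e = 1.602176634e-19    # elementary charge, C
    k = 1.380649e-23       # Boltzmann constant, J/K
    N_A = 6.02214076e23    # Avogadro constant, mol^-1

    # Since the defining values are exact, derived constants such as the
    # molar gas constant R = N_A * k are exact as well.
    R = N_A * k
    print(f"R = {R:.10f} J/(mol K)")  # prints approximately 8.3144626182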
[ { "paragraph_id": 0, "text": "The Committee on Data of the International Science Council (CODATA) was established in 1966 as the Committee on Data for Science and Technology, originally part of the International Council of Scientific Unions, now part of the International Science Council (ISC). Since nov 2023 its President is the Catalan Researcher Mercè Crosas.", "title": "" }, { "paragraph_id": 1, "text": "CODATA exists to promote global collaboration to advance open science and to improve the availability and usability of data for all areas of research. CODATA supports the principle that data produced by research and susceptible to being used for research should be as open as possible and as closed as necessary. CODATA works also to advance the interoperability and the usability of such data; research data should be FAIR (findable, accessible, interoperable and reusable). By promoting the policy, technological, and cultural changes that are essential to promote open science, CODATA helps advance ISC's vision and mission of advancing science as a global public good.", "title": "" }, { "paragraph_id": 2, "text": "The CODATA Strategic Plan 2015 and Prospectus of Strategy and Achievement 2016 identify three priority areas:", "title": "" }, { "paragraph_id": 3, "text": "CODATA achieves these objectives through a number of standing committees and strategic executive led initiatives, and through its task groups and working groups. CODATA also works closely with member unions and associations of ISC to promote the efforts on open data and open science.", "title": "" }, { "paragraph_id": 4, "text": "CODATA supports the Data Science Journal and collaborates on major data conferences like SciDataCon and International Data Week.", "title": "Publications and conferences" }, { "paragraph_id": 5, "text": "In October 2020 CODATA is co-organising an International FAIR Symposium together with the GO FAIR initiative to provide a forum for advancing international and cross-domain convergence around FAIR. The event will bring together a global data community with an interest in combining data across domains for a host of research issues – including major global challenges, such as those relating to the Sustainable Development Goals. Outcomes will directly link to the CODATA Decadal Programme Data for the Planet: making data work for cross-domain grand challenges and to the developments of GO FAIR community towards the Internet of FAIR data and services.", "title": "Publications and conferences" }, { "paragraph_id": 6, "text": "", "title": "Publications and conferences" }, { "paragraph_id": 7, "text": "One of the CODATA strategic Initiatives and Task Groups concentrates on Fundamental Physical Constants. Established in 1969, its purpose is to periodically provide the international scientific and technological communities with an internationally accepted set of values of the fundamental physical constants and closely related conversion factors for use worldwide.", "title": "Task Group on Fundamental Physical Constants" }, { "paragraph_id": 8, "text": "The first such CODATA set was published in 1973. Later versions are named based on the year of the data incorporated; the 1986 CODATA (published April 1987) used data up to 1 January 1986. 
All subsequent releases use data up to the end of the stated year, and are necessarily published a year or two later: 1998 (April 2000), 2002 (January 2005), 2006 (June 2008), 2010 (November 2012), and 2014 (June 2015).", "title": "Task Group on Fundamental Physical Constants" }, { "paragraph_id": 9, "text": "The CODATA recommended values of fundamental physical constants are published at the National Institute of Standards and Technology Reference on Constants, Units, and Uncertainty.", "title": "Task Group on Fundamental Physical Constants" }, { "paragraph_id": 10, "text": "Since 1998, the task group has produced a new version every four years, incorporating results published up to the end of the specified year.", "title": "Task Group on Fundamental Physical Constants" }, { "paragraph_id": 11, "text": "In order to support the redefinition of the SI base units, adopted at the 26th General Conference on Weights and Measures on 16 November 2018, CODATA made a special release that was published in October 2017. It incorporates all data up to 1 July 2017, and determines the final numerical values of h, e, k, and N_A that are used for the new SI definitions.", "title": "Task Group on Fundamental Physical Constants" }, { "paragraph_id": 12, "text": "The last regular version, with a closing date of 31 December 2018, was used to produce the new 2018 CODATA values that were made available by the time the revised SI came into force on 20 May 2019. This was necessary because the redefinitions have a significant (mostly beneficial) effect on the uncertainties and correlation coefficients reported by CODATA.", "title": "Task Group on Fundamental Physical Constants" } ]
The Committee on Data of the International Science Council (CODATA) was established in 1966 as the Committee on Data for Science and Technology, originally part of the International Council of Scientific Unions, now part of the International Science Council (ISC). Since November 2023 its president has been the Catalan researcher Mercè Crosas. CODATA exists to promote global collaboration to advance open science and to improve the availability and usability of data for all areas of research. CODATA supports the principle that data produced by research and susceptible to being used for research should be as open as possible and as closed as necessary. CODATA also works to advance the interoperability and the usability of such data; research data should be FAIR. By promoting the policy, technological, and cultural changes that are essential to promote open science, CODATA helps advance ISC's vision and mission of advancing science as a global public good. The CODATA Strategic Plan 2015 and Prospectus of Strategy and Achievement 2016 identify three priority areas: promoting principles, policies and practices for open data and open science; advancing the frontiers of data science; building capacity for open science by improving data skills and the functions of national science systems needed to support open data. CODATA achieves these objectives through a number of standing committees and strategic executive-led initiatives, and through its task groups and working groups. CODATA also works closely with member unions and associations of ISC to promote the efforts on open data and open science.
2002-02-25T15:43:11Z
2023-12-02T06:43:46Z
[ "Template:Official website", "Template:Authority control", "Template:Short description", "Template:Infobox organization", "Template:Rp", "Template:Cite web", "Template:Cite journal", "Template:SIbrochure9th", "Template:Multiple issues", "Template:Redir", "Template:Anchor", "Template:Reflist", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Committee_on_Data_of_the_International_Science_Council
7,672
Chuck Jones
Charles Martin Jones (September 21, 1912 – February 22, 2002) was an American animator, painter, voice actor and filmmaker, best known for his work with Warner Bros. Cartoons on the Looney Tunes and Merrie Melodies series of shorts. He wrote, produced, and/or directed many classic animated cartoon shorts starring Bugs Bunny, Daffy Duck, Wile E. Coyote and the Road Runner, Pepé Le Pew, Marvin the Martian, and Porky Pig, among others. Jones started his career in 1933 alongside Tex Avery, Friz Freleng, Bob Clampett, and Robert McKimson at Leon Schlesinger Productions' Termite Terrace studio, the studio that made Warner Bros. cartoons, where they created and developed the Looney Tunes characters. During the Second World War, Jones directed many of the Private Snafu (1943–1946) shorts which were shown to members of the United States military. After his career at Warner Bros. ended in 1962, Jones started Sib Tower 12 Productions and began producing cartoons for Metro-Goldwyn-Mayer, including a new series of Tom and Jerry shorts (1963–1967) as well as the television adaptations of Dr. Seuss's How the Grinch Stole Christmas! (1966) and Horton Hears a Who! (1970). He later started his own studio, Chuck Jones Enterprises, where he directed and produced the film adaptation of Norton Juster's The Phantom Tollbooth (1970). Jones's work, along with that of the other animators, was showcased in the documentary Bugs Bunny: Superstar (1975). Jones directed the first feature-length animated Looney Tunes compilation film, The Bugs Bunny/Road Runner Movie (1979). In 1990 he wrote his memoir, Chuck Amuck: The Life and Times of an Animated Cartoonist, which was made into a documentary film, Chuck Amuck (1991). He was also profiled in the American Masters documentary Chuck Jones: Extremes & Inbetweens – A Life in Animation (2000), which aired on PBS. Jones won three Academy Awards for the cartoons he directed: For Scent-imental Reasons, So Much for So Little, and The Dot and the Line. Robin Williams presented Jones with an Honorary Academy Award in 1996 for his work in the animation industry. Film historian Leonard Maltin has praised Jones's work at Warner Bros., MGM and Chuck Jones Enterprises. In Jerry Beck's The 50 Greatest Cartoons, a group of animation professionals ranked What's Opera, Doc? (1957) as the greatest cartoon of all time; ten of the entries were directed by Jones, including Duck Amuck (1953), Duck Dodgers in the 24½th Century (1953), One Froggy Evening (1955), Rabbit of Seville (1950), and Rabbit Seasoning (1952). Charles Martin Jones was born on September 21, 1912, in Spokane, Washington, to Mabel McQuiddy (née Martin) (1882–1971) and Charles Adams Jones (1883–?). When he was six months old, he moved with his parents and three siblings to Los Angeles, California. In his autobiography, Chuck Amuck, Jones credits his artistic bent to circumstances surrounding his father, who was an unsuccessful businessman in California in the 1920s. He recounted that his father would start every new business venture by purchasing new stationery and new pencils with the company name on them. When the business failed, his father would quietly turn the huge stacks of useless stationery and pencils over to his children, requiring them to use up all the material as fast as possible. Armed with an endless supply of high-quality paper and pencils, the children drew constantly. 
Later, in one art school class, the professor gravely informed the students that they each had 100,000 bad drawings in them that they must first get past before they could possibly draw anything worthwhile. Jones recounted years later that this pronouncement came as a great relief to him, as he was well past the 200,000 mark, having used up all that stationery. Jones and several of his siblings went on to artistic careers. During his artistic education, he worked part-time as a janitor. After graduating from Chouinard Art Institute, Jones got a phone call from a friend named Fred Kopietz, who had been hired by the Ub Iwerks studio and offered him a job. He worked his way up in the animation industry, starting as a cel washer; "then I moved up to become a painter in black and white, some color. Then I went on to take animator's drawings and traced them onto the celluloid. Then I became what they call an in-betweener, which is the guy that does the drawing between the drawings the animator makes". While at Iwerks, he met a cel painter named Dorothy Webster, who later became his first wife. Jones joined Leon Schlesinger Productions, the independent studio that produced Looney Tunes and Merrie Melodies for Warner Bros., in 1933 as an assistant animator. In 1935 he was promoted to animator and assigned to work with a new Schlesinger director, Tex Avery. There was no room for the new Avery unit in Schlesinger's small studio, so Avery, Jones, and fellow animators Bob Clampett, Virgil Ross, and Sid Sutherland were moved into a small adjacent building they dubbed "Termite Terrace". When Clampett was promoted to director in 1937, Jones was assigned to his unit; the Clampett unit was briefly assigned to work with Jones's old employer, Ub Iwerks, when Iwerks subcontracted four cartoons to Schlesinger in 1937. Jones became a director (or "supervisor", the original title for an animation director in the studio) himself in 1938 when Frank Tashlin left the studio. The following year Jones created his first major character, Sniffles, a cute Disney-style mouse, who went on to star in twelve Warner Bros. cartoons. Jones initially struggled to find his directorial style. Unlike the other directors in the studio, Jones wanted to make cartoons that would rival in quality and design those made by Walt Disney Productions. As a result, his cartoons suffered from sluggish pacing and a lack of clever gags, with Jones himself later admitting that his early conception of timing and dialog was "formed by watching the action in the La Brea Tar Pits". Schlesinger and the studio heads were unsatisfied with his work and demanded that he make funnier cartoons. He responded by creating the 1942 short The Draft Horse. The cartoon that was generally considered his turning point was The Dover Boys, released the same year, which noticeably featured quickly-timed gags and extensive use of limited animation. Despite this, Schlesinger and the studio heads were still dissatisfied and began the process of firing him, but they were unable to find a replacement due to a labor shortage stemming from World War II, so Jones kept his position. He was actively involved in efforts to unionize the staff of Leon Schlesinger Studios. He was responsible for recruiting animators, layout men, and background people. Almost all animators joined, in reaction to salary cuts imposed by Leon Schlesinger. The Metro-Goldwyn-Mayer cartoon studio had already signed a union contract, encouraging their counterparts under Schlesinger. 
In a meeting with his staff, Schlesinger talked for a few minutes, then turned over the meeting to his attorney. His insulting manner had a unifying effect on the staff. Jones gave a pep talk at the union headquarters. As negotiations broke down, the staff decided to go on strike. Schlesinger locked them out of the studio for a few days, before agreeing to sign the contract. A Labor-Management Committee was formed and Jones served as a moderator. Because of his role as a supervisor in the studio, he could not himself join the union. Jones created many of his lesser-known characters during this period, including Charlie Dog, Hubie and Bertie, and The Three Bears. During World War II, Jones worked closely with Theodor Geisel, better known as Dr. Seuss, to create the Private Snafu series of Army educational cartoons (the character was created by director Frank Capra). Jones later collaborated with Seuss on animated adaptations of Seuss' books, including How the Grinch Stole Christmas! in 1966. Jones directed such shorts as The Weakly Reporter, a 1944 short that related to shortages and rationing on the home front. During the same year, he directed Hell-Bent for Election, a campaign film for Franklin D. Roosevelt. From the late 1930s through the 1950s, Jones helped co-create Bugs Bunny and created Claude Cat, Marc Antony and Pussyfoot, Charlie Dog, Michigan J. Frog, Gossamer, and his four most popular creations: Marvin the Martian, Pepé Le Pew, and Wile E. Coyote and the Road Runner. Jones and writer Michael Maltese collaborated on the Road Runner cartoons, Duck Amuck, One Froggy Evening, and What's Opera, Doc?. Other staff at Unit A with whom Jones collaborated include layout artist, background designer, and co-director Maurice Noble; animator and co-director Abe Levitow; and animators Ken Harris and Ben Washam. Jones remained at Warner Bros. throughout the 1950s, except for a brief period in 1953 when Warner closed the animation studio. During this interim, Jones found employment at Walt Disney Productions, where he teamed with Ward Kimball for a four-month period of uncredited work on Sleeping Beauty (1959). Upon the reopening of the Warner animation department, Jones was rehired and reunited with most of his unit. In the early 1960s, Jones and his wife Dorothy wrote the screenplay for the animated feature Gay Purr-ee. The finished film featured the voices of Judy Garland, Robert Goulet and Red Buttons as cats in Paris, France. The feature was produced by UPA and directed by his former Warner Bros. collaborator, Abe Levitow. Jones moonlighted to work on the film since he had an exclusive contract with Warner Bros. UPA completed the film and made it available for distribution in 1962; it was picked up by Warner Bros. When Warner Bros. discovered that Jones had violated his exclusive contract with them, they terminated him. Jones's former animation unit was laid off after completing the final cartoon in their pipeline, The Iceman Ducketh, and the rest of the Warner Bros. Cartoons studio was closed in early 1963. With business partner Les Goldman, Jones started an independent animation studio, Sib Tower 12 Productions, and brought on most of his unit from Warner Bros., including Maurice Noble and Michael Maltese. 
In 1963, Metro-Goldwyn-Mayer contracted with Sib Tower 12 to have Jones and his staff produce new Tom and Jerry cartoons as well as a television adaptation of all Tom and Jerry theatricals produced to that date. This required major editing, including writing out the African-American maid, Mammy Two-Shoes, and replacing her with one of Irish descent, voiced by June Foray. In 1964, Sib Tower 12 was absorbed by MGM and was renamed MGM Animation/Visual Arts. His animated short film, The Dot and the Line: A Romance in Lower Mathematics, won the 1965 Academy Award for Best Animated Short Film. Jones directed the classic animated short The Bear That Wasn't. As the Tom and Jerry series wound down (it was discontinued in 1967), Jones produced more for television. In 1966, he produced and directed the TV special How the Grinch Stole Christmas!, featuring the voice of Boris Karloff and facial models based on his readings. Jones continued to work on other TV specials such as Horton Hears a Who! (1970), but his main focus during this time was producing the feature film The Phantom Tollbooth, which did lukewarm business when MGM released it in 1970. Jones co-directed 1969's The Pogo Special Birthday Special, based on the Walt Kelly comic strip, and voiced the characters of Porky Pine and Bun Rab. It was at this point that he decided to start ST Incorporated. MGM closed the animation division in 1970, and Jones once again started his own studio, Chuck Jones Enterprises. He produced a Saturday morning children's TV series for the American Broadcasting Company called The Curiosity Shop in 1971. In 1973, he produced an animated version of the George Selden book The Cricket in Times Square and subsequently produced two sequels. Three of his works during this period were animated TV adaptations of short stories from Rudyard Kipling's The Jungle Book: Mowgli's Brothers, The White Seal and Rikki-Tikki-Tavi. During this period, Jones began to experiment with more realistically designed characters, most of which had larger eyes, leaner bodies, and altered proportions, such as those of the Looney Tunes characters. Jones resumed working with Warner Bros. in 1976 with the animated TV adaptation of The Carnival of the Animals with Bugs Bunny and Daffy Duck. Jones also produced The Bugs Bunny/Road Runner Movie (1979), a compilation of Jones's best theatrical shorts; new Road Runner shorts for The Electric Company series; and Bugs Bunny's Looney Christmas Tales (1979). New shorts were made for Bugs Bunny's Bustin' Out All Over (1980). From 1977 to 1978, Jones wrote and drew the newspaper comic strip Crawford (also known as Crawford & Morgan) for the Chicago Tribune-NY News Syndicate. In 2011 IDW Publishing collected Jones's strip as part of their Library of American Comic Strips. In 1978, Jones's wife Dorothy died. In 1981, he married Marian Dern, the writer of the comic strip Rick O'Shay. On December 11, 1975, shortly after the release of Bugs Bunny: Superstar, which prominently featured Bob Clampett, Jones wrote a letter to Tex Avery, accusing Clampett of taking credit for ideas that were not his, and for characters created by other directors (notably Jones's Sniffles and Friz Freleng's Yosemite Sam). Their correspondence was never published in the media. It was forwarded to Michael Barrier, who had conducted the interview with Clampett, and was distributed by Jones to multiple people concerned with animation over the years. 
Through the 1980s and 1990s, Jones was painting cartoon and parody art, sold through animation galleries by his daughter's company, Linda Jones Enterprises. Jones was the creative consultant and character designer for two Raggedy Ann animated specials and the first Alvin and the Chipmunks Christmas special A Chipmunk Christmas. He made a cameo appearance in the film Gremlins (1984) and he wrote and directed the Bugs Bunny/Daffy Duck animated sequences that bookend its sequel Gremlins 2: The New Batch (1990). Jones directed animated sequences for various features such as a lengthy sequence in the film Stay Tuned (1992) and a shorter one seen at the start of the Robin Williams vehicle Mrs. Doubtfire (1993). Also during the 1980s and 1990s, Jones served on the advisory board of the National Student Film Institute. Jones's final Looney Tunes cartoon was From Hare to Eternity (1997), which starred Bugs Bunny and Yosemite Sam, with Greg Burson voicing Bugs. The cartoon was dedicated to Friz Freleng, who had died in 1995. Jones's final animation project was a series of 13 shorts starring Thomas Timber Wolf, a timber wolf character he had designed in the 1960s. The series was released online by Warner Bros. in 2000. From 2001 until 2004, Cartoon Network aired The Chuck Jones Show, which featured shorts directed by him. The show won the Annie Award for Outstanding Achievement in an Animated Special Project. In 1997, Jones was awarded the Edward MacDowell Medal. In 1999, he founded the non-profit Chuck Jones Center for Creativity, in Costa Mesa, California, an art education "gymnasium for the brain" dedicated to teaching creative skills, primarily to children and seniors, which is still in operation. In his later years, he recovered from skin cancer and received hip and ankle replacements. Jones died of congestive heart failure on February 22, 2002, at his home in Corona del Mar, Newport Beach, at the age of 89. He was cremated and his ashes were scattered at sea. After his death, Cartoon Network aired a 20-second segment tracing Jones's portrait with the words "We'll miss you". Also, the Looney Tunes cartoon Daffy Duck for President, based on the book that Jones had written and using Jones's style for the characters, originally scheduled to be released in 2000, was released in 2004 as part of disc three of the Looney Tunes Golden Collection: Volume 2 DVD set. Jones received an Honorary Academy Award in 1996 from the board of governors of the Academy of Motion Picture Arts and Sciences, for "the creation of classic cartoons and cartoon characters whose animated lives have brought joy to our real ones for more than half a century." At that year's awards show, Robin Williams, a self-confessed "Jones-aholic", presented the honorary award to Jones, calling him "The Orson Welles of cartoons", and the audience gave Jones a standing ovation as he walked onto the stage. For himself, a flattered Jones wryly remarked in his acceptance speech, "Well, what can I say in the face of such humiliating evidence? I stand guilty before the world of directing over three hundred cartoons in the last fifty or sixty years. Hopefully, this means you've forgiven me." He received the Lifetime Achievement Award at the World Festival of Animated Film – Animafest Zagreb in 1988. Jones was a historical authority as well as a major contributor to the development of animation throughout the 20th century. In 1990, Jones received the Golden Plate Award of the American Academy of Achievement. 
He received an honorary degree from Oglethorpe University in 1993, and was awarded the Inkpot Award in 1974. For his contribution to the motion picture industry, Jones has a star on the Hollywood Walk of Fame at 7011 Hollywood Blvd.

Jones's life and legacy were celebrated on January 12, 2012, with the official grand opening of The Chuck Jones Experience at Circus Circus Las Vegas. Members of Jones's family welcomed celebrities, animation aficionados and visitors when the attraction opened in suitably unconventional fashion. Among those in attendance were Jones's widow, Marian Jones; his daughter, Linda Clough; and his grandchildren Craig, Todd and Valerie Kausen.
[ { "paragraph_id": 0, "text": "Charles Martin Jones (September 21, 1912 – February 22, 2002) was an American animator, painter, voice actor and filmmaker, best known for his work with Warner Bros. Cartoons on the Looney Tunes and Merrie Melodies series of shorts. He wrote, produced, and/or directed many classic animated cartoon shorts starring Bugs Bunny, Daffy Duck, Wile E. Coyote and the Road Runner, Pepé Le Pew, Marvin the Martian, and Porky Pig, among others.", "title": "" }, { "paragraph_id": 1, "text": "Jones started his career in 1933 alongside Tex Avery, Friz Freleng, Bob Clampett, and Robert McKimson at the Leon Schlesinger Production's Termite Terrace studio, the studio that made Warner Brothers cartoons, where they created and developed the Looney Tunes characters. During the Second World War, Jones directed many of the Private Snafu (1943–1946) shorts which were shown to members of the United States military. After his career at Warner Bros. ended in 1962, Jones started Sib Tower 12 Productions and began producing cartoons for Metro-Goldwyn-Mayer, including a new series of Tom and Jerry shorts (1963–1967) as well as the television adaptations of Dr. Seuss's How the Grinch Stole Christmas! (1966) and Horton Hears a Who! (1970). He later started his own studio, Chuck Jones Enterprises, where he directed and produced the film adaptation of Norton Juster's The Phantom Tollbooth (1970).", "title": "" }, { "paragraph_id": 2, "text": "Jones's work along with the other animators was showcased in the documentary, Bugs Bunny: Superstar (1975). Jones directed the first feature-length animated Looney Tunes compilation film, The Bugs Bunny/Road Runner Movie (1979). In 1990 he wrote his memoir, Chuck Amuck: The Life and Times of an Animated Cartoonist, which was made into a documentary film, Chuck Amuck (1991). He was also profiled in the American Masters documentary Chuck Jones: Extremes & Inbetweens – A Life in Animation (2000) which aired on PBS.", "title": "" }, { "paragraph_id": 3, "text": "Jones won three Academy Awards. The cartoons which he directed, For Scent-imental Reasons, So Much for So Little, and The Dot and the Line, won the Best Animated Short. Robin Williams presented Jones with an Honorary Academy Award in 1996 for his work in the animation industry. Film historian Leonard Maltin has praised Jones's work at Warner Bros., MGM and Chuck Jones Enterprises. In Jerry Beck's The 50 Greatest Cartoons, a group of animation professionals ranked What's Opera, Doc? (1957) as the greatest cartoon of all time, with ten of the entries being directed by Jones including Duck Amuck (1953), Duck Dodgers in the 24½th Century (1953), One Froggy Evening (1955), Rabbit of Seville (1950), and Rabbit Seasoning (1952).", "title": "" }, { "paragraph_id": 4, "text": "Charles Martin Jones was born on September 21, 1912, in Spokane, Washington, to Mabel McQuiddy (née Martin) (1882–1971) and Charles Adams Jones (1883–?). When he was six months old, he moved with his parents and three siblings to Los Angeles, California.", "title": "Early life" }, { "paragraph_id": 5, "text": "In his autobiography, Chuck Amuck, Jones credits his artistic bent to circumstances surrounding his father, who was an unsuccessful businessman in California in the 1920s. He recounted that his father would start every new business venture by purchasing new stationery and new pencils with the company name on them. 
When the business failed, his father would quietly turn the huge stacks of useless stationery and pencils over to his children, requiring them to use up all the material as fast as possible. Armed with an endless supply of high-quality paper and pencils, the children drew constantly. Later, in one art school class, the professor gravely informed the students that they each had 100,000 bad drawings in them that they must first get past before they could possibly draw anything worthwhile. Jones recounted years later that this pronouncement came as a great relief to him, as he was well past the 200,000 mark, having used up all that stationery. Jones and several of his siblings went on to artistic careers.", "title": "Early life" }, { "paragraph_id": 6, "text": "During his artistic education, he worked part-time as a janitor. After graduating from Chouinard Art Institute, Jones got a phone call from a friend named Fred Kopietz, who had been hired by the Ub Iwerks studio and offered him a job. He worked his way up in the animation industry, starting as a cel washer; \"then I moved up to become a painter in black and white, some color. Then I went on to take animator's drawings and traced them onto the celluloid. Then I became what they call an in-betweener, which is the guy that does the drawing between the drawings the animator makes\". While at Iwerks, he met a cel painter named Dorothy Webster, who later became his first wife.", "title": "Early life" }, { "paragraph_id": 7, "text": "Jones joined Leon Schlesinger Productions, the independent studio that produced Looney Tunes and Merrie Melodies for Warner Bros., in 1933 as an assistant animator. In 1935 he was promoted to animator and assigned to work with a new Schlesinger director, Tex Avery. There was no room for the new Avery unit in Schlesinger's small studio, so Avery, Jones, and fellow animators Bob Clampett, Virgil Ross, and Sid Sutherland were moved into a small adjacent building they dubbed \"Termite Terrace\". When Clampett was promoted to director in 1937, Jones was assigned to his unit; the Clampett unit was briefly assigned to work with Jones's old employer, Ub Iwerks, when Iwerks subcontracted four cartoons to Schlesinger in 1937. Jones became a director (or \"supervisor\", the original title for an animation director in the studio) himself in 1938 when Frank Tashlin left the studio. The following year Jones created his first major character, Sniffles, a cute Disney-style mouse, who went on to star in twelve Warner Bros. cartoons.", "title": "Career" }, { "paragraph_id": 8, "text": "Jones initially struggled in terms of his directorial style. Unlike the other directors in the studio, Jones wanted to make cartoons that would rival the quality and design to that of ones made by Walt Disney Production. As a result, his cartoons suffered from sluggish pacing and a lack of clever gags, with Jones himself later admitting that his early conception of timing and dialog was \"formed by watching the action in the La Brea Tar Pits\". Schlesinger and the studio heads were unsatisfied with his work and demanded that he make cartoons that were more funny. He responded by creating the 1942 short The Draft Horse. The cartoon that was generally considered his turning point was The Dover Boys. Released the same year, it noticeably featured quickly-timed gags and extensive use of limited animation. 
Despite this, Schlesinger and the studios heads were still dissatisfied and begun the process to fire him, but they were unable to find a replacement due to a labor shortage stemming from World War II, so Jones kept his position.", "title": "Career" }, { "paragraph_id": 9, "text": "He was actively involved in efforts to unionize the staff of Leon Schlesinger Studios. He was responsible for recruiting animators, layout men, and background people. Almost all animators joined, in reaction to salary cuts imposed by Leon Schlesinger. The Metro-Goldwyn-Mayer cartoon studio had already signed a union contract, encouraging their counterparts under Schlesinger. In a meeting with his staff, Schlesinger talked for a few minutes, then turned over the meeting to his attorney. His insulting manner had a unifying effect on the staff. Jones gave a pep talk at the union headquarters. As negotiations broke down, the staff decided to go on strike. Schlesinger locked them out of the studio for a few days, before agreeing to sign the contract. A Labor-Management Committee was formed and Jones served as a moderator. Because of his role as a supervisor in the studio, he could not himself join the union. Jones created many of his lesser-known characters during this period, including Charlie Dog, Hubie and Bertie, and The Three Bears.", "title": "Career" }, { "paragraph_id": 10, "text": "During World War II, Jones worked closely with Theodor Geisel, better known as Dr. Seuss, to create the Private Snafu series of Army educational cartoons (the character was created by director Frank Capra). Jones later collaborated with Seuss on animated adaptations of Seuss' books, including How the Grinch Stole Christmas! in 1966. Jones directed such shorts as The Weakly Reporter, a 1944 short that related to shortages and rationing on the home front. During the same year, he directed Hell-Bent for Election, a campaign film for Franklin D. Roosevelt.", "title": "Career" }, { "paragraph_id": 11, "text": "Jones created characters through the late 1930s, late 1940s, and the 1950s, which include his collaborative help in co-creating Bugs Bunny and also included creating Claude Cat, Marc Antony and Pussyfoot, Charlie Dog, Michigan J. Frog, Gossamer, and his four most popular creations, Marvin the Martian, Pepé Le Pew, Wile E. Coyote and the Road Runner. Jones and writer Michael Maltese collaborated on the Road Runner cartoons, Duck Amuck, One Froggy Evening, and What's Opera, Doc?. Other staff at Unit A whom Jones collaborated with include layout artist, background designer, and co-director Maurice Noble; animator and co-director Abe Levitow; and animators Ken Harris and Ben Washam.", "title": "Career" }, { "paragraph_id": 12, "text": "Jones remained at Warner Bros. throughout the 1950s, except for a brief period in 1953 when Warner closed the animation studio. During this interim, Jones found employment at Walt Disney Productions, where he teamed with Ward Kimball for a four-month period of uncredited work on Sleeping Beauty (1959). Upon the reopening of the Warner animation department, Jones was rehired and reunited with most of his unit.", "title": "Career" }, { "paragraph_id": 13, "text": "In the early 1960s, Jones and his wife Dorothy wrote the screenplay for the animated feature Gay Purr-ee. The finished film featured the voices of Judy Garland, Robert Goulet and Red Buttons as cats in Paris, France. The feature was produced by UPA and directed by his former Warner Bros. 
collaborator, Abe Levitow.", "title": "Career" }, { "paragraph_id": 14, "text": "Jones moonlighted to work on the film since he had an exclusive contract with Warner Bros. UPA completed the film and made it available for distribution in 1962; it was picked up by Warner Bros. When Warner Bros. discovered that Jones had violated his exclusive contract with them, they terminated him. Jones's former animation unit was laid off after completing the final cartoon in their pipeline, The Iceman Ducketh, and the rest of the Warner Bros. Cartoons studio was closed in early 1963.", "title": "Career" }, { "paragraph_id": 15, "text": "With business partner Les Goldman, Jones started an independent animation studio, Sib Tower 12 Productions, and brought on most of his unit from Warner Bros., including Maurice Noble and Michael Maltese. In 1963, Metro-Goldwyn-Mayer contracted with Sib Tower 12 to have Jones and his staff produce new Tom and Jerry cartoons as well as a television adaptation of all Tom and Jerry theatricals produced to that date. This included major editing, including writing out the African-American maid, Mammy Two-Shoes, and replacing her with one of Irish descent voiced by June Foray. In 1964, Sib Tower 12 was absorbed by MGM and was renamed MGM Animation/Visual Arts. His animated short film, The Dot and the Line: A Romance in Lower Mathematics, won the 1965 Academy Award for Best Animated Short Film. Jones directed the classic animated short The Bear That Wasn't.", "title": "Career" }, { "paragraph_id": 16, "text": "As the Tom and Jerry series wound down (it was discontinued in 1967), Jones produced more for television. In 1966, he produced and directed the TV special How the Grinch Stole Christmas!, featuring the voice and facial models based on the readings by Boris Karloff.", "title": "Career" }, { "paragraph_id": 17, "text": "Jones continued to work on other TV specials such as Horton Hears a Who! (1970), but his main focus during this time was producing the feature film The Phantom Tollbooth, which did lukewarm business when MGM released it in 1970. Jones co-directed 1969's The Pogo Special Birthday Special, based on the Walt Kelly comic strip, and voiced the characters of Porky Pine and Bun Rab. It was at this point that he decided to start ST Incorporated.", "title": "Career" }, { "paragraph_id": 18, "text": "MGM closed the animation division in 1970, and Jones once again started his own studio, Chuck Jones Enterprises. He produced a Saturday morning children's TV series for the American Broadcasting Company called The Curiosity Shop in 1971. In 1973, he produced an animated version of the George Selden book The Cricket in Times Square and subsequently produced two sequels.", "title": "Career" }, { "paragraph_id": 19, "text": "Three of his works during this period were animated TV adaptations of short stories from Rudyard Kipling's The Jungle Book: Mowgli's Brothers, The White Seal and Rikki-Tikki-Tavi. During this period, Jones began to experiment with more realistically designed characters, most of which had larger eyes, leaner bodies, and altered proportions, such as those of the Looney Tunes characters.", "title": "Career" }, { "paragraph_id": 20, "text": "Jones resumed working with Warner Bros. in 1976 with the animated TV adaptation of The Carnival of the Animals with Bugs Bunny and Daffy Duck. 
Jones also produced The Bugs Bunny/Road Runner Movie (1979), which was a compilation of Jones's best theatrical shorts, new Road Runner shorts for The Electric Company series and Bugs Bunny's Looney Christmas Tales (1979). New shorts were made for Bugs Bunny's Bustin' Out All Over (1980).", "title": "Career" }, { "paragraph_id": 21, "text": "From 1977 to 1978, Jones wrote and drew the newspaper comic strip Crawford (also known as Crawford & Morgan) for the Chicago Tribune-NY News Syndicate. In 2011 IDW Publishing collected Jones's strip as part of their Library of American Comic Strips.", "title": "Career" }, { "paragraph_id": 22, "text": "In 1978, Jones's wife Dorothy died. He married Marian Dern, the writer of the comic strip Rick O'Shay in 1981.", "title": "Career" }, { "paragraph_id": 23, "text": "On December 11, 1975, shortly after the release of Bugs Bunny: Superstar, which prominently featured Bob Clampett, Jones wrote a letter to Tex Avery, accusing Clampett of taking credit for ideas that were not his, and for characters created by other directors (notably Jones's Sniffles and Friz Freleng's Yosemite Sam). Their correspondence was never published in the media. It was forwarded to Michael Barrier, who conducted the interview with Clampett and was distributed by Jones to multiple people concerned with animation over the years.", "title": "Jones–Avery letter" }, { "paragraph_id": 24, "text": "Through the 1980s and 1990s, Jones was painting cartoon and parody art, sold through animation galleries by his daughter's company, Linda Jones Enterprises. Jones was the creative consultant and character designer for two Raggedy Ann animated specials and the first Alvin and the Chipmunks Christmas special A Chipmunk Christmas. He made a cameo appearance in the film Gremlins (1984) and he wrote and directed the Bugs Bunny/Daffy Duck animated sequences that bookend its sequel Gremlins 2: The New Batch (1990). Jones directed animated sequences for various features such as a lengthy sequence in the film Stay Tuned (1992) and a shorter one seen at the start of the Robin Williams vehicle Mrs. Doubtfire (1993). Also during the 1980s and 1990s, Jones served on the advisory board of the National Student Film Institute.", "title": "Later years" }, { "paragraph_id": 25, "text": "Jones's final Looney Tunes cartoon was From Hare to Eternity (1997), which starred Bugs Bunny and Yosemite Sam, with Greg Burson voicing Bugs. The cartoon was dedicated to Friz Freleng, who had died in 1995. Jones's final animation project was a series of 13 shorts starring a timber wolf character he had designed in the 1960s named Thomas Timber Wolf. The series was released online by Warner Bros. in 2000. From 2001 until 2004, Cartoon Network aired The Chuck Jones Show which features shorts directed by him. 
The show won the Annie Award for Outstanding Achievement in an Animated Special Project.", "title": "Later years" }, { "paragraph_id": 26, "text": "In 1997, Jones was awarded the Edward MacDowell Medal.", "title": "Later years" }, { "paragraph_id": 27, "text": "In 1999, he founded the non-profit Chuck Jones Center for Creativity, in Costa Mesa, California, an art education \"gymnasium for the brain\" dedicated to teaching creative skills, primarily to children and seniors, which is still in operation.", "title": "Later years" }, { "paragraph_id": 28, "text": "In his later years, he recovered from skin cancer and received hip and ankle replacements.", "title": "Later years" }, { "paragraph_id": 29, "text": "Jones died of congestive heart failure on February 22, 2002, at his home in Corona del Mar, Newport Beach at the age of 89. He was cremated and his ashes were scattered at sea. After his death, Cartoon Network aired a 20-second segment tracing Jones's portrait with the words \"We'll miss you\". Also, the Looney Tunes cartoon Daffy Duck for President, based on the book that Jones had written and using Jones's style for the characters, originally scheduled to be released in 2000, was released in 2004 as part of disc three of the Looney Tunes Golden Collection: Volume 2 DVD set.", "title": "Later years" }, { "paragraph_id": 30, "text": "Jones received an Honorary Academy Award in 1996 by the board of governors of the Academy of Motion Picture Arts and Sciences, for \"the creation of classic cartoons and cartoon characters whose animated lives have brought joy to our real ones for more than half a century.\" At that year's awards show, Robin Williams, a self-confessed \"Jones-aholic\", presented the honorary award to Jones, calling him \"The Orson Welles of cartoons\", and the audience gave Jones a standing ovation as he walked onto the stage. For himself, a flattered Jones wryly remarked in his acceptance speech, \"Well, what can I say in the face of such humiliating evidence? I stand guilty before the world of directing over three hundred cartoons in the last fifty or sixty years. Hopefully, this means you've forgiven me.\" He received the Lifetime Achievement Award at the World Festival of Animated Film – Animafest Zagreb in 1988.", "title": "Legacy" }, { "paragraph_id": 31, "text": "Jones was a historical authority as well as a major contributor to the development of animation throughout the 20th century. In 1990, Jones received the Golden Plate Award of the American Academy of Achievement. He received an honorary degree from Oglethorpe University in 1993. For his contribution to the motion picture industry, Jones has a star on the Hollywood Walk of Fame at 7011 Hollywood Blvd. He was awarded the Inkpot Award in 1974. In 1996, Jones received an Honorary Oscar at the 68th Academy Awards.", "title": "Legacy" }, { "paragraph_id": 32, "text": "Jones's life and legacy were celebrated on January 12, 2012, with the official grand opening of The Chuck Jones Experience at Circus Circus Las Vegas. Many of Jones's family welcomed celebrities, animation aficionados and visitors to the new attraction when they opened the attraction in an appropriate and unconventional way. Among those in attendance were Jones's widow, Marian Jones; daughter Linda Clough; and grandchildren Craig, Todd and Valerie Kausen.", "title": "Legacy" } ]
Charles Martin Jones was an American animator, painter, voice actor and filmmaker, best known for his work with Warner Bros. Cartoons on the Looney Tunes and Merrie Melodies series of shorts. He wrote, produced, and/or directed many classic animated cartoon shorts starring Bugs Bunny, Daffy Duck, Wile E. Coyote and the Road Runner, Pepé Le Pew, Marvin the Martian, and Porky Pig, among others. Jones started his career in 1933 alongside Tex Avery, Friz Freleng, Bob Clampett, and Robert McKimson at the Leon Schlesinger Production's Termite Terrace studio, the studio that made Warner Brothers cartoons, where they created and developed the Looney Tunes characters. During the Second World War, Jones directed many of the Private Snafu (1943–1946) shorts which were shown to members of the United States military. After his career at Warner Bros. ended in 1962, Jones started Sib Tower 12 Productions and began producing cartoons for Metro-Goldwyn-Mayer, including a new series of Tom and Jerry shorts (1963–1967) as well as the television adaptations of Dr. Seuss's How the Grinch Stole Christmas! (1966) and Horton Hears a Who! (1970). He later started his own studio, Chuck Jones Enterprises, where he directed and produced the film adaptation of Norton Juster's The Phantom Tollbooth (1970). Jones's work along with the other animators was showcased in the documentary, Bugs Bunny: Superstar (1975). Jones directed the first feature-length animated Looney Tunes compilation film, The Bugs Bunny/Road Runner Movie (1979). In 1990 he wrote his memoir, Chuck Amuck: The Life and Times of an Animated Cartoonist, which was made into a documentary film, Chuck Amuck (1991). He was also profiled in the American Masters documentary Chuck Jones: Extremes & Inbetweens – A Life in Animation (2000) which aired on PBS. Jones won three Academy Awards. The cartoons which he directed, For Scent-imental Reasons, So Much for So Little, and The Dot and the Line, won the Best Animated Short. Robin Williams presented Jones with an Honorary Academy Award in 1996 for his work in the animation industry. Film historian Leonard Maltin has praised Jones's work at Warner Bros., MGM and Chuck Jones Enterprises. In Jerry Beck's The 50 Greatest Cartoons, a group of animation professionals ranked What's Opera, Doc? (1957) as the greatest cartoon of all time, with ten of the entries being directed by Jones including Duck Amuck (1953), Duck Dodgers in the 24½th Century (1953), One Froggy Evening (1955), Rabbit of Seville (1950), and Rabbit Seasoning (1952).
2002-01-08T04:46:48Z
2023-12-27T10:58:45Z
[ "Template:Commons category", "Template:Navboxes", "Template:Citation needed", "Template:IMDb name", "Template:Webarchive", "Template:Official website", "Template:Short description", "Template:Cite web", "Template:Cite journal", "Template:Authority control", "Template:Similar names", "Template:Cite book", "Template:Bcdb", "Template:Cite news", "Template:Emmytvlegends name", "Template:Metro-Goldwyn-Mayer Cartoons", "Template:Looney Tunes & Merrie Melodies", "Template:Won", "Template:Wikiquote", "Template:Wikisource author", "Template:Dead link", "Template:Cbignore", "Template:Chuck Jones", "Template:The Chuck Jones Tom and Jerry shorts", "Template:Use mdy dates", "Template:See also", "Template:Reflist", "Template:Infobox person", "Template:Nom", "Template:ISBN" ]
https://en.wikipedia.org/wiki/Chuck_Jones
Costume
Costume is the distinctive style of dress or cosmetic of an individual or group that reflects class, gender, profession, ethnicity, nationality, activity or epoch; in short, costume is a visual expression of a people's culture.

The term also was traditionally used to describe typical appropriate clothing for certain activities, such as riding costume, swimming costume, dance costume, and evening costume. Appropriate and acceptable costume is subject to changes in fashion and local cultural norms: "But sable is worn more in carriages, lined with real lace over ivory satin, and worn over some smart costume suitable for an afternoon reception." (A Woman's Letter from London, 23 November 1899.)

This general usage has gradually been replaced by the terms "dress", "attire", "robes" or "wear", and usage of "costume" has become more limited to unusual or out-of-date clothing and to attire intended to evoke a change in identity, such as theatrical, Halloween, and mascot costumes.

Before the advent of ready-to-wear apparel, clothing was made by hand. When made for commercial sale, as late as the beginning of the 20th century, it was made by "costumiers", often women who ran businesses that met the demand for complicated or intimate female costume, including millinery and corsetry.

The word costume comes, via French, from the same word in Italian, meaning fashion or custom.

National costume or regional costume expresses local (or exiled) identity and emphasizes a culture's unique attributes. It is often a source of national pride. Examples include the Scottish kilt, the Turkish Zeybek, and the Japanese kimono.

In Bhutan there is a traditional national dress prescribed for men and women, including the monarchy. These garments have been in vogue for thousands of years and have developed into a distinctive style of dress. The dress worn by men, known as the Gho, is a knee-length robe fastened at the waist by a band called the Kera. The front of the robe forms a pouch, which in earlier times was used to hold baskets of food and a short dagger, but is now used to carry a cell phone, a purse and the betel nut called Doma. The dress worn by women consists of three pieces known as the Kira, Tego and Wonju: the Kira is the long dress that extends to the ankle, the Tego is the jacket worn above it, and the Wonju is the inner jacket. When visiting a Dzong or monastery, men wear a long scarf or stole called a Kabney across the shoulder, in colours appropriate to their rank. Women also wear scarves or stoles, called Rachus, made of embroidered raw silk and worn over the shoulder, but these do not indicate rank.

Costume often refers to a particular style of clothing worn to portray the wearer as a character or type of character at a social event, in a theatrical performance on the stage, or in film or television.
It is important for a technician to keep the designer's ideas in mind when building the garment. Draping is the art of manipulating fabric with pins and hand stitching to create structure on a body; it is usually done on a dress form to achieve the right shape for the performer. Cutting is the act of laying fabric out on a flat surface and cutting along a pattern with scissors; the resulting pieces are sewn together to create the final costume.

The wearing of costumes is an important part of holidays developed from religious festivals, such as Mardi Gras (in the lead-up to Easter) and Halloween (related to All Hallows' Eve). Mardi Gras costumes usually take the form of jesters and other fantasy characters; Halloween costumes traditionally take the form of supernatural creatures such as ghosts and vampires, as well as pop-culture icons and angels.

Halloween costumes developed from pre-Christian religious traditions: to avoid being terrorized by evil spirits walking the Earth during the harvest festival Samhain, the Celts donned disguises. In the eighth century, Pope Gregory III designated November 1 as All Saints' Day, making the preceding evening All Hallows' Eve, and Samhain's costuming tradition was incorporated into these Christian holidays. Given the Catholic and pagan roots of the holiday, it has been repudiated by some Protestants. In the modern era, however, Halloween "is widely celebrated in almost every corner of American life," and the wearing of costumes forms part of a secular tradition. In 2022, United States households spent an average of $100 preparing for Halloween, with $34 going to costume-related spending.

Christmas costumes typically portray characters such as Santa Claus (developed from Saint Nicholas). In Australia, the United Kingdom and the United States, the American version of the Santa suit and beard is popular; in the Netherlands, the costume of Zwarte Piet is customary. Easter costumes are associated with the Easter Bunny or other animal costumes.

In Judaism, a common practice is to dress up on Purim, the holiday on which Jews celebrate the reversal of their destiny: delivered from an evil decree against them, they were instead permitted by the king to destroy their enemies. The phrase from the Book of Esther meaning "on the contrary" (Hebrew: ונהפוך הוא) is the reason that wearing a costume has become customary for this holiday.

Buddhist religious festivals in Tibet, Bhutan, Mongolia, and Lhasa and Sikkim in India feature the Cham dance, a popular dance form that uses masks and costumes.

Parades and processions provide opportunities for people to dress up in historical or imaginative costumes. For example, in 1879 the artist Hans Makart designed costumes and scenery to celebrate the wedding anniversary of the Austro-Hungarian Emperor and Empress, and led the people of Vienna in a costume parade that became a regular event until the mid-twentieth century. Uncle Sam costumes are worn on Independence Day in the United States. The Lion Dance, part of Chinese New Year celebrations, is performed in costume, and some costumes, such as those used in the Dragon Dance, need teams of people to create the required effect.

Public sporting events such as fun runs also provide opportunities for wearing costumes, as do private masquerade balls and fancy dress parties. Costumes are popularly employed at sporting events, where fans dress as their team's mascot to show their support.
Businesses use mascot costumes to attract customers, either by placing the mascot in the street outside the business or by sending it to sporting events, festivals, national celebrations, fairs, and parades. Mascots also appear on behalf of organizations wanting to raise awareness of their work, and children's book authors create mascots from their main characters to present at book signings. Animal costumes that are visually very similar to mascot costumes are also popular among members of the furry fandom, where the costumes are referred to as fursuits and match one's animal persona, or "fursona".

Costumes also serve as an avenue for children to explore and role-play. For example, children may dress up as characters from history or fiction, such as pirates, princesses, cowboys, or superheroes. They may also dress in uniforms used in common jobs, such as nurses, police officers, or firefighters, or as zoo or farm animals. Young boys tend to prefer costumes that reinforce stereotypical ideas of being male, and young girls tend to prefer costumes that reinforce stereotypical ideas of being female.

Cosplay, a word of Japanese origin that in English is short for "costume play", is a performance art in which participants wear costumes and accessories to represent a specific character or idea, usually one identified with a unique name (as opposed to a generic word). These costume wearers often interact to create a subculture centered on role play, so they can be seen most often in play groups, or at a gathering or convention. A significant number of these costumes are homemade and unique, and depend on the character, idea, or object the costume wearer is attempting to imitate or represent. The costumes themselves are often judged on how well they represent the subject the wearer is attempting to portray.

Costume design is the envisioning of clothing and the overall appearance of a character or performer. Costume may refer to the style of dress particular to a nation, a class, or a period, and in many cases it contributes to the fullness of the artistic, visual world that is unique to a particular theatrical or cinematic production. The most basic designs are produced to denote status, provide protection or modesty, or provide visual interest to a character. Costumes may be designed for theater, cinema, or musical performances, among other uses. Costume design should not be confused with costume coordination, which merely involves altering existing clothing, although both processes are used to create stage clothes.

The Costume Designers Guild's international membership includes motion picture, television, and commercial costume designers, assistant costume designers and costume illustrators, and totals over 750 members. The National Costumers Association is an 80-year-old association of professional costumers and costume shops. The Costume Designer is a quarterly magazine devoted to the costume design industry. Notable costume designers include recipients of the Academy Award for Best Costume Design, the Tony Award for Best Costume Design, and the Drama Desk Award for Outstanding Costume Design. Edith Head and Orry-Kelly, both born late in 1897, were two of Hollywood's most notable costume designers.

In the 20th century, fabric stores offered commercial patterns that could be bought and used to make a costume from raw materials, and some companies began producing catalogs with large numbers of patterns.
More recently, and particularly with the advent of the Internet, the DIY movement has ushered in a new era of homemade costumes and pattern sharing; YouTube, Pinterest, and Mashable feature many DIY costumes.

Professional-grade costumes are typically designed and produced by costume companies that can design and create unique costumes; many have been in business for over 100 years and continue to work with individual clients to create professional-quality costumes. Professional costume houses rent and sell costumes for the trade, including companies that create mascots, costumes for film and television, and theatrical costumes. Larger costume companies maintain warehouses full of costumes for rental to customers. There is also an industry in which costumers work with clients to design costumes from scratch, creating original pieces to the client's specifications.
[ { "paragraph_id": 0, "text": "Costume is the distinctive style of dress or cosmetic of an individual or group that reflects class, gender, profession, ethnicity, nationality, activity or epoch. In short costume is a cultural visual of the people.", "title": "" }, { "paragraph_id": 1, "text": "The term also was traditionally used to describe typical appropriate clothing for certain activities, such as riding costume, swimming costume, dance costume, and evening costume. Appropriate and acceptable costume is subject to changes in fashion and local cultural norms.", "title": "" }, { "paragraph_id": 2, "text": "\"But sable is worn more in carriages, lined with real lace over ivory satin, and worn over some smart costume suitable for an afternoon reception.\" A Woman's Letter from London (23 November 1899).", "title": "" }, { "paragraph_id": 3, "text": "This general usage has gradually been replaced by the terms \"dress\", \"attire\", \"robes\" or \"wear\" and usage of \"costume\" has become more limited to unusual or out-of-date clothing and to attire intended to evoke a change in identity, such as theatrical, Halloween, and mascot costumes.", "title": "" }, { "paragraph_id": 4, "text": "Before the advent of ready-to-wear apparel, clothing was made by hand. When made for commercial sale it was made, as late as the beginning of the 20th century, by \"costumiers\", often women who ran businesses that met the demand for complicated or intimate female costume, including millinery and corsetry.", "title": "" }, { "paragraph_id": 5, "text": "Costume comes from the same Italian word, inherited via French, which means fashion or custom.", "title": "Etymology" }, { "paragraph_id": 6, "text": "National costume or regional costume expresses local (or exiled) identity and emphasizes a culture's unique attributes. They are often a source of national pride. Examples include the Scottish kilt, Turkish Zeybek, or Japanese kimono.", "title": "National costume" }, { "paragraph_id": 7, "text": "In Bhutan there is a traditional national dress prescribed for men and women, including the monarchy. These have been in vogue for thousands of years and have developed into a distinctive dress style. The dress worn by men is known as Gho which is a robe worn up to knee-length and is fastened at the waist by a band called the Kera. The front part of the dress which is formed like a pouch, in olden days was used to hold baskets of food and short dagger, but now it is used to keep cell phone, purse and the betel nut called Doma. The dress worn by women consist of three pieces known as Kira, Tego and Wonju. The long dress which extends up to the ankle is Kira. The jacket worn above this is Tego which is provided with Wonju, the inner jacket. However, while visiting the Dzong or monastery a long scarf or stoll, called Kabney is worn by men across the shoulder, in colours appropriate to their ranks. Women also wear scarfs or stolls called Rachus, made of raw silk with embroidery, over their shoulder but not indicative of their rank.", "title": "National costume" }, { "paragraph_id": 8, "text": "Costume often refers to a particular style of clothing worn to portray the wearer as a character or type of character at a social event in a theatrical performance on the stage or in film or television. 
In combination with other aspects of stagecraft, theatrical costumes can help actors portray characters' and their contexts as well as communicate information about the historical period/era, geographic location and time of day, season or weather of the theatrical performance. Some stylized theatrical costumes, such as Harlequin and Pantaloon in the Commedia dell'arte, exaggerate an aspect of a character.", "title": "Theatrical costume" }, { "paragraph_id": 9, "text": "A costume technician is a term used for a person that constructs and/or alters the costumes. The costume technician is responsible for taking the two dimensional sketch and translating it to create a garment that resembles the designer's rendering. It is important for a technician to keep the ideas of the designer in mind when building the garment.", "title": "Costume construction" }, { "paragraph_id": 10, "text": "Draping is the art of manipulating the fabric using pins and hand stitching to create structure on a body. This is usually done on a dress form to get the adequate shape for the performer. Cutting is the act of laying out fabric on a flat surface, using scissors to cut and follow along a pattern. These pieces are put together to create a final costume.", "title": "Costume construction" }, { "paragraph_id": 11, "text": "", "title": "Costume construction" }, { "paragraph_id": 12, "text": "", "title": "Costume construction" }, { "paragraph_id": 13, "text": "The wearing of costumes is an important part of holidays developed from religious festivals such as Mardi Gras (in the lead up to Easter), and Halloween (related to All Hallow's Eve). Mardi Gras costumes usually take the form of jesters and other fantasy characters; Halloween costumes traditionally take the form of supernatural creatures such as ghosts, vampires, pop-culture icons and angels.", "title": "Religious festivals" }, { "paragraph_id": 14, "text": "Halloween costumes developed from pre-Christian religious traditions: to avoid being terrorized by evil spirits walking the Earth during the harvest festival Samhain, the Celts donned disguises. In the eighth century, Pope Gregory VIII designated November 1 as All Saints Day, and the preceding days as All Hallows Eve; Samhain's costuming tradition was incorporated into these Christian holidays. Given the Catholic and pagan roots of the holiday, it has been repudiated by some Protestants. However, in the modern era, Halloween \"is widely celebrated in almost every corner of American life,\" and the wearing of costumes forms part of a secular tradition. In 2022, United States households spent an average of $100 preparing for Halloween, with $34 going to costume-related spending.", "title": "Religious festivals" }, { "paragraph_id": 15, "text": "Christmas costumes typically portray characters such as Santa Claus (developed from Saint Nicholas). In Australia, the United Kingdom and the United States the American version of a Santa suit and beard is popular; in the Netherlands, the costume of Zwarte Piet is customary. Easter costumes are associated with the Easter Bunny or other animal costumes.", "title": "Religious festivals" }, { "paragraph_id": 16, "text": "In Judaism, a common practice is to dress up on Purim. During this holiday, Jews celebrate the change of their destiny. They were delivered from being the victims of an evil decree against them and were instead allowed by the King to destroy their enemies. 
A quote from the Book of Esther, which says: \"On the contrary\" (Hebrew: ונהפוך הוא) is the reason that wearing a costume has become customary for this holiday.", "title": "Religious festivals" }, { "paragraph_id": 17, "text": "Buddhist religious festivals in Tibet, Bhutan, Mongolia and Lhasa and Sikkim in India perform the Cham dance, which is a popular dance form utilising masks and costumes.", "title": "Religious festivals" }, { "paragraph_id": 18, "text": "Parades and processions provide opportunities for people to dress up in historical or imaginative costumes. For example, in 1879 the artist Hans Makart designed costumes and scenery to celebrate the wedding anniversary of the Austro-Hungarian Emperor and Empress and led the people of Vienna in a costume parade that became a regular event until the mid-twentieth century. Uncle Sam costumes are worn on Independence Day in the United States. The Lion Dance, which is part of Chinese New Year celebrations, is performed in costume. Some costumes, such as the ones used in the Dragon Dance, need teams of people to create the required effect.", "title": "Parades and processions" }, { "paragraph_id": 19, "text": "Public sporting events such as fun runs also provide opportunities for wearing costumes, as do private masquerade balls and fancy dress parties.", "title": "Sporting events and parties" }, { "paragraph_id": 20, "text": "Costumes are popularly employed at sporting events, during which fans dress as their team's representative mascot to show their support. Businesses use mascot costumes to bring in people to their business either by placing their mascot in the street by their business or sending their mascot out to sporting events, festivals, national celebrations, fairs, and parades. Mascots appear at organizations wanting to raise awareness of their work. Children's Book authors create mascots from the main character to present at their book signings. Animal costumes that are visually very similar to mascot costumes are also popular among the members of the furry fandom, where the costumes are referred to as fursuits and match one's animal persona, or \"fursona\".", "title": "Sporting events and parties" }, { "paragraph_id": 21, "text": "Costumes also serve as an avenue for children to explore and role-play. For example, children may dress up as characters from history or fiction, such as pirates, princesses, cowboys, or superheroes. They may also dress in uniforms used in common jobs, such as nurses, police officers, or firefighters, or as zoo or farm animals. Young boys tend to prefer costumes that reinforce stereotypical ideas of being male, and young girls tend to prefer costumes that reinforce stereotypical ideas of being female.", "title": "Sporting events and parties" }, { "paragraph_id": 22, "text": "Cosplay, a word of Japanese origin that in English is short for \"costume display\" or \"costume play\", is a performance art in which participants wear costumes and accessories to represent a specific character or idea that is usually always identified with a unique name (as opposed to a generic word). These costume wearers often interact to create a subculture centered on role play, so they can be seen most often in play groups, or at a gathering or convention. A significant number of these costumes are homemade and unique, and depend on the character, idea, or object the costume wearer is attempting to imitate or represent. 
The costumes themselves are often artistically judged to how well they represent the subject or object that the costume wearer is attempting to contrive.", "title": "Sporting events and parties" }, { "paragraph_id": 23, "text": "Costume design is the envisioning of clothing and the overall appearance of a character or performer. Costume may refer to the style of dress particular to a nation, a class, or a period. In many cases, it may contribute to the fullness of the artistic, visual world that is unique to a particular theatrical or cinematic production. The most basic designs are produced to denote status, provide protection or modesty, or provide visual interest to a character. Costumes may be for, but not limited to, theater, cinema, or musical performances. Costume design should not be confused with costume coordination, which merely involves altering existing clothing, although both processes are used to create stage clothes.", "title": "Design" }, { "paragraph_id": 24, "text": "The Costume Designers Guild's international membership includes motion picture, television, and commercial costume designers, assistant costume designers and costume illustrators, and totals over 750 members.", "title": "Design" }, { "paragraph_id": 25, "text": "The National Costumers Association is an 80 year old association of professional costumers and costume shops.", "title": "Design" }, { "paragraph_id": 26, "text": "The Costume Designer is a quarterly magazine devoted to the costume design industry.", "title": "Design" }, { "paragraph_id": 27, "text": "Notable costume designers include recipients of the Academy Award for Best Costume Design, Tony Award for Best Costume Design, and Drama Desk Award for Outstanding Costume Design. Edith Head and Orry-Kelly, both of whom were born late in 1897, were two of Hollywood's most notable costume designers.", "title": "Design" }, { "paragraph_id": 28, "text": "In the 20th century, contemporary fabric stores offered commercial patterns that could be bought and used to make a costume from raw materials. Some companies also began producing catalogs with great numbers of patterns.", "title": "Design" }, { "paragraph_id": 29, "text": "More recently, and particularly with the advent of the Internet, the DIY movement has ushered in a new era of DIY costumes and pattern sharing. YouTube, Pinterest, Mashable also feature many DIY costumes.", "title": "Design" }, { "paragraph_id": 30, "text": "Professional-grade costumes are typically designed and produced by costume companies who can design and create unique costumes. These companies have often been in business for over 100 years, and continue to work with individual clients to create professional quality costumes.", "title": "Industry" }, { "paragraph_id": 31, "text": "Professional costume houses rent and sell costumes for the trade. This includes companies that create mascots, costumes for film, TV costumes and theatrical costumes.", "title": "Industry" }, { "paragraph_id": 32, "text": "Larger costume companies have warehouses full of costumes for rental to customers.", "title": "Industry" }, { "paragraph_id": 33, "text": "There is an industry where costumers work with clients and design costumes from scratch. They then will create original costumes specifically to the clients specifications.", "title": "Industry" } ]
Costume is the distinctive style of dress or cosmetic of an individual or group that reflects class, gender, profession, ethnicity, nationality, activity or epoch. In short costume is a cultural visual of the people. The term also was traditionally used to describe typical appropriate clothing for certain activities, such as riding costume, swimming costume, dance costume, and evening costume. Appropriate and acceptable costume is subject to changes in fashion and local cultural norms. This general usage has gradually been replaced by the terms "dress", "attire", "robes" or "wear" and usage of "costume" has become more limited to unusual or out-of-date clothing and to attire intended to evoke a change in identity, such as theatrical, Halloween, and mascot costumes. Before the advent of ready-to-wear apparel, clothing was made by hand. When made for commercial sale it was made, as late as the beginning of the 20th century, by "costumiers", often women who ran businesses that met the demand for complicated or intimate female costume, including millinery and corsetry.
2002-02-07T16:12:25Z
2023-12-30T09:22:23Z
[ "Template:Cite EB1911", "Template:Clothing", "Template:Short description", "Template:Lang-he", "Template:Portal", "Template:Cite book", "Template:Cite web", "Template:Commons category", "Template:Historical clothing", "Template:For", "Template:Quotation", "Template:Multiple image", "Template:Columns-list", "Template:Reflist", "Template:Use dmy dates", "Template:Cite journal", "Template:Theatre", "Template:Main", "Template:Cite news", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Costume
Cable car (railway)
A cable car (usually known as a cable tram outside North America) is a type of cable railway used for mass transit in which rail cars are hauled by a continuously moving cable running at a constant speed. Individual cars stop and start by releasing and gripping this cable as required. Cable cars are distinct from funiculars, where the cars are permanently attached to the cable.

The first cable-operated railway, employing a moving rope that could be picked up or released by a grip on the cars, was the Fawdon Wagonway, a colliery railway line, in 1826. The London and Blackwall Railway, which opened for passengers in east London, England, in 1840, used such a system. The rope available at the time proved too susceptible to wear, and the system was abandoned in favour of steam locomotives after eight years.

In America, the first cable car installation in operation was probably the West Side and Yonkers Patent Railway in New York City, the city's first elevated railway, which ran from 1 July 1868 to 1870. The cable technology used on this elevated railway, involving collar-equipped cables and claw-equipped cars, proved cumbersome; the line was closed and rebuilt, reopening with steam locomotives. In 1869 P. G. T. Beauregard demonstrated a cable car at New Orleans and was issued U.S. Patent 97,343.

Other cable cars to use grips were those of the Clay Street Hill Railroad, which later became part of the San Francisco cable car system. The building of this line was promoted by Andrew Smith Hallidie, with design work by William Eppelsheimer, and it was first tested in 1873. The success of these grips ensured that this line became the model for other cable car transit systems, a model often known as the Hallidie Cable Car.

In 1881 the Dunedin cable tramway system opened in Dunedin, New Zealand, becoming the first such system outside San Francisco. For Dunedin, George Smith Duncan further developed the Hallidie model, introducing the pull curve and the slot brake: the former was a way to pull cars through a curve, since Dunedin's curves were too sharp to allow coasting, while the latter forced a wedge down into the cable slot to stop the car. Both of these innovations were generally adopted by other cities, including San Francisco.

In Australia, the Melbourne cable tramway system operated from 1885 to 1940. It was one of the most extensive in the world, with 1,200 trams and trailers operating over 15 routes on 103 km (64 miles) of track. Sydney also had a couple of cable tram routes.

Cable cars rapidly spread to other cities, although the major attraction for most was the ability to displace horsecar (or mule-drawn) systems rather than the ability to climb hills. Many people at the time viewed horse-drawn transit as unnecessarily cruel, and the fact that a typical horse could work only four or five hours per day necessitated the maintenance of large stables of draft animals that had to be fed, housed, groomed, medicated and rested. Thus, for a period, economics worked in favour of cable cars even in relatively flat cities. For example, the Chicago City Railway, also designed by Eppelsheimer, opened in Chicago in 1882 and went on to become the largest and most profitable cable car system. As in many cities, the problem in flat Chicago was not one of incline but of transportation capacity, and this led to a different approach to the combination of grip car and trailer.
Rather than using a grip car and a single trailer, as many cities did, or combining the grip and trailer into a single car, like San Francisco's California Cars, Chicago used grip cars to pull trains of up to three trailers.

In 1883 the New York and Brooklyn Bridge Railway opened, with a curious feature: though it was a cable car system, it used steam locomotives to get the cars into and out of the terminals. After 1896 the system was changed so that a motor car was added to each train to maneuver at the terminals, while en route the trains were still propelled by the cable.

On 25 September 1883, a test of a cable car system was held by the Liverpool Tramways Company in Kirkdale, Liverpool. This would have been the first cable car system in Europe, but the company decided against implementing it. Instead, the distinction went to the 1884 Highgate Hill Cable Tramway, a route from Archway to Highgate in north London, which used a continuous cable and grip system on the 1 in 11 (9%) climb of Highgate Hill. The installation was not reliable and was replaced by electric traction in 1909. Other cable car systems were implemented in Europe, among them the Glasgow District Subway, the first underground cable car system, in 1896. (London's first deep-level tube railway, the City & South London Railway, had earlier also been built for cable haulage, but was converted to electric traction before opening in 1890.) A few more cable car systems were built in the United Kingdom, Portugal, and France; European cities, having many more curves in their streets, were ultimately less suitable for cable cars than American cities.

Though some new cable car systems were still being built, by 1890 the electrically powered trolley or tram, cheaper to construct and simpler to operate, had started to become the norm and eventually began to replace existing cable car systems. For a while hybrid cable/electric systems operated, for example in Chicago, where electric cars had to be pulled by grip cars through the Loop area owing to the lack of trolley wires there. Eventually, San Francisco became the only street-running manually operated system to survive; Dunedin, the second city to have such cars, was also the second-last city to operate them, closing down in 1957.

In the last decades of the 20th century, cable traction in general has seen a limited revival in the form of automatic people movers, used in resort areas, airports (for example, Toronto Airport), large hospital centers, and some urban settings. While many of these systems involve cars permanently attached to the cable, the Minimetro system from Poma/Leitner Group and the Cable Liner system from DCC Doppelmayr Cable Car both have variants that allow the cars to be automatically decoupled from the cable under computer control, and can thus be considered a modern interpretation of the cable car.

The cable itself is powered by a stationary engine or motor situated in a cable house or power house, and runs at a relatively constant speed, varying only slightly with the number of cars gripping it at any given time. The cable car begins moving when a clamping device attached to the car, called a grip, applies pressure to ("grips") the moving cable. Conversely, the car is stopped by releasing pressure on the cable (with or without completely detaching) and applying the brakes. This gripping and releasing action may be manual, as was the case in all early cable car systems, or automatic, as in some recent cable-operated people-mover systems. Gripping must be applied evenly and gradually in order to avoid bringing the car to cable speed too quickly and unacceptably jarring the passengers.

In the case of manual systems, the grip resembles a very large pair of pliers, and considerable strength and skill are required to operate the car. As many early cable car operators discovered the hard way, if the grip is not applied properly, it can damage the cable or, even worse, become entangled in it. In the latter case, the cable car may not be able to stop and can wreak havoc along its route until the cable house realizes the mishap and halts the cable.
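The "evenly and gradually" rule can be illustrated with a toy model: the clamping force sets the friction the cable can exert on the car, and that friction is what accelerates the car toward cable speed. The following Python sketch is a minimal illustration only; every number in it (car mass, friction coefficient, clamping force) is an assumption chosen for plausibility, not a measurement of any real system.

    # Toy model of a cable car grip (all constants are illustrative assumptions).
    # The clamp force sets the friction the cable can exert on the car; ramping
    # the clamp closed slowly keeps the acceleration felt by passengers gentle.

    CABLE_SPEED = 4.25     # m/s, roughly the 15.3 km/h quoted for San Francisco
    CAR_MASS = 7000.0      # kg, assumed mass of a loaded car
    MU = 0.3               # assumed friction coefficient, grip dies on cable
    MAX_CLAMP = 200_000.0  # N, assumed force when the grip is fully closed
    DT = 0.05              # simulation time step, seconds

    def simulate(ramp_time):
        """Close the grip over ramp_time seconds; return the time taken to
        reach cable speed and the peak acceleration (in g) along the way."""
        v, t, peak = 0.0, 0.0, 0.0
        while v < CABLE_SPEED - 1e-6:
            t += DT
            clamp = MAX_CLAMP * min(1.0, t / ramp_time)
            # Friction acts only while the cable slides through the grip, so
            # it can never push the car past cable speed.
            a = min(MU * clamp / CAR_MASS, (CABLE_SPEED - v) / DT)
            peak = max(peak, a)
            v += a * DT
        return t, peak / 9.81

    for ramp in (0.5, 2.0, 5.0):
        t, g = simulate(ramp)
        print(f"grip closed over {ramp:.1f} s: at cable speed in {t:.1f} s, peak {g:.2f} g")

Under these assumed numbers, stretching the grip action from half a second to five seconds cuts the peak acceleration by more than half; this is the trade-off a gripman manages by feel through the lever.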
This gripping and releasing action may be manual, as was the case in all early cable car systems, or automatic, as is the case in some recent cable-operated people-mover systems. The grip must be applied evenly and gradually to avoid bringing the car to cable speed too quickly and unacceptably jarring the passengers.

In the case of manual systems, the grip resembles a very large pair of pliers, and considerable strength and skill are required to operate the car. As many early cable car operators discovered the hard way, if the grip is not applied properly, it can damage the cable or, even worse, become entangled in the cable. In the latter case, the cable car may not be able to stop and can wreak havoc along its route until the cable house realizes the mishap and halts the cable.

One apparent advantage of the cable car is its relative energy efficiency, due to the economy of centrally located power stations and the ability of descending cars to transfer energy to ascending cars. However, this advantage is totally negated by the relatively large energy consumption required simply to move the cable over and under the numerous guide rollers and around the many sheaves. Approximately 95% of the tractive effort in the San Francisco system is expended in simply moving the four cables at 15.3 km/h (9.5 mph). Electric cars with regenerative braking offer similar advantages without the problem of moving a cable. In the case of steep grades, however, cable traction has the major advantage of not depending on adhesion between wheels and rails. Keeping the car gripped to the cable also limits its downhill speed to that of the cable.

Because of the constant and relatively low speed, a cable car's potential to cause harm in an accident can be underestimated. Even with a cable car traveling at only 14 km/h (9 mph), the mass of the cable car and the combined strength and speed of the cable can cause extensive damage in a collision.

A cable car is superficially similar to a funicular, but differs in that its cars are not permanently attached to the cable and can stop independently, whereas a funicular's cars are permanently attached to the propulsion cable, which is itself stopped and started. A cable car cannot climb as steep a grade as a funicular, but many more cars can be operated with a single cable, making the cable car more flexible and allowing a higher capacity. During the rush hour on San Francisco's Market Street Railway in 1883, a car would leave the terminal every 15 seconds.

A few funicular railways operate in street traffic, and because of this operation they are often incorrectly described as cable cars. Even more confusingly, a hybrid cable car/funicular line once existed in the form of the original Wellington Cable Car, in the New Zealand city of Wellington. This line had both a continuous-loop haulage cable that the cars gripped using a cable car gripper, and a balance cable permanently attached to both cars over an undriven pulley at the top of the line. The descending car gripped the haulage cable and was pulled downhill, in turn pulling the ascending car (which remained ungripped) uphill by the balance cable. This line was rebuilt in 1979 and is now a standard funicular, although it retains its old cable car name.
The best-known existing cable car system is the San Francisco cable car system in the city of San Francisco, California. San Francisco's cable cars constitute the oldest and largest such system in permanent operation, and the system is one of the few still functioning in the traditional manner, with manually operated cars running in street traffic. Other examples of cable-powered systems can be found on the Great Orme in North Wales and in Lisbon, Portugal; all of these, however, differ slightly from San Francisco's in that their cars are permanently attached to the cable.

Several cities operate a modern version of the cable car system. These systems are fully automated and run on their own reserved right of way. They are commonly referred to as people movers, although that term is also applied to systems with other forms of propulsion, including funicular-style cable propulsion.
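The figures above lend themselves to a rough sanity check. The short Python sketch below estimates how long a gradually applied grip takes to bring a car to the quoted San Francisco cable speed under an assumed passenger-comfort acceleration limit (an illustrative figure, not a documented one), and what the quoted 15-second Market Street headway implies for line capacity:

# Back-of-the-envelope checks for cable car operation (illustrative only).

CABLE_SPEED_KMH = 15.3   # San Francisco cable speed, from the text above
COMFORT_ACCEL = 1.0      # m/s^2, assumed passenger-comfort limit

cable_speed_ms = CABLE_SPEED_KMH / 3.6
# A gradually applied grip lets the cable slip while the car accelerates;
# at a capped acceleration, reaching cable speed takes v / a seconds.
ramp_seconds = cable_speed_ms / COMFORT_ACCEL
print(f"Reaching {CABLE_SPEED_KMH} km/h at {COMFORT_ACCEL} m/s^2 "
      f"takes about {ramp_seconds:.1f} s")

# Headway to capacity: one car every 15 s on the 1883 Market Street Railway.
HEADWAY_S = 15
cars_per_hour = 3600 // HEADWAY_S   # 240 cars per hour
print(f"A {HEADWAY_S} s headway dispatches {cars_per_hour} cars per hour")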
A cable car is a type of cable railway used for mass transit in which rail cars are hauled by a continuously moving cable running at a constant speed. Individual cars stop and start by releasing and gripping this cable as required. Cable cars are distinct from funiculars, where the cars are permanently attached to the cable.
2002-01-08T09:48:05Z
2023-09-01T06:52:07Z
[ "Template:Webarchive", "Template:Cite journal", "Template:Cite web", "Template:NSRW poster", "Template:Short description", "Template:US patent", "Template:Div col", "Template:Cite book", "Template:Commons", "Template:Commons category", "Template:Cvt", "Template:Div col end", "Template:Reflist", "Template:About", "Template:Cite news", "Template:ISBN", "Template:Authority control", "Template:Snd", "Template:Public transport", "Template:SkiLift" ]
https://en.wikipedia.org/wiki/Cable_car_(railway)
7,676
Creaky voice
In linguistics, creaky voice (sometimes called laryngealisation, pulse phonation, vocal fry, or glottal fry) refers to a low, scratchy sound that occupies the vocal range below the common vocal register. It is a special kind of phonation in which the arytenoid cartilages in the larynx are drawn together; as a result, the vocal folds are compressed rather tightly, becoming relatively slack and compact. They normally vibrate irregularly at 20–50 pulses per second, about two octaves below the frequency of modal voicing, and the airflow through the glottis is very slow. Although creaky voice may occur with very low pitch, as at the end of a long intonation unit, it can also occur with a higher pitch. All of these factors contribute to making a speaker's voice sound creaky or raspy.

In the Received Pronunciation of English, creaky voice has been described as a possible realisation of glottal reinforcement. For example, an alternative phonetic transcription of attempt [əˈtʰemʔt] could be [əˈtʰem͡m̰t].

In some languages, such as Jalapa Mazatec, creaky voice has phonemic status; that is, the presence or absence of creaky voice can change the meaning of a word. In the International Phonetic Alphabet, creaky voice of a phone is represented by a diacritical tilde, U+0330 ◌̰ COMBINING TILDE BELOW, for example [d̰]. The Danish prosodic feature stød is an example of a form of laryngealisation that has a phonemic function. A slight degree of laryngealisation, occurring for example in some Korean consonants, is called "stiff voice".

Use of creaky voice across general speech and in singing is termed "vocal fry". Some evidence exists of vocal fry becoming more common in the speech of young female speakers of American English in the early 21st century, with researcher Ikuko Patricia Yuasa finding that college-age Americans perceived female creaky voice as "hesitant, nonaggressive, and informal but also educated, urban-oriented, and upwardly mobile."

It has subsequently been theorized that vocal fry may be a way for women to sound more "authoritative" and credible by using it to emulate the deeper male register. Yuasa further theorizes that because California is at the center of American popular culture and much of the entertainment industry is rooted there, young Americans may unconsciously be using creaky voice more because of the media they consume.
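Both the Unicode detail and the octave arithmetic above can be checked mechanically; a short Python sketch (the modal fundamental frequencies are illustrative assumptions, not measured values):

import unicodedata

# IPA marks creaky voice with a combining tilde below the base symbol.
creaky_d = "d" + "\u0330"          # composes to [d̰]
print(unicodedata.name("\u0330"))  # -> COMBINING TILDE BELOW
print(creaky_d, len(creaky_d))     # one glyph, two code points

# Two octaves below modal voicing: each octave halves the frequency,
# so divide by 2**2 = 4. Assuming modal f0 of 100-200 Hz (illustrative),
# the result is consistent with the 20-50 pulses per second cited above.
for modal_f0 in (100, 200):
    print(f"{modal_f0} Hz modal -> {modal_f0 / 2**2:.0f} Hz creaky")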
In linguistics, creaky voice refers to a low, scratchy sound that occupies the vocal range below the common vocal register. It is a special kind of phonation in which the arytenoid cartilages in the larynx are drawn together; as a result, the vocal folds are compressed rather tightly, becoming relatively slack and compact. They normally vibrate irregularly at 20–50 pulses per second, about two octaves below the frequency of modal voicing, and the airflow through the glottis is very slow. Although creaky voice may occur with very low pitch, as at the end of a long intonation unit, it can also occur with a higher pitch. All of these factors contribute to making a speaker's voice sound creaky or raspy.
2002-02-25T15:51:15Z
2023-11-28T17:33:21Z
[ "Template:IPA", "Template:Reflist", "Template:Cite journal", "Template:Short description", "Template:Infobox IPA", "Template:Cite book", "Template:SOWL", "Template:Phonation", "Template:Unichar", "Template:Main" ]
https://en.wikipedia.org/wiki/Creaky_voice
7,677
Computer monitor
A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls.

The display in modern monitors is typically an LCD with LED backlight, which by the 2010s had replaced CCFL-backlit LCDs. Before the mid-2000s, most monitors used a cathode-ray tube (CRT) as the image output technology. A monitor is typically connected to its host computer via DisplayPort, HDMI, USB-C, DVI, or VGA. Less commonly, monitors use other proprietary connectors and signals to connect to a computer.

Originally, computer monitors were used for data processing while television sets were used for video. From the 1980s onward, computers (and their monitors) have been used for both data processing and video, while televisions have implemented some computer functionality. In the 2000s, the typical display aspect ratio of both televisions and computer monitors changed from 4:3 to 16:9.

Modern computer monitors are often functionally interchangeable with television sets and vice versa. As most computer monitors do not include integrated speakers, TV tuners, or remote controls, external components such as a DTA box may be needed to use a computer monitor as a TV set.

Early electronic computer front panels were fitted with an array of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation.

One of the first uses of a standalone computer monitor with a personal computer was with the Apple I, which connected directly to a consumer television as a monitor instead of using a glass terminal as its output.

Computer monitors were formerly known as visual display units (VDU), particularly in British English. This term mostly fell out of use by the 1990s.

Multiple technologies have been used for computer monitors. Until the 21st century most monitors used cathode-ray tubes, which have since largely been superseded by LCDs.

The first computer monitors used cathode-ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the workstation in a single large chassis, typically limiting such terminals to emulation of a paper teletypewriter, thus the early epithet of 'glass TTY'. The display was monochromatic and far less sharp and detailed than on a modern monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications, but they were far too costly for general use; wider commercial use became possible after the release of the slow but affordable Tektronix 4010 terminal in 1972.
Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a possible feature for a few MOS 6500 series-based machines (such as the Apple II computer and the Atari 2600 console, both introduced in 1977), and color output was a specialty of the more graphically sophisticated Atari 800 computer, introduced in 1979. Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of 320 × 200 pixels, or two colors at 640 × 200 pixels. In 1984 IBM introduced the Enhanced Graphics Adapter, which was capable of producing 16 colors and had a resolution of 640 × 350.

By the end of the 1980s, color progressive-scan CRT monitors were widely available and increasingly affordable, while the sharpest prosumer monitors could clearly display high-definition video. Against this backdrop, efforts at HDTV standardization from the 1970s to the 1980s failed continually, leaving consumer SDTVs to stagnate increasingly far behind the capabilities of computer CRT monitors well into the 2000s. During the following decade, maximum display resolutions gradually increased and prices continued to fall as CRT technology remained dominant in the PC monitor market into the new millennium, partly because it remained cheaper to produce. CRTs still offer color, grayscale, motion, and latency advantages over today's LCDs, but improvements to the latter have made these advantages much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry.

Multiple technologies have been used to implement liquid-crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology in computer monitors was in laptops, where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability improved, the monochrome and passive color technologies were dropped from most product lines.

TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors.

The first standalone LCDs appeared in the mid-1990s, selling for high prices. As prices declined they became more popular, and by 1997 they were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo FlexScan L66 in the mid-1990s, and the SGI 1600SW, Apple Studio Display and ViewSonic VP140 in 1998. In 2003, LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The physical advantages of LCDs over CRT monitors are that LCDs are lighter, smaller, and consume less power. In terms of performance, LCDs produce little or no flicker (reducing eyestrain), a sharper image at native resolution, and better checkerboard contrast.
On the other hand, CRT monitors have superior blacks, viewing angles, and response time, and can use arbitrary lower resolutions without aliasing; their flicker can be reduced with higher refresh rates, though this flicker can also be exploited to reduce motion blur compared with less flickery displays such as most LCDs. Many specialized fields such as vision science remain dependent on CRTs; the best LCD monitors have achieved only moderate temporal accuracy and can therefore be used only where their poor spatial accuracy is unimportant.

High dynamic range (HDR) has been implemented in high-end LCD monitors to improve grayscale accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part because television series, motion pictures and video games transitioned to widescreen, which makes squarer monitors unsuited to displaying them correctly.

Organic light-emitting diode (OLED) monitors provide most of the benefits of both LCD and CRT monitors with few of their drawbacks, though much like plasma panels or very early CRTs they suffer from burn-in, and they remain very expensive.

The performance of a monitor is measured by parameters including display size, aspect ratio, resolution, and color gamut, which the following paragraphs discuss.

On two-dimensional display devices such as computer monitors, the display size or viewable image size is the actual amount of screen space available to display a picture, video or working space, without obstruction from the bezel or other aspects of the unit's design. The main measurements for display devices are width, height, total area and the diagonal.

The size of a display is usually given by manufacturers diagonally, i.e. as the distance between two opposite screen corners. This method of measurement is inherited from the first generation of CRT televisions, when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode-ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and it was not confusing when the aspect ratio was universally 4:3.

With the introduction of flat panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger viewable area than an eighteen-inch cathode-ray tube.

Estimation of monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that, for example, a 16:9 21-inch (53 cm) widescreen display has less area than a 21-inch (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 in × 12.6 in (43 cm × 32 cm) and an area of 211 sq in (1,360 cm²), while the widescreen is 18.3 in × 10.3 in (46 cm × 26 cm), 188 sq in (1,210 cm²).

Until about 2003, most computer monitors had a 4:3 aspect ratio, and some had 5:4. Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses such as displaying two standard letter pages side by side in a word processor, or showing large CAD drawings and application menus at the same time, as well as a wider field of view in video games and movie viewing.
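The 21-inch comparison above can be reproduced from the diagonal and aspect ratio alone; a minimal Python sketch (the function name is illustrative):

import math

def screen_dims(diagonal, ratio_w, ratio_h):
    """Return (width, height, area) for a given diagonal and aspect ratio."""
    unit = diagonal / math.hypot(ratio_w, ratio_h)   # inches per ratio unit
    width, height = ratio_w * unit, ratio_h * unit
    return width, height, width * height

for ratio in ((4, 3), (16, 9)):
    w, h, area = screen_dims(21, *ratio)
    print(f"21-inch {ratio[0]}:{ratio[1]} -> "
          f"{w:.1f} in x {h:.1f} in, {area:.0f} sq in")
# Prints roughly 16.8 x 12.6 in (212 sq in) for 4:3 and
# 18.3 x 10.3 in (188 sq in) for 16:9, matching the figures above.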
In 2008, 16:10 became the most commonly sold aspect ratio for LCD monitors, and the same year 16:10 was the mainstream standard for laptops and notebook computers.

In 2010, the computer industry started to move from 16:10 to 16:9 because 16:9 had been chosen as the standard high-definition television display size and because 16:9 panels were cheaper to manufacture.

In 2011, non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung, this was because the "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand."

The resolution of computer monitors has increased over time, from 280 × 192 in the late 1970s to 1024 × 768 in the late 1990s. Since 2009, the most commonly sold resolution for computer monitors has been 1920 × 1080, shared with the 1080p HDTV standard. Before 2013, mass-market LCD monitors were limited to 2560 × 1600 at 30 in (76 cm), excluding niche professional monitors. By 2015 most major display manufacturers had released 3840 × 2160 (4K UHD) displays, and the first 7680 × 4320 (8K) monitors had begun shipping.

Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded with 8 bits per primary color. The RGB value [255, 0, 0] represents red, but slightly different colors in different color spaces such as Adobe RGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible if the monitor is calibrated. A picture that uses colors outside the sRGB color space will display on an sRGB monitor with limitations. Even today, many monitors that can display the sRGB color space are neither factory- nor user-calibrated to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print.

Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity, which also extends the monitor's service life. Some monitors will also switch themselves off after a period on standby.

Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear.

Most modern monitors have two indicator light colors: if a video-input signal is detected, the indicator light is green; when the monitor is in power-saving mode, the screen is black and the indicator light is orange. Some monitors use different indicator light colors, and some have a blinking indicator light in power-saving mode.

Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for a separate hub, camera, microphone, or set of speakers. These monitors have advanced microprocessors which contain codec information, Windows interface drivers and other small software which support the proper functioning of these features.
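As the color-gamut paragraph above notes, an 8-bit value such as [255, 0, 0] means slightly different colors in different color spaces; the encoding itself only fixes a nonlinearly coded intensity per channel. A minimal Python sketch of the published sRGB decoding step, which converts an 8-bit component to linear light:

def srgb_to_linear(c8):
    """Decode an 8-bit sRGB component (0-255) to linear light (0.0-1.0)."""
    c = c8 / 255.0
    # Piecewise sRGB electro-optical transfer function.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Mid-gray (128) decodes to about 0.216 of linear light, far less than
# half, which is one reason calibration and color management matter.
for c8 in (0, 64, 128, 255):
    print(f"sRGB {c8:3d} -> linear {srgb_to_linear(c8):.3f}")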
Ultrawide monitors feature an aspect ratio greater than 2:1 (for instance, 21:9 or 32:9, as opposed to the more common 16:9, which resolves to about 1.78:1). Monitors with an aspect ratio greater than 3:1 are marketed as super ultrawide monitors. These are typically massive curved screens intended to replace a multi-monitor deployment.

Touchscreen monitors use touch as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen needs frequent cleaning due to image degradation from fingerprints.

Some displays, especially newer flat panel monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness, but reflections from lights and windows are more visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only partly mitigates the problem.

Curved-screen monitors, most often using nominally flat-panel display technology such as LCD or OLED, impart a concave rather than convex curve, reducing geometric distortion, especially in extremely large and wide seamless desktop monitors intended for close viewing range.

Some newer monitors can display a different image for each eye, often with the help of special glasses and polarizers, giving the perception of depth. An autostereoscopic screen can generate 3D images without headgear.

Some monitors offer features for medical use or for outdoor placement.

Narrow-viewing-angle screens are used in some security-conscious applications.

Other integrated options include screen calibration tools, screen hoods, signal transmitters, and protective screens.

A tablet monitor is a combination of a monitor with a graphics tablet. Such devices are typically unresponsive to touch without pressure from one or more special tools. Newer models, however, can detect touch from any pressure and often can detect tool tilt and rotation as well. Touch and tablet sensors are often used on sample-and-hold displays such as LCDs as a substitute for the light pen, which can only work on CRTs.

Some monitors offer the option of use as a reference monitor; these calibration features can provide advanced color management control for producing a near-perfect image. An option for professional LCD monitors, inherent to OLED and CRT displays; a professional feature with a mainstream tendency. A near-mainstream professional feature is an advanced hardware driver for backlight modules with local zones of uniformity correction.

Computer monitors are provided with a variety of methods for mounting them depending on the application and environment.

Raw monitors are open-frame LCD panels used to install a monitor in an uncommon place, for example in a car door or trunk. They are usually paired with a power adapter, giving a versatile monitor for home or commercial use.

A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a VESA mount. A VESA-standard mount allows the monitor to be used with more after-market stands if the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation.
The Flat Display Mounting Interface (FDMI), also known as the VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat-panel displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and TVs. For computer monitors, the VESA mount typically consists of four threaded holes on the rear of the display that mate with an adapter bracket.

Rack-mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack.

A fixed rack-mount monitor is mounted directly to the rack, with the flat panel or CRT visible at all times. The height of the unit is measured in rack units (RU); 8U or 9U units are most common, fitting 17-inch or 19-inch screens. The front sides of the unit are provided with flanges to mount to the rack, with appropriately spaced holes or slots for the rack-mounting screws. A 19-inch diagonal screen is the largest size that will fit within the rails of a 19-inch rack. Larger flat panels may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller screens side by side into one rack mount.

A stowable rack-mount monitor is 1U, 2U or 3U high and is mounted on rack slides, allowing the display to be folded down and the unit slid into the rack for storage as a drawer. The flat display is visible only when pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard, creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD, but there are systems providing two or three displays in a single rack-mount system.

A panel-mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. It may also be mounted to the rear of the panel. A flange is provided around the screen's sides, top and bottom to allow mounting, in contrast to a rack-mount display, where the flanges are only on the sides. The flanges are provided with holes for through-bolts, or may have studs welded to the rear surface, to secure the unit in the hole in the panel. Often a gasket is provided to give a water-tight seal to the panel, and the front of the screen is sealed to the back of the front panel to prevent water and dirt contamination.

An open-frame monitor provides the display and just enough supporting structure to hold associated electronics and minimally support the display. Provision is made for attaching the unit to some external structure for support and protection. Open-frame monitors are intended to be built into some other piece of equipment that provides its own case; an arcade video game cabinet is a good example, with the display mounted inside the cabinet. There is usually an open-frame display inside every end-use display, the end-use display simply providing an attractive protective enclosure. Some rack-mount monitor manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame display for inclusion into their product.
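The rack-unit sizing above reduces to simple arithmetic: one rack unit is 1.75 inches (44.45 mm) under the EIA-310 rack standard. The Python sketch below estimates the smallest enclosure height for a 4:3 screen; the bezel allowance is an assumed illustrative figure:

import math

RU_INCHES = 1.75        # one rack unit, per the EIA-310 rack standard
BEZEL_ALLOWANCE = 2.0   # inches of housing above and below the screen (assumed)

def min_rack_units(diagonal, ratio_w=4, ratio_h=3):
    """Smallest whole number of rack units tall enough for a 4:3 screen."""
    height = diagonal * ratio_h / math.hypot(ratio_w, ratio_h)
    return math.ceil((height + BEZEL_ALLOWANCE) / RU_INCHES)

for diag in (17, 19):
    print(f"A {diag}-inch 4:3 screen needs at least {min_rack_units(diag)}U")
# With this allowance: 17-inch -> 7U, 19-inch -> 8U; the common 8U and 9U
# enclosures noted above leave extra margin for controls and mounting.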
According to an NSA document leaked to Der Spiegel, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable in order to allow the NSA to remotely see what is being displayed on the targeted computer monitor.

Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including a proof of concept. Phreaking more generally is the process of exploiting telephone networks.
[ { "paragraph_id": 0, "text": "A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls.", "title": "" }, { "paragraph_id": 1, "text": "The display in modern monitors is typically an LCD with LED backlight, having by the 2010s replaced CCFL backlit LCDs. Before the mid-2000s, most monitors used a cathode-ray tube (CRT) as the image output technology. A monitor is typically connected to its host computer via DisplayPort, HDMI, USB-C, DVI, or VGA. Less commonly, monitors sometimes use other proprietary connectors and signals to connect to a computer.", "title": "" }, { "paragraph_id": 2, "text": "Originally, computer monitors were used for data processing while television sets were used for video. From the 1980s onward, computers (and their monitors) have been used for both data processing and video, while televisions have implemented some computer functionality. In the 2000s, the typical display aspect ratio of both televisions and computer monitors changed from 4:3 to 16:9.", "title": "" }, { "paragraph_id": 3, "text": "Modern computer monitors are often functionally interchangeable with television sets and vice versa. As most computer monitors do not include integrated speakers, TV tuners, or remote controls, external components such as a DTA box may be needed to use a computer monitor as a TV set.", "title": "" }, { "paragraph_id": 4, "text": "Early electronic computer front panels were fitted with an array of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation.", "title": "History" }, { "paragraph_id": 5, "text": "One of the first uses of a standalone computer monitor with a personal computer was with the Apple 1, which connected directly to a consumer television as a monitor instead of using a glass terminal as its output.", "title": "History" }, { "paragraph_id": 6, "text": "Computer monitors were formerly known as visual display units (VDU), particularly in British English. This term mostly fell out of use by the 1990s.", "title": "History" }, { "paragraph_id": 7, "text": "Multiple technologies have been used for computer monitors. Until the 21st century most used cathode-ray tubes but they have largely been superseded by LCD monitors.", "title": "Technologies" }, { "paragraph_id": 8, "text": "The first computer monitors used cathode-ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the workstation in a single large chassis, typically limiting them to emulation of a paper teletypewriter, thus the early epithet of 'glass TTY'. 
The display was monochromatic and far less sharp and detailed than on a modern monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use; wider commercial use became possible after the release of a slow, but affordable Tektronix 4010 terminal in 1972.", "title": "Technologies" }, { "paragraph_id": 9, "text": "Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a possible feature for a few MOS 6500 series-based machines (such as introduced in 1977 Apple II computer or Atari 2600 console), and the color output was a specialty of the more graphically sophisticated Atari 800 computer, introduced in 1979. Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of 320 × 200 pixels, or it could produce 640 × 200 pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter which was capable of producing 16 colors and had a resolution of 640 × 350.", "title": "Technologies" }, { "paragraph_id": 10, "text": "By the end of the 1980s color progressive scan CRT monitors were widely available and increasingly affordable, while the sharpest prosumer monitors could clearly display high-definition video, against the backdrop of efforts at HDTV standardization from the 1970s to the 1980s failing continuously, leaving consumer SDTVs to stagnate increasingly far behind the capabilities of computer CRT monitors well into the 2000s. During the following decade, maximum display resolutions gradually increased and prices continued to fall as CRT technology remained dominant in the PC monitor market into the new millennium, partly because it remained cheaper to produce. CRTs still offer color, grayscale, motion, and latency advantages over today's LCDs, but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry.", "title": "Technologies" }, { "paragraph_id": 11, "text": "There are multiple technologies that have been used to implement liquid-crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines.", "title": "Technologies" }, { "paragraph_id": 12, "text": "TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors.", "title": "Technologies" }, { "paragraph_id": 13, "text": "The first standalone LCDs appeared in the mid-1990s selling for high prices. 
As prices declined they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo FlexScan L66 in the mid-1990s, the SGI 1600SW, Apple Studio Display and the ViewSonic VP140 in 1998. In 2003, LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The physical advantages of LCD over CRT monitors are that LCDs are lighter, smaller, and consume less power. In terms of performance, LCDs produce less or no flicker, reducing eyestrain, sharper image at native resolution, and better checkerboard contrast. On the other hand, CRT monitors have superior blacks, viewing angles, and response time, can use arbitrary lower resolutions without aliasing, and flicker can be reduced with higher refresh rates, though this flicker can also be used to reduce motion blur compared to less flickery displays such as most LCDs. Many specialized fields such as vision science remain dependent on CRTs, the best LCD monitors having achieved moderate temporal accuracy, and so can be used only if their poor spatial accuracy is unimportant.", "title": "Technologies" }, { "paragraph_id": 14, "text": "High dynamic range (HDR) has been implemented into high-end LCD monitors to improve grayscale accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part due to television series, motion pictures and video games transitioning to widescreen, which makes squarer monitors unsuited to display them correctly.", "title": "Technologies" }, { "paragraph_id": 15, "text": "Organic light-emitting diode (OLED) monitors provide most of the benefits of both LCD and CRT monitors with few of their drawbacks, though much like plasma panels or very early CRTs they suffer from burn-in, and remain very expensive.", "title": "Technologies" }, { "paragraph_id": 16, "text": "The performance of a monitor is measured by the following parameters:", "title": "Measurements of performance" }, { "paragraph_id": 17, "text": "On two-dimensional display devices such as computer monitors the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the bezel or other aspects of the unit's design. The main measurements for display devices are width, height, total area and the diagonal.", "title": "Measurements of performance" }, { "paragraph_id": 18, "text": "The size of a display is usually given by manufacturers diagonally, i.e. as the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT television when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode-ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size and was not confusing when the aspect ratio was universally 4:3.", "title": "Measurements of performance" }, { "paragraph_id": 19, "text": "With the introduction of flat panel technology, the diagonal measurement became the actual diagonal of the visible display. 
This meant that an eighteen-inch LCD had a larger viewable area than an eighteen-inch cathode-ray tube.", "title": "Measurements of performance" }, { "paragraph_id": 20, "text": "Estimation of monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that for example a 16:9 21-inch (53 cm) widescreen display has less area, than a 21-inch (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 in × 12.6 in (43 cm × 32 cm) and an area 211 sq in (1,360 cm), while the widescreen is 18.3 in × 10.3 in (46 cm × 26 cm), 188 sq in (1,210 cm).", "title": "Measurements of performance" }, { "paragraph_id": 21, "text": "Until about 2003, most computer monitors had a 4:3 aspect ratio and some had 5:4. Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses (i.e. Field of view in video games and movie viewing) such as the word processor display of two standard letter pages side by side, as well as CAD displays of large-size drawings and application menus at the same time. In 2008 16:10 became the most common sold aspect ratio for LCD monitors and the same year 16:10 was the mainstream standard for laptops and notebook computers.", "title": "Measurements of performance" }, { "paragraph_id": 22, "text": "In 2010, the computer industry started to move over from 16:10 to 16:9 because 16:9 was chosen to be the standard high-definition television display size, and because they were cheaper to manufacture.", "title": "Measurements of performance" }, { "paragraph_id": 23, "text": "In 2011, non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung, this was because the \"Demand for the old 'Square monitors' has decreased rapidly over the last couple of years,\" and \"I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand.\"", "title": "Measurements of performance" }, { "paragraph_id": 24, "text": "The resolution for computer monitors has increased over time. From 280 × 192 during the late 1970s, to 1024 × 768 during the late 1990s. Since 2009, the most commonly sold resolution for computer monitors is 1920 × 1080, shared with the 1080p of HDTV. Before 2013 mass market LCD monitors were limited to 2560 × 1600 at 30 in (76 cm), excluding niche professional monitors. By 2015 most major display manufacturers had released 3840 × 2160 (4K UHD) displays, and the first 7680 × 4320 (8K) monitors had begun shipping.", "title": "Measurements of performance" }, { "paragraph_id": 25, "text": "Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded by 8 bits per primary color. The RGB value [255, 0, 0] represents red, but slightly different colors in different color spaces such as Adobe RGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible, if the monitor is calibrated. A picture that uses colors that are outside the sRGB color space will display on an sRGB color space monitor with limitations. 
Even today, many monitors that can display the sRGB color space are neither factory- nor user-calibrated to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print.", "title": "Measurements of performance" }, { "paragraph_id": 26, "text": "Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity, which also extends the monitor's service life. Some monitors will also switch themselves off after a time period on standby.", "title": "Additional features" }, { "paragraph_id": 27, "text": "Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear.", "title": "Additional features" }, { "paragraph_id": 28, "text": "Most modern monitors have two indicator light colors: when a video input signal is detected, the light is green, and when the monitor is in power-saving mode, the screen is black and the light is orange. Some monitors use other indicator colors, and some blink the indicator light when in power-saving mode.", "title": "Additional features" }, { "paragraph_id": 29, "text": "Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers. These monitors have microprocessors which contain codec information, Windows interface drivers, and other small pieces of software that help these features work properly.", "title": "Additional features" }, { "paragraph_id": 30, "text": "Ultrawide monitors feature an aspect ratio greater than 2:1 (for instance, 21:9 or 32:9, as opposed to the more common 16:9, which resolves to approximately 1.78:1). Monitors with an aspect ratio greater than 3:1 are marketed as super ultrawide monitors. These are typically massive curved screens intended to replace a multi-monitor deployment.", "title": "Additional features" }, { "paragraph_id": 31, "text": "These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints.", "title": "Additional features" }, { "paragraph_id": 32, "text": "Some displays, especially newer flat panel monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness, but reflections from lights and windows are more visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only partly mitigates the problem.", "title": "Additional features" }, { "paragraph_id": 33, "text": "Curved monitors most often use nominally flat-panel display technology such as LCD or OLED, with a concave rather than convex curve imparted, reducing geometric distortion, especially in extremely large and wide seamless desktop monitors intended for close viewing range.", "title": "Additional features" }, { "paragraph_id": 34, "text": "Newer monitors are able to display a different image for each eye, often with the help of special glasses and polarizers, giving the perception of depth.
An autostereoscopic screen can generate 3D images without headgear.", "title": "Additional features" }, { "paragraph_id": 35, "text": "Features for medical use or for outdoor placement.", "title": "Additional features" }, { "paragraph_id": 36, "text": "Narrow viewing angle screens are used in some security-conscious applications.", "title": "Additional features" }, { "paragraph_id": 37, "text": "Integrated screen calibration tools, screen hoods, signal transmitters; protective screens.", "title": "Additional features" }, { "paragraph_id": 38, "text": "A combination of a monitor with a graphics tablet. Such devices typically respond only to pressure from one or more special styluses. Newer models, however, are able to detect touch from any pressure and often have the ability to detect tool tilt and rotation as well.", "title": "Additional features" }, { "paragraph_id": 39, "text": "Touch and tablet sensors are often used on sample-and-hold displays such as LCDs to substitute for the light pen, which can only work on CRTs.", "title": "Additional features" }, { "paragraph_id": 40, "text": "The option of using the display as a reference monitor; these calibration features can give advanced color-management control for achieving a near-perfect image.", "title": "Additional features" }, { "paragraph_id": 41, "text": "An option on professional LCD monitors, and inherent to OLED and CRT; a professional feature that is becoming mainstream.", "title": "Additional features" }, { "paragraph_id": 42, "text": "A near-mainstream professional feature: an advanced hardware driver for backlight modules with local zones of uniformity correction.", "title": "Additional features" }, { "paragraph_id": 43, "text": "Computer monitors are provided with a variety of methods for mounting them depending on the application and environment.", "title": "Mounting" }, { "paragraph_id": 44, "text": "Raw monitors are bare-framed LCD panels, used to install a monitor in an uncommon place, e.g. on a car door or in the trunk. They are usually paired with a power adapter to make a versatile monitor for home or commercial use.", "title": "Mounting" }, { "paragraph_id": 45, "text": "A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a VESA mount. A VESA standard mount allows the monitor to be used with more after-market stands if the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation.", "title": "Mounting" }, { "paragraph_id": 46, "text": "The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel displays to stands or wall mounts.
It is implemented on most modern flat-panel monitors and TVs.", "title": "Mounting" }, { "paragraph_id": 47, "text": "For computer monitors, the VESA Mount typically consists of four threaded holes on the rear of the display that will mate with an adapter bracket.", "title": "Mounting" }, { "paragraph_id": 48, "text": "Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack:", "title": "Mounting" }, { "paragraph_id": 49, "text": "A fixed rack mount monitor is mounted directly to the rack with the flat-panel or CRT visible at all times. The height of the unit is measured in rack units (RU), and 8U or 9U are most common to fit 17-inch or 19-inch screens. The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal screen is the largest size that will fit within the rails of a 19-inch rack. Larger flat-panels may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller screens side by side into one rack mount.", "title": "Mounting" }, { "paragraph_id": 50, "text": "A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides allowing the display to be folded down and the unit slid into the rack for storage as a drawer. The flat display is visible only when pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD, but there are systems providing two or three displays in a single rack mount system.", "title": "Mounting" }, { "paragraph_id": 51, "text": "A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the screen, at the sides, top and bottom, to allow mounting. This contrasts with a rack mount display where the flanges are only on the sides. The flanges will be provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. Often a gasket is included to ensure a water-tight seal to the panel, and the front of the screen will be sealed to the back of the front panel to prevent water and dirt contamination.", "title": "Mounting" }, { "paragraph_id": 52, "text": "An open frame monitor provides the display and enough supporting structure to hold associated electronics and to minimally support the display. Provision will be made for attaching the unit to some external structure for support and protection. Open frame monitors are intended to be built into some other piece of equipment providing its own case. An arcade video game would be a good example, with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays with the end-use display simply providing an attractive protective enclosure.
Some rack mount monitor manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame display for inclusion in their product.", "title": "Mounting" }, { "paragraph_id": 53, "text": "According to an NSA document leaked to Der Spiegel, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable that allows the agency to remotely see what is being displayed on the targeted computer monitor.", "title": "Security vulnerabilities" }, { "paragraph_id": 54, "text": "Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. Phreaking more generally is the process of exploiting telephone networks.", "title": "Security vulnerabilities" } ]
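Two short sketches make the arithmetic in the "Measurements of performance" section above concrete. First, the diagonal-versus-area relationship follows directly from the Pythagorean theorem; this minimal Python sketch (the function and variable names are our own, not from any standard) reproduces the quoted 21-inch figures:

    import math

    def screen_dimensions(diagonal, aspect_w, aspect_h):
        """Return (width, height, area) for a display with the given
        diagonal and aspect ratio, in the diagonal's units."""
        # Width and height close a right triangle over the diagonal,
        # so scale the aspect-ratio pair until its hypotenuse matches.
        scale = diagonal / math.hypot(aspect_w, aspect_h)
        width, height = aspect_w * scale, aspect_h * scale
        return width, height, width * height

    w, h, area = screen_dimensions(21, 4, 3)
    print(f"4:3  -> {w:.1f} x {h:.1f} in, {area:.1f} sq in")   # 16.8 x 12.6, 211.7
    w, h, area = screen_dimensions(21, 16, 9)
    print(f"16:9 -> {w:.1f} x {h:.1f} in, {area:.1f} sq in")   # 18.3 x 10.3, 188.4

Second, whether a given chromaticity lies inside a monitor's "color triangle" is a plain point-in-triangle test in CIE xy space. The sRGB primary chromaticities below are the standard published values; the helper names and test points are illustrative choices, not part of any color-management API:

    # sRGB primary chromaticities (CIE 1931 xy): standard published values.
    SRGB_TRIANGLE = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B

    def _cross(o, a, b):
        # z-component of the 2D cross product of vectors o->a and o->b
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def in_gamut(p, tri=SRGB_TRIANGLE):
        """True if chromaticity p = (x, y) lies inside triangle tri."""
        r, g, b = tri
        signs = (_cross(r, g, p), _cross(g, b, p), _cross(b, r, p))
        # Inside means p is on the same side of all three edges.
        return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

    print(in_gamut((0.31, 0.33)))  # near the D65 white point -> True
    print(in_gamut((0.70, 0.29)))  # highly saturated red -> False, outside sRGB

A wide-gamut monitor simply has a larger triangle; color management exists to keep track of which triangle the encoded pixel values assume.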
A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls. The display in modern monitors is typically an LCD with LED backlight, which by the 2010s had replaced CCFL-backlit LCDs. Before the mid-2000s, most monitors used a cathode-ray tube (CRT) as the image output technology. A monitor is typically connected to its host computer via DisplayPort, HDMI, USB-C, DVI, or VGA. Less commonly, monitors use other proprietary connectors and signals to connect to a computer. Originally, computer monitors were used for data processing while television sets were used for video. From the 1980s onward, computers have been used for both data processing and video, while televisions have implemented some computer functionality. In the 2000s, the typical display aspect ratio of both televisions and computer monitors changed from 4:3 to 16:9. Modern computer monitors are often functionally interchangeable with television sets and vice versa. As most computer monitors do not include integrated speakers, TV tuners, or remote controls, external components such as a DTA box may be needed to use a computer monitor as a TV set.
2001-03-16T00:24:25Z
2023-12-28T20:34:35Z
[ "Template:Cn", "Template:ISBN", "Template:Commons category", "Template:Distinguish", "Template:Further", "Template:Sup", "Template:Ratio", "Template:Overline", "Template:See also", "Template:Reflist", "Template:Basic computer components", "Template:Short description", "Template:Main", "Template:Nbsp", "Template:Multiple image", "Template:Webarchive", "Template:Resx", "Template:Anchor", "Template:Refimprove section", "Template:Cite web", "Template:Cite book", "Template:Cite journal", "Template:Convert", "Template:Ndash", "Template:Lang" ]
https://en.wikipedia.org/wiki/Computer_monitor
7,681
ClearType
ClearType is Microsoft's implementation of subpixel rendering technology in rendering text in a font system. ClearType attempts to improve the appearance of text on certain types of computer display screens by sacrificing color fidelity for additional intensity variation. This trade-off is asserted to work well on LCD flat panel monitors. ClearType was first announced at the November 1998 COMDEX exhibition. The technology was first introduced in software in January 2000 as an always-on feature of Microsoft Reader, which was released to the public in August 2000. ClearType was significantly changed with the introduction of DirectWrite in Windows 7. With the increasing availability of HiDPI displays after 2012, subpixel rendering has become less necessary. Computer displays where the positions of individual pixels are permanently fixed – such as most modern flat panel displays – can show saw-tooth edges when displaying small, high-contrast graphic elements, such as text. ClearType uses spatial anti-aliasing at the subpixel level to reduce visible artifacts on such displays when text is rendered, making the text appear "smoother" and less jagged. ClearType also uses very heavy font hinting to force the font to fit into the pixel grid. This increases edge contrast and readability of small fonts at the expense of font rendering fidelity and has been criticized by graphic designers for making different fonts look similar. Like most other types of subpixel rendering, ClearType involves a compromise, sacrificing one aspect of image quality (color or chrominance detail) for another (light and dark or luminance detail). The compromise can improve text appearance when luminance detail is more important than chrominance. ClearType is applied only when user and system applications render text. ClearType does not alter other graphic display elements (including text already in bitmaps). For example, ClearType enhancement renders text on the screen in Microsoft Word, but text placed in a bitmapped image in a program such as Adobe Photoshop is not affected. In theory, the method (called "RGB Decimation" internally) can enhance the anti-aliasing of any digital image. ClearType was invented in the Microsoft e-Books team by Bert Keely and Greg Hitchcock. It was then analyzed by researchers in the company, and signal processing expert John Platt designed an improved version of the algorithm. Dick Brass, a Vice President at Microsoft from 1997 to 2004, complained that the company was slow in moving ClearType to market in the portable computing field. Normally, the software in a computer treats the computer’s display screen as a rectangular array of square, indivisible pixels, each of which has an intensity and color that are determined by the blending of three primary colors: red, green, and blue. However, actual display hardware usually implements each pixel as a group of three adjacent, independent subpixels, each of which displays a different primary color. Thus, on a real computer display, each pixel is actually composed of separate red, green, and blue subpixels. For example, if a flat-panel display is examined under a magnifying glass, the pixels may appear as follows: In the illustration above, there are nine pixels but 27 subpixels. If the computer controlling the display knows the exact position and color of all the subpixels on the screen, it can take advantage of this to improve the apparent resolution in certain situations.
If each pixel on the display actually contains three rectangular subpixels of red, green, and blue, in that fixed order, then things on the screen that are smaller than one full pixel in size can be rendered by lighting only one or two of the subpixels. For example, if a diagonal line with a width smaller than a full pixel must be rendered, then this can be done by lighting only the subpixels that the line actually touches. If the line passes through the leftmost portion of the pixel, only the red subpixel is lit; if it passes through the rightmost portion of the pixel, only the blue subpixel is lit. This effectively triples the horizontal resolution of the image at normal viewing distances; the drawback is that the line thus drawn will show color fringes (at some points it might look green, at other points it might look red or blue). ClearType uses this method to improve the smoothness of text. When the elements of a type character are smaller than a full pixel, ClearType lights only the appropriate subpixels of each full pixel in order to more closely follow the outlines of that character. Text rendered with ClearType looks “smoother” than text rendered without it, provided that the pixel layout of the display screen exactly matches what ClearType expects. The following picture shows a 4× enlargement of the word Wikipedia rendered using ClearType. The word was originally rendered using a Times New Roman 12 pt font. In this magnified view, it becomes clear that, while the overall smoothness of the text seems to improve, there is also color fringing of the text. An extreme close-up of a color display shows (a) text rendered without ClearType and (b) text rendered with ClearType. Note the changes in subpixel intensity that are used to increase effective resolution when ClearType is enabled – without ClearType, all sub-pixels of a given pixel have the same intensity. In the above lines of text, when the orange circle is shown, all the text in the frame is rendered using ClearType (RGB subpixel rendering); when the orange circle is absent all the text is rendered using normal (full pixel greyscale) anti-aliasing. ClearType and similar technologies work on the theory that variations in intensity are more noticeable than variations in color. In an MSDN article, Microsoft acknowledges that "[t]ext that is rendered with ClearType can also appear significantly different when viewed by individuals with varying levels of color sensitivity. Some individuals can detect slight differences in color better than others." This opinion is shared by font designer Thomas Phinney (former CEO of FontLab, also formerly with Adobe Systems): "There is also considerable variation between individuals in their sensitivity to color fringing. Some people just notice it and are bothered by it a lot more than others." Software developer Melissa Elliott has written about finding ClearType rendering uncomfortable to read, saying that "instead of seeing black text, I see blue text, and rendered over it but offset by a pixel or two, I see orange text, and someone reached into a bag of purple pixel glitter and just tossed it on...I’m not the only person in the world with this problem, and yet, every time it comes up, people are quick to assure me it works for them as if that’s supposed to make me feel better."
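A minimal sketch of the decimation idea from the "How ClearType works" paragraphs above: render glyph coverage at three times the horizontal resolution, then let each consecutive sample triple drive the R, G and B subpixels of one pixel. This is a toy version under stated assumptions (vertical RGB stripes, dark text on a white background); real ClearType additionally applies a fringe-reduction filter and heavy hinting, both omitted here. The order parameter gestures at the RGB/BGR sensitivity discussed later:

    import numpy as np

    def subpixel_decimate(coverage, order="RGB"):
        """coverage: 2D float array (rows x 3*cols) of 0..1 ink coverage
        sampled at 3x horizontal resolution. Returns a (rows, cols, 3)
        array of channel values, each sample triple driving one pixel's
        subpixels (dark glyph on a white background)."""
        rows, wide = coverage.shape
        cols = wide // 3
        triples = coverage[:, :cols * 3].reshape(rows, cols, 3)
        if order == "BGR":               # panels with reversed stripe order
            triples = triples[:, :, ::-1]
        return 1.0 - triples             # white background minus ink

    # A stroke one sample wide darkens exactly one subpixel, tripling
    # horizontal detail at the cost of a color fringe on that pixel.
    cov = np.zeros((1, 9))
    cov[0, 4] = 1.0                      # stroke through the middle pixel
    print(subpixel_decimate(cov)[0])     # [[1 1 1] [1 0 1] [1 1 1]]: green darkened

Because only the middle pixel's green channel is darkened, the stroke is positioned to within a third of a pixel, which is the resolution gain; the channel imbalance is exactly the color fringing the quoted critics describe.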
Hinting expert Beat Stamm, who worked on ClearType at Microsoft, agrees that ClearType may look blurry at 96 dpi, which was a typical resolution for LCDs in 2008, but adds that higher resolution displays improve on this aspect: "WPF [Windows Presentation Foundation] uses method C [ClearType with fractional pixel positioning], but few display devices have a sufficiently high resolution to make the potential blur a moot point for everybody. . . . Some people are ok with the blur in Method C, some aren’t. Anecdotal evidence suggests that some people are fine with Method C when reading continuous text at 96 dpi (e.g. Times Reader, etc.) but not in UI scenarios. Many people are fine with the colors of ClearType, even at 96 dpi, but a few aren’t… To my eyes and at 96 dpi, Method C doesn’t read as well as Method A. It reads “blurrily” to me. Conversely, at 144 dpi, I don’t see a problem with Method C. It looks and reads just fine to me." One illustration of the potential problem is the following image: In the above block of text, the same portion of text is shown in the upper half without and in the lower half with ClearType rendering (as opposed to Standard and ClearType in the previous image). This and the previous example with the orange circle demonstrate the blurring introduced. A 2001 study conducted by researchers from Clemson University and The University of Pennsylvania on "18 users who spent 60 minutes reading fiction from each of three different displays" found that "When reading from an LCD display, users preferred text rendered with ClearType™. ClearType also yielded higher readability judgments and lower ratings of mental fatigue." A 2002 study on 24 users conducted by the same researchers from Clemson University also found that "Participants were significantly more accurate at identifying words with ClearType™ than without ClearType™." According to a 2006 study at the University of Texas at Austin by Dillon et al., ClearType "may not be universally beneficial". The study notes that maximum benefit may be seen when the information worker is spending large proportions of their time reading text (which is not necessarily the case for the majority of computer users today). Additionally, over one third of the study participants experienced some disadvantage when using ClearType. Whether ClearType, or other rendering, should be used is very subjective and it must be the choice of the individual, with the report recommending "to allow users to disable [ClearType] if they find it produces effects other than improved performance". Another 2007 empirical study found that "while ClearType rendering does not improve text legibility, reading speed or comfort compared to perceptually-tuned grayscale rendering, subjects prefer text with moderate ClearType rendering to text with grayscale or higher-level ClearType contrast." A 2007 survey of the literature by Microsoft researcher Kevin Larson presented a different picture: "Peer-reviewed studies have consistently found that using ClearType boosts reading performance compared with other text-rendering systems. In a 2004 study, for instance, Lee Gugerty, a psychology professor at Clemson University, in South Carolina, measured a 17 percent improvement in word recognition accuracy with ClearType. Gugerty’s group also showed, in a sentence comprehension study, that ClearType boosted reading speed by 5 percent and comprehension by 2 percent.
Similarly, in a study published in 2007, psychologist Andrew Dillon at the University of Texas at Austin found that when subjects were asked to scan a spreadsheet and pick out certain information, they did those tasks 7 percent faster with ClearType." ClearType and allied technologies require display hardware with fixed pixels and subpixels. More precisely, the positions of the pixels and subpixels on the screen must be exactly known to the computer to which it is connected. This is the case for flat-panel displays, on which the positions of the pixels are permanently fixed by the design of the screen itself. Almost all flat panels have a perfectly rectangular array of square pixels, each of which contains three rectangular subpixels in the three primary colors, with the normal ordering being red, green, and blue, arranged in vertical bands. ClearType assumes this arrangement of pixels when rendering text. ClearType does not work properly with flat-panel displays that are operated at resolutions other than their “native” resolutions, since only the native resolution corresponds exactly to the actual positions of pixels on the screen of the display. If a display does not have the type of fixed pixels that ClearType expects, text rendered with ClearType enabled actually looks worse than type rendered without it. Some flat panels have unusual pixel arrangements, with the colors in a different order, or with the subpixels positioned differently (in three horizontal bands, or in other ways). ClearType needs to be manually tuned for use with such displays (see below). ClearType will not work as intended on displays that have no fixed pixel positions, such as CRT displays; however, it will still have some antialiasing effect and may be preferable to some users compared to non-anti-aliased type. Because ClearType utilizes the physical layout of the red, green and blue pigments of the LCD screen, it is sensitive to the orientation of the display. ClearType in Windows XP supports the RGB and BGR subpixel structures; rotated displays, in which the subpixels are stacked vertically rather than arranged horizontally, are not supported. Using ClearType on these display configurations will actually reduce the display quality. The best option for users of Windows XP having rotated LCD displays (Tablet PCs or swivel-stand LCD displays) is using regular anti-aliasing, or switching off font-smoothing altogether. The software developer documentation for Windows CE states that ClearType for rotated screens is supported on that platform. ClearType is also an integrated component of the Windows Presentation Foundation text-rendering engine. As part of the Vista release, Microsoft released a set of fonts, known as the ClearType Font Collection, thought to work well with the ClearType system: ClearType can be globally enabled or disabled for GDI applications. A control panel applet is available to let the users tune the GDI ClearType settings. The GDI implementation of ClearType does not support sub-pixel positioning. Windows XP, as supplied, allows ClearType to be turned on or off, with no adjustment; Windows 7 and later allow tuning of the ClearType parameters in Control Panel. A Microsoft ClearType tuner utility is available for free download for Windows versions lacking this facility. If ClearType is disabled in the operating system, applications with their own ClearType controls can still support it. Microsoft Reader (for e-books) has its own ClearType tuner.
All text in Windows Presentation Foundation is anti-aliased and rendered using ClearType. There are separate ClearType registry settings for GDI and WPF applications, but by default the WPF entries are absent, and the GDI values are used in their absence. WPF registry entries can be tuned using the instructions from the MSDN WPF Text Blog. ClearType in WPF supports sub-pixel positioning, natural advance widths, Y-direction anti-aliasing and hardware acceleration. WPF supports aggressive caching of pre-rendered ClearType text in video memory. The extent to which this is supported is dependent on the video card. DirectX 10 cards will be able to cache the font glyphs in video memory, then perform the composition (assembling of character glyphs in the correct order, with the correct spacing), alpha blending (application of anti-aliasing), and RGB blending (ClearType's sub-pixel color calculations), entirely in hardware. This means that only the original glyphs need to be stored in video memory once per font (Microsoft estimates that this would require 2 MB of video memory per font), and other operations such as the display of anti-aliased text on top of other graphics – including video – can also be done with no computation effort on the part of the CPU. DirectX 9 cards will only be able to cache the alpha-blended glyphs in memory, thus requiring the CPU to handle glyph composition and alpha-blending before passing this to the video card. Caching these partially rendered glyphs requires significantly more memory (Microsoft estimates 5 MB per process). Cards that don't support DirectX 9 have no hardware-accelerated text rendering capabilities. As pixel densities of displays improved and more high DPI screens became available, colored subpixel rendering became less of a necessity according to Microsoft. Windows tablet user interfaces also evolved to support vertical screen orientations where the LCD color stripes would run horizontally. The original colored ClearType subpixel rendering was tuned to work optimally with horizontal orientation LCD displays where RGB or BGR stripes run vertically. For these reasons, DirectWrite, the next-generation text rendering API from Microsoft, moved away from color-aware ClearType. The font rendering engine in DirectWrite supports a different version of ClearType with only greyscale anti-aliasing, not color subpixel rendering, as demonstrated at PDC 2008. This version is sometimes called Natural ClearType but is often referred to simply as DirectWrite rendering (with the term "ClearType" designating only the RGB/BGR color subpixel rendering version). The improvements have been confirmed by independent sources, such as Firefox developers; they were particularly noticeable for OpenType fonts in Compact Font Format (CFF). Many Office 2013 apps, including Word 2013, Excel 2013, and parts of Outlook 2013, stopped using ClearType and switched to this DirectWrite greyscale antialiasing. The reasons invoked are, in the words of Murray Sargent: "There is a problem with ClearType: it depends critically on the color of the background pixels. This isn’t a problem if you know a priori that those pixels are white, which is usually the case for text. But the general case involves calculating what the colors should be for an arbitrary background and that takes time. Meanwhile, Word 2013 enjoys cool animations and smooth zooming. Nothing jumps any more.
Even the caret (the blinking vertical line at the text insertion point) glides from one position to the next as you type. Jerking movement just isn’t considered cool any more. Well animations and zooms have to be faster than human response times in order to appear smooth. And that rules out ClearType in animated scenarios at least with present generation hardware. And in future scenarios, screens will have sufficiently high resolution that gray-scale anti-aliasing should suffice." For the same reasons related to animation performance and vertical screen orientations where the colored RGB/BGR ClearType antialiasing would be a problem, the color-aware version of ClearType was abandoned in the Metro-style app platform of Windows 8 (and the Universal Windows Platform of Windows 10), including the Start menu and everything not using classic Win32 APIs (GDI/GDI+). ClearType is a registered trademark and Microsoft claims protection under the following U.S. patents, all expired: The ClearType name was also used to refer to the screens of Microsoft Surface tablets. ClearType HD Display indicates a 1366×768 screen, while ClearType Full HD Display indicates a 1920×1080 screen.
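For contrast, the orientation-independent greyscale antialiasing that DirectWrite and Office 2013 moved to, as described above, can be sketched the same way as the subpixel version: oversample coverage in both axes and average each whole-pixel block, so all three channels stay equal and no color fringing is possible. The 4x oversampling factor here is an arbitrary illustrative choice:

    import numpy as np

    def grayscale_aa(coverage, factor=4):
        """coverage: 2D float array sampled at factor x factor per pixel.
        Averages each block to one grey value; R, G and B stay equal, so
        the result is orientation-independent and fringe-free."""
        rows, cols = coverage.shape
        r, c = rows // factor, cols // factor
        blocks = coverage[:r * factor, :c * factor].reshape(r, factor, c, factor)
        alpha = blocks.mean(axis=(1, 3))   # per-pixel ink coverage, 0..1
        return 1.0 - alpha                 # dark text on a white background

    cov = np.zeros((4, 8))
    cov[:, 3] = 1.0                        # thin vertical stroke at 4x sampling
    print(grayscale_aa(cov))               # [[0.75 1.]]: left pixel 25% inked

The trade-off is the one this section describes: the stroke's sub-pixel position within the pixel is lost, but the grey result can be animated, rotated, and composited over arbitrary backgrounds cheaply.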
[ { "paragraph_id": 0, "text": "ClearType is Microsoft's implementation of subpixel rendering technology in rendering text in a font system. ClearType attempts to improve the appearance of text on certain types of computer display screens by sacrificing color fidelity for additional intensity variation. This trade-off is asserted to work well on LCD flat panel monitors.", "title": "" }, { "paragraph_id": 1, "text": "ClearType was first announced at the November 1998 COMDEX exhibition. The technology was first introduced in software in January 2000 as an always-on feature of Microsoft Reader, which was released to the public in August 2000.", "title": "" }, { "paragraph_id": 2, "text": "ClearType was significantly changed with the introduction of DirectWrite in Windows 7.", "title": "" }, { "paragraph_id": 3, "text": "With the increasing availability of HiDPI displays after 2012, subpixel rendering has become less necessary.", "title": "" }, { "paragraph_id": 4, "text": "Computer displays where the positions of individual pixels are permanently fixed – such as most modern flat panel displays – can show saw-tooth edges when displaying small, high-contrast graphic elements, such as text. ClearType uses spatial anti-aliasing at the subpixel level to reduce visible artifacts on such displays when text is rendered, making the text appear \"smoother\" and less jagged. ClearType also uses very heavy font hinting to force the font to fit into the pixel grid. This increases edge contrast and readability of small fonts at the expense of font rendering fidelity and has been criticized by graphic designers for making different fonts look similar.", "title": "Background" }, { "paragraph_id": 5, "text": "Like most other types of subpixel rendering, ClearType involves a compromise, sacrificing one aspect of image quality (color or chrominance detail) for another (light and dark or luminance detail). The compromise can improve text appearance when luminance detail is more important than chrominance.", "title": "Background" }, { "paragraph_id": 6, "text": "Only user and system applications render the application of ClearType. ClearType does not alter other graphic display elements (including text already in bitmaps). For example, ClearType enhancement renders text on the screen in Microsoft Word, but text placed in a bitmapped image in a program such as Adobe Photoshop is not. In theory, the method (called \"RGB Decimation\" internally) can enhance the anti-aliasing of any digital image.", "title": "Background" }, { "paragraph_id": 7, "text": "ClearType was invented in the Microsoft e-Books team by Bert Keely and Greg Hitchcock. It was then analyzed by researchers in the company, and signal processing expert John Platt designed an improved version of the algorithm. Dick Brass, a Vice President at Microsoft from 1997 to 2004, complained that the company was slow in moving ClearType to market in the portable computing field.", "title": "Background" }, { "paragraph_id": 8, "text": "Normally, the software in a computer treats the computer’s display screen as a rectangular array of square, indivisible pixels, each of which has an intensity and color that are determined by the blending of three primary colors: red, green, and blue. However, actual display hardware usually implements each pixel as a group of three adjacent, independent subpixels, each of which displays a different primary color. Thus, on a real computer display, each pixel is actually composed of separate red, green, and blue subpixels. 
For example, if a flat-panel display is examined under a magnifying glass, the pixels may appear as follows:", "title": "How ClearType works" }, { "paragraph_id": 9, "text": "In the illustration above, there are nine pixels but 27 subpixels.", "title": "How ClearType works" }, { "paragraph_id": 10, "text": "If the computer controlling the display knows the exact position and color of all the subpixels on the screen, it can take advantage of this to improve the apparent resolution in certain situations. If each pixel on the display actually contains three rectangular subpixels of red, green, and blue, in that fixed order, then things on the screen that are smaller than one full pixel in size can be rendered by lighting only one or two of the subpixels. For example, if a diagonal line with a width smaller than a full pixel must be rendered, then this can be done by lighting only the subpixels that the line actually touches. If the line passes through the leftmost portion of the pixel, only the red subpixel is lit; if it passes through the rightmost portion of the pixel, only the blue subpixel is lit. This effectively triples the horizontal resolution of the image at normal viewing distances; the drawback is that the line thus drawn will show color fringes (at some points it might look green, at other points it might look red or blue).", "title": "How ClearType works" }, { "paragraph_id": 11, "text": "ClearType uses this method to improve the smoothness of text. When the elements of a type character are smaller than a full pixel, ClearType lights only the appropriate subpixels of each full pixel in order to more closely follow the outlines of that character. Text rendered with ClearType looks “smoother” than text rendered without it, provided that the pixel layout of the display screen exactly matches what ClearType expects.", "title": "How ClearType works" }, { "paragraph_id": 12, "text": "The following picture shows a 4× enlargement of the word Wikipedia rendered using ClearType. The word was originally rendered using a Times New Roman 12 pt font.", "title": "How ClearType works" }, { "paragraph_id": 13, "text": "In this magnified view, it becomes clear that, while the overall smoothness of the text seems to improve, there is also color fringing of the text.", "title": "How ClearType works" }, { "paragraph_id": 14, "text": "An extreme close-up of a color display shows (a) text rendered without ClearType and (b) text rendered with ClearType. Note the changes in subpixel intensity that are used to increase effective resolution when ClearType is enabled – without ClearType, all sub-pixels of a given pixel have the same intensity.", "title": "How ClearType works" }, { "paragraph_id": 15, "text": "In the above lines of text, when the orange circle is shown, all the text in the frame is rendered using ClearType (RGB subpixel rendering); when the orange circle is absent all the text is rendered using normal (full pixel greyscale) anti-aliasing.", "title": "How ClearType works" }, { "paragraph_id": 16, "text": "ClearType and similar technologies work on the theory that variations in intensity are more noticeable than variations in color.", "title": "Human vision and cognition" }, { "paragraph_id": 17, "text": "In an MSDN article, Microsoft acknowledges that \"[t]ext that is rendered with ClearType can also appear significantly different when viewed by individuals with varying levels of color sensitivity.
Some individuals can detect slight differences in color better than others.\" This opinion is shared by font designer Thomas Phinney (former CEO of FontLab, also formerly with Adobe Systems): \"There is also considerable variation between individuals in their sensitivity to color fringing. Some people just notice it and are bothered by it a lot more than others.\" Software developer Melissa Elliott has written about finding ClearType rendering uncomfortable to read, saying that \"instead of seeing black text, I see blue text, and rendered over it but offset by a pixel or two, I see orange text, and someone reached into a bag of purple pixel glitter and just tossed it on...I’m not the only person in the world with this problem, and yet, every time it comes up, people are quick to assure me it works for them as if that’s supposed to make me feel better.\"", "title": "Human vision and cognition" }, { "paragraph_id": 18, "text": "Hinting expert Beat Stamm, who worked on ClearType at Microsoft, agrees that ClearType may look blurry at 96 dpi, which was a typical resolution for LCDs in 2008, but adds that higher resolution displays improve on this aspect: \"WPF [Windows Presentation Foundation] uses method C [ClearType with fractional pixel positioning], but few display devices have a sufficiently high resolution to make the potential blur a moot point for everybody. . . . Some people are ok with the blur in Method C, some aren’t. Anecdotal evidence suggests that some people are fine with Method C when reading continuous text at 96 dpi (e.g. Times Reader, etc.) but not in UI scenarios. Many people are fine with the colors of ClearType, even at 96 dpi, but a few aren’t… To my eyes and at 96 dpi, Method C doesn’t read as well as Method A. It reads “blurrily” to me. Conversely, at 144 dpi, I don’t see a problem with Method C. It looks and reads just fine to me.\" One illustration of the potential problem is the following image:", "title": "Human vision and cognition" }, { "paragraph_id": 19, "text": "In the above block of text, the same portion of text is shown in the upper half without and in the lower half with ClearType rendering (as opposed to Standard and ClearType in the previous image). This and the previous example with the orange circle demonstrate the blurring introduced.", "title": "Human vision and cognition" }, { "paragraph_id": 20, "text": "A 2001 study conducted by researchers from Clemson University and The University of Pennsylvania on \"18 users who spent 60 minutes reading fiction from each of three different displays\" found that \"When reading from an LCD display, users preferred text rendered with ClearType™. ClearType also yielded higher readability judgments and lower ratings of mental fatigue.\" A 2002 study on 24 users conducted by the same researchers from Clemson University also found that \"Participants were significantly more accurate at identifying words with ClearType™ than without ClearType™.\"", "title": "Human vision and cognition" }, { "paragraph_id": 21, "text": "According to a 2006 study at the University of Texas at Austin by Dillon et al., ClearType \"may not be universally beneficial\". The study notes that maximum benefit may be seen when the information worker is spending large proportions of their time reading text (which is not necessarily the case for the majority of computer users today). Additionally, over one third of the study participants experienced some disadvantage when using ClearType.
Whether ClearType, or other rendering, should be used is very subjective and it must be the choice of the individual, with the report recommending \"to allow users to disable [ClearType] if they find it produces effects other than improved performance\".", "title": "Human vision and cognition" }, { "paragraph_id": 22, "text": "Another 2007 empirical study found that \"while ClearType rendering does not improve text legibility, reading speed or comfort compared to perceptually-tuned grayscale rendering, subjects prefer text with moderate ClearType rendering to text with grayscale or higher-level ClearType contrast.\"", "title": "Human vision and cognition" }, { "paragraph_id": 23, "text": "A 2007 survey of the literature by Microsoft researcher Kevin Larson presented a different picture: \"Peer-reviewed studies have consistently found that using ClearType boosts reading performance compared with other text-rendering systems. In a 2004 study, for instance, Lee Gugerty, a psychology professor at Clemson University, in South Carolina, measured a 17 percent improvement in word recognition accuracy with ClearType. Gugerty’s group also showed, in a sentence comprehension study, that ClearType boosted reading speed by 5 percent and comprehension by 2 percent. Similarly, in a study published in 2007, psychologist Andrew Dillon at the University of Texas at Austin found that when subjects were asked to scan a spreadsheet and pick out certain information, they did those tasks 7 percent faster with ClearType.\"", "title": "Human vision and cognition" }, { "paragraph_id": 24, "text": "ClearType and allied technologies require display hardware with fixed pixels and subpixels. More precisely, the positions of the pixels and subpixels on the screen must be exactly known to the computer to which it is connected. This is the case for flat-panel displays, on which the positions of the pixels are permanently fixed by the design of the screen itself. Almost all flat panels have a perfectly rectangular array of square pixels, each of which contains three rectangular subpixels in the three primary colors, with the normal ordering being red, green, and blue, arranged in vertical bands. ClearType assumes this arrangement of pixels when rendering text.", "title": "Display requirements" }, { "paragraph_id": 25, "text": "ClearType does not work properly with flat-panel displays that are operated at resolutions other than their “native” resolutions, since only the native resolution corresponds exactly to the actual positions of pixels on the screen of the display.", "title": "Display requirements" }, { "paragraph_id": 26, "text": "If a display does not have the type of fixed pixels that ClearType expects, text rendered with ClearType enabled actually looks worse than type rendered without it. Some flat panels have unusual pixel arrangements, with the colors in a different order, or with the subpixels positioned differently (in three horizontal bands, or in other ways).
ClearType needs to be manually tuned for use with such displays (see below).", "title": "Display requirements" }, { "paragraph_id": 27, "text": "ClearType will not work as intended on displays that have no fixed pixel positions, such as CRT displays; however, it will still have some antialiasing effect and may be preferable to some users compared to non-anti-aliased type.", "title": "Display requirements" }, { "paragraph_id": 28, "text": "Because ClearType utilizes the physical layout of the red, green and blue pigments of the LCD screen, it is sensitive to the orientation of the display.", "title": "Sensitivity to display orientation" }, { "paragraph_id": 29, "text": "ClearType in Windows XP supports the RGB and BGR subpixel structures; rotated displays, in which the subpixels are stacked vertically rather than arranged horizontally, are not supported. Using ClearType on these display configurations will actually reduce the display quality. The best option for users of Windows XP having rotated LCD displays (Tablet PCs or swivel-stand LCD displays) is using regular anti-aliasing, or switching off font-smoothing altogether.", "title": "Sensitivity to display orientation" }, { "paragraph_id": 30, "text": "The software developer documentation for Windows CE states that ClearType for rotated screens is supported on that platform.", "title": "Sensitivity to display orientation" }, { "paragraph_id": 31, "text": "ClearType is also an integrated component of the Windows Presentation Foundation text-rendering engine.", "title": "Implementations" }, { "paragraph_id": 32, "text": "As part of the Vista release, Microsoft released a set of fonts, known as the ClearType Font Collection, thought to work well with the ClearType system:", "title": "Implementations" }, { "paragraph_id": 33, "text": "ClearType can be globally enabled or disabled for GDI applications. A control panel applet is available to let the users tune the GDI ClearType settings. The GDI implementation of ClearType does not support sub-pixel positioning.", "title": "Implementations" }, { "paragraph_id": 34, "text": "Windows XP, as supplied, allows ClearType to be turned on or off, with no adjustment; Windows 7 and later allow tuning of the ClearType parameters in Control Panel. A Microsoft ClearType tuner utility is available for free download for Windows versions lacking this facility. If ClearType is disabled in the operating system, applications with their own ClearType controls can still support it. Microsoft Reader (for e-books) has its own ClearType tuner.", "title": "Implementations" }, { "paragraph_id": 35, "text": "All text in Windows Presentation Foundation is anti-aliased and rendered using ClearType. There are separate ClearType registry settings for GDI and WPF applications, but by default the WPF entries are absent, and the GDI values are used in their absence. WPF registry entries can be tuned using the instructions from the MSDN WPF Text Blog.", "title": "Implementations" }, { "paragraph_id": 36, "text": "ClearType in WPF supports sub-pixel positioning, natural advance widths, Y-direction anti-aliasing and hardware acceleration. WPF supports aggressive caching of pre-rendered ClearType text in video memory. The extent to which this is supported is dependent on the video card.
DirectX 10 cards will be able to cache the font glyphs in video memory, then perform the composition (assembling of character glyphs in the correct order, with the correct spacing), alpha blending (application of anti-aliasing), and RGB blending (ClearType's sub-pixel color calculations), entirely in hardware. This means that only the original glyphs need to be stored in video memory once per font (Microsoft estimates that this would require 2 MB of video memory per font), and other operations such as the display of anti-aliased text on top of other graphics – including video – can also be done with no computation effort on the part of the CPU. DirectX 9 cards will only be able to cache the alpha-blended glyphs in memory, thus requiring the CPU to handle glyph composition and alpha-blending before passing this to the video card. Caching these partially rendered glyphs requires significantly more memory (Microsoft estimates 5 MB per process). Cards that don't support DirectX 9 have no hardware-accelerated text rendering capabilities.", "title": "Implementations" }, { "paragraph_id": 37, "text": "As pixel densities of displays improved and more high DPI screens became available, colored subpixel rendering became less of a necessity according to Microsoft. Windows tablet user interfaces also evolved to support vertical screen orientations where the LCD color stripes would run horizontally. The original colored ClearType subpixel rendering was tuned to work optimally with horizontal orientation LCD displays where RGB or BGR stripes run vertically. For these reasons, DirectWrite, the next-generation text rendering API from Microsoft, moved away from color-aware ClearType. The font rendering engine in DirectWrite supports a different version of ClearType with only greyscale anti-aliasing, not color subpixel rendering, as demonstrated at PDC 2008. This version is sometimes called Natural ClearType but is often referred to simply as DirectWrite rendering (with the term \"ClearType\" designating only the RGB/BGR color subpixel rendering version). The improvements have been confirmed by independent sources, such as Firefox developers; they were particularly noticeable for OpenType fonts in Compact Font Format (CFF).", "title": "Implementations" }, { "paragraph_id": 38, "text": "Many Office 2013 apps, including Word 2013, Excel 2013, and parts of Outlook 2013, stopped using ClearType and switched to this DirectWrite greyscale antialiasing. The reasons invoked are, in the words of Murray Sargent: \"There is a problem with ClearType: it depends critically on the color of the background pixels. This isn’t a problem if you know a priori that those pixels are white, which is usually the case for text. But the general case involves calculating what the colors should be for an arbitrary background and that takes time. Meanwhile, Word 2013 enjoys cool animations and smooth zooming. Nothing jumps any more. Even the caret (the blinking vertical line at the text insertion point) glides from one position to the next as you type. Jerking movement just isn’t considered cool any more. Well animations and zooms have to be faster than human response times in order to appear smooth. And that rules out ClearType in animated scenarios at least with present generation hardware.
And in future scenarios, screens will have sufficiently high resolution that gray-scale anti-aliasing should suffice.\"", "title": "Implementations" }, { "paragraph_id": 39, "text": "For the same reasons related to animation performance and vertical screen orientations where the colored RGB/BGR ClearType antialiasing would be a problem, the color-aware version of ClearType was abandoned in the Metro-style app platform of Windows 8 (and the Universal Windows Platform of Windows 10), including the Start menu and everything not using classic Win32 APIs (GDI/GDI+).", "title": "Implementations" }, { "paragraph_id": 40, "text": "ClearType is a registered trademark and Microsoft claims protection under the following U.S. patents, all expired:", "title": "Patents" }, { "paragraph_id": 41, "text": "The ClearType name was also used to refer to the screens of Microsoft Surface tablets. ClearType HD Display indicates a 1366×768 screen, while ClearType Full HD Display indicates a 1920×1080 screen.", "title": "Other uses of the ClearType brand" } ]
ClearType is Microsoft's implementation of subpixel rendering technology in rendering text in a font system. ClearType attempts to improve the appearance of text on certain types of computer display screens by sacrificing color fidelity for additional intensity variation. This trade-off is asserted to work well on LCD flat panel monitors. ClearType was first announced at the November 1998 COMDEX exhibition. The technology was first introduced in software in January 2000 as an always-on feature of Microsoft Reader, which was released to the public in August 2000. ClearType was significantly changed with the introduction of DirectWrite in Windows 7. With the increasing availability of HiDPI displays after 2012, subpixel rendering has become less necessary.
2002-01-08T16:48:01Z
2023-12-31T09:31:45Z
[ "Template:US patent", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:Commons category", "Template:Windows Components", "Template:Short description", "Template:Unreferenced section", "Template:Cite journal", "Template:Webarchive", "Template:Doi", "Template:Snd", "Template:Duplication" ]
https://en.wikipedia.org/wiki/ClearType
7,682
Centriole
In cell biology, a centriole is a cylindrical organelle composed mainly of a protein called tubulin. Centrioles are found in most eukaryotic cells, but are not present in conifers (Pinophyta), flowering plants (angiosperms) and most fungi, and are only present in the male gametes of charophytes, bryophytes, seedless vascular plants, cycads, and Ginkgo. A bound pair of centrioles, surrounded by a highly ordered mass of dense material, called the pericentriolar material (PCM), makes up a structure called a centrosome. Centrioles are typically made up of nine sets of short microtubule triplets, arranged in a cylinder. Deviations from this structure include crabs and Drosophila melanogaster embryos, with nine doublets, and Caenorhabditis elegans sperm cells and early embryos, with nine singlets. Additional proteins include centrin, cenexin and tektin. The main function of centrioles is to produce cilia during interphase and the aster and the spindle during cell division. The centrosome was discovered jointly by Walther Flemming in 1875 and Edouard Van Beneden in 1876. Edouard Van Beneden made the first observation of centrosomes as composed of two orthogonal centrioles in 1883. Theodor Boveri introduced the term "centrosome" in 1888 and the term "centriole" in 1895. The basal body was named by Theodor Wilhelm Engelmann in 1880. The pattern of centriole duplication was first worked out independently by Étienne de Harven and Joseph G. Gall c. 1950. Centrioles are involved in the organization of the mitotic spindle and in the completion of cytokinesis. Centrioles were previously thought to be required for the formation of a mitotic spindle in animal cells. However, more recent experiments have demonstrated that cells whose centrioles have been removed via laser ablation can still progress through the G1 stage of interphase before centrioles are synthesized later in a de novo fashion. Additionally, mutant flies lacking centrioles develop normally, although the adult flies' cells lack flagella and cilia, and as a result they die shortly after birth. Centrioles can self-replicate during cell division. Centrioles are a very important part of centrosomes, which are involved in organizing microtubules in the cytoplasm. The position of the centriole determines the position of the nucleus and plays a crucial role in the spatial arrangement of the cell. Sperm centrioles are important for two functions: (1) to form the sperm flagellum, enabling sperm movement, and (2) to support the development of the embryo after fertilization. The sperm supplies the centriole that creates the centrosome and microtubule system of the zygote. In flagellates and ciliates, the position of the flagellum or cilium is determined by the mother centriole, which becomes the basal body. An inability of cells to use centrioles to make functional flagella and cilia has been linked to a number of genetic and developmental diseases. In particular, the inability of centrioles to properly migrate prior to ciliary assembly has recently been linked to Meckel–Gruber syndrome. Proper orientation of cilia via centriole positioning toward the posterior of embryonic node cells is critical for establishing left-right asymmetry during mammalian development. Before DNA replication, cells contain two centrioles, an older mother centriole, and a younger daughter centriole. During cell division, a new centriole grows at the proximal end of both mother and daughter centrioles.
After duplication, the two centriole pairs (the freshly assembled centriole is now a daughter centriole in each pair) will remain attached to each other orthogonally until mitosis. At that point, the mother and daughter centrioles separate in a process dependent on an enzyme called separase. The two centrioles in the centrosome are tied to one another. The mother centriole has radiating appendages at the distal end of its long axis and is attached to its daughter at the proximal end. Each daughter cell formed after cell division will inherit one of these pairs. Centrioles start duplicating when DNA replicates. The last common ancestor of all eukaryotes was a ciliated cell with centrioles. Some lineages of eukaryotes, such as land plants, do not have centrioles except in their motile male gametes. Centrioles are completely absent from all cells of conifers and flowering plants, which do not have ciliate or flagellate gametes. It is unclear if the last common ancestor had one or two cilia. Important genes required for centriole growth, such as centrins, are found only in eukaryotes, and not in bacteria or archaea. The word centriole (/ˈsɛntrioʊl/) uses combining forms of centri- and -ole, yielding "little central part", which describes a centriole's typical location near the center of the cell. Typical centrioles are made of 9 triplets of microtubules organized with radial symmetry. Centrioles can vary in the number of microtubules and can be made of 9 doublets of microtubules (as in Drosophila melanogaster) or 9 singlets of microtubules as in C. elegans. Atypical centrioles are centrioles that do not have microtubules, such as the Proximal Centriole-Like found in D. melanogaster sperm, or that have microtubules with no radial symmetry, such as in the distal centriole of the human spermatozoon. Atypical centrioles may have evolved at least eight times independently during vertebrate evolution and may evolve in sperm after the evolution of internal fertilization. Until recently, it was not clear why centrioles become atypical. The atypical distal centriole forms a dynamic basal complex (DBC) that, together with other structures in the sperm neck, facilitates a cascade of internal sliding, coupling tail beating with head kinking. The atypical distal centriole's properties suggest that it evolved into a transmission system that couples the sperm tail motors to the whole sperm, thereby enhancing sperm function.
[ { "paragraph_id": 0, "text": "In cell biology a centriole is a cylindrical organelle composed mainly of a protein called tubulin. Centrioles are found in most eukaryotic cells, but are not present in conifers (Pinophyta), flowering plants (angiosperms) and most fungi, and are only present in the male gametes of charophytes, bryophytes, seedless vascular plants, cycads, and Ginkgo. A bound pair of centrioles, surrounded by a highly ordered mass of dense material, called the pericentriolar material (PCM), makes up a structure called a centrosome.", "title": "" }, { "paragraph_id": 1, "text": "Centrioles are typically made up of nine sets of short microtubule triplets, arranged in a cylinder. Deviations from this structure include crabs and Drosophila melanogaster embryos, with nine doublets, and Caenorhabditis elegans sperm cells and early embryos, with nine singlets. Additional proteins include centrin, cenexin and tektin.", "title": "" }, { "paragraph_id": 2, "text": "The main function of centrioles is to produce cilia during interphase and the aster and the spindle during cell division.", "title": "" }, { "paragraph_id": 3, "text": "The centrosome was discovered jointly by Walther Flemming in 1875 and Edouard Van Beneden in 1876.Edouard Van Beneden made the first observation of centrosomes as composed of two orthogonal centrioles in 1883. Theodor Boveri introduced the term \"centrosome\" in 1888 and the term \"centriole\" in 1895. The basal body was named by Theodor Wilhelm Engelmann in 1880. The pattern of centriole duplication was first worked out independently by Étienne de Harven and Joseph G. Gall c. 1950.", "title": "History" }, { "paragraph_id": 4, "text": "Centrioles are involved in the organization of the mitotic spindle and in the completion of cytokinesis. Centrioles were previously thought to be required for the formation of a mitotic spindle in animal cells. However, more recent experiments have demonstrated that cells whose centrioles have been removed via laser ablation can still progress through the G1 stage of interphase before centrioles can be synthesized later in a de novo fashion. Additionally, mutant flies lacking centrioles develop normally, although the adult flies' cells lack flagella and cilia and as a result, they die shortly after birth. The centrioles can self replicate during cell division.", "title": "Role in cell division" }, { "paragraph_id": 5, "text": "Centrioles are a very important part of centrosomes, which are involved in organizing microtubules in the cytoplasm. The position of the centriole determines the position of the nucleus and plays a crucial role in the spatial arrangement of the cell.", "title": "Cellular organization" }, { "paragraph_id": 6, "text": "Sperm centrioles are important for 2 functions: (1) to form the sperm flagellum and sperm movement and (2) for the development of the embryo after fertilization. The sperm supplies the centriole that creates the centrosome and microtubule system of the zygote.", "title": "Fertility" }, { "paragraph_id": 7, "text": "In flagellates and ciliates, the position of the flagellum or cilium is determined by the mother centriole, which becomes the basal body. An inability of cells to use centrioles to make functional flagella and cilia has been linked to a number of genetic and developmental diseases. 
In particular, the inability of centrioles to properly migrate prior to ciliary assembly has recently been linked to Meckel–Gruber syndrome.", "title": "Ciliogenesis" }, { "paragraph_id": 8, "text": "Proper orientation of cilia via centriole positioning toward the posterior of embryonic node cells is critical for establishing left-right asymmetry, during mammalian development.", "title": "Animal development" }, { "paragraph_id": 9, "text": "Before DNA replication, cells contain two centrioles, an older mother centriole, and a younger daughter centriole. During cell division, a new centriole grows at the proximal end of both mother and daughter centrioles. After duplication, the two centriole pairs (the freshly assembled centriole is now a daughter centriole in each pair) will remain attached to each other orthogonally until mitosis. At that point the mother and daughter centrioles separate dependently on an enzyme called separase.", "title": "Centriole duplication" }, { "paragraph_id": 10, "text": "The two centrioles in the centrosome are tied to one another. The mother centriole has radiating appendages at the distal end of its long axis and is attached to its daughter at the proximal end. Each daughter cell formed after cell division will inherit one of these pairs. Centrioles start duplicating when DNA replicates.", "title": "Centriole duplication" }, { "paragraph_id": 11, "text": "The last common ancestor of all eukaryotes was a ciliated cell with centrioles. Some lineages of eukaryotes, such as land plants, do not have centrioles except in their motile male gametes. Centrioles are completely absent from all cells of conifers and flowering plants, which do not have ciliate or flagellate gametes. It is unclear if the last common ancestor had one or two cilia. Important genes such as centrins required for centriole growth, are only found in eukaryotes, and not in bacteria or archaea.", "title": "Origin" }, { "paragraph_id": 12, "text": "The word centriole (/ˈsɛntrioʊl/) uses combining forms of centri- and -ole, yielding \"little central part\", which describes a centriole's typical location near the center of the cell.", "title": "Etymology and pronunciation" }, { "paragraph_id": 13, "text": "Typical centrioles are made of 9 triplets of microtubules organized with radial symmetry. Centrioles can vary the number of microtubules and can be made of 9 doublets of microtubules (as in Drosophila melanogaster) or 9 singlets of microtubules as in C. elegans. Atypical centrioles are centrioles that do not have microtubules, such as the Proximal Centriole-Like found in D. melanogaster sperm, or that have microtubules with no radial symmetry, such as in the distal centriole of human spermatozoon. Atypical centrioles may have evolved at least eight times independently during vertebrate evolution and may evolve in the sperm after internal fertilization evolves.", "title": "Atypical centrioles" }, { "paragraph_id": 14, "text": "It wasn't clear why centriole become atypical until recently. The atypical distal centriole forms a dynamic basal complex (DBC) that, together with other structures in the sperm neck, facilitates a cascade of internal sliding, coupling tail beating with head kinking. The atypical distal centriole's properties suggest that it evolved into a transmission system that couples the sperm tail motors to the whole sperm, thereby enhancing sperm function.", "title": "Atypical centrioles" }, { "paragraph_id": 15, "text": "", "title": "References" } ]
In cell biology a centriole is a cylindrical organelle composed mainly of a protein called tubulin. Centrioles are found in most eukaryotic cells, but are not present in conifers (Pinophyta), flowering plants (angiosperms) and most fungi, and are only present in the male gametes of charophytes, bryophytes, seedless vascular plants, cycads, and Ginkgo. A bound pair of centrioles, surrounded by a highly ordered mass of dense material, called the pericentriolar material (PCM), makes up a structure called a centrosome. Centrioles are typically made up of nine sets of short microtubule triplets, arranged in a cylinder. Deviations from this structure include crabs and Drosophila melanogaster embryos, with nine doublets, and Caenorhabditis elegans sperm cells and early embryos, with nine singlets. Additional proteins include centrin, cenexin and tektin. The main function of centrioles is to produce cilia during interphase and the aster and the spindle during cell division.
2002-01-08T18:19:09Z
2023-11-26T18:23:39Z
[ "Template:Cellular structures", "Template:Authority control", "Template:Reflist", "Template:Cite book", "Template:Centrosome", "Template:Cite journal", "Template:Use dmy dates", "Template:Short description", "Template:Cell biology", "Template:IPAc-en" ]
https://en.wikipedia.org/wiki/Centriole
7,683
Creation science
Creation science or scientific creationism is a pseudoscientific form of Young Earth creationism which claims to offer scientific arguments for certain literalist and inerrantist interpretations of the Bible. It is often presented without overt faith-based language, relying instead on reinterpreting scientific results to argue that various myths in the Book of Genesis and other select biblical passages are scientifically valid. The most commonly advanced ideas of creation science include special creation based on the Genesis creation narrative and flood geology based on the Genesis flood narrative. Creationists also claim they can disprove or reexplain a variety of scientific facts, theories and paradigms of geology, cosmology, biological evolution, archaeology, history, and linguistics using creation science. Creation science was foundational to intelligent design.

The overwhelming consensus of the scientific community is that creation science fails to qualify as scientific because it lacks empirical support, supplies no testable hypotheses, and resolves to describe natural history in terms of scientifically untestable supernatural causes. Courts, most often in the United States where the question has been asked in the context of teaching the subject in public schools, have consistently ruled since the 1980s that creation science is a religious view rather than a scientific one. Historians, philosophers of science and skeptics have described creation science as a pseudoscientific attempt to map the Bible onto scientific facts. Professional biologists have criticized creation science as unscholarly, and even as a dishonest and misguided sham, with extremely harmful educational consequences.

Creation science is based largely upon chapters 1–11 of the Book of Genesis. These describe how God calls the world into existence through the power of speech ("And God said, Let there be light," etc.) in six days, calls all the animals and plants into existence, and molds the first man from clay and the first woman from a rib taken from the man's side; a worldwide flood destroys all life except for Noah and his family and representatives of the animals, and Noah becomes the ancestor of the 70 "nations" of the world; the nations live together until the incident of the Tower of Babel, when God disperses them and gives them their different languages. Creation science attempts to explain history and science within the span of Biblical chronology, which places the initial act of creation some six thousand years ago.

Most creation science proponents hold fundamentalist or Evangelical Christian beliefs in Biblical literalism or Biblical inerrancy, as opposed to the higher criticism supported by liberal Christianity in the Fundamentalist–Modernist Controversy. However, there are also examples of Islamic and Jewish scientific creationism that conform to the accounts of creation as recorded in their religious doctrines.

The Seventh-day Adventist Church has a history of support for creation science. This dates back to George McCready Price, an active Seventh-day Adventist who developed views of flood geology, which formed the basis of creation science. This work was continued by the Geoscience Research Institute, an official institute of the Seventh-day Adventist Church, located on its Loma Linda University campus in California.

Creation science is generally rejected by the Church of England as well as the Roman Catholic Church.
The Pontifical Gregorian University has officially discussed intelligent design as a "cultural phenomenon" without scientific elements. The Church of England's official website cites Charles Darwin's local work assisting people in his religious parish.

Creation science rejects evolution and the common descent of all living things on Earth. Instead, it asserts that the field of evolutionary biology is itself pseudoscientific or even a religion. Creationists argue instead for a system called baraminology, which considers the living world to be descended from uniquely created kinds or "baramins."

Creation science incorporates the concept of catastrophism to reconcile current landforms and fossil distributions with Biblical interpretations, proposing that the remains resulted from successive cataclysmic events, such as a worldwide flood and subsequent ice age. It rejects one of the fundamental principles of modern geology (and of modern science generally), uniformitarianism, which applies the same physical and geological laws observed on the Earth today to interpret the Earth's geological history.

Sometimes creationists attack other scientific concepts, like the Big Bang cosmological model or methods of scientific dating based upon radioactive decay. Young Earth creationists also reject current estimates of the age of the universe and the age of the Earth, arguing for creationist cosmologies with timescales much shorter than those determined by modern physical cosmology and geological science, typically less than 10,000 years.

The scientific community has overwhelmingly rejected the ideas put forth in creation science as lying outside the boundaries of legitimate science. The foundational premises underlying scientific creationism disqualify it as a science because the answers to all inquiry therein are preordained to conform to Bible doctrine, and because that inquiry is constructed upon theories which are not empirically testable.

Scientists also deem creation science's attacks against biological evolution to be without scientific merit. The views of the scientific community were accepted in two significant court decisions in the 1980s, which found the field of creation science to be a religious mode of inquiry, not a scientific one.

Creation science began in the 1960s as a fundamentalist Christian effort in the United States to prove Biblical inerrancy and nullify the scientific evidence for evolution. It has since developed a sizable religious following in the United States, with creation science ministries branching worldwide. The main ideas in creation science are: the belief in creation ex nihilo (Latin: out of nothing); the conviction that the Earth was created within the last 6,000–10,000 years; the belief that humans and other life on Earth were created as distinct fixed "baraminological" kinds; and "flood geology", the idea that fossils found in geological strata were deposited during a cataclysmic flood which completely covered the entire Earth. As a result, creationists also challenge the geologic and astrophysical measurements of the age of the Earth and the universe along with their origins, which they believe are irreconcilable with the account in the Book of Genesis. Creation science proponents often refer to the theory of evolution as "Darwinism" or as "Darwinian evolution."
The creation science texts and curricula that first emerged in the 1960s focused upon concepts derived from a literal interpretation of the Bible and were overtly religious in nature, most notably proposing Noah's flood in the Biblical Genesis account as an explanation for the geological and fossil record. These works attracted little notice beyond the schools and congregations of conservative fundamentalist and Evangelical Christians until the 1970s, when its followers challenged the teaching of evolution in the public schools and other venues in the United States, bringing it to the attention of the public at large and the scientific community. Many school boards and lawmakers were persuaded to include the teaching of creation science alongside evolution in the science curriculum. Creation science texts and curricula used in churches and Christian schools were revised to eliminate their Biblical and theological references, and less explicitly sectarian versions of creation science education were introduced in public schools in Louisiana, Arkansas, and other regions in the United States.

The 1982 ruling in McLean v. Arkansas found that creation science fails to meet the essential characteristics of science and that its chief intent is to advance a particular religious view. The teaching of creation science in public schools in the United States effectively ended in 1987 following the United States Supreme Court decision in Edwards v. Aguillard. The court affirmed that a statute requiring the teaching of creation science alongside evolution when evolution is taught in Louisiana public schools was unconstitutional because its sole true purpose was to advance a particular religious belief.

In response to this ruling, drafts of the creation science school textbook Of Pandas and People were edited to change references of creation to intelligent design before its publication in 1989. The intelligent design movement promoted this version. Requiring intelligent design to be taught in public school science classes was found to be unconstitutional in the 2005 Kitzmiller v. Dover Area School District federal court case.

The teaching of evolution was gradually introduced into more and more public high school textbooks in the United States after 1900, but in the aftermath of the First World War the growth of fundamentalist Christianity gave rise to a creationist opposition to such teaching. Legislation prohibiting the teaching of evolution was passed in certain regions, most notably Tennessee's Butler Act of 1925. The Soviet Union's successful launch of Sputnik 1 in 1957 sparked national concern that the science education in public schools was outdated. In 1958, the United States passed the National Defense Education Act, which introduced new education guidelines for science instruction. With federal grant funding, the Biological Sciences Curriculum Study (BSCS) drafted new standards for the public schools' science textbooks which included the teaching of evolution. Almost half the nation's high schools were using textbooks based on the guidelines of the BSCS soon after they were published in 1963. The Tennessee legislature did not repeal the Butler Act until 1967.

Creation science (dubbed "scientific creationism" at the time) emerged as an organized movement during the 1960s.
It was strongly influenced by the earlier work of armchair geologist George McCready Price, who wrote works such as Illogical Geology: The Weakest Point in the Evolution Theory (1906) and The New Geology (1923) to advance what he termed "new catastrophism" and to dispute the current geological time frames and explanations of geologic history. Price was cited at the Scopes Trial of 1925, but his writings had no credence among geologists and other scientists. Price's "new catastrophism" was also disputed by most other creationists until its revival with the 1961 publication of The Genesis Flood by John C. Whitcomb and Henry M. Morris, a work which quickly became an important text on the issue for fundamentalist Christians and expanded the field of creation science beyond critiques of geology into biology and cosmology as well. Soon after its publication, a movement was underway to have the subject taught in United States public schools.

The various state laws prohibiting teaching of evolution were overturned in 1968 when the United States Supreme Court ruled in Epperson v. Arkansas that such laws violated the Establishment Clause of the First Amendment to the United States Constitution. This ruling inspired a new creationist movement to promote laws requiring that schools give balanced treatment to creation science when evolution is taught. The 1981 Arkansas Act 590 was one such law that carefully detailed the principles of creation science that were to receive equal time in public schools alongside evolutionary principles. The act defined creation science as follows:

"'Creation-science' means the scientific evidences for creation and inferences from those evidences. Creation-science includes the scientific evidences and related inferences that indicate: (1) sudden creation of the universe, energy, and life from nothing; (2) the insufficiency of mutation and natural selection in bringing about development of all living kinds from a single organism; (3) changes only within fixed limits of originally created kinds of plants and animals; (4) separate ancestry for man and apes; (5) explanation of the earth's geology by catastrophism, including the occurrence of a worldwide flood; and (6) a relatively recent inception of the earth and living kinds."

This legislation was examined in McLean v. Arkansas, and the ruling handed down on January 5, 1982, concluded that creation-science as defined in the act "is simply not science". The judgement defined the essential characteristics of science as follows: it is guided by natural law; it is explanatory by reference to natural law; it is testable against the empirical world; its conclusions are tentative, that is, not necessarily the final word; and it is falsifiable.

The court ruled that creation science failed to meet these essential characteristics and identified specific reasons. After examining the key concepts from creation science, the court found that none of them satisfied these criteria.

The court further noted that no recognized scientific journal had published any article espousing the creation science theory as described in the Arkansas law, and stated that the testimony presented by the defense attributing this absence to censorship was not credible.

In its ruling, the court wrote that for any theory to qualify as scientific, the theory must be tentative and open to revision or abandonment as new facts come to light. It wrote that any methodology which begins with an immutable conclusion that cannot be revised or rejected, regardless of the evidence, is not a scientific theory. The court found that creation science does not culminate in conclusions formed from scientific inquiry, but instead begins with the conclusion, one taken from a literal wording of the Book of Genesis, and seeks only scientific evidence to support it.

The law in Arkansas adopted the same two-model approach as that put forward by the Institute for Creation Research, one allowing only two possible explanations for the origins of life and the existence of man, plants and animals: it was either the work of a creator or it was not.
Scientific evidence that failed to support the theory of evolution was posed as necessarily scientific evidence in support of creationism, but in its judgment the court ruled this approach to be no more than a "contrived dualism which has no scientific factual basis or legitimate educational purpose."

The judge concluded that "Act 590 is a religious crusade, coupled with a desire to conceal this fact," and that it violated the First Amendment's Establishment Clause. The decision was not appealed to a higher court, but it had a powerful influence on subsequent rulings. Louisiana's 1982 Balanced Treatment for Creation-Science and Evolution-Science Act, authored by State Senator Bill P. Keith, was judged in the 1987 United States Supreme Court case Edwards v. Aguillard and was handed a similar ruling: the court found that the law, by requiring the balanced teaching of creation science with evolution, had a particular religious purpose and was therefore unconstitutional.

In 1984, The Mystery of Life's Origin was first published. It was co-authored by chemist and creationist Charles B. Thaxton with Walter L. Bradley and Roger L. Olsen, with a foreword by Dean H. Kenyon, and was sponsored by the Christian-based Foundation for Thought and Ethics (FTE). The work presented scientific arguments against current theories of abiogenesis and offered a hypothesis of special creation instead. While the focus of creation science had until that time centered primarily on criticism of the fossil evidence for evolution and validation of the creation myth of the Bible, this new work posed the question of whether science reveals that even the simplest living systems were far too complex to have developed by natural, unguided processes.

Kenyon later co-wrote with creationist Percival Davis a book intended as a "scientific brief for creationism" to use as a supplement to public high school biology textbooks. Thaxton was enlisted as the book's editor, and the book received publishing support from the FTE. Prior to its release, the 1987 Supreme Court ruling in Edwards v. Aguillard barred the teaching of creation science and creationism in public school classrooms. The book, originally titled Biology and Creation but renamed Of Pandas and People, was released in 1989 and became the first published work to promote the anti-evolutionist design argument under the name intelligent design. The contents of the book later became a focus of evidence in the federal court case Kitzmiller v. Dover Area School District, when a group of parents filed suit to halt the teaching of intelligent design in Dover, Pennsylvania, public schools. School board officials there had attempted to include Of Pandas and People in their biology classrooms, and testimony given during the trial revealed that the book was originally written as a creationist text but, following the adverse decision in the Supreme Court, underwent simple cosmetic editing to remove the explicit allusions to "creation" or "creator" and replace them with references to "design" or "designer."

By the mid-1990s, intelligent design had become a separate movement. The creation science movement is distinguished from the intelligent design movement, or neo-creationism, because most advocates of creation science accept scripture as a literal and inerrant historical account, and their primary goal is to corroborate the scriptural account through the use of science.
In contrast, as a matter of principle, neo-creationism eschews references to scripture altogether in its polemics and stated goals (see Wedge strategy). By so doing, intelligent design proponents have attempted to succeed where creation science has failed in securing a place in public school science curricula. Carefully avoiding any reference to the identity of the intelligent designer as God in their public arguments, intelligent design proponents sought to reintroduce creationist ideas into science classrooms while sidestepping the First Amendment's prohibition against religious infringement. However, the intelligent design curriculum was struck down as a violation of the Establishment Clause in Kitzmiller v. Dover Area School District; the judge in the case ruled that "ID is nothing less than the progeny of creationism."

Today, creation science as an organized movement is primarily centered within the United States. Creation science organizations also exist in other countries, most notably Creation Ministries International, which was founded (under the name Creation Science Foundation) in Australia. Proponents are usually aligned with a Christian denomination, primarily with those characterized as evangelical, conservative, or fundamentalist. While creationist movements also exist in Islam and Judaism, these movements do not use the phrase creation science to describe their beliefs.

Creation science has its roots in the work of young Earth creationist George McCready Price disputing modern science's account of natural history, focusing particularly on geology and its concept of uniformitarianism, and in his efforts to furnish an alternative empirical explanation of observable phenomena compatible with strict Biblical literalism. Price's work was later discovered by civil engineer Henry M. Morris, who is now considered to be the father of creation science. Morris and later creationists expanded the scope with attacks against the broad spectrum of scientific findings that point to the antiquity of the Universe and common ancestry among species, including the growing body of evidence from the fossil record, absolute dating techniques, and cosmogony.

The proponents of creation science often say that they are concerned with religious and moral questions as well as natural observations and predictive hypotheses. Many state that their opposition to scientific evolution is primarily based on religion. The overwhelming majority of scientists are in agreement that the claims of science are necessarily limited to those that develop from natural observations and experiments which can be replicated and substantiated by other scientists, and that claims made by creation science do not meet those criteria. Duane Gish, a prominent creation science proponent, has similarly claimed, "We do not know how the creator created, what processes He used, for He used processes which are not now operating anywhere in the natural universe. This is why we refer to creation as special creation. We cannot discover by scientific investigation anything about the creative processes used by the Creator." But he also makes the same claim against science's evolutionary theory, maintaining that on the subject of origins, scientific evolution is a religious theory which cannot be validated by science.

Creation science makes the a priori metaphysical assumption that there exists a creator of the life whose origin is being examined.
Christian creation science holds that the description of creation is given in the Bible, that the Bible is inerrant in this description (and elsewhere), and that empirical scientific evidence must therefore correspond with that description. Creationists also view the preclusion of all supernatural explanations within the sciences as a doctrinaire commitment to exclude the supreme being and miracles. They claim this to be the motivating factor in science's acceptance of Darwinism, a term used in creation science to refer to evolutionary biology, and often used as a disparagement.

Critics argue that creation science is religious rather than scientific because it stems from faith in a religious text rather than from the application of the scientific method. The United States National Academy of Sciences (NAS) has stated unequivocally, "Evolution pervades all biological phenomena. To ignore that it occurred or to classify it as a form of dogma is to deprive the student of the most fundamental organizational concept in the biological sciences. No other biological concept has been more extensively tested and more thoroughly corroborated than the evolutionary history of organisms." Anthropologist Eugenie Scott has noted further, "Religious opposition to evolution propels antievolutionism. Although antievolutionists pay lip service to supposed scientific problems with evolution, what motivates them to battle its teaching is apprehension over the implications of evolution for religion."

Creation science advocates argue that scientific theories of the origins of the Universe, Earth, and life are rooted in a priori presumptions of methodological naturalism and uniformitarianism, each of which they reject. In some areas of science, such as chemistry, meteorology or medicine, creation science proponents do not necessarily challenge the application of naturalistic or uniformitarian assumptions; instead they single out those scientific theories they judge to be in conflict with their religious beliefs, and it is against those theories that they concentrate their efforts.

Many mainstream Christian churches criticize creation science on theological grounds, asserting either that religious faith alone should be a sufficient basis for belief in the truth of creation, or that efforts to prove the Genesis account of creation on scientific grounds are inherently futile because reason is subordinate to faith and cannot be used to prove it. Many Christian theologies, including Liberal Christianity, consider the Genesis creation narrative to be a poetic and allegorical work rather than a literal history, and many Christian churches, including the Eastern Orthodox Church, the Roman Catholic Church, the Anglican Communion and the more liberal denominations of the Lutheran, Methodist, Congregationalist and Presbyterian faiths, have either rejected creation science outright or are ambivalent to it. Belief in non-literal interpretations of Genesis is often cited as going back to Saint Augustine.

Theistic evolution and evolutionary creationism are theologies that reconcile belief in a creator with biological evolution. Each holds the view that there is a creator, but that this creator has employed the natural force of evolution to unfold a divine plan. Religious representatives from faiths compatible with theistic evolution and evolutionary creationism have challenged the growing perception that belief in a creator is inconsistent with the acceptance of evolutionary theory.
Spokespersons from the Catholic Church have specifically criticized biblical creationism for relying upon literal interpretations of biblical scripture as the basis for determining scientific fact.

The National Academy of Sciences states that "the claims of creation science lack empirical support and cannot be meaningfully tested" and that "creation science is in fact not science and should not be presented as such in science classes." According to Joyce Arthur, writing for Skeptic magazine, the "creation 'science' movement gains much of its strength through the use of distortion and scientifically unethical tactics" and "seriously misrepresents the theory of evolution."

Scientists have considered the hypotheses proposed by creation science and have rejected them because of a lack of evidence. Furthermore, the claims of creation science do not refer to natural causes and cannot be subject to meaningful tests, so they do not qualify as scientific hypotheses. In 1987, the United States Supreme Court ruled that creationism is religion, not science, and cannot be advocated in public school classrooms. Most mainline Christian denominations have concluded that the concept of evolution is not at odds with their descriptions of creation and human origins.

Scientists have raised a range of objections to creation science. By invoking claims of "abrupt appearance" of species as a miraculous act, creation science is unsuited for the tools and methods demanded by science, and it cannot be considered scientific in the way that the term "science" is currently defined. Scientists and science writers commonly characterize creation science as a pseudoscience.

Historically, the debate over whether creationism is compatible with science can be traced back to 1874, the year science historian John William Draper published his History of the Conflict between Religion and Science. In it, Draper portrayed the entire history of scientific development as a war against religion. This presentation of history was propagated further by followers such as Andrew Dickson White in his two-volume A History of the Warfare of Science with Theology in Christendom (1896). Their conclusions have been disputed.

In the United States, the principal focus of creation science advocates is on the government-supported public school systems, which are prohibited by the Establishment Clause from promoting specific religions. Historical communities have argued that Biblical translations contain many translation errors and errata, and therefore that the use of biblical literalism in creation science is self-contradictory.

Creationist arguments in relation to biology center on an idea derived from Genesis that states that life was created by God in a finite number of "created kinds", rather than through biological evolution from a common ancestor. Creationists contend that any observable speciation descends from these distinctly created kinds through inbreeding, deleterious mutations and other genetic mechanisms. Whereas evolutionary biologists and creationists share similar views of microevolution, creationists reject the fact that the process of macroevolution can explain common ancestry among organisms far beyond the level of common species. Creationists contend that there is no empirical evidence for new plant or animal species, and deny that fossil evidence documenting the process has ever been found. Popular arguments against evolution have changed since the publication of Henry M.
Morris' first book on the subject, Scientific Creationism (1974), but some consistent themes remain: that missing links or gaps in the fossil record are proof against evolution; that the increased complexity of organisms over time through evolution is not possible due to the law of increasing entropy; that it is impossible that the mechanism of natural selection could account for common ancestry; and that evolutionary theory is untestable. The origin of the human species is particularly hotly contested; the fossil remains of hominid ancestors are not considered by advocates of creation biology to be evidence for a speciation event involving Homo sapiens. Creationists also assert that early hominids are either apes or humans.

Richard Dawkins has explained evolution as "a theory of gradual, incremental change over millions of years, which starts with something very simple and works up along slow, gradual gradients to greater complexity," and described the existing fossil record as entirely consistent with that process. Biologists emphasize that transitional gaps between recovered fossils are to be expected, that the existence of any such gaps cannot be invoked to disprove evolution, and that instead the fossil evidence that could be used to disprove the theory would be fossils which are found and which are entirely inconsistent with what can be predicted or anticipated by the evolutionary model. One example given by Dawkins was, "If there were a single hippo or rabbit in the Precambrian, that would completely blow evolution out of the water. None have ever been found."

Flood geology is a concept based on the belief that most of Earth's geological record was formed by the Great Flood described in the story of Noah's Ark. Fossils and fossil fuels are believed to have formed from animal and plant matter which was buried rapidly during this flood, while submarine canyons are explained as having formed during a rapid runoff from the continents at the end of the flood. Sedimentary strata are also claimed to have been predominantly laid down during or after Noah's flood and orogeny. Flood geology is a variant of catastrophism and is contrasted with geological science in that it rejects standard geological principles such as uniformitarianism and radiometric dating. For example, the Creation Research Society argues that "uniformitarianism is wishful thinking."

Geologists conclude that no evidence for such a flood is observed in the preserved rock layers, and moreover that such a flood is physically impossible given the current layout of land masses. For instance, since Mount Everest currently stands approximately 8.8 kilometres in elevation and the Earth's surface area is 510,065,600 km², the volume of water required to cover Mount Everest to a depth of 15 cubits (6.8 m), as indicated by Genesis 7:20, would be about 4.6 billion cubic kilometres. Measurements of the amount of precipitable water vapor in the atmosphere have yielded results indicating that condensing all water vapor in a column of atmosphere would produce liquid water with a depth ranging between zero and approximately 70 mm, depending on the date and the location of the column. Nevertheless, there continue to be adherents to the belief in flood geology, and in recent years new creationist models have been introduced, such as catastrophic plate tectonics and catastrophic orogeny.
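To make the scale of that mismatch concrete, the figures quoted above can be checked with a short back-of-the-envelope script. This is only an illustrative sketch, assuming a smooth sphere covered uniformly to Everest's summit plus 15 cubits and ignoring the volume of land already above sea level; it is not the calculation used by the cited sources.

    # Rough check of the water-volume figures quoted above (illustrative only;
    # treats the Earth as a smooth sphere, ignoring land above sea level).
    EARTH_SURFACE_KM2 = 510_065_600      # Earth's surface area in km^2, from the text
    EVEREST_M = 8_848                    # approximate elevation of Mount Everest, m
    FIFTEEN_CUBITS_M = 6.8               # depth above the summit per Genesis 7:20, m

    flood_depth_km = (EVEREST_M + FIFTEEN_CUBITS_M) / 1000.0
    flood_volume_km3 = EARTH_SURFACE_KM2 * flood_depth_km
    print(f"water required:    {flood_volume_km3:.2e} km^3")  # ~4.5e9 km^3

    # For contrast, condensing the maximum ~70 mm of precipitable water vapor
    # in an atmospheric column over the whole globe yields vastly less water.
    atmospheric_km3 = EARTH_SURFACE_KM2 * (70 / 1_000_000)    # 70 mm expressed in km
    print(f"atmospheric water: {atmospheric_km3:.2e} km^3")   # ~3.6e4 km^3

The required volume comes out near 4.5 billion cubic kilometres, in line with the roughly 4.6 billion quoted above (the small difference is rounding in the assumed depth), and about five orders of magnitude more than the atmosphere could supply.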
Creationists point to flawed experiments they have performed which, they claim, demonstrate that 1.5 billion years of nuclear decay took place over a short period of time, from which they infer that "billion-fold speed-ups of nuclear decay" have occurred. This would be a massive violation of the principle that radioisotope decay rates are constant, a core principle underlying nuclear physics generally and radiometric dating in particular. The scientific community points to numerous flaws in the creationists' experiments, to the fact that their results have not been accepted for publication by any peer-reviewed scientific journal, and to the fact that the creationist scientists conducting them were untrained in experimental geochronology. They have also been criticized for widely publicizing the results of their research as successful despite their own admission of insurmountable problems with their hypothesis.

The constancy of the decay rates of isotopes is well supported in science. Evidence for this constancy includes the correspondence of date estimates taken from different radioactive isotopes as well as correspondence with non-radiometric dating techniques such as dendrochronology, ice core dating, and historical records. Although scientists have noted slight increases in the decay rate for isotopes subject to extreme pressures, those differences were too small to significantly impact date estimates. The constancy of the decay rates is also governed by first principles in quantum mechanics, wherein any deviation in the rate would require a change in the fundamental constants. According to these principles, a change in the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements' resulting unique chronological timescales would then give inconsistent time estimates. In refutation of young Earth claims of inconstant decay rates affecting the reliability of radiometric dating, Roger C. Wiens, a physicist specializing in isotope dating, states that there are only three quite technical instances where a half-life changes, and that these do not affect the dating methods.

In the 1970s, young Earth creationist Robert V. Gentry proposed that radiohaloes in certain granites represented evidence for the Earth being created instantaneously rather than gradually. This idea has been criticized by physicists and geologists on many grounds, including that the rocks Gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. Thomas A. Baillieul, a geologist and retired senior environmental scientist with the United States Department of Energy, disputed Gentry's claims in an article entitled "'Polonium Haloes' Refuted: A Review of 'Radioactive Halos in a Radio-Chronological and Cosmological Perspective' by Robert V. Gentry". Baillieul noted that Gentry was a physicist with no background in geology and that, lacking this background, Gentry had misrepresented the geological context from which the specimens were collected. Additionally, he noted that Gentry relied on research from the beginning of the 20th century, long before radioisotopes were thoroughly understood; that his assumption that a polonium isotope caused the rings was speculative; and that Gentry falsely argued that the half-life of radioactive elements varies with time.
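The decay arithmetic at stake in these disputes is compact enough to show directly. The sketch below, with invented daughter-to-parent ratios rather than measured data, illustrates how an age follows from a constant half-life and how two independent isotope systems can cross-check one another; for simplicity it assumes no initial daughter isotope and ignores the branching fraction of potassium-40 decays, so it is a schematic of the method, not a real dating calculation.

    import math

    # Age from a daughter/parent ratio, assuming a constant decay rate and no
    # initial daughter isotope: N(t) = N0 * exp(-lambda * t) with
    # lambda = ln(2) / half_life, hence t = ln(1 + D/P) / lambda.
    def age_years(daughter_parent_ratio, half_life_years):
        decay_constant = math.log(2) / half_life_years
        return math.log(1 + daughter_parent_ratio) / decay_constant

    # Hypothetical ratios for the same rock measured with two chronometers:
    t_k_ar = age_years(0.7428, 1.248e9)  # K-40 -> Ar-40, half-life ~1.25 Gyr
    t_u_pb = age_years(0.1678, 4.468e9)  # U-238 -> Pb-206, half-life ~4.47 Gyr

    print(f"K-Ar age: {t_k_ar:.3e} yr")  # ~1.0e9 yr
    print(f"U-Pb age: {t_u_pb:.3e} yr")  # ~1.0e9 yr

Concordant ages from independent decay systems are the "correspondence of date estimates" described above: a change in decay rates could not shift both systems identically, so agreement between them is itself evidence of constancy.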
Gentry claimed that Baillieul could not publish his criticisms in a reputable scientific journal, although some of Baillieul's criticisms rested on work previously published in reputable scientific journals.

Several attempts have been made by creationists to construct a cosmology consistent with a young Universe rather than the standard cosmological age of the universe, based on the belief that Genesis describes the creation of the Universe as well as the Earth. The primary challenge for young-universe cosmologies is that the accepted distances in the Universe require millions or billions of years for light to travel to Earth (the "starlight problem"). An older idea, proposed by creationist astronomer Barry Setterfield, is that the speed of light has decayed over the history of the Universe. More recently, creationist physicist Russell Humphreys has proposed a hypothesis called "white hole cosmology", asserting that the Universe expanded out of a white hole less than 10,000 years ago and that the age of the universe is illusory, resulting from relativistic effects. Humphreys' cosmology is advocated by creationist organizations such as Answers in Genesis; however, because its predictions conflict with current observations, it is not accepted by the scientific community.

Various claims are made by creationists concerning alleged evidence that the age of the Solar System is of the order of thousands of years, in contrast to the scientifically accepted age of 4.6 billion years. It is commonly argued that the number of comets in the Solar System is much higher than would be expected given its supposed age. Young Earth creationists reject the existence of the Kuiper belt and Oort cloud. They also argue that the recession of the Moon from the Earth is incompatible with either the Moon or the Earth being billions of years old. These claims have been refuted by planetologists.

In response to increasing evidence suggesting that Mars once possessed a wetter climate, some creationists have proposed that the global flood affected not only the Earth but also Mars and other planets. People who support this claim include creationist astronomer Wayne Spencer and Russell Humphreys.

An ongoing problem for creationists is the presence of impact craters on nearly all Solar System objects, which is consistent with scientific explanations of solar system origins but creates insuperable problems for young Earth claims. Creationists Harold Slusher and Richard Mandock, along with Glenn Morton (who later repudiated this claim), asserted that impact craters on the Moon are subject to rock flow and so cannot be more than a few thousand years old. While some creationist astronomers assert that different phases of meteoritic bombardment of the Solar System occurred during "creation week" and during the subsequent Great Flood, others regard this as unsupported by the evidence and call for further research.

A number of notable creationist museums operate in the United States.
[ { "paragraph_id": 0, "text": "Creation science or scientific creationism is a pseudoscientific form of Young Earth creationism which claims to offer scientific arguments for certain literalist and inerrantist interpretations of the Bible. It is often presented without overt faith-based language, but instead relies on reinterpreting scientific results to argue that various myths in the Book of Genesis and other select biblical passages are scientifically valid. The most commonly advanced ideas of creation science include special creation based on the Genesis creation narrative and flood geology based on the Genesis flood narrative. Creationists also claim they can disprove or reexplain a variety of scientific facts, theories and paradigms of geology, cosmology, biological evolution, archaeology, history, and linguistics using creation science. Creation science was foundational to intelligent design.", "title": "" }, { "paragraph_id": 1, "text": "The overwhelming consensus of the scientific community is that creation science fails to qualify as scientific because it lacks empirical support, supplies no testable hypotheses, and resolves to describe natural history in terms of scientifically untestable supernatural causes. Courts, most often in the United States where the question has been asked in the context of teaching the subject in public schools, have consistently ruled since the 1980s that creation science is a religious view rather than a scientific one. Historians, philosophers of science and skeptics have described creation science as a pseudoscientific attempt to map the Bible into scientific facts. Professional biologists have criticized creation science for being unscholarly, and even as a dishonest and misguided sham, with extremely harmful educational consequences.", "title": "" }, { "paragraph_id": 2, "text": "Creation science is based largely upon chapters 1–11 of the Book of Genesis. These describe how God calls the world into existence through the power of speech (\"And God said, Let there be light,\" etc.) in six days, calls all the animals and plants into existence, and molds the first man from clay and the first woman from a rib taken from the man's side; a worldwide flood destroys all life except for Noah and his family and representatives of the animals, and Noah becomes the ancestor of the 70 \"nations\" of the world; the nations live together until the incident of the Tower of Babel, when God disperses them and gives them their different languages. Creation science attempts to explain history and science within the span of Biblical chronology, which places the initial act of creation some six thousand years ago.", "title": "Beliefs and activities" }, { "paragraph_id": 3, "text": "Most creation science proponents hold fundamentalist or Evangelical Christian beliefs in Biblical literalism or Biblical inerrancy, as opposed to the higher criticism supported by liberal Christianity in the Fundamentalist–Modernist Controversy. However, there are also examples of Islamic and Jewish scientific creationism that conform to the accounts of creation as recorded in their religious doctrines.", "title": "Beliefs and activities" }, { "paragraph_id": 4, "text": "The Seventh-day Adventist Church has a history of support for creation science. This dates back to George McCready Price, an active Seventh-day Adventist who developed views of flood geology, which formed the basis of creation science. 
This work was continued by the Geoscience Research Institute, an official institute of the Seventh-day Adventist Church, located on its Loma Linda University campus in California.", "title": "Beliefs and activities" }, { "paragraph_id": 5, "text": "Creation science is generally rejected by the Church of England as well as the Roman Catholic Church. The Pontifical Gregorian University has officially discussed intelligent design as a \"cultural phenomenon\" without scientific elements. The Church of England's official website cites Charles Darwin's local work assisting people in his religious parish.", "title": "Beliefs and activities" }, { "paragraph_id": 6, "text": "Creation science rejects evolution and the common descent of all living things on Earth. Instead, it asserts that the field of evolutionary biology is itself pseudoscientific or even a religion. Creationists argue instead for a system called baraminology, which considers the living world to be descended from uniquely created kinds or \"baramins.\"", "title": "Beliefs and activities" }, { "paragraph_id": 7, "text": "Creation science incorporates the concept of catastrophism to reconcile current landforms and fossil distributions with Biblical interpretations, proposing the remains resulted from successive cataclysmic events, such as a worldwide flood and subsequent ice age. It rejects one of the fundamental principles of modern geology (and of modern science generally), uniformitarianism, which applies the same physical and geological laws observed on the Earth today to interpret the Earth's geological history.", "title": "Beliefs and activities" }, { "paragraph_id": 8, "text": "Sometimes creationists attack other scientific concepts, like the Big Bang cosmological model or methods of scientific dating based upon radioactive decay. Young Earth creationists also reject current estimates of the age of the universe and the age of the Earth, arguing for creationist cosmologies with timescales much shorter than those determined by modern physical cosmology and geological science, typically less than 10,000 years.", "title": "Beliefs and activities" }, { "paragraph_id": 9, "text": "The scientific community has overwhelmingly rejected the ideas put forth in creation science as lying outside the boundaries of a legitimate science. The foundational premises underlying scientific creationism disqualify it as a science because the answers to all inquiry therein are preordained to conform to Bible doctrine, and because that inquiry is constructed upon theories which are not empirically testable in nature.", "title": "Beliefs and activities" }, { "paragraph_id": 10, "text": "Scientists also deem creation science's attacks against biological evolution to be without scientific merit. The views of the scientific community were accepted in two significant court decisions in the 1980s, which found the field of creation science to be a religious mode of inquiry, not a scientific one.", "title": "Beliefs and activities" }, { "paragraph_id": 11, "text": "Creation science began in the 1960s, as a fundamentalist Christian effort in the United States to prove Biblical inerrancy and nullify the scientific evidence for evolution. It has since developed a sizable religious following in the United States, with creation science ministries branching worldwide. 
The main ideas in creation science are: the belief in creation ex nihilo (Latin: out of nothing); the conviction that the Earth was created within the last 6,000–10,000 years; the belief that humans and other life on Earth were created as distinct fixed \"baraminological\" kinds; and \"flood geology\" or the idea that fossils found in geological strata were deposited during a cataclysmic flood which completely covered the entire Earth. As a result, creationists also challenge the geologic and astrophysical measurements of the age of the Earth and the universe along with their origins, which creationists believe are irreconcilable with the account in the Book of Genesis. Creation science proponents often refer to the theory of evolution as \"Darwinism\" or as \"Darwinian evolution.\"", "title": "History" }, { "paragraph_id": 12, "text": "The creation science texts and curricula that first emerged in the 1960s focused upon concepts derived from a literal interpretation of the Bible and were overtly religious in nature, most notably proposing Noah's flood in the Biblical Genesis account as an explanation for the geological and fossil record. These works attracted little notice beyond the schools and congregations of conservative fundamental and Evangelical Christians until the 1970s, when its followers challenged the teaching of evolution in the public schools and other venues in the United States, bringing it to the attention of the public-at-large and the scientific community. Many school boards and lawmakers were persuaded to include the teaching of creation science alongside evolution in the science curriculum. Creation science texts and curricula used in churches and Christian schools were revised to eliminate their Biblical and theological references, and less explicitly sectarian versions of creation science education were introduced in public schools in Louisiana, Arkansas, and other regions in the United States.", "title": "History" }, { "paragraph_id": 13, "text": "The 1982 ruling in McLean v. Arkansas found that creation science fails to meet the essential characteristics of science and that its chief intent is to advance a particular religious view. The teaching of creation science in public schools in the United States effectively ended in 1987 following the United States Supreme Court decision in Edwards v. Aguillard. The court affirmed that a statute requiring the teaching of creation science alongside evolution when evolution is taught in Louisiana public schools was unconstitutional because its sole true purpose was to advance a particular religious belief.", "title": "History" }, { "paragraph_id": 14, "text": "In response to this ruling, drafts of the creation science school textbook Of Pandas and People were edited to change references of creation to intelligent design before its publication in 1989. The intelligent design movement promoted this version. Requiring intelligent design to be taught in public school science classes was found to be unconstitutional in the 2005 Kitzmiller v. Dover Area School District federal court case.", "title": "History" }, { "paragraph_id": 15, "text": "The teaching of evolution was gradually introduced into more and more public high school textbooks in the United States after 1900, but in the aftermath of the First World War the growth of fundamentalist Christianity gave rise to a creationist opposition to such teaching. 
Legislation prohibiting the teaching of evolution was passed in certain regions, most notably Tennessee's Butler Act of 1925. The Soviet Union's successful launch of Sputnik 1 in 1957 sparked national concern that the science education in public schools was outdated. In 1958, the United States passed National Defense Education Act which introduced new education guidelines for science instruction. With federal grant funding, the Biological Sciences Curriculum Study (BSCS) drafted new standards for the public schools' science textbooks which included the teaching of evolution. Almost half the nation's high schools were using textbooks based on the guidelines of the BSCS soon after they were published in 1963. The Tennessee legislature did not repeal the Butler Act until 1967.", "title": "History" }, { "paragraph_id": 16, "text": "Creation science (dubbed \"scientific creationism\" at the time) emerged as an organized movement during the 1960s. It was strongly influenced by the earlier work of armchair geologist George McCready Price who wrote works such as Illogical Geology: The Weakest Point in the Evolution Theory (1906) and The New Geology (1923) to advance what he termed \"new catastrophism\" and dispute the current geological time frames and explanations of geologic history. Price was cited at the Scopes Trial of 1925, but his writings had no credence among geologists and other scientists. Price's \"new catastrophism\" was also disputed by most other creationists until its revival with the 1961 publication of The Genesis Flood by John C. Whitcomb and Henry M. Morris, a work which quickly became an important text on the issue to fundamentalist Christians and expanded the field of creation science beyond critiques of geology into biology and cosmology as well. Soon after its publication, a movement was underway to have the subject taught in United States' public schools.", "title": "History" }, { "paragraph_id": 17, "text": "The various state laws prohibiting teaching of evolution were overturned in 1968 when the United States Supreme Court ruled in Epperson v. Arkansas such laws violated the Establishment Clause of the First Amendment to the United States Constitution. This ruling inspired a new creationist movement to promote laws requiring that schools give balanced treatment to creation science when evolution is taught. The 1981 Arkansas Act 590 was one such law that carefully detailed the principles of creation science that were to receive equal time in public schools alongside evolutionary principles. The act defined creation science as follows:", "title": "History" }, { "paragraph_id": 18, "text": "\"'Creation-science' means the scientific evidences for creation and inferences from those evidences. Creation-science includes the scientific evidences and related inferences that indicate:", "title": "History" }, { "paragraph_id": 19, "text": "This legislation was examined in McLean v. Arkansas, and the ruling handed down on January 5, 1982, concluded that creation-science as defined in the act \"is simply not science\". The judgement defined the following as essential characteristics of science:", "title": "History" }, { "paragraph_id": 20, "text": "The court ruled that creation science failed to meet these essential characteristics and identified specific reasons. 
After examining the key concepts from creation science, the court found:", "title": "History" }, { "paragraph_id": 21, "text": "The court further noted that no recognized scientific journal had published any article espousing the creation science theory as described in the Arkansas law, and stated that the testimony presented by the defense attributing the absence to censorship was not credible.", "title": "History" }, { "paragraph_id": 22, "text": "In its ruling, the court wrote that for any theory to qualify as scientific, the theory must be tentative and open to revision or abandonment as new facts come to light. It wrote that any methodology which begins with an immutable conclusion that cannot be revised or rejected, regardless of the evidence, is not a scientific theory. The court found that creation science does not culminate in conclusions formed from scientific inquiry, but instead begins with the conclusion, one taken from a literal wording of the Book of Genesis, and seeks only scientific evidence to support it.", "title": "History" }, { "paragraph_id": 23, "text": "The law in Arkansas adopted the same two-model approach as that put forward by the Institute for Creation Research, one allowing only two possible explanations for the origins of life and existence of man, plants and animals: it was either the work of a creator or it was not. Scientific evidence that failed to support the theory of evolution was presented as necessarily being scientific evidence in support of creationism, but in its judgment the court ruled this approach to be no more than a \"contrived dualism which has no scientific factual basis or legitimate educational purpose.\"", "title": "History" }, { "paragraph_id": 24, "text": "The judge concluded that \"Act 590 is a religious crusade, coupled with a desire to conceal this fact,\" and that it violated the First Amendment's Establishment Clause. The decision was not appealed to a higher court, but had a powerful influence on subsequent rulings. Louisiana's 1982 Balanced Treatment for Creation-Science and Evolution-Science Act, authored by State Senator Bill P. Keith, was judged in the 1987 United States Supreme Court case Edwards v. Aguillard and handed a similar ruling. The court found that the law requiring the balanced teaching of creation science with evolution had a particular religious purpose and was therefore unconstitutional.", "title": "History" }, { "paragraph_id": 25, "text": "In 1984, The Mystery of Life's Origin was first published. It was co-authored by chemist and creationist Charles B. Thaxton with Walter L. Bradley and Roger L. Olsen, the foreword written by Dean H. Kenyon, and sponsored by the Christian-based Foundation for Thought and Ethics (FTE). The work presented scientific arguments against current theories of abiogenesis and offered a hypothesis of special creation instead. While creation science had until that time centered primarily on the criticism of the fossil evidence for evolution and validation of the creation myth of the Bible, this new work posed the question of whether science reveals that even the simplest living systems were far too complex to have developed by natural, unguided processes.", "title": "History" }, { "paragraph_id": 26, "text": "Kenyon later co-wrote with creationist Percival Davis a book intended as a \"scientific brief for creationism\" to use as a supplement to public high school biology textbooks. Thaxton was enlisted as the book's editor, and the book received publishing support from the FTE.
Prior to the book's release, the 1987 Supreme Court ruling in Edwards v. Aguillard barred the teaching of creation science and creationism in public school classrooms. The book, originally titled Biology and Creation but renamed Of Pandas and People, was released in 1989 and became the first published work to promote the anti-evolutionist design argument under the name intelligent design. The contents of the book later became a focus of evidence in the federal court case, Kitzmiller v. Dover Area School District, when a group of parents filed suit to halt the teaching of intelligent design in Dover, Pennsylvania, public schools. School board officials there had attempted to include Of Pandas and People in their biology classrooms, and testimony given during the trial revealed that the book was originally written as a creationist text but that, following the adverse decision in the Supreme Court, it underwent simple cosmetic editing to remove the explicit allusions to \"creation\" or \"creator,\" and replace them instead with references to \"design\" or \"designer.\"", "title": "History" }, { "paragraph_id": 27, "text": "By the mid-1990s, intelligent design had become a separate movement. The creation science movement is distinguished from the intelligent design movement, or neo-creationism, because most advocates of creation science accept scripture as a literal and inerrant historical account, and their primary goal is to corroborate the scriptural account through the use of science. In contrast, as a matter of principle, neo-creationism eschews references to scripture altogether in its polemics and stated goals (see Wedge strategy). By so doing, intelligent design proponents have attempted to succeed where creation science has failed in securing a place in public school science curricula. Carefully avoiding any reference to the identity of the intelligent designer as God in their public arguments, intelligent design proponents sought to reintroduce the creationist ideas into science classrooms while sidestepping the First Amendment's prohibition against religious infringement. However, the intelligent design curriculum was struck down as a violation of the Establishment Clause in Kitzmiller v. Dover Area School District; the judge in the case ruled \"that ID is nothing less than the progeny of creationism.\"", "title": "History" }, { "paragraph_id": 28, "text": "Today, creation science as an organized movement is primarily centered within the United States. Creation science organizations also exist in other countries, most notably Creation Ministries International, which was founded (under the name Creation Science Foundation) in Australia. Proponents are usually aligned with a Christian denomination, primarily with those characterized as evangelical, conservative, or fundamentalist. While creationist movements also exist in Islam and Judaism, these movements do not use the phrase creation science to describe their beliefs.", "title": "History" }, { "paragraph_id": 29, "text": "Creation science has its roots in the work of young Earth creationist George McCready Price, who disputed modern science's account of natural history, focused particularly on geology and its concept of uniformitarianism, and sought instead to furnish an alternative empirical explanation of observable phenomena which was compatible with strict Biblical literalism. Price's work was later discovered by civil engineer Henry M. Morris, who is now considered to be the father of creation science.
Morris and later creationists expanded the scope of their attacks to the broad spectrum of scientific findings that point to the antiquity of the Universe and common ancestry among species, including the growing body of evidence from the fossil record, absolute dating techniques, and cosmogony.", "title": "Issues" }, { "paragraph_id": 30, "text": "The proponents of creation science often say that they are concerned with religious and moral questions as well as natural observations and predictive hypotheses. Many state that their opposition to scientific evolution is primarily based on religion.", "title": "Issues" }, { "paragraph_id": 31, "text": "The overwhelming majority of scientists are in agreement that the claims of science are necessarily limited to those that develop from natural observations and experiments which can be replicated and substantiated by other scientists, and that claims made by creation science do not meet those criteria. Duane Gish, a prominent creation science proponent, has similarly claimed, \"We do not know how the creator created, what processes He used, for He used processes which are not now operating anywhere in the natural universe. This is why we refer to creation as special creation. We cannot discover by scientific investigation anything about the creative processes used by the Creator.\" But he also makes the same claim against science's evolutionary theory, maintaining that on the subject of origins, scientific evolution is a religious theory which cannot be validated by science.", "title": "Issues" }, { "paragraph_id": 32, "text": "Creation science makes the a priori metaphysical assumption that there exists a creator of the life whose origin is being examined. Christian creation science holds that the description of creation is given in the Bible, that the Bible is inerrant in this description (and elsewhere), and therefore empirical scientific evidence must correspond with that description. Creationists also view the preclusion of all supernatural explanations within the sciences as a doctrinaire commitment to exclude the supreme being and miracles. They claim this to be the motivating factor in science's acceptance of Darwinism, a term used in creation science to refer to evolutionary biology and one that is also often used as a disparagement. Critics argue that creation science is religious rather than scientific because it stems from faith in a religious text rather than from the application of the scientific method. The United States National Academy of Sciences (NAS) has stated unequivocally, \"Evolution pervades all biological phenomena. To ignore that it occurred or to classify it as a form of dogma is to deprive the student of the most fundamental organizational concept in the biological sciences. No other biological concept has been more extensively tested and more thoroughly corroborated than the evolutionary history of organisms.\" Anthropologist Eugenie Scott has noted further, \"Religious opposition to evolution propels antievolutionism. Although antievolutionists pay lip service to supposed scientific problems with evolution, what motivates them to battle its teaching is apprehension over the implications of evolution for religion.\"", "title": "Issues" }, { "paragraph_id": 33, "text": "Creation science advocates argue that scientific theories of the origins of the Universe, Earth, and life are rooted in a priori presumptions of methodological naturalism and uniformitarianism, each of which they reject.
In some areas of science such as chemistry, meteorology or medicine, creation science proponents do not necessarily challenge the application of naturalistic or uniformitarian assumptions, but instead single out those scientific theories they judge to be in conflict with their religious beliefs, and it is against those theories that they concentrate their efforts.", "title": "Issues" }, { "paragraph_id": 34, "text": "Many mainstream Christian churches criticize creation science on theological grounds, asserting either that religious faith alone should be a sufficient basis for belief in the truth of creation, or that efforts to prove the Genesis account of creation on scientific grounds are inherently futile because reason is subordinate to faith and cannot thus be used to prove it.", "title": "Issues" }, { "paragraph_id": 35, "text": "Many Christian theologies, including Liberal Christianity, consider the Genesis creation narrative to be a poetic and allegorical work rather than a literal history, and many Christian churches—including the Eastern Orthodox Church, the Roman Catholic, Anglican and the more liberal denominations of the Lutheran, Methodist, Congregationalist and Presbyterian faiths—have either rejected creation science outright or are ambivalent to it. Belief in non-literal interpretations of Genesis is often cited as going back to Saint Augustine.", "title": "Issues" }, { "paragraph_id": 36, "text": "Theistic evolution and evolutionary creationism are theologies that reconcile belief in a creator with biological evolution. Each holds the view that there is a creator but that this creator has employed the natural force of evolution to unfold a divine plan. Religious representatives from faiths compatible with theistic evolution and evolutionary creationism have challenged the growing perception that belief in a creator is inconsistent with the acceptance of evolutionary theory. Spokespersons from the Catholic Church have specifically criticized biblical creationism for relying upon literal interpretations of biblical scripture as the basis for determining scientific fact.", "title": "Issues" }, { "paragraph_id": 37, "text": "The National Academy of Sciences states that \"the claims of creation science lack empirical support and cannot be meaningfully tested\" and that \"creation science is in fact not science and should not be presented as such in science classes.\" According to Joyce Arthur writing for Skeptic magazine, the \"creation 'science' movement gains much of its strength through the use of distortion and scientifically unethical tactics\" and \"seriously misrepresents the theory of evolution.\"", "title": "Issues" }, { "paragraph_id": 38, "text": "Scientists have considered the hypotheses proposed by creation science and have rejected them because of a lack of evidence. Furthermore, the claims of creation science do not refer to natural causes and cannot be subject to meaningful tests, so they do not qualify as scientific hypotheses. In 1987, the United States Supreme Court ruled that creationism is religion, not science, and cannot be advocated in public school classrooms. 
Most mainline Christian denominations have concluded that the concept of evolution is not at odds with their descriptions of creation and human origins.", "title": "Issues" }, { "paragraph_id": 39, "text": "A summary of the objections to creation science by scientists follows:", "title": "Issues" }, { "paragraph_id": 40, "text": "By invoking claims of \"abrupt appearance\" of species as a miraculous act, creation science is unsuited for the tools and methods demanded by science, and it cannot be considered scientific in the way that the term \"science\" is currently defined. Scientists and science writers commonly characterize creation science as a pseudoscience.", "title": "Issues" }, { "paragraph_id": 41, "text": "Historically, the debate over whether creationism is compatible with science can be traced back to 1874, the year science historian John William Draper published his History of the Conflict between Religion and Science. In it Draper portrayed the entire history of scientific development as a war against religion. This presentation of history was propagated further by followers such as Andrew Dickson White in his two-volume A History of the Warfare of Science with Theology in Christendom (1896). Their conclusions have been disputed.", "title": "Issues" }, { "paragraph_id": 42, "text": "In the United States, the principal focus of creation science advocates is on the government-supported public school systems, which are prohibited by the Establishment Clause from promoting specific religions. Historical communities have argued that Biblical translations contain many translation errors and errata, and therefore that the use of biblical literalism in creation science is self-contradictory.", "title": "Issues" }, { "paragraph_id": 43, "text": "Creationist arguments in relation to biology center on an idea derived from Genesis that states that life was created by God, in a finite number of \"created kinds,\" rather than through biological evolution from a common ancestor. Creationists contend that any observable speciation descends from these distinctly created kinds through inbreeding, deleterious mutations and other genetic mechanisms. Whereas evolutionary biologists and creationists share similar views of microevolution, creationists reject the conclusion that the process of macroevolution can explain common ancestry among organisms far beyond the level of common species. Creationists contend that there is no empirical evidence for new plant or animal species, and deny that fossil evidence documenting the process has ever been found.", "title": "Kinds of creation science" }, { "paragraph_id": 44, "text": "Popular arguments against evolution have changed since the publishing of Henry M. Morris' first book on the subject, Scientific Creationism (1974), but some consistent themes remain: that missing links or gaps in the fossil record are proof against evolution; that the increased complexity of organisms over time through evolution is not possible due to the law of increasing entropy; that it is impossible that the mechanism of natural selection could account for common ancestry; and that evolutionary theory is untestable. The origin of the human species is particularly hotly contested; the fossil remains of hominid ancestors are not considered by advocates of creation biology to be evidence for a speciation event involving Homo sapiens.
Creationists also assert that early hominids are either apes or humans.", "title": "Kinds of creation science" }, { "paragraph_id": 45, "text": "Richard Dawkins has explained evolution as \"a theory of gradual, incremental change over millions of years, which starts with something very simple and works up along slow, gradual gradients to greater complexity,\" and described the existing fossil record as entirely consistent with that process. Biologists emphasize that transitional gaps between recovered fossils are to be expected, that the existence of any such gaps cannot be invoked to disprove evolution, and that the fossil evidence that could disprove the theory would instead be fossils entirely inconsistent with what can be predicted or anticipated by the evolutionary model. One example given by Dawkins was, \"If there were a single hippo or rabbit in the Precambrian, that would completely blow evolution out of the water. None have ever been found.\"", "title": "Kinds of creation science" }, { "paragraph_id": 46, "text": "Flood geology is a concept based on the belief that most of Earth's geological record was formed by the Great Flood described in the story of Noah's Ark. Fossils and fossil fuels are believed to have formed from animal and plant matter which was buried rapidly during this flood, while submarine canyons are explained as having formed during a rapid runoff from the continents at the end of the flood. Sedimentary strata are also claimed to have been predominantly laid down during or after Noah's flood and orogeny. Flood geology is a variant of catastrophism and is contrasted with geological science in that it rejects standard geological principles such as uniformitarianism and radiometric dating. For example, the Creation Research Society argues that \"uniformitarianism is wishful thinking.\"", "title": "Kinds of creation science" }, { "paragraph_id": 47, "text": "Geologists conclude that no evidence for such a flood is observed in the preserved rock layers and moreover that such a flood is physically impossible, given the current layout of land masses. For instance, since Mount Everest currently is approximately 8.8 kilometres in elevation and the Earth's surface area is 510,065,600 km², the volume of water required to cover Mount Everest to a depth of 15 cubits (6.8 m), as indicated by Genesis 7:20, would be 4.6 billion cubic kilometres. Measurements of the amount of precipitable water vapor in the atmosphere have yielded results indicating that condensing all water vapor in a column of atmosphere would produce liquid water with a depth ranging between zero and approximately 70 mm, depending on the date and the location of the column.
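The flood-volume figure above is simple geometry: required volume is roughly the Earth's surface area multiplied by the water depth. Here is a minimal sketch in Python using only the figures quoted in the text; the flat-slab approximation is an assumption made for illustration (it ignores topography and the sphericity correction, which is why it lands slightly below the article's 4.6 billion figure):

```python
# Rough check of the flood-volume arithmetic cited above.
# Figures from the text: Everest ~8.8 km high, Earth's surface area
# ~510,065,600 km^2, flood depth 15 cubits (~6.8 m) above the peak.

EARTH_SURFACE_KM2 = 510_065_600       # km^2, from the text
EVEREST_KM = 8.8                      # km, from the text
DEPTH_ABOVE_PEAK_KM = 6.8 / 1000.0    # 15 cubits ~ 6.8 m (Genesis 7:20)

water_depth_km = EVEREST_KM + DEPTH_ABOVE_PEAK_KM
volume_km3 = EARTH_SURFACE_KM2 * water_depth_km   # flat-slab approximation
print(f"required water volume ~ {volume_km3:.2e} km^3")
# prints ~4.49e+09 km^3, consistent with the article's
# "4.6 billion cubic kilometres" given the rounding involved.
```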
Nevertheless, there continue to be adherents to the belief in flood geology, and in recent years new creationist models have been introduced such as catastrophic plate tectonics and catastrophic orogeny.", "title": "Kinds of creation science" }, { "paragraph_id": 48, "text": "Creationists point to flawed experiments they have performed, which they claim demonstrate that 1.5 billion years of nuclear decay took place over a short period of time, from which they infer that \"billion-fold speed-ups of nuclear decay\" have occurred. This would be a massive violation of the principle that radioisotope decay rates are constant, a core principle underlying nuclear physics generally and radiometric dating in particular.", "title": "Kinds of creation science" }, { "paragraph_id": 49, "text": "The scientific community points to numerous flaws in the creationists' experiments, to the fact that their results have not been accepted for publication by any peer-reviewed scientific journal, and to the fact that the creationist scientists conducting them were untrained in experimental geochronology. They have also been criticised for widely publicising the results of their research as successful despite their own admission of insurmountable problems with their hypothesis.", "title": "Kinds of creation science" }, { "paragraph_id": 50, "text": "The constancy of the decay rates of isotopes is well supported in science. Evidence for this constancy includes the correspondences of date estimates taken from different radioactive isotopes as well as correspondences with non-radiometric dating techniques such as dendrochronology, ice core dating, and historical records. Although scientists have noted slight increases in the decay rate for isotopes subject to extreme pressures, those differences were too small to significantly impact date estimates. The constancy of the decay rates is also governed by first principles in quantum mechanics, wherein any deviation in the rate would require a change in the fundamental constants. According to these principles, a change in the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements' resulting unique chronological timescales would then give inconsistent time estimates.", "title": "Kinds of creation science" }, { "paragraph_id": 51, "text": "In refutation of young Earth claims of inconstant decay rates affecting the reliability of radiometric dating, Roger C. Wiens, a physicist specializing in isotope dating, states:", "title": "Kinds of creation science" }, { "paragraph_id": 52, "text": "There are only three quite technical instances where a half-life changes, and these do not affect the dating methods:", "title": "Kinds of creation science" }, { "paragraph_id": 53, "text": "In the 1970s, young Earth creationist Robert V. Gentry proposed that radiohaloes in certain granites represented evidence for the Earth being created instantaneously rather than gradually. This idea has been criticized by physicists and geologists on many grounds, including that the rocks Gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially.", "title": "Kinds of creation science" }, { "paragraph_id": 54, "text": "Thomas A.
Baillieul, a geologist and retired senior environmental scientist with the United States Department of Energy, disputed Gentry's claims in an article entitled, \"'Polonium Haloes' Refuted: A Review of 'Radioactive Halos in a Radio-Chronological and Cosmological Perspective' by Robert V. Gentry.\" Baillieul noted that Gentry was a physicist with no background in geology and that, given the absence of this background, Gentry had misrepresented the geological context from which the specimens were collected. Additionally, he noted that Gentry relied on research from the beginning of the 20th century, long before radioisotopes were thoroughly understood; that his assumption that a polonium isotope caused the rings was speculative; and that Gentry falsely argued that the half-life of radioactive elements varies with time. Gentry claimed that Baillieul could not publish his criticisms in a reputable scientific journal, although some of Baillieul's criticisms rested on work previously published in reputable scientific journals.", "title": "Kinds of creation science" }, { "paragraph_id": 55, "text": "Several attempts have been made by creationists to construct a cosmology consistent with a young Universe rather than the standard cosmological age of the universe, based on the belief that Genesis describes the creation of the Universe as well as the Earth. The primary challenge for young-universe cosmologies is that the accepted distances in the Universe require millions or billions of years for light to travel to Earth (the \"starlight problem\"). An older creationist idea, proposed by creationist astronomer Barry Setterfield, is that the speed of light has decayed in the history of the Universe. More recently, creationist physicist Russell Humphreys has proposed a hypothesis called \"white hole cosmology\", asserting that the Universe expanded out of a white hole less than 10,000 years ago, claiming that the age of the universe is illusory and results from relativistic effects. Humphreys' cosmology is advocated by creationist organisations such as Answers in Genesis; however, because its predictions conflict with current observations, it is not accepted by the scientific community.", "title": "Kinds of creation science" }, { "paragraph_id": 56, "text": "Various claims are made by creationists concerning alleged evidence that the age of the Solar System is of the order of thousands of years, in contrast to the scientifically accepted age of 4.6 billion years. It is commonly argued that the number of comets in the Solar System is much higher than would be expected given its supposed age. Young Earth Creationists reject the existence of the Kuiper belt and Oort cloud. They also argue that the recession of the Moon from the Earth is incompatible with either the Moon or the Earth being billions of years old. These claims have been refuted by planetologists.", "title": "Kinds of creation science" }, { "paragraph_id": 57, "text": "In response to increasing evidence suggesting that Mars once possessed a wetter climate, some creationists have proposed that the global flood affected not only the Earth but also Mars and other planets.
People who support this claim include creationist astronomer Wayne Spencer and Russell Humphreys.", "title": "Kinds of creation science" }, { "paragraph_id": 58, "text": "An ongoing problem for creationists is the presence of impact craters on nearly all Solar System objects, which is consistent with scientific explanations of solar system origins but creates insuperable problems for young Earth claims. Creationists Harold Slusher and Richard Mandock, along with Glenn Morton (who later repudiated this claim), asserted that impact craters on the Moon are subject to rock flow, and so cannot be more than a few thousand years old. While some creationist astronomers assert that different phases of meteoritic bombardment of the Solar System occurred during \"creation week\" and during the subsequent Great Flood, others regard this as unsupported by the evidence and call for further research.", "title": "Kinds of creation science" }, { "paragraph_id": 59, "text": "Notable creationist museums in the United States:", "title": "External links" } ]
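The decay-constancy discussion above can be made concrete with the standard radiometric age equation t = ln(1 + D/P)/λ, where D/P is the measured daughter-to-parent atom ratio and λ = ln 2 / half-life. The sketch below is illustrative only: the half-lives are the commonly cited values for U-238 → Pb-206 and Rb-87 → Sr-87, and the 1.0 Gyr rock is hypothetical.

```python
import math

# Standard radiometric age equation: t = ln(1 + D/P) / lambda,
# with lambda = ln(2) / half_life. Half-lives below are the commonly
# cited values for two independent decay systems (assumed for this sketch).
HALF_LIFE_YR = {"U-238 -> Pb-206": 4.468e9, "Rb-87 -> Sr-87": 4.88e10}

def age_from_ratio(daughter_per_parent, half_life_yr):
    """Age in years implied by a measured daughter/parent atom ratio."""
    lam = math.log(2) / half_life_yr
    return math.log(1.0 + daughter_per_parent) / lam

# A hypothetical rock whose true age is 1.0 Gyr: compute the ratio each
# system would accumulate, then recover the age from that ratio.
true_age_yr = 1.0e9
for system, hl in HALF_LIFE_YR.items():
    lam = math.log(2) / hl
    ratio = math.exp(lam * true_age_yr) - 1.0   # daughter atoms grown in
    print(f"{system}: implied age = {age_from_ratio(ratio, hl):.3e} yr")

# U-238 decays by alpha emission and Rb-87 by beta decay, so a shift in
# fundamental constants would change the two lambdas by different factors
# and the two implied ages would diverge. Their observed agreement across
# real samples is the cross-check the text describes.
```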
Creation science or scientific creationism is a pseudoscientific form of Young Earth creationism which claims to offer scientific arguments for certain literalist and inerrantist interpretations of the Bible. It is often presented without overt faith-based language, but instead relies on reinterpreting scientific results to argue that various myths in the Book of Genesis and other select biblical passages are scientifically valid. The most commonly advanced ideas of creation science include special creation based on the Genesis creation narrative and flood geology based on the Genesis flood narrative. Creationists also claim they can disprove or reexplain a variety of scientific facts, theories and paradigms of geology, cosmology, biological evolution, archaeology, history, and linguistics using creation science. Creation science was foundational to intelligent design. The overwhelming consensus of the scientific community is that creation science fails to qualify as scientific because it lacks empirical support, supplies no testable hypotheses, and resolves to describe natural history in terms of scientifically untestable supernatural causes. Courts, most often in the United States where the question has been asked in the context of teaching the subject in public schools, have consistently ruled since the 1980s that creation science is a religious view rather than a scientific one. Historians, philosophers of science and skeptics have described creation science as a pseudoscientific attempt to map the Bible into scientific facts. Professional biologists have criticized creation science for being unscholarly, and even as a dishonest and misguided sham, with extremely harmful educational consequences.
2002-01-08T20:32:13Z
2023-11-22T14:04:21Z
[ "Template:Harvnb", "Template:Creationism topics", "Template:Citation needed", "Template:Cite court", "Template:Cite podcast", "Template:Pseudoscience", "Template:See also", "Template:Reflist", "Template:Cite journal", "Template:Cite news", "Template:Cite press release", "Template:Short description", "Template:Sfn", "Template:Infobox pseudoscience", "Template:Anchor", "Template:Creation Science", "Template:Refbegin", "Template:Creationism2", "Template:Page needed", "Template:Cite encyclopedia", "Template:Cbignore", "Template:Unreliable source?", "Template:Cite book", "Template:Ussc", "Template:Portal bar", "Template:Pp-semi-indef", "Template:Distinguish", "Template:Main", "Template:Primary sources", "Template:Blockquote", "Template:Citation", "Template:Primary sources section", "Template:Cite web", "Template:Refend", "Template:Commons category" ]
https://en.wikipedia.org/wiki/Creation_science
7,685
List of cartographers
Cartography is the study of map making and cartographers are map makers.
[ { "paragraph_id": 0, "text": "Cartography is the study of map making and cartographers are map makers.", "title": "" } ]
Cartography is the study of map making and cartographers are map makers.
2001-11-20T06:15:53Z
2023-12-09T12:57:17Z
[ "Template:Cite web", "Template:Short description", "Template:Use dmy dates", "Template:Circa", "Template:C.", "Template:Ill", "Template:Aka", "Template:Lang" ]
https://en.wikipedia.org/wiki/List_of_cartographers
7,689
Cirth
The Cirth (Sindarin pronunciation: [ˈkirθ], meaning "runes"; sg. certh [ˈkɛrθ]) is a semi‑artificial script, based on real‑life runic alphabets, one of several scripts invented by J. R. R. Tolkien for the constructed languages he devised and used in his works. Cirth is written with a capital letter when referring to the writing system; the letters themselves can be called cirth. In the fictional history of Middle-earth, the original Certhas was created by the Sindar (or Grey Elves) for their language, Sindarin. Its extension and elaboration was known as the Angerthas Daeron, as it was attributed to the Sinda Daeron, despite the fact that it was most probably arranged by the Noldor in order to represent the sounds of other languages like Quenya and Telerin. Although it was later largely replaced by the Tengwar, the Cirth was nonetheless adopted by the Dwarves to write down both their Khuzdul language (Angerthas Moria) and the languages of Men (Angerthas Erebor). The Cirth was also adapted, in its oldest and simplest form, by various races including Men and even Orcs. Many letters have shapes also found in the historical runic alphabets, but their sound values are only similar in a few of the vowels. Rather, the system of assignment of sound values is much more systematic in the Cirth than in the historical runes (e.g., voiced variants of a voiceless sound are expressed by an additional stroke). The division between the older Cirth of Daeron and their adaptation by Dwarves and Men has been interpreted as a parallel drawn by Tolkien to the development of the Fuþorc to the Younger Fuþark. The original Elvish Cirth "as supposed products of a superior culture" are focused on logical arrangement and a close connection between form and value whereas the adaptations by mortal races introduced irregularities. Similar to the Germanic tribes who had no written literature and used only simple runes before their conversion to Christianity, the Sindarin Elves of Beleriand with their Cirth were introduced to the more elaborate Tengwar of Fëanor when the Noldorin Elves returned to Middle-earth from the lands of the divine Valar. In the Appendix E to The Return of the King, Tolkien writes that the Sindar of Beleriand first developed an alphabet for their language some time between the invention of the Tengwar by Fëanor (YT 1250) and the introduction thereof to Middle-earth by the Exiled Noldor at the beginning of the First Age. This alphabet was devised to represent only the sounds of their Sindarin language and its letters were mostly used for inscribing names or brief memorials on wood, stone or metal, hence their angular shapes and straight lines. In Sindarin these letters were named cirth (sing. certh), from the Elvish root *kir- meaning "to cleave, to cut". An abecedarium of cirth, consisting of the runes listed in due order, was commonly known as Certhas ([ˈkɛrθɑs], meaning "rune-rows" in Sindarin and loosely translated as "runic alphabet"). The oldest cirth were the following: The form of these letters was somewhat unsystematic, unlike later rearrangements and extensions that made them more featural. The cirth and were used for ⟨h⟩ and ⟨s⟩, but varied as to which was which. Many of the runes consisted of a single vertical line (or "stem") with an appendage (or "branch") attached to one or both sides. If the attachment was made on one side only, it was usually to the right, but "the reverse was not infrequent" and did not change the value of the letter. 
(For example, the variants or specifically mentioned for h or s, also or for t, etc.). In Beleriand, before the end of the First Age, the Certhas was rearranged and further developed, partly under the influence of the Tengwar introduced by the Noldor. This reorganisation of the Cirth was commonly attributed to the Elf Daeron, minstrel and loremaster of King Thingol of Doriath. Thus, the new system became known as the Angerthas Daeron (where "angerthas" [ɑŋˈɡɛrθɑs] is from Sindarin "an(d)" [ɑn(d)] + "certhas" [ˈkɛrθɑs], meaning "long rune-rows"). In this arrangement, the assignment of values to each certh is systematic. The runes consisting of a stem and a branch attached to the right are used for voiceless stops, while other sounds are allocated according to the following principles: The cirth constructed in this way can therefore be arranged into series, each corresponding to a place of articulation: Other letters introduced in this system include: and for ⟨a⟩ and ⟨w⟩, respectively; runes for long vowels, evidently originated by doubling and binding the certh of the corresponding short vowel (e.g., ⟨oo⟩ → ⟨ō⟩); two front vowels, probably stemming from ligatures of the corresponding back vowel with the ⟨i⟩-certh (i.e., → ⟨ü⟩, and → ⟨ö⟩); some homorganic nasal + stop clusters (e.g., [nd]). Returning to the fictional history, since the new -series and -series encompass sounds which do not occur in Sindarin but are present in Quenya, they were most probably introduced by the Exiled Noldor who spoke Quenya as a language of knowledge. By loan-translation, the Cirth became known in Quenya as Certar [ˈkɛrtar], while a single certh was called certa [ˈkɛrta]. After the Tengwar became the sole script used for writing, the Angerthas Daeron was essentially relegated to carved inscriptions. The Elves of the West, for the most part, abandoned the Cirth altogether, with the exception of the Noldor dwelling in the country of Eregion, who maintained it in use and made it known as Angerthas Eregion. Note: In this article, the runes of the Angerthas come with the same peculiar transliteration used by Tolkien in the Appendix E, which differs from the (Latin) spelling of both Quenya and Sindarin. The IPA transcription that follows is applicable to both languages, except where indicated otherwise. Notes: According to Tolkien's legendarium, the Dwarves first came to know the runes of the Noldor at the beginning of the Second Age. The Dwarves "introduced a number of unsystematic changes in value, as well as certain new cirth". They modified the previous system to suit the specific needs of their language, Khuzdul. The Dwarves spread their revised alphabet to Moria, where it came to be known as Angerthas Moria, and developed both carved and pen-written forms of these runes. Many cirth here represent sounds not occurring in Khuzdul (at least in published words of Khuzdul; the attested corpus is too limited to judge whether these sounds were needed). Here they are marked with a black star (). Notes: In Angerthas Moria the cirth /dʒ/ and /ʒ/ were dropped. Thus and were adopted for /dʒ/ and /ʒ/, although they were used for /r/ and /r̥/ in Elvish languages. Subsequently, this script used the certh for /ʀ/ (or /ʁ/), which had the sound /n/ in the Elvish systems. Therefore, the certh (which was previously used for the sound /ŋ/, useless in Khuzdul) was adopted for the sound /n/. A totally new introduction was the certh , used as an alternative, simplified, and perhaps weaker form of .
Because of the visual relation of these two cirth, the certh was given the sound /z/ to relate better with that, in this script, had the sound /s/. At the beginning of the Third Age the Dwarves were driven out of Moria, and some migrated to Erebor. As the Dwarves of Erebor would trade with the Men of the nearby towns of Dale and Lake-town, they needed a script to write in Westron (the lingua franca of Middle-earth, usually rendered in English by Tolkien in his works). The Angerthas Moria was adapted accordingly: some new cirth were added, while some were restored to their Elvish usage, thus creating the Angerthas Erebor. While the Angerthas Moria was still used to write down Khuzdul, this new script was primarily used for Mannish languages. It is also the script used in the first and third pages of the Book of Mazarbul. Angerthas Erebor also features combining diacritics: The Angerthas Erebor is used twice in The Lord of the Rings to write in English: The Book of Mazarbul shows some additional cirth used in Angerthas Erebor: one for a double ⟨l⟩ ligature, one for the definite article, and six for the representation of the same number of English diphthongs: Notes: The Cirth is not the only runic writing system used by Tolkien in his legendarium. In fact, he devised a great number of runic alphabets, of which only a few others have been published. Some of these are included in the "Appendix on Runes" of The Treason of Isengard (The History of Middle-earth, vol. VII), edited by Christopher Tolkien. According to Tolkien himself, those found in The Hobbit are a form of "English runes" used in lieu of the Dwarvish runes proper. They can be interpreted as an attempt made by Tolkien to adapt the Fuþorc (i.e., the Old English runic alphabet) to the Modern English language. These runes are basically the same as those found in Fuþorc, but their sound may change according to their position, just like the letters of the Latin script: the writing mode used by Tolkien is, in this case, mainly orthographic. This means that the system has one rune for each Latin letter, regardless of pronunciation. For example, the rune ⟨c⟩ can represent /k/ in ⟨cover⟩, /s/ in ⟨sincere⟩, /ʃ/ in ⟨special⟩, and even /tʃ/ in the digraph ⟨ch⟩. A few sounds are instead written with the same rune, without considering the English spelling. For example, the sound /ɔː/ is always written with the rune whether in English it is spelt ⟨o⟩ as in ⟨north⟩, ⟨a⟩ as in ⟨fall⟩, or ⟨oo⟩ as in ⟨door⟩. The only two letters that are subject to this phonemic spelling are ⟨a⟩ and ⟨o⟩. Finally, some runes stand for particular English digraphs and diphthongs. Here the runes used in The Hobbit are displayed along with their Fuþorc counterpart and corresponding English grapheme: Notes: Not all the runes mentioned in The Hobbit are Dwarf-runes. The swords found in the Trolls' cave bore runes that Gandalf could not read. In fact, the swords Glamdring and Orcrist (which were forged in the ancient kingdom of Gondolin) bore a type of letters known as Gondolinic runes.
The system provides sounds not found in any of the known Elvish languages of the First Age, but perhaps it was designed for a variety of languages. However, the consonants seem to be, more or less, the same as those found in Welsh phonology, a theory supported by the fact that Tolkien was heavily influenced by Welsh when creating Elvish languages. Equivalents for some (but not all) cirth can be found in the Runic block of Unicode. Tolkien's mode of writing Modern English in Anglo-Saxon runes received explicit recognition with the introduction of his three additional runes to the Runic block with the release of Unicode 7.0, in June 2014. The three characters represent the English ⟨k⟩, ⟨oo⟩ and ⟨sh⟩ graphemes, as follows: A formal Unicode proposal to encode Cirth as a separate script was made in September 1997 by Michael Everson. No action was taken by the Unicode Technical Committee (UTC), but Cirth appears in the Roadmap to the SMP. Unicode Private Use Area layouts for Cirth are defined at the ConScript Unicode Registry (CSUR) and the Under-ConScript Unicode Registry (UCSUR). Two different layouts are defined by the CSUR/UCSUR: Without proper rendering support, you may see question marks, boxes, or other symbols below instead of Cirth.
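For readers who want to poke at these encodings, here is a minimal Python sketch. The specific code points are assumptions taken from the published charts and registry pages: U+16F1–U+16F3 for the three Tolkien runes added in Unicode 7.0, and U+E080–U+E0FF for the Cirth block the CSUR assigns in the Private Use Area; PUA slots have no standard names, so only CSUR-aware fonts will render them as cirth.

```python
import unicodedata

# The three runes added to the Runic block in Unicode 7.0 for Tolkien's
# English usage (code points assumed from the Unicode 7.0 charts).
TOLKIEN_RUNES = [0x16F1, 0x16F2, 0x16F3]
for cp in TOLKIEN_RUNES:
    ch = chr(cp)
    # unicodedata in old Python builds may predate 7.0; fall back gracefully.
    print(f"U+{cp:04X} {ch} {unicodedata.name(ch, '(name unavailable)')}")

# The Private Use Area range the ConScript Unicode Registry is assumed
# to reserve for Cirth; glyphs in this range are font-dependent by design.
CSUR_CIRTH_START, CSUR_CIRTH_END = 0xE080, 0xE0FF
print(f"CSUR Cirth block: U+{CSUR_CIRTH_START:04X}..U+{CSUR_CIRTH_END:04X} "
      f"({CSUR_CIRTH_END - CSUR_CIRTH_START + 1} slots)")
```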
[ { "paragraph_id": 0, "text": "The Cirth (Sindarin pronunciation: [ˈkirθ], meaning \"runes\"; sg. certh [ˈkɛrθ]) is a semi‑artificial script, based on real‑life runic alphabets, one of several scripts invented by J. R. R. Tolkien for the constructed languages he devised and used in his works. Cirth is written with a capital letter when referring to the writing system; the letters themselves can be called cirth.", "title": "" }, { "paragraph_id": 1, "text": "In the fictional history of Middle-earth, the original Certhas was created by the Sindar (or Grey Elves) for their language, Sindarin. Its extension and elaboration was known as the Angerthas Daeron, as it was attributed to the Sinda Daeron, despite the fact that it was most probably arranged by the Noldor in order to represent the sounds of other languages like Quenya and Telerin.", "title": "" }, { "paragraph_id": 2, "text": "Although it was later largely replaced by the Tengwar, the Cirth was nonetheless adopted by the Dwarves to write down both their Khuzdul language (Angerthas Moria) and the languages of Men (Angerthas Erebor). The Cirth was also adapted, in its oldest and simplest form, by various races including Men and even Orcs.", "title": "" }, { "paragraph_id": 3, "text": "Many letters have shapes also found in the historical runic alphabets, but their sound values are only similar in a few of the vowels. Rather, the system of assignment of sound values is much more systematic in the Cirth than in the historical runes (e.g., voiced variants of a voiceless sound are expressed by an additional stroke).", "title": "External history" }, { "paragraph_id": 4, "text": "The division between the older Cirth of Daeron and their adaptation by Dwarves and Men has been interpreted as a parallel drawn by Tolkien to the development of the Fuþorc to the Younger Fuþark. The original Elvish Cirth \"as supposed products of a superior culture\" are focused on logical arrangement and a close connection between form and value whereas the adaptations by mortal races introduced irregularities. Similar to the Germanic tribes who had no written literature and used only simple runes before their conversion to Christianity, the Sindarin Elves of Beleriand with their Cirth were introduced to the more elaborate Tengwar of Fëanor when the Noldorin Elves returned to Middle-earth from the lands of the divine Valar.", "title": "External history" }, { "paragraph_id": 5, "text": "In the Appendix E to The Return of the King, Tolkien writes that the Sindar of Beleriand first developed an alphabet for their language some time between the invention of the Tengwar by Fëanor (YT 1250) and the introduction thereof to Middle-earth by the Exiled Noldor at the beginning of the First Age.", "title": "Internal history and description" }, { "paragraph_id": 6, "text": "This alphabet was devised to represent only the sounds of their Sindarin language and its letters were mostly used for inscribing names or brief memorials on wood, stone or metal, hence their angular shapes and straight lines. In Sindarin these letters were named cirth (sing. certh), from the Elvish root *kir- meaning \"to cleave, to cut\". 
An abecedarium of cirth, consisting of the runes listed in due order, was commonly known as Certhas ([ˈkɛrθɑs], meaning \"rune-rows\" in Sindarin and loosely translated as \"runic alphabet\").", "title": "Internal history and description" }, { "paragraph_id": 7, "text": "The oldest cirth were the following:", "title": "Internal history and description" }, { "paragraph_id": 8, "text": "The form of these letters was somewhat unsystematic, unlike later rearrangements and extensions that made them more featural. The cirth and were used for ⟨h⟩ and ⟨s⟩, but varied as to which was which. Many of the runes consisted of a single vertical line (or \"stem\") with an appendage (or \"branch\") attached to one or both sides. If the attachment was made on one side only, it was usually to the right, but \"the reverse was not infrequent\" and did not change the value of the letter. (For example, the variants or specifically mentioned for h or s, also or for t, etc.).", "title": "Internal history and description" }, { "paragraph_id": 9, "text": "In Beleriand, before the end of the First Age, the Certhas was rearranged and further developed, partly under the influence of the Tengwar introduced by the Noldor. This reorganisation of the Cirth was commonly attributed to the Elf Daeron, minstrel and loremaster of King Thingol of Doriath. Thus, the new system became known as the Angerthas Daeron (where \"angerthas\" [ɑŋˈɡɛrθɑs] is from Sindarin \"an(d)\" [ɑn(d)] + \"certhas\" [ˈkɛrθɑs], meaning \"long rune-rows\").", "title": "Internal history and description" }, { "paragraph_id": 10, "text": "In this arrangement, the assignment of values to each certh is systematic. The runes consisting of a stem and a branch attached to the right are used for voiceless stops, while other sounds are allocated according to the following principles:", "title": "Internal history and description" }, { "paragraph_id": 11, "text": "The cirth constructed in this way can therefore be arranged into series, each corresponding to a place of articulation:", "title": "Internal history and description" }, { "paragraph_id": 12, "text": "Other letters introduced in this system include: and for ⟨a⟩ and ⟨w⟩, respectively; runes for long vowels, evidently originated by doubling and binding the certh of the corresponding short vowel (e.g., ⟨oo⟩ → ⟨ō⟩); two front vowels, probably stemming from ligatures of the corresponding back vowel with the ⟨i⟩-certh (i.e., → ⟨ü⟩, and → ⟨ö⟩); some homorganic nasal + stop clusters (e.g., [nd]).", "title": "Internal history and description" }, { "paragraph_id": 13, "text": "Returning to the fictional history, since the new -series and -series encompass sounds which do not occur in Sindarin but are present in Quenya, they were most probably introduced by the Exiled Noldor who spoke Quenya as a language of knowledge.", "title": "Internal history and description" }, { "paragraph_id": 14, "text": "By loan-translation, the Cirth became known in Quenya as Certar [ˈkɛrtar], while a single certh was called certa [ˈkɛrta].", "title": "Internal history and description" }, { "paragraph_id": 15, "text": "After the Tengwar became the sole script used for writing, the Angerthas Daeron was essentially relegated to carved inscriptions.
The Elves of the West, for the most part, abandoned the Cirth altogether, with the exception of the Noldor dwelling in the country of Eregion, who maintained it in use and made it known as Angerthas Eregion.", "title": "Internal history and description" }, { "paragraph_id": 16, "text": "Note: In this article, the runes of the Angerthas come with the same peculiar transliteration used by Tolkien in the Appendix E, which differs from the (Latin) spelling of both Quenya and Sindarin. The IPA transcription that follows is applicable to both languages, except where indicated otherwise.", "title": "Internal history and description" }, { "paragraph_id": 17, "text": "Notes:", "title": "Internal history and description" }, { "paragraph_id": 18, "text": "According to Tolkien's legendarium, the Dwarves first came to know the runes of the Noldor at the beginning of the Second Age. The Dwarves \"introduced a number of unsystematic changes in value, as well as certain new cirth\". They modified the previous system to suit the specific needs of their language, Khuzdul. The Dwarves spread their revised alphabet to Moria, where it came to be known as Angerthas Moria, and developed both carved and pen-written forms of these runes.", "title": "Internal history and description" }, { "paragraph_id": 19, "text": "Many cirth here represent sounds not occurring in Khuzdul (at least in published words of Khuzdul; the attested corpus is too limited to judge whether these sounds were needed). Here they are marked with a black star ().", "title": "Internal history and description" }, { "paragraph_id": 20, "text": "Notes:", "title": "Internal history and description" }, { "paragraph_id": 21, "text": "In Angerthas Moria the cirth /dʒ/ and /ʒ/ were dropped. Thus and were adopted for /dʒ/ and /ʒ/, although they were used for /r/ and /r̥/ in Elvish languages. Subsequently, this script used the certh for /ʀ/ (or /ʁ/), which had the sound /n/ in the Elvish systems. Therefore, the certh (which was previously used for the sound /ŋ/, useless in Khuzdul) was adopted for the sound /n/. A totally new introduction was the certh , used as an alternative, simplified, and perhaps weaker form of . Because of the visual relation of these two cirth, the certh was given the sound /z/ to relate better with that, in this script, had the sound /s/.", "title": "Internal history and description" }, { "paragraph_id": 22, "text": "At the beginning of the Third Age the Dwarves were driven out of Moria, and some migrated to Erebor. As the Dwarves of Erebor would trade with the Men of the nearby towns of Dale and Lake-town, they needed a script to write in Westron (the lingua franca of Middle-earth, usually rendered in English by Tolkien in his works). The Angerthas Moria was adapted accordingly: some new cirth were added, while some were restored to their Elvish usage, thus creating the Angerthas Erebor.", "title": "Internal history and description" }, { "paragraph_id": 23, "text": "While the Angerthas Moria was still used to write down Khuzdul, this new script was primarily used for Mannish languages.
It is also the script used in the first and third pages of the Book of Mazarbul.", "title": "Internal history and description" }, { "paragraph_id": 24, "text": "Angerthas Erebor also features combining diacritics:", "title": "Internal history and description" }, { "paragraph_id": 25, "text": "The Angerthas Erebor is used twice in The Lord of the Rings to write in English:", "title": "Internal history and description" }, { "paragraph_id": 26, "text": "The Book of Mazarbul shows some additional cirth used in Angerthas Erebor: one for a double ⟨l⟩ ligature, one for the definite article, and six for the representation of the same number of English diphthongs:", "title": "Internal history and description" }, { "paragraph_id": 27, "text": "Notes:", "title": "Internal history and description" }, { "paragraph_id": 28, "text": "The Cirth is not the only runic writing system used by Tolkien in his legendarium. In fact, he devised a great number of runic alphabets, of which only a few others have been published. Some of these are included in the \"Appendix on Runes\" of The Treason of Isengard (The History of Middle-earth, vol. VII), edited by Christopher Tolkien.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 29, "text": "According to Tolkien himself, those found in The Hobbit are a form of \"English runes\" used in lieu of the Dwarvish runes proper. They can be interpreted as an attempt made by Tolkien to adapt the Fuþorc (i.e., the Old English runic alphabet) to the Modern English language.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 30, "text": "These runes are basically the same as those found in Fuþorc, but their sound may change according to their position, just like the letters of the Latin script: the writing mode used by Tolkien is, in this case, mainly orthographic. This means that the system has one rune for each Latin letter, regardless of pronunciation. For example, the rune ⟨c⟩ can represent /k/ in ⟨cover⟩, /s/ in ⟨sincere⟩, /ʃ/ in ⟨special⟩, and even /tʃ/ in the digraph ⟨ch⟩.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 31, "text": "A few sounds are instead written with the same rune, without considering the English spelling. For example, the sound /ɔː/ is always written with the rune whether in English it is spelt ⟨o⟩ as in ⟨north⟩, ⟨a⟩ as in ⟨fall⟩, or ⟨oo⟩ as in ⟨door⟩. The only two letters that are subject to this phonemic spelling are ⟨a⟩ and ⟨o⟩.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 32, "text": "Finally, some runes stand for particular English digraphs and diphthongs.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 33, "text": "Here the runes used in The Hobbit are displayed along with their Fuþorc counterpart and corresponding English grapheme:", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 34, "text": "Notes:", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 35, "text": "Not all the runes mentioned in The Hobbit are Dwarf-runes. The swords found in the Trolls' cave bore runes that Gandalf could not read. In fact, the swords Glamdring and Orcrist (which were forged in the ancient kingdom of Gondolin) bore a type of letters known as Gondolinic runes.
They seem to have become obsolete and been forgotten by the Third Age, and this is supported by the fact that only Elrond could still read the inscriptions on the swords.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 36, "text": "Tolkien devised this runic alphabet in a very early stage of his shaping of Middle-earth. Nevertheless, they are known to us from a slip of paper that Tolkien wrote; his son Christopher sent a photocopy of it to Paul Nolan Hyde in February 1992. Hyde published it, with an extensive analysis, in the 1992 Summer issue of Mythlore, no. 69.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 37, "text": "The system provides sounds not found in any of the known Elvish languages of the First Age, but perhaps it was designed for a variety of languages. However, the consonants seem to be, more or less, the same as those found in Welsh phonology, a theory supported by the fact that Tolkien was heavily influenced by Welsh when creating Elvish languages.", "title": "Other runic scripts by Tolkien" }, { "paragraph_id": 38, "text": "Equivalents for some (but not all) cirth can be found in the Runic block of Unicode.", "title": "Encoding schemes" }, { "paragraph_id": 39, "text": "Tolkien's mode of writing Modern English in Anglo-Saxon runes received explicit recognition with the introduction of his three additional runes to the Runic block with the release of Unicode 7.0, in June 2014. The three characters represent the English ⟨k⟩, ⟨oo⟩ and ⟨sh⟩ graphemes, as follows:", "title": "Encoding schemes" }, { "paragraph_id": 40, "text": "A formal Unicode proposal to encode Cirth as a separate script was made in September 1997 by Michael Everson. No action was taken by the Unicode Technical Committee (UTC), but Cirth appears in the Roadmap to the SMP.", "title": "Encoding schemes" }, { "paragraph_id": 41, "text": "Unicode Private Use Area layouts for Cirth are defined at the ConScript Unicode Registry (CSUR) and the Under-ConScript Unicode Registry (UCSUR).", "title": "Encoding schemes" }, { "paragraph_id": 42, "text": "Two different layouts are defined by the CSUR/UCSUR:", "title": "Encoding schemes" }, { "paragraph_id": 43, "text": "Without proper rendering support, you may see question marks, boxes, or other symbols below instead of Cirth.", "title": "Encoding schemes" } ]
The Cirth is a semi-artificial script, based on real-life runic alphabets, one of several scripts invented by J. R. R. Tolkien for the constructed languages he devised and used in his works. Cirth is written with a capital letter when referring to the writing system; the letters themselves can be called cirth. In the fictional history of Middle-earth, the original Certhas was created by the Sindar for their language, Sindarin. Its extension and elaboration was known as the Angerthas Daeron, as it was attributed to the Sinda Daeron, despite the fact that it was most probably arranged by the Noldor in order to represent the sounds of other languages like Quenya and Telerin. Although it was later largely replaced by the Tengwar, the Cirth was nonetheless adopted by the Dwarves to write down both their Khuzdul language and the languages of Men. The Cirth was also adapted, in its oldest and simplest form, by various races including Men and even Orcs.
2002-01-09T14:03:44Z
2023-12-10T05:03:26Z
[ "Template:Cite journal", "Template:Cite thesis", "Template:Note", "Template:CSUR chart Cirth", "Template:Cite book", "Template:Cite conference", "Template:Ref", "Template:IPAc-en", "Template:Cite news", "Template:Abbr", "Template:Citation needed", "Template:Script", "Template:Reflist", "Template:Short description", "Template:Angbr", "Template:Nowrap", "Template:Ref label", "Template:Constructed languages", "Template:Cite letter", "Template:IPA", "Template:A note", "Template:Center", "Template:Infobox writing system", "Template:Infobox Unicode block", "Template:Languages of Middle-earth", "Template:Unichar", "Template:Cite web", "Template:Middle-earth", "Template:Nobold", "Template:Ordered list", "Template:Okina", "Template:Not a typo" ]
https://en.wikipedia.org/wiki/Cirth
7,697
Lockheed C-130 Hercules
The Lockheed C-130 Hercules is an American four-engine turboprop military transport aircraft designed and built by Lockheed (now Lockheed Martin). Capable of using unprepared runways for takeoffs and landings, the C-130 was originally designed as a troop, medevac, and cargo transport aircraft. The versatile airframe has found uses in other roles, including as a gunship (AC-130), for airborne assault, search and rescue, scientific research support, weather reconnaissance, aerial refueling, maritime patrol, and aerial firefighting. It is now the main tactical airlifter for many military forces worldwide. More than 40 variants of the Hercules, including civilian versions marketed as the Lockheed L-100, operate in more than 60 nations. The C-130 entered service with the U.S. in 1956, followed by Australia and many other nations. During its years of service, the Hercules has participated in numerous military, civilian and humanitarian aid operations. In 2007, the transport became the fifth aircraft to mark 50 years of continuous service with its original primary customer, which for the C-130 is the United States Air Force (USAF). The C-130 is the longest continuously produced military aircraft at more than 60 years, with the updated Lockheed Martin C-130J Super Hercules being produced as of 2023. The Korean War showed that World War II-era piston-engine transports—Fairchild C-119 Flying Boxcars, Douglas C-47 Skytrains and Curtiss C-46 Commandos—were no longer adequate. On 2 February 1951, the United States Air Force issued a General Operating Requirement (GOR) for a new transport to Boeing, Douglas, Fairchild, Lockheed, Martin, Chase Aircraft, North American, Northrop, and Airlifts Inc. The new transport would have a capacity of 92 passengers, 72 combat troops or 64 paratroopers in a cargo compartment that was approximately 41 ft (12 m) long, 9 ft (2.7 m) high, and 10 ft (3.0 m) wide. Unlike transports derived from passenger airliners, it was to be designed specifically as a combat transport with loading from a hinged loading ramp at the rear of the fuselage. A notable advance for large aircraft was the introduction of a turboprop powerplant, the Allison T56 which was developed for the C-130. It gave the aircraft greater range than a turbojet engine as it used less fuel. Turboprop engines also produced much more power for their weight than piston engines. However, the turboprop configuration chosen for the T56, with the propeller connected to the compressor, had the potential to cause structural failure of the aircraft if an engine failed. Safety devices had to be incorporated to reduce the excessive drag from a windmilling propeller. The Hercules resembles a larger, four-engine version of the Fairchild C-123 Provider with a similar wing and cargo ramp layout. The C-123 had evolved from the Chase XCG-20 Avitruc first flown in 1950. The Boeing C-97 Stratofreighter had rear ramps, which made it possible to drive vehicles onto the airplane (also possible with the forward ramp on a C-124). The ramp on the Hercules was also used to airdrop cargo, which included a Low-altitude parachute-extraction system for Sheridan tanks and even dropping large improvised "daisy cutter" bombs. The new Lockheed cargo plane had a range of 1,100 nmi (1,270 mi; 2,040 km) and it could operate from short and unprepared strips. Fairchild, North American, Martin, and Northrop declined to participate. 
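The paired unit conversions quoted above (for example, the 1,100 nmi range figure) can be checked with simple arithmetic. The following Python sketch is purely illustrative, using the standard conversion factors rather than anything from the article's sources:

# Illustrative arithmetic only: reproduce the article's rounded range
# conversions from the standard factors 1 nmi = 1.852 km = 1.15078 mi.
NMI_TO_KM = 1.852
NMI_TO_MI = 1.15078

range_nmi = 1100
print(f"{range_nmi} nmi = {range_nmi * NMI_TO_KM:.0f} km")  # 2037 km; the text rounds to 2,040
print(f"{range_nmi} nmi = {range_nmi * NMI_TO_MI:.0f} mi")  # 1266 mi; the text rounds to 1,270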
The remaining five companies tendered a total of ten designs: Lockheed two, Boeing one, Chase three, Douglas three, and Airlifts Inc. one. The contest was a close affair between the lighter of the two Lockheed (preliminary project designation L-206) proposals and a four-turboprop Douglas design. The Lockheed design team was led by Willis Hawkins, starting with a 130-page proposal for the Lockheed L-206. Hall Hibbard, Lockheed vice president and chief engineer, saw the proposal and directed it to Kelly Johnson, who did not care for the low-speed, unarmed aircraft, and remarked, "If you sign that letter, you will destroy the Lockheed Company." Both Hibbard and Johnson signed the proposal and the company won the contract for the now-designated Model 82 on 2 July 1951. The first flight of the YC-130 prototype was made on 23 August 1954 from the Lockheed plant in Burbank, California. The aircraft, serial number 53-3397, was the second prototype, but the first of the two to fly. The YC-130 was piloted by Stanley Beltz and Roy Wimmer on its 61-minute flight to Edwards Air Force Base; Jack Real and Dick Stanton served as flight engineers. Kelly Johnson flew chase in a Lockheed P2V Neptune. After the two prototypes were completed, production began in Marietta, Georgia, where over 2,300 C-130s have been built through 2009. The initial production model, the C-130A, was powered by Allison T56-A-9 turboprops with three-blade propellers and originally equipped with the blunt nose of the prototypes. Deliveries began in December 1956, continuing until the introduction of the C-130B model in 1959. Some A-models were equipped with skis and re-designated C-130D. As the C-130A became operational with Tactical Air Command (TAC), the C-130's lack of range became apparent and additional fuel capacity was added with wing pylon-mounted tanks outboard of the engines; this added 6,000 pounds (2,700 kg) of fuel capacity for a total capacity of 40,000 pounds (18,000 kg). The C-130B model was developed to complement the A-models that had previously been delivered, and incorporated new features, particularly increased fuel capacity in the form of auxiliary tanks built into the center wing section and an AC electrical system. Four-bladed Hamilton Standard propellers replaced the Aero Products' three-blade propellers that distinguished the earlier A-models. The C-130B had ailerons operated by hydraulic pressure that was increased from 2,050 to 3,000 psi (14.1 to 20.7 MPa), as well as uprated engines and four-blade propellers that were standard until the J-model. The B model was originally intended to have "blown controls", a system that blows high-pressure air over the control surfaces to improve their effectiveness during slow flight. It was tested on an NC-130B prototype aircraft with a pair of T-56 turbines providing high-pressure air through a duct system to the control surfaces and flaps during landing. This greatly reduced landing speed to just 63 knots and cut landing distance in half. The system never entered service because it did not improve takeoff performance by the same margin, making the landing performance pointless if the aircraft could not also take off from where it had landed. An electronic reconnaissance variant of the C-130B was designated C-130B-II. A total of 13 aircraft were converted. The C-130B-II was distinguished by its false external wing fuel tanks, which were disguised signals intelligence (SIGINT) receiver antennas. 
These pods were slightly larger than the standard wing tanks found on other C-130Bs. Most aircraft featured a swept blade antenna on the upper fuselage, as well as extra wire antennas between the vertical fin and upper fuselage not found on other C-130s. Radio call numbers on the tail of these aircraft were regularly changed to confuse observers and disguise their true mission. The extended-range C-130E model entered service in 1962 after it was developed as an interim long-range transport for the Military Air Transport Service. Essentially a B-model, the E model owed its new designation to the installation of 1,360 US gallon (5,100 litre) Sargent Fletcher external fuel tanks under each wing's midsection and more powerful Allison T56-A-7A turboprops. The hydraulic boost pressure to the ailerons was reduced back to 2,050 psi (14.1 MPa) as a consequence of the external tanks' weight in the middle of the wingspan. The E model also featured structural improvements, avionics upgrades, and a higher gross weight. Australia took delivery of 12 C-130E Hercules during 1966–67 to supplement the 12 C-130A models already in service with the RAAF. Sweden and Spain fly the TP-84T version of the C-130E fitted for aerial refueling capability. The KC-130 tankers, originally C-130Fs procured for the US Marine Corps (USMC) in 1958 (under the designation GV-1), are equipped with a removable 3,600 US gallon (14,000 L) stainless steel fuel tank carried inside the cargo compartment. The two wing-mounted hose and drogue aerial refueling pods each transfer up to 300 US gallons per minute (1,100 L/min) to two aircraft simultaneously, allowing for rapid cycle times of multiple-receiver aircraft formations (a typical formation of four receiver aircraft can be refueled in less than 30 minutes). The US Navy's C-130G has increased structural strength allowing higher gross weight operation. The C-130H model has updated Allison T56-A-15 turboprops, a redesigned outer wing, updated avionics, and other minor improvements. Later H models had a new, fatigue-life-improved center wing that was retrofitted to many earlier H-models. For structural reasons, some models are required to land with reduced amounts of fuel when carrying heavy cargo, reducing usable range. The H model remains in widespread use with the United States Air Force (USAF) and many foreign air forces. Initial deliveries began in 1964 (to the RNZAF), remaining in production until 1996. An improved C-130H was introduced in 1974, with Australia purchasing 12 of the type in 1978 to replace the original 12 C-130A models, which had first entered Royal Australian Air Force (RAAF) service in 1958. The U.S. Coast Guard employs the HC-130H for long-range search and rescue, drug interdiction, illegal migrant patrols, homeland security, and logistics. C-130H models produced from 1992 to 1996 were designated as C-130H3 by the USAF, with the "3" denoting the third variation in design for the H series. Improvements included ring laser gyros for the INUs, GPS receivers, a partial glass cockpit (ADI and HSI instruments), a more capable APN-241 color radar, night vision device compatible instrument lighting, and an integrated radar and missile warning system. The electrical system upgrade included Generator Control Units (GCU) and Bus Switching units (BSU) to provide stable power to the more sensitive upgraded components. The equivalent model for export to the UK is the C-130K, known by the Royal Air Force (RAF) as the Hercules C.1.
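The KC-130 figures quoted above (300 US gal/min per pod, two receivers at a time, a four-aircraft formation cycled in under 30 minutes) are mutually consistent, as a back-of-the-envelope Python sketch shows. The assumed offload per receiver and the hookup overhead are illustrative assumptions, not figures from the article:

# Back-of-the-envelope check of the KC-130 cycle-time claim. Offload per
# receiver and hookup overhead below are assumptions for illustration.
RATE_GAL_PER_MIN = 300   # per hose-and-drogue pod (from the text above)
PODS = 2                 # two receivers can be refueled simultaneously
OFFLOAD_GAL = 1500       # assumed fuel taken on by each receiver
HOOKUP_MIN = 5.0         # assumed join-up/contact overhead per pair

receivers = 4
pairs = receivers // PODS
minutes = pairs * (OFFLOAD_GAL / RATE_GAL_PER_MIN + HOOKUP_MIN)
print(f"{receivers} receivers cycled in about {minutes:.0f} minutes")  # ~20, under the quoted 30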
The C-130H-30 (Hercules C.3 in RAF service) is a stretched version of the original Hercules, achieved by inserting a 100 in (2.5 m) plug aft of the cockpit and an 80 in (2.0 m) plug at the rear of the fuselage. A single C-130K was purchased by the Met Office for use by its Meteorological Research Flight, where it was classified as the Hercules W.2. This aircraft was heavily modified, with its most prominent features being the long red and white striped atmospheric probe on the nose and the relocation of the weather radar to a pod above the forward fuselage. This aircraft, named Snoopy, was withdrawn in 2001 and was then modified by Marshall of Cambridge Aerospace as a flight testbed for the A400M turbine engine, the TP400. The C-130K is used by the RAF Falcons for parachute drops. Three C-130Ks (Hercules C Mk.1P) were upgraded and sold to the Austrian Air Force in 2002. The MC-130E Combat Talon was developed for the USAF during the Vietnam War to support special operations missions in Southeast Asia, and led to both the MC-130H Combat Talon II and a family of other special missions aircraft. 37 of the earliest models currently operating with the Air Force Special Operations Command (AFSOC) are scheduled to be replaced by new-production MC-130J versions. The EC-130 Commando Solo is another special missions variant within AFSOC, albeit operated solely by an AFSOC-gained wing in the Pennsylvania Air National Guard, and is a psychological operations/information operations (PSYOP/IO) platform equipped as an airborne radio and television station able to transmit messaging over commercial frequencies. Other versions of the EC-130, most notably the EC-130H Compass Call, are also special variants, but are assigned to the Air Combat Command (ACC). The AC-130 gunship was first developed during the Vietnam War to provide close air support and other ground-attack duties. The HC-130 is a family of long-range search and rescue variants used by the USAF and the U.S. Coast Guard. Equipped for the deep deployment of Pararescuemen (PJs), survival equipment, and (in the case of USAF versions) aerial refueling of combat rescue helicopters, HC-130s are usually the on-scene command aircraft for combat SAR missions (USAF only) and non-combat SAR (USAF and USCG). Early USAF versions were also equipped with the Fulton surface-to-air recovery system, designed to pull a person off the ground using a wire strung from a helium balloon. The John Wayne movie The Green Berets features its use. The Fulton system was later removed when aerial refueling of helicopters proved safer and more versatile. The movie The Perfect Storm depicts a real-life SAR mission involving aerial refueling of a New York Air National Guard HH-60G by a New York Air National Guard HC-130P. The C-130R and C-130T are U.S. Navy and USMC models, both equipped with underwing external fuel tanks. The USN C-130T is similar but has additional avionics improvements. In both models, aircraft are equipped with Allison T56-A-16 engines. The USMC versions are designated KC-130R or KC-130T when equipped with underwing refueling pods and pylons and are fully night vision system compatible. The RC-130 is a reconnaissance version. A single example is used by the Islamic Republic of Iran Air Force, the aircraft having originally been sold to the former Imperial Iranian Air Force. The Lockheed L-100 (L-382) is a civilian variant, equivalent to a C-130E model without military equipment. The L-100 also has two stretched versions.
In the 1970s, Lockheed proposed a C-130 variant with turbofan engines rather than turboprops, but the U.S. Air Force preferred the takeoff performance of the existing aircraft. In the 1980s, the C-130 was intended to be replaced by the Advanced Medium STOL Transport project. The project was canceled and the C-130 has remained in production. Building on lessons learned, Lockheed Martin modified a commercial variant of the C-130 into a High Technology Test Bed (HTTB). This test aircraft set numerous short takeoff and landing performance records and significantly expanded the database for future derivatives of the C-130. Modifications made to the HTTB included extended chord ailerons, a long chord rudder, fast-acting double-slotted trailing edge flaps, a high-camber wing leading edge extension, a larger dorsal fin and ventral fins, the addition of three spoiler panels to each wing upper surface, a long-stroke main and nose landing gear system, and a change in the flight controls from direct mechanical linkages assisted by hydraulic boost to fully powered controls, in which the mechanical linkages from the flight station controls operated only the hydraulic control valves of the appropriate boost unit. The HTTB first flew on 19 June 1984, with the civil registration N130X. After demonstrating many new technologies, some of which were applied to the C-130J, the HTTB was lost in a fatal accident on 3 February 1993, at Dobbins Air Reserve Base, in Marietta, Georgia. The crash was attributed to disengagement of the rudder fly-by-wire flight control system, resulting in a total loss of rudder control capability while conducting ground minimum control speed tests (Vmcg). The disengagement was a result of the inadequate design of the rudder's integrated actuator package by its manufacturer; the operator's insufficient system safety review failed to consider the consequences of the inadequate design in all operating regimes. A factor that contributed to the accident was the flight crew's lack of engineering flight test training. In the 1990s, the improved C-130J Super Hercules was developed by Lockheed (later Lockheed Martin). This model is the newest version and the only model in production. Externally similar to the classic Hercules in general appearance, the J model has new turboprop engines, six-bladed propellers, digital avionics, and other new systems. In 2000, Boeing was awarded a US$1.4 billion contract to develop an Avionics Modernization Program kit for the C-130. The program was beset with delays and cost overruns until project restructuring in 2007. In September 2009, it was reported that the planned Avionics Modernization Program (AMP) upgrade to the older C-130s would be dropped to provide more funds for the F-35, CV-22 and airborne tanker replacement programs. However, in June 2010, the Department of Defense approved funding for the initial production of the AMP upgrade kits. Under the terms of this agreement, the USAF has cleared Boeing to begin low-rate initial production (LRIP) for the C-130 AMP. A total of 198 aircraft are expected to feature the AMP upgrade. The current cost per aircraft is US$14 million, although Boeing expects that this price will drop to US$7 million for the 69th aircraft. In the 2000s, Lockheed Martin and the U.S. Air Force began outfitting and retrofitting C-130s with the eight-blade UTC Aerospace Systems NP2000 propellers.
An engine enhancement program saving fuel and providing lower temperatures in the T56 engine has been approved, and the US Air Force expects to save $2 billion (~$2.49 billion in 2022) and extend the fleet life. In 2021, the Air Force Research Laboratory demonstrated the Rapid Dragon system, which transforms the C-130 into a lethal strike platform capable of launching 12 JASSM-ER cruise missiles with 500 kg warheads from a standoff distance of 925 km (575 mi). Anticipated future improvements include support for JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range when the 1,900 km (1,200 mi) JASSM-XR becomes available in 2024. In October 2010, the U.S. Air Force released a capability request for information (CRFI) for the development of a new airlifter to replace the C-130. The new aircraft was to carry a 190% greater payload and assume the mission of mounted vertical maneuver (MVM). The greater payload and mission would enable it to carry medium-weight armored vehicles and unload them at locations without long runways. Various options were under consideration, including new or upgraded fixed-wing designs, rotorcraft, tiltrotors, or even an airship. The C-130 fleet of around 450 planes would be replaced by only 250 aircraft. The Air Force had attempted to replace the C-130 in the 1970s through the Advanced Medium STOL Transport project, which resulted in the C-17 Globemaster III that instead replaced the C-141 Starlifter. The Air Force Research Laboratory funded Lockheed Martin and Boeing demonstrators for the Speed Agile concept, which had the goal of making a STOL aircraft that could take off and land at speeds as low as 70 kn (130 km/h; 81 mph) on airfields less than 2,000 ft (610 m) long and cruise at Mach 0.8-plus. Boeing's design used upper-surface blowing from embedded engines on the inboard wing and blown flaps for circulation control on the outboard wing. Lockheed's design also used blown flaps outboard, but inboard used patented reversing ejector nozzles. Boeing's design completed over 2,000 hours of wind tunnel tests in late 2009. It was a 5 percent-scale model of a narrow body design with a 55,000 lb (25,000 kg) payload. When the AFRL increased the payload requirement to 65,000 lb (29,000 kg), it tested a 5 percent-scale model of a widebody design with a 303,000 lb (137,000 kg) take-off gross weight and an "A400M-size" 158 in (4.0 m) wide cargo box. It would be powered by four IAE V2533 turbofans. In August 2011, the AFRL released pictures of the Lockheed Speed Agile concept demonstrator. A 23% scale model went through wind tunnel tests to demonstrate its hybrid powered lift, which combined a low drag airframe with simple mechanical assembly to reduce weight and improve aerodynamics. The model had four engines, including two Williams FJ44 turbofans. On 26 March 2013, Boeing was granted a patent for its swept-wing powered lift aircraft. In January 2014, Air Mobility Command, Air Force Materiel Command and the Air Force Research Lab were in the early stages of defining requirements for the C-X next generation airlifter program to replace both the C-130 and C-17. The aircraft would be produced from the early 2030s to the 2040s. The first production batch of C-130A aircraft was delivered beginning in 1956 to the 463d Troop Carrier Wing at Ardmore AFB, Oklahoma, and the 314th Troop Carrier Wing at Sewart AFB, Tennessee. Six additional squadrons were assigned to the 322d Air Division in Europe and the 315th Air Division in the Far East.
Additional aircraft were modified for electronics intelligence work and assigned to Rhein-Main Air Base, Germany, while modified RC-130As were assigned to the Military Air Transport Service (MATS) photo-mapping division. The C-130A entered service with the U.S. Air Force in December 1956. In 1958, a U.S. reconnaissance C-130A-II of the 7406th Support Squadron was shot down over Armenia by four Soviet MiG-17s along the Turkish-Armenian border during a routine mission. Australia became the first non-American force to operate the C-130A Hercules with 12 examples being delivered from late 1958. The Royal Canadian Air Force became another early user with the delivery of four B-models (Canadian designation CC-130 Mk I) in October / November 1960. In 1963, a Hercules achieved and still holds the record for the largest and heaviest aircraft to land on an aircraft carrier. During October and November that year, a USMC KC-130F (BuNo 149798), loaned to the U.S. Naval Air Test Center, made 29 touch-and-go landings, 21 unarrested full-stop landings and 21 unassisted take-offs on Forrestal at a number of different weights. The pilot, Lieutenant (later Rear Admiral) James H. Flatley III, USN, was awarded the Distinguished Flying Cross for his role in this test series. The tests were highly successful, but the aircraft was not deployed this way. Flatley denied that the C-130 was tested for carrier onboard delivery (COD) operations, or for delivering nuclear weapons. He said that the intention was to support the Lockheed U-2, also being tested on carriers. The Hercules used in the test, most recently in service with Marine Aerial Refueler Squadron 352 (VMGR-352) until 2005, is now part of the collection of the National Museum of Naval Aviation at NAS Pensacola, Florida. In 1964, C-130 crews from the 6315th Operations Group at Naha Air Base, Okinawa, commenced forward air control (FAC; "Flare") missions over the Ho Chi Minh Trail in Laos supporting USAF strike aircraft. In April 1965 the mission was expanded to North Vietnam, where C-130 crews led formations of Martin B-57 Canberra bombers on night reconnaissance/strike missions against communist supply routes leading to South Vietnam. In early 1966 Project Blind Bat/Lamplighter was established at Ubon Royal Thai Air Force Base, Thailand. After the move to Ubon, the mission became a four-engine FAC mission with the C-130 crew searching for targets and then calling in strike aircraft. Another little-known C-130 mission flown by Naha-based crews was Operation Commando Scarf (or Operation Commando Lava), which involved the delivery of chemicals onto sections of the Ho Chi Minh Trail in Laos that were designed to produce mud and landslides in hopes of making the truck routes impassable. In November 1964, on the other side of the globe, C-130Es from the 464th Troop Carrier Wing, on loan to the 322d Air Division in France, took part in Operation Dragon Rouge, one of the most dramatic missions in history, in the former Belgian Congo. After communist Simba rebels took white residents of the city of Stanleyville hostage, the U.S. and Belgium developed a joint rescue mission that used the C-130s to drop, air-land, and air-lift a force of Belgian paratroopers to rescue the hostages. Two missions were flown, one over Stanleyville and another over Paulis during Thanksgiving week. The headline-making mission resulted in the first award of the prestigious MacKay Trophy to C-130 crews. In the Indo-Pakistani War of 1965, the No.
6 Transport Squadron of the Pakistan Air Force modified its C-130Bs for use as bombers to carry up to 20,000 pounds (9,100 kg) of bombs on pallets. These improvised bombers were used to hit Indian targets such as bridges, heavy artillery positions, tank formations, and troop concentrations, though ultimately they achieved little success. In October 1968, C-130Bs from the 463rd Tactical Airlift Wing dropped a pair of M-121 10,000-pound (4,500 kg) bombs that had been developed for the massive Convair B-36 Peacemaker bomber but had never been used. The U.S. Army and U.S. Air Force resurrected the huge weapons as a means of clearing landing zones for helicopters, and in early 1969 the 463rd commenced Commando Vault missions. Although the stated purpose of Commando Vault was to clear LZs, they were also used on enemy base camps and other targets. During the late 1960s, the U.S. was eager to get information on Chinese nuclear capabilities. After the failure of the Black Cat Squadron to plant operating sensor pods near the Lop Nur Nuclear Weapons Test Base using a U-2, the CIA developed a plan, named Heavy Tea, to deploy two battery-powered sensor pallets near the base. To deploy the pallets, a Black Bat Squadron crew was trained in the U.S. to fly the C-130 Hercules. The crew of 12, led by Col Sun Pei Zhen, took off from Takhli Royal Thai Air Force Base in an unmarked U.S. Air Force C-130E on 17 May 1969. Flying for six and a half hours at low altitude in the dark, they arrived over the target and the sensor pallets were dropped by parachute near Anxi in Gansu province. After another six and a half hours of low-altitude flight, they arrived back at Takhli. The sensors worked and uploaded data to a U.S. intelligence satellite for six months before their batteries failed. The Chinese conducted two nuclear tests, on 22 September 1969 and 29 September 1969, during the operating life of the sensor pallets. Another mission to the area was planned as Operation Golden Whip, but it was called off in 1970. It is most likely that the aircraft used on this mission was either C-130E serial number 64-0506 or 64-0507 (c/n 382-3990 and 382-3991). These two aircraft were delivered to Air America in 1964. After being returned to the U.S. Air Force sometime between 1966 and 1970, they were assigned the serial numbers of C-130s that had been destroyed in accidents. 64-0506 is now flying as 62-1843, a C-130E that crashed in Vietnam on 20 December 1965, and 64-0507 is now flying as 63-7785, a C-130E that had crashed in Vietnam on 17 June 1966. The A-model continued in service through the Vietnam War, where the aircraft assigned to the four squadrons at Naha AB, Okinawa, and one at Tachikawa Air Base, Japan performed yeoman service, including operating highly classified special operations missions such as the BLIND BAT FAC/Flare mission and Fact Sheet leaflet mission over Laos and North Vietnam. The A-model was also provided to the Republic of Vietnam Air Force as part of the Vietnamization program at the end of the war, and equipped three squadrons based at Tan Son Nhut Air Base. The last operator in the world is the Honduran Air Force, which is still flying one of five A-model Hercules (FAH 558, c/n 3042) as of October 2009. As the Vietnam War wound down, the 463rd Troop Carrier/Tactical Airlift Wing B-models and A-models of the 374th Tactical Airlift Wing were transferred back to the United States, where most were assigned to Air Force Reserve and Air National Guard units.
Another prominent role for the B model was with the United States Marine Corps, where the Hercules, initially designated GV-1, replaced C-119s. After Air Force C-130Ds proved the type's usefulness in Antarctica, the U.S. Navy purchased several B-models equipped with skis that were designated as LC-130s. C-130B-II electronic reconnaissance aircraft were operated under the SUN VALLEY program name primarily from Yokota Air Base, Japan. All reverted to standard C-130B cargo aircraft after their replacement in the reconnaissance role by other aircraft. The C-130 was also used in the 1976 Entebbe raid in which Israeli commando forces performed a surprise operation to rescue 103 passengers of an airliner hijacked by Palestinian and German terrorists at Entebbe Airport, Uganda. The rescue force—200 soldiers, jeeps, and a black Mercedes-Benz (intended to resemble Ugandan dictator Idi Amin's vehicle of state)—was flown over 2,200 nmi (4,074 km; 2,532 mi) almost entirely at an altitude of less than 100 ft (30 m) from Israel to Entebbe by four Israeli Air Force (IAF) Hercules aircraft without mid-air refueling (on the way back, the aircraft refueled in Nairobi, Kenya). During the Falklands War (Spanish: Guerra de las Malvinas) of 1982, Argentine Air Force C-130s undertook dangerous re-supply night flights as blockade runners to the Argentine garrison on the Falkland Islands. They also performed daylight maritime survey flights. One was shot down by a Royal Navy Sea Harrier using AIM-9 Sidewinders and cannon. The crew of seven were killed. Argentina also operated two KC-130 tankers during the war, and these refueled both the Douglas A-4 Skyhawks and Navy Dassault-Breguet Super Étendards; some C-130s were modified to operate as bombers with bomb-racks under their wings. The British also used RAF C-130s to support their logistical operations. During the Gulf War of 1991 (Operation Desert Storm), the C-130 Hercules was used operationally by the U.S. Air Force, U.S. Navy, and U.S. Marine Corps, along with the air forces of Australia, New Zealand, Saudi Arabia, South Korea, and the UK. The MC-130 Combat Talon variant also made the first attacks using the largest conventional bombs in the world, the BLU-82 "Daisy Cutter" and GBU-43/B "Massive Ordnance Air Blast" (MOAB) bomb. Daisy Cutters were used primarily to clear landing zones and to eliminate mine fields. The weight and size of the weapons make it impossible or impractical to load them on conventional bombers. The GBU-43/B MOAB is a successor to the BLU-82 and can perform the same function, as well as perform strike functions against hardened targets in a low air threat environment. Since 1992, two successive C-130 aircraft named Fat Albert have served as the support aircraft for the U.S. Navy Blue Angels flight demonstration team. Fat Albert I was a TC-130G (151891), a former U.S. Navy TACAMO aircraft serving with Fleet Air Reconnaissance Squadron Three (VQ-3) before being transferred to the BLUES, while Fat Albert II is a C-130T (164763). Although Fat Albert supports a Navy squadron, it is operated by the U.S. Marine Corps (USMC) and its crew consists solely of USMC personnel. At some air shows featuring the team, Fat Albert takes part, performing flyovers. Until 2009, it also demonstrated its rocket-assisted takeoff (RATO) capabilities; these demonstrations ended due to dwindling supplies of rockets. The AC-130 also holds the record for the longest sustained flight by a C-130.
From 22 to 24 October 1997, two AC-130U gunships flew 36 hours nonstop from Hurlburt Field, Florida to Daegu International Airport, South Korea, being refueled seven times by KC-135 tanker aircraft. This record flight surpassed the previous longest flight by over 10 hours, and the two gunships took on 410,000 lb (190,000 kg) of fuel. The gunship has been used in every major U.S. combat operation since Vietnam, except for Operation El Dorado Canyon, the 1986 attack on Libya. During the invasion of Afghanistan in 2001 and the ongoing support of the International Security Assistance Force (Operation Enduring Freedom), the C-130 Hercules has been used operationally by Australia, Belgium, Canada, Denmark, France, Italy, the Netherlands, New Zealand, Norway, Portugal, Romania, South Korea, Spain, the UK, and the United States. During the 2003 invasion of Iraq (Operation Iraqi Freedom), the C-130 Hercules was used operationally by Australia, the UK, and the United States. After the initial invasion, C-130 operators as part of the Multinational force in Iraq used their C-130s to support their forces in Iraq. Since 2004, the Pakistan Air Force has employed C-130s in the War in North-West Pakistan. Some variants had forward-looking infrared (FLIR Systems Star Safire III EO/IR) sensor balls to enable close tracking of militants. In 2017, France and Germany announced that they would establish a joint air transport squadron at Evreux Air Base, France, comprising ten C-130J aircraft. Six of these would be operated by Germany. Initial operational capability was expected in 2021, with full operational capability scheduled for 2024. For almost two decades, the USAF 910th Airlift Wing's 757th Airlift Squadron and the U.S. Coast Guard have participated in oil spill cleanup exercises to ensure the U.S. military has a capable response in the event of a national emergency. The 757th Airlift Squadron operates the DOD's only fixed-wing aerial spray system, which is certified by the EPA to disperse pesticides on DOD property; in 2010 it was used to spread oil dispersants onto the Deepwater Horizon oil spill in the Gulf of Mexico. During the 5-week mission, the aircrews flew 92 sorties and sprayed approximately 30,000 acres with nearly 149,000 gallons of oil dispersant to break up the oil. The Deepwater Horizon mission was the first time the US used the oil-dispersing capability of the 910th Airlift Wing—its only large-area, fixed-wing aerial spray program—in an actual spill of national significance. The Air Force Reserve Command announced that the 910th Airlift Wing had been selected as a recipient of the Air Force Outstanding Unit Award for its outstanding achievement from 28 April 2010 through 4 June 2010. C-130s temporarily based at Kelly Field conducted mosquito control aerial spray applications over areas of eastern Texas devastated by Hurricane Harvey. This special mission treated more than 2.3 million acres at the direction of the Federal Emergency Management Agency (FEMA) and the Texas Department of State Health Services (DSHS) to assist in recovery efforts by helping contain the significant increase in pest insects caused by large amounts of standing, stagnant water. The 910th Airlift Wing operates the Department of Defense's only aerial spray capability to control pest insect populations, eliminate undesired and invasive vegetation, and disperse oil spills in large bodies of water.
The aerial spray flight is now also able to operate at night using night-vision goggles (NVGs), which increases its best-case spray capacity from approximately 60,000 acres per day to approximately 190,000 acres per day. According to the U.S. Air Force Reserve, spray missions are normally conducted at dusk and during nighttime hours, when pest insects are most active. In the early 1970s, Congress created the Modular Airborne FireFighting System (MAFFS), a joint operation between the U.S. Forest Service, which supplies the systems, and the Department of Defense, which supplies the C-130 aircraft. The roll-on/roll-off systems allow existing aircraft to be temporarily converted into 3,000-gallon airtankers for fighting wildfires when demand exceeds the supply of privately contracted and publicly available airtankers. In the late 1980s, 22 retired USAF C-130As were removed from storage and transferred to the U.S. Forest Service, which then transferred them to six private companies to be converted into airtankers. One of these C-130s crashed in June 2002 while operating the Retardant Aerial Delivery System (RADS) near Walker, California. The crash was attributed to wing separation caused by fatigue stress cracking and contributed to the grounding of the entire fleet of large airtankers. After an extensive review, the US Forest Service and the Bureau of Land Management declined to renew the leases on nine C-130As over concerns about the age of the aircraft, which had been in service since the 1950s, and their ability to handle the forces generated by aerial firefighting. More recently, an updated Retardant Aerial Delivery System known as RADS XL was developed by Coulson Aviation USA. That system consists of a C-130H/Q retrofitted with an in-floor discharge system, combined with a removable 3,500- or 4,000-gallon water tank. The combined system is FAA certified. On 23 January 2020, Coulson's Tanker 134, an EC-130Q registered N134CG, crashed during aerial firefighting operations in New South Wales, Australia, killing all three crew members. The aircraft had taken off from RAAF Base Richmond and was supporting firefighting operations during Australia's 2019–20 fire season. The C-130 Hercules has had a low accident rate in general. The Royal Air Force recorded an accident rate of about one aircraft loss per 250,000 flying hours over the last 40 years, placing it behind only the Vickers VC10 and the Lockheed TriStar, which had no flying losses. USAF C-130A/B/E-models had an overall attrition rate of 5% as of 1989, as compared to 1–2% for commercial airliners in the U.S. (according to the NTSB), 10% for B-52 bombers, and 20% for fighters (F-4, F-111), trainers (T-37, T-38), and helicopters (H-3). Specifications data from the USAF C-130 Hercules fact sheet, International Directory of Military Aircraft, Complete Encyclopedia of World Aircraft, and Encyclopedia of Modern Military Aircraft.
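The loss-rate figures above are easier to compare when put in the "per 100,000 flying hours" unit common in aviation safety reporting. A short, purely illustrative Python sketch of the arithmetic:

# Illustrative arithmetic: express the RAF's quoted C-130 loss rate of
# about one aircraft per 250,000 flying hours per 100,000 flying hours.
losses = 1
flying_hours = 250_000
rate_per_100k = losses / flying_hours * 100_000
print(f"{rate_per_100k:.1f} losses per 100,000 flying hours")  # 0.4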
[ { "paragraph_id": 0, "text": "The Lockheed C-130 Hercules is an American four-engine turboprop military transport aircraft designed and built by Lockheed (now Lockheed Martin). Capable of using unprepared runways for takeoffs and landings, the C-130 was originally designed as a troop, medevac, and cargo transport aircraft. The versatile airframe has found uses in other roles, including as a gunship (AC-130), for airborne assault, search and rescue, scientific research support, weather reconnaissance, aerial refueling, maritime patrol, and aerial firefighting. It is now the main tactical airlifter for many military forces worldwide. More than 40 variants of the Hercules, including civilian versions marketed as the Lockheed L-100, operate in more than 60 nations.", "title": "" }, { "paragraph_id": 1, "text": "The C-130 entered service with the U.S. in 1956, followed by Australia and many other nations. During its years of service, the Hercules has participated in numerous military, civilian and humanitarian aid operations. In 2007, the transport became the fifth aircraft to mark 50 years of continuous service with its original primary customer, which for the C-130 is the United States Air Force (USAF). The C-130 is the longest continuously produced military aircraft at more than 60 years, with the updated Lockheed Martin C-130J Super Hercules being produced as of 2023.", "title": "" }, { "paragraph_id": 2, "text": "The Korean War showed that World War II-era piston-engine transports—Fairchild C-119 Flying Boxcars, Douglas C-47 Skytrains and Curtiss C-46 Commandos—were no longer adequate. On 2 February 1951, the United States Air Force issued a General Operating Requirement (GOR) for a new transport to Boeing, Douglas, Fairchild, Lockheed, Martin, Chase Aircraft, North American, Northrop, and Airlifts Inc.", "title": "Design and development" }, { "paragraph_id": 3, "text": "The new transport would have a capacity of 92 passengers, 72 combat troops or 64 paratroopers in a cargo compartment that was approximately 41 ft (12 m) long, 9 ft (2.7 m) high, and 10 ft (3.0 m) wide. Unlike transports derived from passenger airliners, it was to be designed specifically as a combat transport with loading from a hinged loading ramp at the rear of the fuselage. A notable advance for large aircraft was the introduction of a turboprop powerplant, the Allison T56 which was developed for the C-130. It gave the aircraft greater range than a turbojet engine as it used less fuel. Turboprop engines also produced much more power for their weight than piston engines. However, the turboprop configuration chosen for the T56, with the propeller connected to the compressor, had the potential to cause structural failure of the aircraft if an engine failed. Safety devices had to be incorporated to reduce the excessive drag from a windmilling propeller.", "title": "Design and development" }, { "paragraph_id": 4, "text": "The Hercules resembles a larger, four-engine version of the Fairchild C-123 Provider with a similar wing and cargo ramp layout. The C-123 had evolved from the Chase XCG-20 Avitruc first flown in 1950. The Boeing C-97 Stratofreighter had rear ramps, which made it possible to drive vehicles onto the airplane (also possible with the forward ramp on a C-124). The ramp on the Hercules was also used to airdrop cargo, which included a Low-altitude parachute-extraction system for Sheridan tanks and even dropping large improvised \"daisy cutter\" bombs. 
The new Lockheed cargo plane had a range of 1,100 nmi (1,270 mi; 2,040 km) and it could operate from short and unprepared strips.", "title": "Design and development" }, { "paragraph_id": 5, "text": "Fairchild, North American, Martin, and Northrop declined to participate. The remaining five companies tendered a total of ten designs: Lockheed two, Boeing one, Chase three, Douglas three, and Airlifts Inc. one. The contest was a close affair between the lighter of the two Lockheed (preliminary project designation L-206) proposals and a four-turboprop Douglas design.", "title": "Design and development" }, { "paragraph_id": 6, "text": "The Lockheed design team was led by Willis Hawkins, starting with a 130-page proposal for the Lockheed L-206. Hall Hibbard, Lockheed vice president and chief engineer, saw the proposal and directed it to Kelly Johnson, who did not care for the low-speed, unarmed aircraft, and remarked, \"If you sign that letter, you will destroy the Lockheed Company.\" Both Hibbard and Johnson signed the proposal and the company won the contract for the now-designated Model 82 on 2 July 1951.", "title": "Design and development" }, { "paragraph_id": 7, "text": "The first flight of the YC-130 prototype was made on 23 August 1954 from the Lockheed plant in Burbank, California. The aircraft, serial number 53-3397, was the second prototype, but the first of the two to fly. The YC-130 was piloted by Stanley Beltz and Roy Wimmer on its 61-minute flight to Edwards Air Force Base; Jack Real and Dick Stanton served as flight engineers. Kelly Johnson flew chase in a Lockheed P2V Neptune.", "title": "Design and development" }, { "paragraph_id": 8, "text": "After the two prototypes were completed, production began in Marietta, Georgia, where over 2,300 C-130s have been built through 2009.", "title": "Design and development" }, { "paragraph_id": 9, "text": "The initial production model, the C-130A, was powered by Allison T56-A-9 turboprops with three-blade propellers and originally equipped with the blunt nose of the prototypes. Deliveries began in December 1956, continuing until the introduction of the C-130B model in 1959. Some A-models were equipped with skis and re-designated C-130D. As the C-130A became operational with Tactical Air Command (TAC), the C-130's lack of range became apparent and additional fuel capacity was added with wing pylon-mounted tanks outboard of the engines; this added 6,000 pounds (2,700 kg) of fuel capacity for a total capacity of 40,000 pounds (18,000 kg).", "title": "Design and development" }, { "paragraph_id": 10, "text": "The C-130B model was developed to complement the A-models that had previously been delivered, and incorporated new features, particularly increased fuel capacity in the form of auxiliary tanks built into the center wing section and an AC electrical system. Four-bladed Hamilton Standard propellers replaced the Aero Products' three-blade propellers that distinguished the earlier A-models. The C-130B had ailerons operated by hydraulic pressure that was increased from 2,050 to 3,000 psi (14.1 to 20.7 MPa), as well as uprated engines and four-blade propellers that were standard until the J-model.", "title": "Design and development" }, { "paragraph_id": 11, "text": "The B model was originally intended to have \"blown controls\", a system that blows high-pressure air over the control surfaces to improve their effectiveness during slow flight. 
It was tested on an NC-130B prototype aircraft with a pair of T-56 turbines providing high-pressure air through a duct system to the control surfaces and flaps during landing. This greatly reduced landing speed to just 63 knots and cut landing distance in half. The system never entered service because it did not improve takeoff performance by the same margin, making the landing performance pointless if the aircraft could not also take off from where it had landed.", "title": "Design and development" }, { "paragraph_id": 12, "text": "An electronic reconnaissance variant of the C-130B was designated C-130B-II. A total of 13 aircraft were converted. The C-130B-II was distinguished by its false external wing fuel tanks, which were disguised signals intelligence (SIGINT) receiver antennas. These pods were slightly larger than the standard wing tanks found on other C-130Bs. Most aircraft featured a swept blade antenna on the upper fuselage, as well as extra wire antennas between the vertical fin and upper fuselage not found on other C-130s. Radio call numbers on the tail of these aircraft were regularly changed to confuse observers and disguise their true mission.", "title": "Design and development" }, { "paragraph_id": 13, "text": "The extended-range C-130E model entered service in 1962 after it was developed as an interim long-range transport for the Military Air Transport Service. Essentially a B-model, the E model owed its new designation to the installation of 1,360 US gallon (5,100 litre) Sargent Fletcher external fuel tanks under each wing's midsection and more powerful Allison T56-A-7A turboprops. The hydraulic boost pressure to the ailerons was reduced back to 2,050 psi (14.1 MPa) as a consequence of the external tanks' weight in the middle of the wingspan. The E model also featured structural improvements, avionics upgrades, and a higher gross weight. Australia took delivery of 12 C-130E Hercules during 1966–67 to supplement the 12 C-130A models already in service with the RAAF. Sweden and Spain fly the TP-84T version of the C-130E fitted for aerial refueling capability.", "title": "Design and development" }, { "paragraph_id": 14, "text": "The KC-130 tankers, originally C-130Fs procured for the US Marine Corps (USMC) in 1958 (under the designation GV-1), are equipped with a removable 3,600 US gallon (14,000 L) stainless steel fuel tank carried inside the cargo compartment. The two wing-mounted hose and drogue aerial refueling pods each transfer up to 300 US gallons per minute (1,100 L/min) to two aircraft simultaneously, allowing for rapid cycle times of multiple-receiver aircraft formations (a typical formation of four receiver aircraft can be refueled in less than 30 minutes). The US Navy's C-130G has increased structural strength allowing higher gross weight operation.", "title": "Design and development" }, { "paragraph_id": 15, "text": "The C-130H model has updated Allison T56-A-15 turboprops, a redesigned outer wing, updated avionics, and other minor improvements. Later H models had a new, fatigue-life-improved center wing that was retrofitted to many earlier H-models. For structural reasons, some models are required to land with reduced amounts of fuel when carrying heavy cargo, reducing usable range.", "title": "Design and development" }, { "paragraph_id": 16, "text": "The H model remains in widespread use with the United States Air Force (USAF) and many foreign air forces. Initial deliveries began in 1964 (to the RNZAF), remaining in production until 1996.
An improved C-130H was introduced in 1974, with Australia purchasing 12 of the type in 1978 to replace the original 12 C-130A models, which had first entered Royal Australian Air Force (RAAF) service in 1958. The U.S. Coast Guard employs the HC-130H for long-range search and rescue, drug interdiction, illegal migrant patrols, homeland security, and logistics.", "title": "Design and development" }, { "paragraph_id": 17, "text": "C-130H models produced from 1992 to 1996 were designated as C-130H3 by the USAF, with the \"3\" denoting the third variation in design for the H series. Improvements included ring laser gyros for the INUs, GPS receivers, a partial glass cockpit (ADI and HSI instruments), a more capable APN-241 color radar, night vision device compatible instrument lighting, and an integrated radar and missile warning system. The electrical system upgrade included Generator Control Units (GCU) and Bus Switching units (BSU) to provide stable power to the more sensitive upgraded components.", "title": "Design and development" }, { "paragraph_id": 18, "text": "The equivalent model for export to the UK is the C-130K, known by the Royal Air Force (RAF) as the Hercules C.1. The C-130H-30 (Hercules C.3 in RAF service) is a stretched version of the original Hercules, achieved by inserting a 100 in (2.5 m) plug aft of the cockpit and an 80 in (2.0 m) plug at the rear of the fuselage. A single C-130K was purchased by the Met Office for use by its Meteorological Research Flight, where it was classified as the Hercules W.2. This aircraft was heavily modified, with its most prominent features being the long red and white striped atmospheric probe on the nose and the relocation of the weather radar to a pod above the forward fuselage. This aircraft, named Snoopy, was withdrawn in 2001 and was then modified by Marshall of Cambridge Aerospace as a flight testbed for the A400M turbine engine, the TP400. The C-130K is used by the RAF Falcons for parachute drops. Three C-130Ks (Hercules C Mk.1P) were upgraded and sold to the Austrian Air Force in 2002.", "title": "Design and development" }, { "paragraph_id": 19, "text": "The MC-130E Combat Talon was developed for the USAF during the Vietnam War to support special operations missions in Southeast Asia, and led to both the MC-130H Combat Talon II and a family of other special missions aircraft. 37 of the earliest models currently operating with the Air Force Special Operations Command (AFSOC) are scheduled to be replaced by new-production MC-130J versions. The EC-130 Commando Solo is another special missions variant within AFSOC, albeit operated solely by an AFSOC-gained wing in the Pennsylvania Air National Guard, and is a psychological operations/information operations (PSYOP/IO) platform equipped as an airborne radio and television station able to transmit messaging over commercial frequencies. Other versions of the EC-130, most notably the EC-130H Compass Call, are also special variants, but are assigned to the Air Combat Command (ACC). The AC-130 gunship was first developed during the Vietnam War to provide close air support and other ground-attack duties.", "title": "Design and development" }, { "paragraph_id": 20, "text": "The HC-130 is a family of long-range search and rescue variants used by the USAF and the U.S. Coast Guard.
Equipped for the deep deployment of Pararescuemen (PJs), survival equipment, and (in the case of USAF versions) aerial refueling of combat rescue helicopters, HC-130s are usually the on-scene command aircraft for combat SAR missions (USAF only) and non-combat SAR (USAF and USCG). Early USAF versions were also equipped with the Fulton surface-to-air recovery system, designed to pull a person off the ground using a wire strung from a helium balloon. The John Wayne movie The Green Berets features its use. The Fulton system was later removed when aerial refueling of helicopters proved safer and more versatile. The movie The Perfect Storm depicts a real-life SAR mission involving aerial refueling of a New York Air National Guard HH-60G by a New York Air National Guard HC-130P.", "title": "Design and development" }, { "paragraph_id": 21, "text": "The C-130R and C-130T are U.S. Navy and USMC models, both equipped with underwing external fuel tanks. The USN C-130T is similar but has additional avionics improvements. In both models, aircraft are equipped with Allison T56-A-16 engines. The USMC versions are designated KC-130R or KC-130T when equipped with underwing refueling pods and pylons and are fully night vision system compatible.", "title": "Design and development" }, { "paragraph_id": 22, "text": "The RC-130 is a reconnaissance version. A single example is used by the Islamic Republic of Iran Air Force, the aircraft having originally been sold to the former Imperial Iranian Air Force.", "title": "Design and development" }, { "paragraph_id": 23, "text": "The Lockheed L-100 (L-382) is a civilian variant, equivalent to a C-130E model without military equipment. The L-100 also has two stretched versions.", "title": "Design and development" }, { "paragraph_id": 24, "text": "In the 1970s, Lockheed proposed a C-130 variant with turbofan engines rather than turboprops, but the U.S. Air Force preferred the takeoff performance of the existing aircraft. In the 1980s, the C-130 was intended to be replaced by the Advanced Medium STOL Transport project. The project was canceled and the C-130 has remained in production.", "title": "Design and development" }, { "paragraph_id": 25, "text": "Building on lessons learned, Lockheed Martin modified a commercial variant of the C-130 into a High Technology Test Bed (HTTB). This test aircraft set numerous short takeoff and landing performance records and significantly expanded the database for future derivatives of the C-130. Modifications made to the HTTB included extended chord ailerons, a long chord rudder, fast-acting double-slotted trailing edge flaps, a high-camber wing leading edge extension, a larger dorsal fin and ventral fins, the addition of three spoiler panels to each wing upper surface, a long-stroke main and nose landing gear system, and a change in the flight controls from direct mechanical linkages assisted by hydraulic boost to fully powered controls, in which the mechanical linkages from the flight station controls operated only the hydraulic control valves of the appropriate boost unit.", "title": "Design and development" }, { "paragraph_id": 26, "text": "The HTTB first flew on 19 June 1984, with the civil registration N130X. After demonstrating many new technologies, some of which were applied to the C-130J, the HTTB was lost in a fatal accident on 3 February 1993, at Dobbins Air Reserve Base, in Marietta, Georgia.
The crash was attributed to disengagement of the rudder fly-by-wire flight control system, resulting in a total loss of rudder control capability while conducting ground minimum control speed tests (Vmcg). The disengagement was a result of the inadequate design of the rudder's integrated actuator package by its manufacturer; the operator's insufficient system safety review failed to consider the consequences of the inadequate design in all operating regimes. A factor that contributed to the accident was the flight crew's lack of engineering flight test training.", "title": "Design and development" }, { "paragraph_id": 27, "text": "In the 1990s, the improved C-130J Super Hercules was developed by Lockheed (later Lockheed Martin). This model is the newest version and the only model in production. Externally similar to the classic Hercules in general appearance, the J model has new turboprop engines, six-bladed propellers, digital avionics, and other new systems.", "title": "Design and development" }, { "paragraph_id": 28, "text": "In 2000, Boeing was awarded a US$1.4 billion contract to develop an Avionics Modernization Program kit for the C-130. The program was beset with delays and cost overruns until project restructuring in 2007. In September 2009, it was reported that the planned Avionics Modernization Program (AMP) upgrade to the older C-130s would be dropped to provide more funds for the F-35, CV-22 and airborne tanker replacement programs. However, in June 2010, the Department of Defense approved funding for the initial production of the AMP upgrade kits. Under the terms of this agreement, the USAF has cleared Boeing to begin low-rate initial production (LRIP) for the C-130 AMP. A total of 198 aircraft are expected to feature the AMP upgrade. The current cost per aircraft is US$14 million, although Boeing expects that this price will drop to US$7 million for the 69th aircraft.", "title": "Design and development" }, { "paragraph_id": 29, "text": "In the 2000s, Lockheed Martin and the U.S. Air Force began outfitting and retrofitting C-130s with the eight-blade UTC Aerospace Systems NP2000 propellers. An engine enhancement program saving fuel and providing lower temperatures in the T56 engine has been approved, and the US Air Force expects to save $2 billion (~$2.49 billion in 2022) and extend the fleet life.", "title": "Design and development" }, { "paragraph_id": 30, "text": "In 2021, the Air Force Research Laboratory demonstrated the Rapid Dragon system, which transforms the C-130 into a lethal strike platform capable of launching 12 JASSM-ER cruise missiles with 500 kg warheads from a standoff distance of 925 km (575 mi). Anticipated future improvements include support for JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range when the 1,900 km (1,200 mi) JASSM-XR becomes available in 2024.", "title": "Design and development" }, { "paragraph_id": 31, "text": "In October 2010, the U.S. Air Force released a capability request for information (CRFI) for the development of a new airlifter to replace the C-130. The new aircraft was to carry a 190% greater payload and assume the mission of mounted vertical maneuver (MVM). The greater payload and mission would enable it to carry medium-weight armored vehicles and unload them at locations without long runways. Various options were under consideration, including new or upgraded fixed-wing designs, rotorcraft, tiltrotors, or even an airship. The C-130 fleet of around 450 planes would be replaced by only 250 aircraft.
The Air Force had attempted to replace the C-130 in the 1970s through the Advanced Medium STOL Transport project, which resulted in the C-17 Globemaster III that instead replaced the C-141 Starlifter.

The Air Force Research Laboratory funded Lockheed Martin and Boeing demonstrators for the Speed Agile concept, which had the goal of making a STOL aircraft that could take off and land at speeds as low as 70 kn (130 km/h; 81 mph) on airfields less than 2,000 ft (610 m) long and cruise at Mach 0.8-plus. Boeing's design used upper-surface blowing from embedded engines on the inboard wing and blown flaps for circulation control on the outboard wing. Lockheed's design also used blown flaps outboard, but inboard it used patented reversing ejector nozzles.

Boeing's design completed over 2,000 hours of wind tunnel tests in late 2009. It was a 5-percent-scale model of a narrow-body design with a 55,000 lb (25,000 kg) payload. When the AFRL increased the payload requirement to 65,000 lb (29,000 kg), they tested a 5-percent-scale model of a wide-body design with a 303,000 lb (137,000 kg) take-off gross weight and an "A400M-size" 158 in (4.0 m) wide cargo box. It would be powered by four IAE V2533 turbofans.

In August 2011, the AFRL released pictures of the Lockheed Speed Agile concept demonstrator. A 23%-scale model went through wind tunnel tests to demonstrate its hybrid powered lift, which combined a low-drag airframe with simple mechanical assembly to reduce weight and improve aerodynamics. The model had four engines, including two Williams FJ44 turbofans. On 26 March 2013, Boeing was granted a patent for its swept-wing powered-lift aircraft.

In January 2014, Air Mobility Command, Air Force Materiel Command and the Air Force Research Lab were in the early stages of defining requirements for the C-X next-generation airlifter program to replace both the C-130 and C-17. The aircraft would be produced from the early 2030s to the 2040s.

Operational history

The first production batch of C-130A aircraft was delivered beginning in 1956 to the 463d Troop Carrier Wing at Ardmore AFB, Oklahoma, and the 314th Troop Carrier Wing at Sewart AFB, Tennessee. Six additional squadrons were assigned to the 322d Air Division in Europe and the 315th Air Division in the Far East. Additional aircraft were modified for electronics intelligence work and assigned to Rhein-Main Air Base, Germany, while modified RC-130As were assigned to the Military Air Transport Service (MATS) photo-mapping division. The C-130A entered service with the U.S. Air Force in December 1956.

In 1958, a U.S. reconnaissance C-130A-II of the 7406th Support Squadron was shot down over Armenia by four Soviet MiG-17s along the Turkish-Armenian border during a routine mission.

Australia became the first non-American force to operate the C-130A Hercules, with 12 examples delivered from late 1958.
The Royal Canadian Air Force became another early user with the delivery of four B-models (Canadian designation CC-130 Mk I) in October and November 1960.

In 1963, a Hercules achieved and still holds the record for the largest and heaviest aircraft to land on an aircraft carrier. During October and November of that year, a USMC KC-130F (BuNo 149798), loaned to the U.S. Naval Air Test Center, made 29 touch-and-go landings, 21 unarrested full-stop landings, and 21 unassisted take-offs on Forrestal at a number of different weights. The pilot, Lieutenant (later Rear Admiral) James H. Flatley III, USN, was awarded the Distinguished Flying Cross for his role in this test series. The tests were highly successful, but the aircraft was not deployed this way. Flatley denied that the C-130 was tested for carrier onboard delivery (COD) operations, or for delivering nuclear weapons. He said that the intention was to support the Lockheed U-2, which was also being tested on carriers. The Hercules used in the test, most recently in service with Marine Aerial Refueler Squadron 352 (VMGR-352) until 2005, is now part of the collection of the National Museum of Naval Aviation at NAS Pensacola, Florida.

In 1964, C-130 crews from the 6315th Operations Group at Naha Air Base, Okinawa commenced forward air control (FAC; "Flare") missions over the Ho Chi Minh Trail in Laos, supporting USAF strike aircraft. In April 1965 the mission was expanded to North Vietnam, where C-130 crews led formations of Martin B-57 Canberra bombers on night reconnaissance/strike missions against communist supply routes leading to South Vietnam. In early 1966 Project Blind Bat/Lamplighter was established at Ubon Royal Thai Air Force Base, Thailand. After the move to Ubon, the mission became a four-engine FAC mission, with the C-130 crew searching for targets and then calling in strike aircraft. Another little-known C-130 mission flown by Naha-based crews was Operation Commando Scarf (or Operation Commando Lava), which involved the delivery of chemicals onto sections of the Ho Chi Minh Trail in Laos that were designed to produce mud and landslides in hopes of making the truck routes impassable.

In November 1964, on the other side of the globe, C-130Es from the 464th Troop Carrier Wing, on loan to the 322d Air Division in France, took part in Operation Dragon Rouge, one of the most dramatic missions in history, in the former Belgian Congo. After communist Simba rebels took white residents of the city of Stanleyville hostage, the U.S. and Belgium developed a joint rescue mission that used the C-130s to drop, air-land, and air-lift a force of Belgian paratroopers to rescue the hostages. Two missions were flown, one over Stanleyville and another over Paulis, during Thanksgiving week. The headline-making mission resulted in the first award of the prestigious MacKay Trophy to C-130 crews.

In the Indo-Pakistani War of 1965, the No. 6 Transport Squadron of the Pakistan Air Force modified its C-130Bs for use as bombers, able to carry up to 20,000 pounds (9,100 kg) of bombs on pallets.
These improvised bombers were used to hit Indian targets such as bridges, heavy artillery positions, tank formations, and troop concentrations, though they were ultimately not very successful.

In October 1968, C-130Bs from the 463rd Tactical Airlift Wing dropped a pair of M-121 10,000-pound (4,500 kg) bombs that had been developed for the massive Convair B-36 Peacemaker bomber but had never been used. The U.S. Army and U.S. Air Force resurrected the huge weapons as a means of clearing landing zones for helicopters, and in early 1969 the 463rd commenced Commando Vault missions. Although the stated purpose of Commando Vault was to clear LZs, the bombs were also used on enemy base camps and other targets.

During the late 1960s, the U.S. was eager to get information on Chinese nuclear capabilities. After the failure of the Black Cat Squadron to plant operating sensor pods near the Lop Nur Nuclear Weapons Test Base using a U-2, the CIA developed a plan, named Heavy Tea, to deploy two battery-powered sensor pallets near the base. To deploy the pallets, a Black Bat Squadron crew was trained in the U.S. to fly the C-130 Hercules. The crew of 12, led by Col Sun Pei Zhen, took off from Takhli Royal Thai Air Force Base in an unmarked U.S. Air Force C-130E on 17 May 1969. Flying for six and a half hours at low altitude in the dark, they arrived over the target and the sensor pallets were dropped by parachute near Anxi in Gansu province. After another six and a half hours of low-altitude flight, they arrived back at Takhli. The sensors worked and uploaded data to a U.S. intelligence satellite for six months before their batteries failed. The Chinese conducted two nuclear tests, on 22 September 1969 and 29 September 1969, during the operating life of the sensor pallets. Another mission to the area was planned as Operation Golden Whip, but it was called off in 1970. It is most likely that the aircraft used on this mission was either C-130E serial number 64-0506 or 64-0507 (cn 382-3990 and 382-3991). These two aircraft were delivered to Air America in 1964. After being returned to the U.S. Air Force sometime between 1966 and 1970, they were assigned the serial numbers of C-130s that had been destroyed in accidents: 64-0506 is now flying as 62-1843, a C-130E that crashed in Vietnam on 20 December 1965, and 64-0507 is now flying as 63-7785, a C-130E that had crashed in Vietnam on 17 June 1966.

The A-model continued in service through the Vietnam War, where the aircraft assigned to the four squadrons at Naha AB, Okinawa and one at Tachikawa Air Base, Japan performed yeoman's service, including highly classified special operations missions such as the BLIND BAT FAC/Flare mission and the Fact Sheet leaflet mission over Laos and North Vietnam. The A-model was also provided to the Republic of Vietnam Air Force as part of the Vietnamization program at the end of the war, and equipped three squadrons based at Tan Son Nhut Air Base. The last operator in the world is the Honduran Air Force, which was still flying one of five A-model Hercules (FAH 558, c/n 3042) as of October 2009.
As the Vietnam War wound down, the 463rd Troop Carrier/Tactical Airlift Wing's B-models and the 374th Tactical Airlift Wing's A-models were transferred back to the United States, where most were assigned to Air Force Reserve and Air National Guard units.

Another prominent role for the B-model was with the United States Marine Corps, where Hercules transports, initially designated GV-1s, replaced C-119s. After Air Force C-130Ds proved the type's usefulness in Antarctica, the U.S. Navy purchased several B-models equipped with skis, designated as LC-130s. C-130B-II electronic reconnaissance aircraft were operated under the SUN VALLEY program name, primarily from Yokota Air Base, Japan. All reverted to standard C-130B cargo aircraft after being replaced in the reconnaissance role by other aircraft.

The C-130 was also used in the 1976 Entebbe raid, in which Israeli commando forces performed a surprise operation to rescue 103 passengers of an airliner hijacked by Palestinian and German terrorists at Entebbe Airport, Uganda. The rescue force of 200 soldiers, jeeps, and a black Mercedes-Benz (intended to resemble Ugandan dictator Idi Amin's vehicle of state) was flown over 2,200 nmi (4,074 km; 2,532 mi) almost entirely at an altitude of less than 100 ft (30 m) from Israel to Entebbe by four Israeli Air Force (IAF) Hercules aircraft without mid-air refueling (on the way back, the aircraft refueled in Nairobi, Kenya).

During the Falklands War (Spanish: Guerra de las Malvinas) of 1982, Argentine Air Force C-130s undertook dangerous night re-supply flights as blockade runners to the Argentine garrison on the Falkland Islands. They also performed daylight maritime survey flights. One was shot down by a Royal Navy Sea Harrier using AIM-9 Sidewinders and cannon; the crew of seven were killed. Argentina also operated two KC-130 tankers during the war, and these refueled both the Douglas A-4 Skyhawks and the Navy's Dassault-Breguet Super Étendards; some C-130s were modified to operate as bombers, with bomb racks under their wings. The British also used RAF C-130s to support their logistical operations.

During the Gulf War of 1991 (Operation Desert Storm), the C-130 Hercules was used operationally by the U.S. Air Force, U.S. Navy, and U.S. Marine Corps, along with the air forces of Australia, New Zealand, Saudi Arabia, South Korea, and the UK. The MC-130 Combat Talon variant also made the first attacks using the largest conventional bombs in the world, the BLU-82 "Daisy Cutter" and the GBU-43/B "Massive Ordnance Air Blast" (MOAB) bomb. Daisy Cutters were used primarily to clear landing zones and to eliminate minefields. The weight and size of the weapons make it impossible or impractical to load them on conventional bombers. The GBU-43/B MOAB is a successor to the BLU-82 and can perform the same function, as well as perform strike functions against hardened targets in a low-air-threat environment.
Since 1992, two successive C-130 aircraft named Fat Albert have served as the support aircraft for the U.S. Navy Blue Angels flight demonstration team. Fat Albert I was a TC-130G (151891), a former U.S. Navy TACAMO aircraft that served with Fleet Air Reconnaissance Squadron Three (VQ-3) before being transferred to the Blue Angels, while Fat Albert II is a C-130T (164763). Although Fat Albert supports a Navy squadron, it is operated by the U.S. Marine Corps (USMC) and its crew consists solely of USMC personnel. At some air shows featuring the team, Fat Albert takes part, performing flyovers. Until 2009, it also demonstrated its rocket-assisted takeoff (RATO) capabilities; these demonstrations ended due to dwindling supplies of rockets.

The AC-130 also holds the record for the longest sustained flight by a C-130. From 22 to 24 October 1997, two AC-130U gunships flew 36 hours nonstop from Hurlburt Field, Florida to Daegu International Airport, South Korea, being refueled seven times by KC-135 tanker aircraft. This record flight beat the previous longest flight by over 10 hours, and the two gunships took on 410,000 lb (190,000 kg) of fuel. The gunship has been used in every major U.S. combat operation since Vietnam, except for Operation El Dorado Canyon, the 1986 attack on Libya.

During the invasion of Afghanistan in 2001 and the ongoing support of the International Security Assistance Force (Operation Enduring Freedom), the C-130 Hercules was used operationally by Australia, Belgium, Canada, Denmark, France, Italy, the Netherlands, New Zealand, Norway, Portugal, Romania, South Korea, Spain, the UK, and the United States.

During the 2003 invasion of Iraq (Operation Iraqi Freedom), the C-130 Hercules was used operationally by Australia, the UK, and the United States. After the initial invasion, C-130 operators in the Multinational Force in Iraq used their C-130s to support their forces there.

Since 2004, the Pakistan Air Force has employed C-130s in the War in North-West Pakistan. Some variants were fitted with forward-looking infrared (FLIR Systems Star Safire III EO/IR) sensor balls to enable close tracking of militants.

In 2017, France and Germany announced that they would build up a joint air transport squadron at Evreux Air Base, France, comprising ten C-130J aircraft. Six of these will be operated by Germany. Initial operational capability was expected for 2021, while full operational capability is scheduled for 2024.

For almost two decades, the USAF 910th Airlift Wing's 757th Airlift Squadron and the U.S. Coast Guard have participated in oil spill cleanup exercises to ensure the U.S. military has a capable response in the event of a national emergency. The 757th Airlift Squadron operates the DOD's only fixed-wing aerial spray system certified by the EPA to disperse pesticides on DOD property; it was used to spread oil dispersants onto the Deepwater Horizon oil spill along the Gulf Coast in 2010.

During the five-week mission, the aircrews flew 92 sorties and sprayed approximately 30,000 acres with nearly 149,000 gallons of oil dispersant to break up the oil.
The Deepwater Horizon mission was the first time the U.S. used the oil-dispersing capability of the 910th Airlift Wing, its only large-area fixed-wing aerial spray program, in an actual spill of national significance. The Air Force Reserve Command announced that the 910th Airlift Wing had been selected to receive the Air Force Outstanding Unit Award for its outstanding achievement from 28 April 2010 through 4 June 2010.

C-130s temporarily based at Kelly Field conducted mosquito-control aerial spray applications over areas of eastern Texas devastated by Hurricane Harvey. This special mission treated more than 2.3 million acres at the direction of the Federal Emergency Management Agency (FEMA) and the Texas Department of State Health Services (DSHS) to assist in recovery efforts by helping contain the significant increase in pest insects caused by large amounts of standing, stagnant water. The 910th Airlift Wing operates the Department of Defense's only aerial spray capability, used to control pest insect populations, eliminate undesired and invasive vegetation, and disperse oil spills in large bodies of water.

The aerial spray flight is now also able to operate at night with NVGs, which increases its best-case spray capacity from approximately 60 thousand acres per day to approximately 190 thousand acres per day. Spray missions are normally conducted at dusk and during nighttime hours, when pest insects are most active, the U.S. Air Force Reserve reports.

In the early 1970s, Congress created the Modular Airborne FireFighting System (MAFFS), a joint operation between the U.S. Forest Service, which supplies the systems, and the Department of Defense, which supplies the C-130 aircraft. The roll-on/roll-off systems allow existing aircraft to be temporarily converted into 3,000-gallon airtankers for fighting wildfires when demand exceeds the supply of privately contracted and publicly available airtankers.

In the late 1980s, 22 retired USAF C-130As were removed from storage and transferred to the U.S. Forest Service, which then transferred them to six private companies to be converted into airtankers. One of these C-130s crashed in June 2002 while operating the Retardant Aerial Delivery System (RADS) near Walker, California. The crash was attributed to wing separation caused by fatigue stress cracking, and it contributed to the grounding of the entire large-airtanker fleet. After an extensive review, the U.S. Forest Service and the Bureau of Land Management declined to renew the leases on nine C-130As over concerns about the age of the aircraft, which had been in service since the 1950s, and their ability to handle the forces generated by aerial firefighting.

More recently, an updated Retardant Aerial Delivery System known as RADS XL was developed by Coulson Aviation USA. That system consists of a C-130H/Q retrofitted with an in-floor discharge system, combined with a removable 3,500- or 4,000-gallon water tank.
The combined system is FAA certified.

On 23 January 2020, Coulson's Tanker 134, an EC-130Q registered N134CG, crashed during aerial firefighting operations in New South Wales, Australia, killing all three crew members. The aircraft had taken off from RAAF Base Richmond and was supporting firefighting operations during Australia's 2019–20 fire season.

Accidents

The C-130 Hercules has had a low accident rate in general. The Royal Air Force has recorded an accident rate of about one aircraft loss per 250,000 flying hours over the last 40 years, placing it behind only the Vickers VC10 and Lockheed TriStar, which had no flying losses. USAF C-130A/B/E models had an overall attrition rate of 5% as of 1989, compared with 1–2% for commercial airliners in the U.S. (according to the NTSB), 10% for B-52 bombers, and 20% for fighters (F-4, F-111), trainers (T-37, T-38), and helicopters (H-3).
Commodore 1570
The Commodore 1570 is a 5¼" floppy disk drive for the Commodore 128 home/personal computer. It is a single-sided, 170 kB version of the Commodore 1571, released as a stopgap measure when Commodore International was unable to provide sufficient quantities of 1571s due to a shortage of double-sided drive mechanisms (which were supplied by an outside manufacturer). Like the 1571, it can read and write both GCR and MFM disk formats. The 1570 uses a 1571 logic board in a cream-colored case resembling the original 1541's, with a drive mechanism similar to the 1541's except that it is equipped with track-zero detection. Like the 1571, its built-in DOS provides a data burst mode for transferring data to the C128 computer at a faster speed than a 1541 can. Its ROM also contains some DOS bug fixes that did not appear in the 1571 until much later. The 1570 can read and write all single-sided CP/M-format disks that the 1571 can access.

Although the 1570 is compatible with the Commodore 64, the C64 is not capable of taking advantage of the drive's higher-speed operation, and when used with the C64 it is little more than a pricier 1541. Also, many early buyers of the C128 chose to temporarily make do with a 1541 drive, perhaps owned as part of a previous C64 setup, until the 1571 became more widely available.

The drive uses a MOS 6502 CPU, a WD1770 or WD1772 floppy controller, two MOS Technology 6522 I/O controllers, and one MOS Technology 6526.
[ { "paragraph_id": 0, "text": "The Commodore 1570 is a 5¼\" floppy disk drive for the Commodore 128 home/personal computer. It is a single-sided, 170 kB version of the Commodore 1571, released as a stopgap measure when Commodore International was unable to provide sufficient quantities of 1571s due to a shortage of double-sided drive mechanisms (which were supplied by an outside manufacturer). Like the 1571, it can read and write both GCR and MFM disk formats. The 1570 utilizes a 1571 logic board in a cream-colored original-1541-like case with a drive mechanism similar to the 1541's except that it was equipped with track-zero detection. Like the 1571, its built-in DOS provides a data burst mode for transferring data to the C128 computer at a faster speed than a 1541 can. Its ROM also contains some DOS bug fixes that didn't appear in the 1571 until much later. The 1570 can read and write all single-sided CP/M-format disks that the 1571 can access.", "title": "" }, { "paragraph_id": 1, "text": "Although the 1570 is compatible with the Commodore 64, the C64 isn't capable of taking advantage of the drive's higher-speed operation, and when used with the C64 it's little more than a pricier 1541. Also, many early buyers of the C128 chose to temporarily make do with a 1541 drive, perhaps owned as part of a previous C64 setup, until the 1571 became more widely available.", "title": "" }, { "paragraph_id": 2, "text": "The drive uses the CPU MOS 6502, floppy controller WD1770 or WD1772, I/O controllers 2x MOS Technology 6522 and 1x MOS Technology 6526.", "title": "" } ]
The Commodore 1570 is a 5¼" floppy disk drive for the Commodore 128 home/personal computer. It is a single-sided, 170 kB version of the Commodore 1571, released as a stopgap measure when Commodore International was unable to provide sufficient quantities of 1571s due to a shortage of double-sided drive mechanisms. Like the 1571, it can read and write both GCR and MFM disk formats. The 1570 utilizes a 1571 logic board in a cream-colored original-1541-like case with a drive mechanism similar to the 1541's except that it was equipped with track-zero detection. Like the 1571, its built-in DOS provides a data burst mode for transferring data to the C128 computer at a faster speed than a 1541 can. Its ROM also contains some DOS bug fixes that didn't appear in the 1571 until much later. The 1570 can read and write all single-sided CP/M-format disks that the 1571 can access. Although the 1570 is compatible with the Commodore 64, the C64 isn't capable of taking advantage of the drive's higher-speed operation, and when used with the C64 it's little more than a pricier 1541. Also, many early buyers of the C128 chose to temporarily make do with a 1541 drive, perhaps owned as part of a previous C64 setup, until the 1571 became more widely available. The drive uses the CPU MOS 6502, floppy controller WD1770 or WD1772, I/O controllers 2x MOS Technology 6522 and 1x MOS Technology 6526.
2022-11-16T00:14:14Z
[ "Template:Refimprove", "Template:Infobox information appliance", "Template:Reflist", "Template:Commodore disk drives" ]
https://en.wikipedia.org/wiki/Commodore_1570
7,700
Commodore 1571
The Commodore 1571 is Commodore's high-end 5¼" floppy disk drive, announced in the summer of 1985. With its double-sided drive mechanism, it can use double-sided, double-density (DS/DD) floppy disks, storing a total of 360 kB per floppy. It also implemented a "burst mode" that doubled transfer speeds, helping address the very slow performance of previous Commodore drives.

Earlier Commodore drives used a custom group coded recording (GCR) format that stored 170 kB per side of a disk. This made them fairly competitive in terms of storage, but limited them to reading and writing disks from other Commodore machines. The 1571 was designed to partner with the new Commodore 128 (C128), which introduced support for CP/M. Adding double-density MFM encoding allowed the drive to read and write contemporary CP/M disks (and many others).

In contrast to its single-sided predecessors, the 1541 and the briefly available 1570, the 1571 can use both sides of the disk at the same time. Previously, users could only use the second side by manually flipping the disk over. Because flipping the disk also reverses the direction of rotation, the two methods are not interchangeable: disks whose back side was created in a 1541 by flipping them over have to be flipped in the 1571 too, and the back side of disks written in a 1571 using the native two-sided support cannot be read in a 1541.

Release and features

The 1571 was released to match the Commodore 128, both design-wise and feature-wise. It was announced in the summer of 1985, at the same time as the C128, and became available in quantity later that year. The later C128D had a 1571 drive built into the system unit. A double-sided disk on the 1571 has a capacity of 340 kB (70 tracks, 1,360 disk blocks of 256 bytes each); since 8 kB (32 blocks) are reserved for system use (the directory and block-availability information) and, under CBM DOS, 2 bytes of each block serve as a pointer to the next logical block, 1,328 blocks × 254 bytes = 337,312 bytes, or about 329.4 kB, are available for user data. (However, a program organizing disk storage on its own can use all the space, e.g. for data disks.)

The 1571 was designed to accommodate the C128's "burst" mode for 2x faster disk access, although the drive cannot use it when connected to older Commodore machines. This mode replaced the slow bit-banging serial routines of the 1541 with a true serial shift register implemented in hardware, dramatically increasing the drive speed. Although this had originally been planned when Commodore first switched from the parallel IEEE-488 interface to the CBM-488 custom serial interface, hardware bugs in the VIC-20's 6522 VIA shift register prevented it from working properly.

When connected to a C128, the 1571 defaults to double-sided mode, which allows the drive to read its own 340 kB disks as well as single-sided 170 kB 1541 disks. If the C128 is switched into C64 mode by typing GO 64 from BASIC, the 1571 stays in double-sided mode. If C64 mode is activated by holding down the C= key on power-up, the drive automatically switches to single-sided mode, in which case it cannot read 340 kB disks (this is also the default if a 1571 is used with a C64, Plus/4, VIC-20, or PET). A manual command can also be issued from BASIC to switch the 1571 between single- and double-sided mode.
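The usable-capacity arithmetic quoted above can be checked with a few lines of code. This is a minimal sketch in Python, using only the figures given in this article; the constant names are illustrative and not taken from any Commodore documentation.

    # Usable capacity of a double-sided 1571 disk under CBM DOS,
    # using the figures quoted in the text above.
    BLOCK_SIZE = 256       # bytes per disk block
    TOTAL_BLOCKS = 1360    # 70 logical tracks, double-sided
    SYSTEM_BLOCKS = 32     # 8 kB reserved for directory/BAM: 8192 / 256
    POINTER_BYTES = 2      # per-block pointer to the next block in a file chain

    file_blocks = TOTAL_BLOCKS - SYSTEM_BLOCKS                 # 1328
    usable_bytes = file_blocks * (BLOCK_SIZE - POINTER_BYTES)  # 1328 * 254

    print(file_blocks, usable_bytes, round(usable_bytes / 1024, 1))
    # prints: 1328 337312 329.4  -- matching the figures in the text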
There is also an undocumented command that allows the user to control either of the read/write heads independently, making it possible to format both sides of a diskette separately; however, the resulting disk cannot be read in a 1541, as it would spin in the reverse direction when flipped upside down. In the same vein, "flippy" disks created with a 1541 cannot be read on a 1571 with this feature; they must be inserted upside down.

The 1571 is not 100% low-level compatible with the 1541; however, this is not a problem except for some software that uses advanced copy protection, such as the RapidLok system found on MicroProse and Accolade games.

The 1571 was noticeably quieter than its predecessor and tended to run cooler as well, even though, like the 1541, it had an internal power supply (later Commodore drives, like the 1541-II and the 3½" 1581, came with external power supplies). The 1541-II/1581 power supply makes mention of a 1571-II, hinting that Commodore may have intended to release a version of the 1571 with an external power supply; however, no 1571-IIs are known to exist. The embedded OS in the 1571 was CBM DOS V3.0 1571, an improvement over the 1541's V2.6.

Early 1571s had a bug in the ROM-based disk operating system that caused relative files to become corrupted if they occupied both sides of the disk. A version 2 ROM was released; though it cured the initial bug, it introduced some minor quirks of its own, particularly with the 1541 emulation. Curiously, it was also identified as V3.0.

As with the 1541, Commodore initially could not meet demand for the 1571, and that lack of availability and the drive's relatively high price (about US$300) presented an opportunity for cloners. Two 1571 clones appeared, one from Oceanic and one from Blue Chip, but legal action from Commodore quickly drove them from the market.

Commodore announced at the 1985 Consumer Electronics Show a dual-drive version of the 1571, to be called the Commodore 1572, but quickly canceled it, reportedly due to technical difficulties with the 1572 DOS. It would have had four times as much RAM as the 1571 (8 kB) and twice as much ROM (64 kB). The 1572 would have allowed fast disk backups of non-copy-protected media, much like the old 4040, 8050, and 8250 dual drives.

The 1571 built into the European plastic-case C128 D computer is electronically identical to the stand-alone version, but the 1571 integrated into the later metal-case C128 D (often called the C128 DCR, for "D Cost-Reduced") differs considerably from the stand-alone 1571. It includes a newer DOS, version 3.1; replaces the MOS Technology CIA interface chip, of which only a few features were used by the 1571 DOS, with a much simplified chip called the 5710; and has some compatibility issues with the stand-alone drive. Because this internal 1571 does not have an unused 8-bit input/output port on any chip, unlike most other Commodore drives, it is not possible to install a parallel cable in this drive, such as that used by SpeedDOS, DolphinDOS, and some other fast third-party Commodore DOS replacements.

Technical design

The drive detects the motor speed and generates an internal data-sampling clock signal that matches the motor speed.

The 1571 uses a saddle canceler when reading the data stream. A correction signal is generated when the raw data pattern on the disk contains two consecutive zeros. With the GCR recording format, a problem occurs in the read signal waveform.
The worst-case pattern, 1001, may cause a saddle condition in which a false data bit may occur. The original 1541 drive uses a one-shot to correct the condition; the 1571 uses a gate array to correct it digitally.

The drive uses a MOS 6502 CPU, a WD1770 or WD1772 floppy controller, two MOS Technology 6522 I/O controllers, and one MOS Technology 6526.

Disk format

Unlike the 1541, which was limited to GCR formatting, the 1571 can read both GCR and MFM disk formats. The version of CP/M included with the C128 supported the following formats:

The 1571 can read any of the many CP/M 5¼" disk formats. If the CP/M BIOS is modified, it is possible to read any soft-sectored 40-track MFM format. Single-density (FM) formats are not supported because the density-select pin on the MFM controller chip in the drive is disabled (wired to ground).

A 1571 cannot boot from MFM disks; the user must boot CP/M from a GCR disk and then switch to MFM disks.

With additional software, it is possible to read and write MS-DOS-formatted floppies as well. Numerous commercial and public-domain programs for this purpose became available, the best-known being SOGWAP's "Big Blue Reader". Although the C128 could not run any DOS-based software, this capability allowed data files to be exchanged with PC users. Reading Atari 8-bit 130 kB or 180 kB disks was possible as well with special software, but the standard Atari 8-bit 90 kB format, which used FM rather than MFM encoding, could not be handled by the 1571 hardware without modifying the drive circuitry, as the control line that determines whether FM or MFM encoding is used by the disk controller chip was permanently wired to ground (MFM mode) rather than being under software control.

In the 1541 format, while 40 tracks are possible on a 5.25" DD drive like the 154x/157x, only 35 tracks are used. Commodore chose not to use the upper five tracks by default (or, at least, not to use more than 35) due to the poor quality of some of the drive mechanisms, which did not always work reliably on those tracks.

For compatibility and ease of implementation, the 1571's double-sided format, one logical disk side with 70 tracks, was created by putting together the lower 35 physical tracks on each physical side of the disk rather than using two times 40 tracks, even though there were no longer such quality problems with the mechanisms of the 1571 drives.
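As an illustration of the GCR scheme discussed above, the sketch below encodes bytes with the standard published Commodore 4-bit-to-5-bit GCR nibble table used on these drives; the function name is illustrative only. It also demonstrates the property the read logic relies on: no valid GCR stream contains three consecutive zero bits, so patterns like 1001 are the worst case the saddle-canceling circuitry has to handle.

    # Commodore 4-to-5 GCR nibble table (standard published values).
    GCR = [0b01010, 0b01011, 0b10010, 0b10011,   # nibbles 0x0-0x3
           0b01110, 0b01111, 0b10110, 0b10111,   # nibbles 0x4-0x7
           0b01001, 0b11001, 0b11010, 0b11011,   # nibbles 0x8-0xB
           0b01101, 0b11101, 0b11110, 0b10101]   # nibbles 0xC-0xF

    def gcr_encode(data: bytes) -> str:
        """Encode bytes as a GCR bitstring: each nibble becomes 5 bits."""
        return "".join(f"{GCR[b >> 4]:05b}{GCR[b & 0x0F]:05b}" for b in data)

    # No code in the table starts or ends with two zero bits, so even
    # across code boundaries a run of zeros never exceeds two:
    assert "000" not in gcr_encode(bytes(range(256)))

    print(gcr_encode(b"\x09"))  # -> 0101011001 (contains the 1001 pattern)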
[ { "paragraph_id": 0, "text": "The Commodore 1571 is Commodore's high-end 5¼\" floppy disk drive, announced in the summer of 1985. With its double-sided drive mechanism, it has the ability to use double-sided, double-density (DS/DD) floppy disks, storing a total of 360 kB per floppy. It also implemented a \"burst mode\" that doubled transfer speeds, helping address the very slow performance of previous Commodore drives.", "title": "" }, { "paragraph_id": 1, "text": "Earlier Commodore drives used a custom group coded recording format that stored 170 kB per side of a disk. This made it fairly competitive in terms of storage, but limited it to only reading and writing disks from other Commodore machines. The 1571 was designed to partner with the new Commodore 128 (C128), which introduced support for CP/M. Adding double-density MFM encoding allowed the drive to read and write contemporary CP/M disks (and many others).", "title": "" }, { "paragraph_id": 2, "text": "In contrast to its single-sided predecessors, the 1541 and the briefly-available 1570, the 1571 can use both sides of the disk at the same time. Previously, users could only use the second side by manually flipping them over. Because flipping the disk also reverses the direction of rotation, the two methods are not interchangeable; disks which had their back side created in a 1541 by flipping them over would have to be flipped in the 1571 too, and the back side of disks written in a 1571 using the native support for two-sided operation could not be read in a 1541.", "title": "" }, { "paragraph_id": 3, "text": "The 1571 was released to match the Commodore 128, both design-wise and feature-wise. It was announced in the summer of 1985, at the same time as the C128, and became available in quantity later that year. The later C128D had a 1571 drive built into the system unit. A double-sided disk on the 1571 would have a capacity of 340 kB (70 tracks, 1,360 disk blocks of 256 bytes each); as 8 kB are reserved for system use (directory and block availability information) and, under CBM DOS, 2 bytes of each block serve as pointers to the next logical block, 254 x 1,328 = 337,312 B or about 329.4 kB were available for user data. (However, with a program organizing disk storage on its own, all space could be used, e.g. for data disks.)", "title": "Release and features" }, { "paragraph_id": 4, "text": "The 1571 was designed to accommodate the C128's \"burst\" mode for 2x faster disk access, however the drive cannot use it if connected to older Commodore machines. This mode replaced the slow bit-banging serial routines of the 1541 with a true serial shift register implemented in hardware, thus dramatically increasing the drive speed. Although this originally had been planned when Commodore first switched from the parallel IEEE-488 interface to the CBM-488 custom serial interface, hardware bugs in the VIC-20's 6522 VIA shift register prevented it from working properly.", "title": "Release and features" }, { "paragraph_id": 5, "text": "When connected to a C128, the 1571 would default to double-sided mode, which allowed the drive to read its own 340k disks as well as single-sided 170 kB 1541 disks. If the C128 was switched into C64 mode by typing GO 64 from BASIC, the 1571 will stay in double-sided mode. If C64 mode was activated by holding down the C= key on power-up, the drive would automatically switch to single-sided mode, in which case it is unable to read 340 kB disks (also the default if a 1571 is used with a C64, Plus/4, VIC-20, or PET). 
A manual command can also be issued from BASIC to switch the 1571 between single and double sided mode. There is also an undocumented command which allows the user to independently control either of the read/write heads of the 1571, making it possible to format both sides of a diskette separate from each other, however the resultant disk cannot be read in a 1541 as it would be spinning in reverse direction when flipped upside down. In the same vein, \"flippy\" disks created with a 1541 cannot be read on a 1571 with this feature; they must be inserted upside down.", "title": "Release and features" }, { "paragraph_id": 6, "text": "The 1571 is not 100% low-level compatible with the 1541, however this isn't a problem except in some software that uses advanced copy protections such as the RapidLok system found on MicroProse and Accolade games.", "title": "Release and features" }, { "paragraph_id": 7, "text": "The 1571 was noticeably quieter than its predecessor and tended to run cooler as well, even though, like the 1541, it had an internal power supply (later Commodore drives, like the 1541-II and the 3½\" 1581, came with external power supplies). The 1541-II/1581 power supply makes mention of a 1571-II, hinting that Commodore may have intended to release a version of the 1571 with an external power supply. However, no 1571-IIs are known to exist. The embedded OS in the 1571 was CBM DOS V3.0 1571, an improvement over the 1541's V2.6.", "title": "Release and features" }, { "paragraph_id": 8, "text": "Early 1571s had a bug in the ROM-based disk operating system that caused relative files to corrupt if they occupied both sides of the disk. A version 2 ROM was released, but though it cured the initial bug, it introduced some minor quirks of its own - particularly with the 1541 emulation. Curiously, it was also identified as V3.0.", "title": "Release and features" }, { "paragraph_id": 9, "text": "As with the 1541, Commodore initially could not meet demand for the 1571, and that lack of availability and the drive's relatively high price (about US$300) presented an opportunity for cloners. Two 1571 clones appeared, one from Oceanic and one from Blue Chip, but legal action from Commodore quickly drove them from the market.", "title": "Release and features" }, { "paragraph_id": 10, "text": "Commodore announced at the 1985 Consumer Electronics Show a dual-drive version of the 1571, to be called the Commodore 1572, but quickly canceled it, reportedly due to technical difficulties with the 1572 DOS. It would have had four times as much RAM as the 1571 (8 kB), and twice as much ROM (64 kB). The 1572 would have allowed for fast disk backups of non-copy-protected media, much like the old 4040, 8050, and 8250 dual drives.", "title": "Release and features" }, { "paragraph_id": 11, "text": "The 1571 built into the European plastic-case C128 D computer is electronically identical to the stand-alone version, but 1571 version integrated into the later metal-case C128 D (often called C128 DCR, for D Cost-Reduced) differs a lot from the stand-alone 1571. It includes a newer DOS, version 3.1, replaces the MOS Technology CIA interface chip, of which only a few features were used by the 1571 DOS, with a very much simplified chip called 5710, and has some compatibility issues with the stand-alone drive. 
Because this internal 1571 does not have an unused 8-bit input/output port on any chip, unlike most other Commodore drives, it is not possible to install a parallel cable in this drive, such as that used by SpeedDOS, DolphinDOS and some other fast third-party Commodore DOS replacements.", "title": "Release and features" }, { "paragraph_id": 12, "text": "The drive detects the motor speed and generates an internal data sampling clock signal that matches with the motor speed.", "title": "Technical design" }, { "paragraph_id": 13, "text": "The 1571 uses a saddle canceler when reading the data stream. A correction signal is generated when the raw data pattern on the disk consists of two consecutive zeros. With the GCR recording format a problem occurs in the read signal waveform. The worst case pattern 1001 may cause a saddle condition where a false data bit may occur. The original 1541 drives uses a one-shot to correct the condition. The 1571 uses a gate array to correct this digitally.", "title": "Technical design" }, { "paragraph_id": 14, "text": "The drive uses the MOS 6502 CPU, WD1770 or WD1772 floppy controller, 2x MOS Technology 6522 I/O controllers and 1x MOS Technology 6526.", "title": "Technical design" }, { "paragraph_id": 15, "text": "Unlike the 1541, which was limited to GCR formatting, the 1571 could read both GCR and MFM disk formats. The version of CP/M included with the C128 supported the following formats:", "title": "Disk format" }, { "paragraph_id": 16, "text": "The 1571 can read any of the many CP/M 5+1⁄4-disk formats. If the CP/M BIOS is modified, it is possible to read any soft sector 40-track MFM format. Single density (FM) formats are not supported because the density selector pin on the MFM controller chip in the drive is disabled (wired to ground).", "title": "Disk format" }, { "paragraph_id": 17, "text": "A 1571 cannot boot from MFM disks; the user must boot CP/M from a GCR disk and then switch to MFM disks.", "title": "Disk format" }, { "paragraph_id": 18, "text": "With additional software, it was possible to read and write to MS-DOS-formatted floppies as well. Numerous commercial and public-domain programs for this purpose became available, the best-known being SOGWAP's \"Big Blue Reader\". Although the C128 could not run any DOS-based software, this capability allowed data files to be exchanged with PC users. Reading Atari 8-bit 130 kB or 180 kB disks was possible as well with special software, but the standard Atari 8-bit 90 kB format, which used FM rather than MFM encoding, could not be handled by the 1571 hardware without modifying the drive circuitry as the control line that determines if FM or MFM encoding is used by the disc controller chip was permanently wired to ground (MFM mode) rather than being under software control.", "title": "Disk format" }, { "paragraph_id": 19, "text": "In the 1541 format, while 40 tracks are possible for a 5.25\" DD drive like the 154x/157x, only 35 tracks are used. 
Commodore chose not to use the upper five tracks by default (or at least to use more than 35) due to the bad quality of some of the drive mechanisms, which did not always work reliably on those tracks.", "title": "Disk format" }, { "paragraph_id": 20, "text": "For compatibility and ease of implementation, the 1571's double-sided format of one logical disk side with 70 tracks was created by putting together the lower 35 physical tracks on each of the physical sides of the disk rather than using two times 40 tracks, even though there were no more quality problems with the mechanisms of the 1571 drives.", "title": "Disk format" } ]
The Commodore 1571 is Commodore's high-end 5¼" floppy disk drive, announced in the summer of 1985. With its double-sided drive mechanism, it has the ability to use double-sided, double-density (DS/DD) floppy disks, storing a total of 360 kB per floppy. It also implemented a "burst mode" that doubled transfer speeds, helping address the very slow performance of previous Commodore drives. Earlier Commodore drives used a custom group coded recording format that stored 170 kB per side of a disk. This made it fairly competitive in terms of storage, but limited it to only reading and writing disks from other Commodore machines. The 1571 was designed to partner with the new Commodore 128 (C128), which introduced support for CP/M. Adding double-density MFM encoding allowed the drive to read and write contemporary CP/M disks. In contrast to its single-sided predecessors, the 1541 and the briefly-available 1570, the 1571 can use both sides of the disk at the same time. Previously, users could only use the second side by manually flipping them over. Because flipping the disk also reverses the direction of rotation, the two methods are not interchangeable; disks which had their back side created in a 1541 by flipping them over would have to be flipped in the 1571 too, and the back side of disks written in a 1571 using the native support for two-sided operation could not be read in a 1541.
2002-02-25T15:51:15Z
2023-11-05T01:34:00Z
[ "Template:Nowrap", "Template:Cite web", "Template:Cite book", "Template:Refend", "Template:Use dmy dates", "Template:Infobox information appliance", "Template:Frac", "Template:Reflist", "Template:Refbegin", "Template:Commodore disk drives", "Template:Short description", "Template:Citation needed" ]
https://en.wikipedia.org/wiki/Commodore_1571
7,701
Cocaine
Cocaine (from French: cocaïne, from Spanish: coca, ultimately from Quechua: kúka) is a tropane alkaloid that acts as a central nervous system (CNS) stimulant. As an extract, it is mainly used recreationally, and often illegally, for its euphoric and rewarding effects. It is also used in medicine by Indigenous South Americans for various purposes and, rarely but more formally, as a local anesthetic or diagnostic tool by medical practitioners in more developed countries. It is primarily obtained from the leaves of two Coca species native to South America: Erythroxylum coca and E. novogranatense. After extraction from the plant and further processing into cocaine hydrochloride (powdered cocaine), the drug is administered by being snorted, applied topically to the mouth, or dissolved and injected into a vein. The hydrochloride can also be converted into free base form (typically crack cocaine), which is heated until it sublimates so that the vapors can be inhaled.

Cocaine stimulates the reward pathway in the brain. Mental effects may include an intense feeling of happiness, sexual arousal, loss of contact with reality, or agitation. Physical effects may include a fast heart rate, sweating, and dilated pupils. High doses can result in high blood pressure or high body temperature. Onset of effects can begin within seconds to minutes of use, depending on the method of delivery, and can last between five and ninety minutes. As cocaine also has numbing and blood-vessel-constricting properties, it is occasionally used during surgery on the throat or inside of the nose to control pain, bleeding, and vocal cord spasm.

Cocaine crosses the blood–brain barrier via a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. Cocaine blocks the dopamine transporter, inhibiting reuptake of dopamine from the synaptic cleft into the pre-synaptic axon terminal; the higher dopamine levels in the synaptic cleft increase dopamine receptor activation in the post-synaptic neuron, causing euphoria and arousal. Cocaine also blocks the serotonin transporter and norepinephrine transporter, inhibiting reuptake of serotonin and norepinephrine from the synaptic cleft into the pre-synaptic axon terminal and increasing activation of serotonin receptors and norepinephrine receptors in the post-synaptic neuron, contributing to the mental and physical effects of cocaine exposure.

A single dose of cocaine induces tolerance to the drug's effects. Repeated use is likely to result in addiction. Addicts who abstain from cocaine may experience craving and drug withdrawal symptoms, with depression, decreased libido, decreased ability to feel pleasure, and fatigue being most common. Use of cocaine increases the overall risk of death, and intravenous use potentially increases the risk of trauma and infectious diseases such as blood infections and HIV through the use of shared paraphernalia. It also increases the risk of stroke, heart attack, cardiac arrhythmia, lung injury (when smoked), and sudden cardiac death. Illicitly sold cocaine can be adulterated with fentanyl, local anesthetics, levamisole, cornstarch, quinine, or sugar, which can result in additional toxicity. In 2017, the Global Burden of Disease study found that cocaine use caused around 7,300 deaths annually.

Coca leaves have been used by Andean civilizations since ancient times.
In ancient Wari and Incan culture, and in the modern successor indigenous cultures of the Andes mountains, coca leaves are chewed, taken orally in the form of a tea, or prepared in a sachet wrapped around alkaline burnt ashes and held in the mouth against the inner cheek; the leaf has traditionally been used to combat the effects of cold, hunger, and altitude sickness. Cocaine was first isolated from the leaves in 1860.

Globally, in 2019, cocaine was used by an estimated 20 million people (0.4% of adults aged 15 to 64 years). The highest prevalence of cocaine use was in Australia and New Zealand (2.1%), followed by North America (2.1%), Western and Central Europe (1.4%), and South and Central America (1.0%). Since 1961, the Single Convention on Narcotic Drugs has required countries to make recreational use of cocaine a crime. In the United States, cocaine is regulated as a Schedule II drug under the Controlled Substances Act, meaning that it has a high potential for abuse but has an accepted medical use.

While rarely used medically today, cocaine's accepted medical uses are as a topical local anesthetic for the upper respiratory tract and to reduce bleeding in the mouth, throat, and nasal cavities.

Cocaine eye drops are frequently used by neurologists when examining patients suspected of having Horner syndrome. In Horner syndrome, sympathetic innervation to the eye is blocked. In a healthy eye, cocaine will stimulate the sympathetic nerves by inhibiting norepinephrine reuptake, and the pupil will dilate; if the patient has Horner syndrome, the sympathetic nerves are blocked, and the affected eye will remain constricted or will dilate to a lesser extent than the opposing (unaffected) eye, which also receives the eye-drop test. If both eyes dilate equally, the patient does not have Horner syndrome.

Topical cocaine is sometimes used as a local numbing agent and vasoconstrictor to help control pain and bleeding in surgery of the nose, mouth, throat, or lacrimal duct. Although some absorption and systemic effects may occur, the use of cocaine as a topical anesthetic and vasoconstrictor is generally safe, rarely causing cardiovascular toxicity, glaucoma, or pupil dilation. Occasionally, cocaine is mixed with adrenaline and sodium bicarbonate and used topically for surgery, a formulation called Moffett's solution.

Cocaine hydrochloride (Goprelto), an ester local anesthetic, was approved for medical use in the United States in December 2017, and is indicated for the introduction of local anesthesia of the mucous membranes for diagnostic procedures and surgeries on or through the nasal cavities of adults. Cocaine hydrochloride (Numbrino) was approved for medical use in the United States in January 2020. The most common adverse reactions in people treated with Goprelto are headache and epistaxis; the most common adverse reactions in people treated with Numbrino are hypertension, tachycardia, and sinus tachycardia.

Cocaine is a central nervous system stimulant. Its effects can last from 15 minutes to an hour; the duration depends on the amount taken and the route of administration. Cocaine can take the form of a fine white powder and has a bitter taste. Crack cocaine is a smokeable form of cocaine made into small "rocks" by processing cocaine with sodium bicarbonate (baking soda) and water. Crack cocaine is referred to as "crack" because of the crackling sounds it makes when heated.
Cocaine use leads to increases in alertness, feelings of well-being and euphoria, increased energy and motor activity, and increased feelings of competence and sexuality. Analysis of the correlations between the use of 18 psychoactive substances shows that cocaine use correlates with other "party drugs" (such as ecstasy or amphetamines) as well as with heroin and benzodiazepine use, and can be considered a bridge between the use of different groups of drugs.

It is legal for people to use coca leaves in some Andean nations, such as Peru and Bolivia, where they are chewed, consumed in the form of tea, or are sometimes incorporated into food products. Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the buccal pouch (mouth between gum and cheek, much the same as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Coca tea, an infusion of coca leaves, is also a traditional method of consumption. The tea has often been recommended for travelers in the Andes to prevent altitude sickness. Its actual effectiveness has never been systematically studied. In 1986, an article in the Journal of the American Medical Association revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion as "Health Inca Tea". While the packaging claimed it had been "decocainized", no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and mood elevation, and the tea was essentially harmless.

Nasal insufflation (known colloquially as "snorting", "sniffing", or "blowing") is a common method of ingestion of recreational powdered cocaine. The drug coats and is absorbed through the mucous membranes lining the nasal passages. When snorted, cocaine's desired euphoric effects are delayed by about five minutes. This occurs because cocaine's absorption is slowed by its constricting effect on the blood vessels of the nose. Insufflation of cocaine also leads to the longest duration of its effects (60–90 minutes). When insufflating cocaine, absorption through the nasal membranes is approximately 30–60%. In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes. Any damage to the inside of the nose is due to cocaine constricting blood vessels – and therefore restricting blood and oxygen/nutrient flow – to that area. Rolled up banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. The cocaine typically is poured onto a flat, hard surface (such as a mobile phone screen, mirror, CD case or book) and divided into "bumps", "lines" or "rails", and then insufflated. A 2001 study reported that the sharing of straws used to "snort" cocaine can spread blood diseases such as hepatitis C.

Cocaine can also be dissolved and injected intravenously. Subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually when over 120 milligrams), lasting two to five minutes and including tinnitus and audio distortion. This is colloquially referred to as a "bell ringer". In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes.
The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also the danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used. An injected mixture of cocaine and heroin, known as "speedball", is a particularly dangerous combination, as the converse effects of the drugs actually complement each other, but may also mask the symptoms of an overdose. It has been responsible for numerous deaths, including those of comedians/actors John Belushi and Chris Farley, comedian Mitch Hedberg, actor River Phoenix, grunge singer Layne Staley, and actor Philip Seymour Hoffman. Experimentally, cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction.

The onset of cocaine's euphoric effects is fastest with inhalation, beginning after 3–5 seconds. However, inhalation gives the shortest duration of euphoria (5–15 minutes). Cocaine is smoked by inhaling the vapor produced when free base cocaine is heated to the point of sublimation. In a 2000 Brookhaven National Laboratory medical department study, based on self-reports of 32 people who used cocaine who participated in the study, "peak high" was found at a mean of 1.4 ± 0.5 minutes. Pyrolysis products of cocaine that occur only when heated/smoked have been shown to change the effect profile; for example, anhydroecgonine methyl ester, when co-administered with cocaine, increases dopamine in the caudate putamen (CPu) and nucleus accumbens (NAc) brain regions, and has affinity for the M1 and M3 muscarinic receptors. Smoking freebase cocaine is often accomplished using a pipe made from a small glass tube, often taken from "love roses", small glass tubes with a paper rose that are promoted as romantic gifts. These are sometimes called "stems", "horns", "blasters" and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad – often called a "brillo" (actual Brillo Pads contain soap, and are not used) or "chore" (named for Chore Boy brand copper scouring pads) – serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. The effects, felt almost immediately after smoking, are very intense and do not last long – usually 2 to 10 minutes. When smoked, cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt.

Acute exposure to cocaine has many effects on humans, including euphoria, increases in heart rate and blood pressure, and increases in cortisol secretion from the adrenal gland. In humans with acute exposure followed by continuous exposure to cocaine at a constant blood concentration, the acute tolerance to the chronotropic cardiac effects of cocaine begins after about 10 minutes, while acute tolerance to the euphoric effects of cocaine begins after about one hour. With excessive or prolonged use, the drug can cause itching, fast heart rate, and paranoid delusions or sensations of insects crawling on the skin. Intranasal cocaine and crack use are both associated with pharmacological violence. Aggressive behavior may be displayed by both addicts and casual users. Cocaine can induce psychosis characterized by paranoia, impaired reality testing, hallucinations, irritability, and physical aggression.
Cocaine intoxication can cause hyperawareness, hypervigilance, psychomotor agitation, and delirium. Consumption of large doses of cocaine can cause violent outbursts, especially by those with preexisting psychosis. Crack-related violence is also systemic, relating to disputes between crack dealers and users. Acute exposure may induce cardiac arrhythmias, including atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, and ventricular fibrillation. Acute exposure may also lead to angina, heart attack, and congestive heart failure. Cocaine overdose may cause seizures, abnormally high body temperature, a marked elevation of blood pressure (which can be life-threatening), abnormal heart rhythms, and death. Anxiety, paranoia, and restlessness can also occur, especially during the comedown. With excessive dosage, tremors, convulsions and increased body temperature are observed. Severe cardiac adverse events, particularly sudden cardiac death, become a serious risk at high doses due to cocaine's blocking effect on cardiac sodium channels. Incidental exposure of the eye to sublimated cocaine while smoking crack cocaine can cause serious injury to the cornea and long-term loss of visual acuity.

Although it has been commonly asserted, the available evidence does not show that chronic use of cocaine is associated with broad cognitive deficits. Research on age-related loss of striatal dopamine transporter (DAT) sites is inconclusive as to whether cocaine has neuroprotective or neurodegenerative effects on dopamine neurons. Exposure to cocaine may lead to the breakdown of the blood–brain barrier.

Physical side effects from chronic smoking of cocaine include coughing up blood, bronchospasm, itching, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. Cocaine constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can also cause headaches and gastrointestinal complications such as abdominal pain and nausea.

A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay. Cocaine can, however, cause involuntary tooth grinding, known as bruxism, which can deteriorate tooth enamel and lead to gingivitis. Additionally, stimulants like cocaine, methamphetamine, and even caffeine cause dehydration and dry mouth. Since saliva is an important mechanism in maintaining one's oral pH level, people who use cocaine over a long period of time who do not hydrate sufficiently may experience demineralization of their teeth due to the pH of the tooth surface dropping too low (below 5.5).

Cocaine use also promotes the formation of blood clots. This increase in blood clot formation is attributed to cocaine-associated increases in the activity of plasminogen activator inhibitor, and an increase in the number, activation, and aggregation of platelets.

Chronic intranasal usage can degrade the cartilage separating the nostrils (the septum nasi), leading eventually to its complete disappearance. When cocaine hydrochloride is insufflated, absorption of the cocaine leaves behind the hydrochloride, which forms dilute hydrochloric acid on the mucosa.

Illicitly-sold cocaine may be contaminated with levamisole, which may accentuate cocaine's effects. Levamisole-adulterated cocaine has been associated with autoimmune disease.
Cocaine use leads to an increased risk of hemorrhagic and ischemic strokes. Cocaine use also increases the risk of having a heart attack.

Relatives of persons with cocaine addiction have an increased risk of cocaine addiction. Cocaine addiction occurs through ΔFosB overexpression in the nucleus accumbens, which results in altered transcriptional regulation in neurons within the nucleus accumbens. ΔFosB levels have been found to increase upon the use of cocaine. Each subsequent dose of cocaine continues to increase ΔFosB levels with no ceiling of tolerance. Elevated levels of ΔFosB lead to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increases the number of dendritic branches and spines present on neurons in the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly, and may be sustained weeks after the last dose of the drug. Transgenic mice exhibiting inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum exhibit sensitized behavioural responses to cocaine. They self-administer cocaine at lower doses than controls, but have a greater likelihood of relapse when the drug is withheld. ΔFosB increases the expression of AMPA receptor subunit GluR2 and also decreases expression of dynorphin, thereby enhancing sensitivity to reward.

DNA damage is increased in the brain of rodents by administration of cocaine. During DNA repair of such damages, persistent chromatin alterations may occur, such as methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in cocaine addiction.

In humans, cocaine abuse may cause structural changes in brain connectivity, though it is unclear to what extent these changes are permanent. Cocaine dependence develops after even brief periods of regular cocaine use and produces a withdrawal state with emotional-motivational deficits upon cessation of cocaine use.

Crack baby is a term for a child born to a mother who used crack cocaine during her pregnancy. The threat that cocaine use during pregnancy poses to the fetus is now considered exaggerated. Studies show that prenatal cocaine exposure (independent of other effects such as, for example, alcohol, tobacco, or physical environment) has no appreciable effect on childhood growth and development. However, the official opinion of the National Institute on Drug Abuse of the United States warns about health risks while cautioning against stereotyping:

Many recall that "crack babies", or babies born to mothers who used crack cocaine while pregnant, were at one time written off by many as a lost generation. They were predicted to suffer from severe, irreversible damage, including reduced intelligence and social skills. It was later found that this was a gross exaggeration. However, the fact that most of these children appear normal should not be over-interpreted as indicating that there is no cause for concern. Using sophisticated technologies, scientists are now finding that exposure to cocaine during fetal development may lead to subtle, yet significant, later deficits in some children, including deficits in some aspects of cognitive performance, information-processing, and attention to tasks—abilities that are important for success in school.
There are also warnings about the threat that cocaine poses via breastfeeding: the March of Dimes said "it is likely that cocaine will reach the baby through breast milk," and advises the following regarding cocaine use during pregnancy:

Cocaine use during pregnancy can affect a pregnant woman and her unborn baby in many ways. During the early months of pregnancy, it may increase the risk of miscarriage. Later in pregnancy, it can trigger preterm labor (labor that occurs before 37 weeks of pregnancy) or cause the baby to grow poorly. As a result, cocaine-exposed babies are more likely than unexposed babies to be born with low birth weight (less than 5.5 lb or 2.5 kg). Low-birthweight babies are 20 times more likely to die in their first month of life than normal-weight babies, and face an increased risk of lifelong disabilities such as mental retardation and cerebral palsy. Cocaine-exposed babies also tend to have smaller heads, which generally reflect smaller brains. Some studies suggest that cocaine-exposed babies are at increased risk of birth defects, including urinary tract defects and, possibly, heart defects. Cocaine also may cause an unborn baby to have a stroke, irreversible brain damage, or a heart attack.

Persons with regular or problematic use of cocaine have a significantly higher rate of death, and are specifically at higher risk of traumatic deaths and deaths attributable to infectious disease.

The extent of absorption of cocaine into the systemic circulation after nasal insufflation is similar to that after oral ingestion. The rate of absorption after nasal insufflation is limited by cocaine-induced vasoconstriction of capillaries in the nasal mucosa. Onset of absorption after oral ingestion is delayed because cocaine is a weak base with a pKa of 8.6, and is thus in an ionized form that is poorly absorbed from the acidic stomach and easily absorbed from the alkaline duodenum. The rate and extent of absorption from inhalation of cocaine is similar or greater than with intravenous injection, as inhalation provides access directly to the pulmonary capillary bed. The delay in absorption after oral ingestion may account for the popular belief that cocaine bioavailability from the stomach is lower than after insufflation. Compared with ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects. Snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes. Physiological and psychotropic effects from nasally insufflated cocaine are sustained for approximately 40–60 minutes after the peak effects are attained.

Cocaine crosses the blood–brain barrier via both a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. As of September 2022, the gene or genes encoding the human proton-organic cation antiporter had not been identified.

Cocaine has a short elimination half-life of 0.7–1.5 hours and is extensively metabolized by plasma esterases and also by liver cholinesterases, with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, and other metabolites in lesser amounts such as ecgonine methyl ester (EME) and ecgonine. Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine (pOHBE), and m-hydroxybenzoylecgonine.
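Both the ionization argument and the short half-life above lend themselves to a quick back-of-the-envelope check. The sketch below applies the Henderson–Hasselbalch relationship for a weak base of pKa 8.6 and simple first-order elimination; the gastric and duodenal pH values and the peak/cutoff concentrations are illustrative assumptions, not figures from the text.

```python
import math

def unionized_fraction(pH: float, pKa: float = 8.6) -> float:
    """Henderson-Hasselbalch for a weak base: the un-ionized
    (membrane-permeable) fraction is 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def hours_below_cutoff(c0: float, cutoff: float, t_half_h: float) -> float:
    """Hours for a first-order-decaying concentration to fall below a cutoff:
    C(t) = C0 * 0.5**(t / t_half)  =>  t = t_half * log2(C0 / cutoff)."""
    return t_half_h * math.log2(c0 / cutoff)

# Ionization at assumed, textbook-typical pH values (not from the source):
print(f"stomach  (pH 2.0): {unionized_fraction(2.0):.1e} un-ionized")  # ~2.5e-07
print(f"duodenum (pH 8.0): {unionized_fraction(8.0):.2f} un-ionized")  # ~0.20

# Elimination: the 1.5 h half-life is the upper figure from the text; the
# 500 ng/mL peak and 50 ng/mL cutoff are illustrative assumptions.
print(f"plasma cocaine below cutoff after ~{hours_below_cutoff(500, 50, 1.5):.1f} h")
```

Note that this single-compartment sketch shows the parent drug leaving the blood within hours; the multi-day urine detection window discussed next reflects the slower clearance and accumulation of the metabolite benzoylecgonine rather than of cocaine itself.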
If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene. Studies have suggested that cocaethylene is more euphoric and has a higher cardiovascular toxicity than cocaine by itself. Depending on liver and kidney function, cocaine metabolites are detectable in urine. Benzoylecgonine can be detected in urine within four hours after cocaine intake and remains detectable in concentrations greater than 150 ng/mL typically for up to eight days after cocaine is used. Detection of cocaine metabolites in hair is possible in regular users until the sections of hair grown during the period of cocaine use are cut or fall out.

The pharmacodynamics of cocaine involve the complex relationships of neurotransmitters (inhibiting monoamine uptake in rats with ratios of about: serotonin:dopamine = 2:3, serotonin:norepinephrine = 2:5). The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine neurotransmitter released during neural signaling is normally recycled via the transporter; i.e., the transporter binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter, forming a complex that blocks the transporter's function. The dopamine transporter can no longer perform its reuptake function, and thus dopamine accumulates in the synaptic cleft. The increased concentration of dopamine in the synapse activates post-synaptic dopamine receptors, which makes the drug rewarding and promotes the compulsive use of cocaine.

Cocaine affects certain serotonin (5-HT) receptors; in particular, it has been shown to antagonize the 5-HT3 receptor, which is a ligand-gated ion channel. An overabundance of 5-HT3 receptors is reported in cocaine-conditioned rats, though 5-HT3's role is unclear. The 5-HT2 receptors (particularly the subtypes 5-HT2A, 5-HT2B and 5-HT2C) are involved in the locomotor-activating effects of cocaine.

Cocaine has been demonstrated to bind so as to directly stabilize the DAT transporter in its open, outward-facing conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT: the tightly locked orientation of the cocaine molecule blocks this hydrogen bond from forming. Research suggests that habituation to the substance depends less on the molecule's raw affinity for the transporter than on where and how it binds the transporter and the conformation that results.

Sigma receptors are affected by cocaine, which functions as a sigma ligand agonist. Cocaine has also been demonstrated to act on the NMDA receptor and the D1 dopamine receptor.

Cocaine also blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. It also acts on binding sites of the sodium-dependent dopamine and serotonin transporters through mechanisms separate from its inhibition of their reuptake function; this local anesthetic activity places it in a functional class distinct from its own derived phenyltropane analogues, which lack it. In addition, cocaine binds to some extent at the site of the κ-opioid receptor.
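Because the pharmacodynamics above span many distinct targets, it can help to collect them in one place. The following snippet merely restates the targets and actions named in this section as a small lookup table; it is an illustrative summary, not an authoritative pharmacology reference.

```python
# Molecular targets of cocaine as described in this section (illustrative).
COCAINE_TARGETS = {
    "dopamine transporter (DAT)":       "reuptake blockade; dopamine accumulates in the synapse",
    "serotonin transporter (SERT)":     "reuptake blockade",
    "norepinephrine transporter (NET)": "reuptake blockade",
    "5-HT3 receptor":                   "antagonism (ligand-gated ion channel)",
    "5-HT2A/2B/2C receptors":           "involved in locomotor-activating effects",
    "sigma receptors":                  "agonism (sigma ligand)",
    "NMDA and D1 receptors":            "additional demonstrated actions",
    "voltage-gated sodium channels":    "blockade; basis of local anesthetic action",
    "kappa-opioid receptor":            "some binding",
}

for target, action in COCAINE_TARGETS.items():
    print(f"{target:35s} {action}")
```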
Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures. Recent research points to an important role of circadian mechanisms and clock genes in behavioral actions of cocaine. Cocaine is known to suppress hunger and appetite by increasing co-localization of sigma σ1R receptors and ghrelin GHS-R1a receptors at the neuronal cell surface, thereby increasing ghrelin-mediated signaling of satiety, and possibly via other effects on appetitive hormones. Chronic users may lose their appetite and can experience severe malnutrition and significant weight loss. Cocaine's effects are further shown to be potentiated when it is used in conjunction with new surroundings and stimuli, and otherwise novel environs.

Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride. Street cocaine is often adulterated or "cut" with talc, lactose, sucrose, glucose, mannitol, inositol, caffeine, procaine, phencyclidine, phenytoin, lignocaine, strychnine, levamisole, amphetamine, or heroin. Crack cocaine looks like irregularly shaped white rocks.

Cocaine, a tropane alkaloid, is a weakly alkaline compound and can therefore combine with acidic compounds to form salts. The hydrochloride (HCl) salt of cocaine is by far the most commonly encountered, although the sulfate (SO4) and the nitrate (NO3) salts are occasionally seen. Different salts dissolve to a greater or lesser extent in various solvents: the hydrochloride salt is polar in character and is quite soluble in water.

As the name implies, "freebase" is the base form of cocaine, as opposed to the salt form. It is practically insoluble in water, whereas the hydrochloride salt is water-soluble. Smoking freebase cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung tissue and liver tissue. Pure cocaine is prepared by neutralizing its compounding salt with an alkaline solution, which will precipitate non-polar basic cocaine. It is further refined through aqueous-solvent liquid–liquid extraction.

Crack is usually smoked in a glass pipe, and once inhaled, it passes from the lungs directly to the central nervous system, producing an almost immediate "high" that can be very powerful – this initial crescendo of stimulation is known as a "rush". This is followed by an equally intense low, leaving the user craving more of the drug. Addiction to crack usually occurs within four to six weeks, much more rapidly than with regular cocaine.

Powder cocaine (cocaine hydrochloride) must be heated to a high temperature (about 197 °C) to vaporize, and considerable decomposition/burning occurs at these high temperatures. This effectively destroys some of the cocaine and yields a sharp, acrid, and foul-tasting smoke. Cocaine base/crack can be smoked because it vaporizes with little or no decomposition at 98 °C (208 °F), which is below the boiling point of water.
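The contrast in smokability comes down to the two temperatures quoted above; a one-function sketch converts and compares them (pure arithmetic on the figures in the text).

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

# Figures from the text: the hydrochloride salt must reach ~197 C (with heavy
# decomposition), while the free base vaporizes at 98 C.
for label, temp_c in [("cocaine hydrochloride", 197.0), ("free base / crack", 98.0)]:
    print(f"{label}: {temp_c:.0f} C = {c_to_f(temp_c):.0f} F")
# 197 C = 387 F; 98 C = 208 F -- just below water's boiling point (100 C / 212 F),
# which is why the free base can be vaporized with little decomposition.
```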
Crack is a lower purity form of free-base cocaine that is usually produced by neutralization of cocaine hydrochloride with a solution of baking soda (sodium bicarbonate, NaHCO3) and water, producing a very hard/brittle, off-white-to-brown colored, amorphous material that contains sodium carbonate, entrapped water, and other by-products as the main impurities. The name "crack" comes from the crackling sound produced when the cocaine and its impurities (i.e. water, sodium bicarbonate) are heated past the point of vaporization.

Coca herbal infusion (also referred to as coca tea) is used in coca-leaf producing countries much as any herbal medicinal infusion would be elsewhere in the world. The free and legal commercialization of dried coca leaves under the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink having medicinal powers. In Peru, the National Coca Company, a state-run corporation, sells cocaine-infused teas and other medicinal products and also exports leaves to the U.S. for medicinal use. Visitors to the city of Cuzco in Peru and La Paz in Bolivia are greeted with the offering of coca leaf infusions (prepared in teapots with whole coca leaves), purportedly to help the newly arrived traveler overcome the malaise of high altitude sickness. The effects of drinking coca tea are mild stimulation and mood lift. Coca tea has also been promoted as an adjuvant for the treatment of cocaine dependence. One study of coca leaf infusion used with counseling in the treatment of 23 addicted coca-paste smokers in Lima, Peru, found that the relapse rate fell from an average of 4.35 times per month before coca tea treatment to one during treatment, and that the duration of abstinence increased from an average of 32 days before treatment to 217.2 days during treatment. This suggests that coca leaf infusion plus counseling may be effective at preventing relapse during cocaine addiction treatment.

There is little information on the pharmacological and toxicological effects of consuming coca tea. A chemical analysis by solid-phase extraction and gas chromatography–mass spectrometry (SPE-GC/MS) of Peruvian and Bolivian tea bags indicated the presence of significant amounts of cocaine, the metabolite benzoylecgonine, ecgonine methyl ester and trans-cinnamoylcocaine in coca tea bags and coca tea. Urine specimens were also analyzed from an individual who consumed one cup of coca tea, and it was determined that enough cocaine and cocaine-related metabolites were present to produce a positive drug test.

The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. Willstätter's synthesis derived cocaine from tropinone. Since then, Robert Robinson and Edward Leete have made significant contributions to the mechanism of the synthesis.

The additional carbon atoms required for the synthesis of cocaine are derived from acetyl-CoA, by addition of two acetyl-CoA units to the N-methyl-Δ-pyrrolinium cation. The first addition is a Mannich-like reaction with the enolate anion from acetyl-CoA acting as a nucleophile towards the pyrrolinium cation. The second addition occurs through a Claisen condensation. This produces a racemic mixture of the 2-substituted pyrrolidine, with the retention of the thioester from the Claisen condensation.
In the formation of tropinone from racemic ethyl [2,3-13C2]-4-(N-methyl-2-pyrrolidinyl)-3-oxobutanoate, there is no preference for either stereoisomer. In cocaine biosynthesis, however, only the (S)-enantiomer can cyclize to form the tropane ring system of cocaine. The stereoselectivity of this reaction, which arises from the extra chiral center at C-2, was further investigated through study of prochiral methylene hydrogen discrimination. The cyclization occurs through an oxidation, which regenerates the pyrrolinium cation, formation of an enolate anion, and an intramolecular Mannich reaction. The tropane ring system undergoes hydrolysis, SAM-dependent methylation, and reduction via NADPH for the formation of methylecgonine. The benzoyl moiety required for the formation of the cocaine diester is synthesized from phenylalanine via cinnamic acid. Benzoyl-CoA then combines the two units to form cocaine.

The biosynthesis begins with L-glutamine, which is converted to L-ornithine in plants. The major contribution of L-ornithine and L-arginine as precursors to the tropane ring was confirmed by Edward Leete. Ornithine then undergoes a pyridoxal phosphate-dependent decarboxylation to form putrescine. In some animals, the urea cycle derives putrescine from ornithine: L-ornithine is converted to L-arginine, which is then decarboxylated via PLP to form agmatine; hydrolysis of the imine gives N-carbamoylputrescine, followed by hydrolysis of the urea to form putrescine. The separate pathways of converting ornithine to putrescine in plants and animals have converged. A SAM-dependent N-methylation of putrescine gives the N-methylputrescine product, which then undergoes oxidative deamination by the action of diamine oxidase to yield the aminoaldehyde. Schiff base formation then yields the N-methyl-Δ-pyrrolinium cation.

The biosynthesis of the tropane alkaloid is still not fully understood. Hemscheidt proposes that Robinson's acetonedicarboxylate emerges as a potential intermediate for this reaction. Condensation of N-methylpyrrolinium and acetonedicarboxylate would generate the oxobutyrate. Decarboxylation leads to tropane alkaloid formation.

The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These plant species all contain two types of the reductase enzymes, tropinone reductase I (TRI) and tropinone reductase II (TRII). TRI produces tropine and TRII produces pseudotropine. Due to differing kinetic and pH/activity characteristics of the enzymes, and the 25-fold higher activity of TRI over TRII, the majority of tropinone reduction proceeds through TRI to form tropine.

In 2022, genetically modified Nicotiana benthamiana plants were shown to produce about 25% of the amount of cocaine found in a coca plant.

Cocaine and its major metabolites may be quantified in blood, plasma, or urine to monitor for use, confirm a diagnosis of poisoning, or assist in the forensic investigation of a traffic or other criminal violation or sudden death. Most commercial cocaine immunoassay screening tests cross-react appreciably with the major cocaine metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. When interpreting the results of a test, it is important to consider the cocaine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate a cocaine-naive individual, and often has high baseline values of the metabolites in their system.
Cautious interpretation of testing results may allow a distinction between passive or active usage, and between smoking versus other routes of administration. Cocaine may be detected by law enforcement using the Scott reagent. The test can easily generate false positives for common substances and must be confirmed with a laboratory test. Approximate cocaine purity can be determined using 1 mL of 2% cupric sulfate pentahydrate in dilute HCl, 1 mL of 2% potassium thiocyanate, and 2 mL of chloroform. The shade of brown shown by the chloroform is proportional to the cocaine content. This test is not cross-sensitive to heroin, methamphetamine, benzocaine, procaine and a number of other drugs, but other chemicals could cause false positives.

According to a 2016 United Nations report, England and Wales together have the highest rate of cocaine usage (2.4% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are Spain and Scotland (2.2%), the United States (2.1%), Australia (2.1%), Uruguay (1.8%), Brazil (1.75%), Chile (1.73%), the Netherlands (1.5%) and Ireland (1.5%).

Cocaine is the second most popular illegal recreational drug in Europe (behind cannabis). Since the mid-1990s, overall cocaine usage in Europe has been on the rise, but usage rates and attitudes tend to vary between countries. European countries with the highest usage rates are the United Kingdom, Spain, Italy, and the Republic of Ireland. Approximately 17 million Europeans (5.1%) have used cocaine at least once and 3.5 million (1.1%) in the last year. About 1.9% (2.3 million) of young adults (15–34 years old) have used cocaine in the last year (latest data available as of 2018). Usage is particularly prevalent among this demographic: 4% to 7% of males have used cocaine in the last year in Spain, Denmark, the Republic of Ireland, Italy, and the United Kingdom. The ratio of male to female users is approximately 3.8:1, but this statistic varies from 1:1 to 13:1 depending on country. In 2014, London had the highest amount of cocaine in its sewage out of 50 European cities.

Cocaine is the second most popular illegal recreational drug in the United States (behind cannabis), and the U.S. is the world's largest consumer of cocaine. Its users span different ages, races, and professions. In the 1970s and 1980s, the drug became particularly popular in disco culture, as cocaine usage was very common in discos such as Studio 54.

Indigenous peoples of South America have chewed the leaves of Erythroxylon coca (a plant that contains vital nutrients as well as numerous alkaloids, including cocaine) for over a thousand years. The coca leaf was, and still is, chewed almost universally by some indigenous communities. The remains of coca leaves have been found with ancient Peruvian mummies, and pottery from the time period depicts humans with bulged cheeks, indicating the presence of something on which they are chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation.

When the Spanish arrived in South America, the conquistadors at first banned coca as "an evil agent of the devil". But after discovering that without the coca the locals were barely able to work, the conquistadors legalized and taxed the leaf, taking 10% off the value of each crop.
In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment": When they wished to make themselves drunk and out of judgment they chewed a mixture of tobacco and coca leaves which make them go as they were out of their wittes.

In 1609, Padre Blas Valera wrote: Coca protects the body from many ailments, and our doctors use it in powdered form to reduce the swelling of wounds, to strengthen broken bones, to expel cold from the body or prevent it from entering, and to cure rotten wounds or sores that are full of maggots. And if it does so much for outward ailments, will not its singular virtue have even greater effect in the entrails of those who eat it?

Although the stimulant and hunger-suppressant properties of coca had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Various European scientists had attempted to isolate cocaine, but none had been successful for two reasons: the knowledge of chemistry required was insufficient at the time, and contemporary conditions of sea-shipping from South America could degrade the cocaine in the plant samples available to European chemists.

The cocaine alkaloid was first isolated by the German chemist Friedrich Gaedcke in 1855. Gaedcke named the alkaloid "erythroxyline", and published a description in the journal Archiv der Pharmazie.

In 1856, Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the Novara (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed on the leaves to Albert Niemann, a PhD student at the University of Göttingen in Germany, who then developed an improved purification process. Niemann described every step he took to isolate cocaine in his dissertation titled Über eine neue organische Base in den Cocablättern (On a New Organic Base in the Coca Leaves), which was published in 1860 and earned him his Ph.D. He wrote of the alkaloid's "colourless transparent prisms" and said that "Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue." Niemann named the alkaloid "cocaine" from "coca" (from Quechua "kúka") plus the suffix "-ine".

The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. It was the first biomimetic synthesis of an organic structure recorded in academic chemical literature. The synthesis started from tropinone, a related natural product, and took five steps. Because of the former use of cocaine as a local anesthetic, a suffix "-caine" was later extracted and used to form names of synthetic local anesthetics.

With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant. In 1879, Vassili von Anrep, of the University of Würzburg, devised an experiment to demonstrate the analgesic properties of the newly discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution, with the other containing merely saltwater. He then submerged a frog's legs into the two jars, one leg in the treatment and one in the control solution, and proceeded to stimulate the legs in several different ways.
The leg that had been immersed in the cocaine solution reacted very differently from the leg that had been immersed in saltwater. Karl Koller (a close associate of Sigmund Freud, who would write about cocaine later) experimented with cocaine for ophthalmic usage. In an infamous experiment in 1884, he experimented upon himself by applying a cocaine solution to his own eye and then pricking it with pins. His findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Leonard Corning demonstrated peridural anesthesia. 1898 saw Heinrich Quincke use cocaine for spinal anesthesia.

In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the local indigenous peoples. He proceeded to experiment on himself and upon his return to Milan, he wrote a paper in which he described the effects. In this paper, he declared coca and cocaine (at the time they were assumed to be the same) as being useful medicinally, in the treatment of "a furred tongue in the morning, flatulence, and whitening of the teeth."

A chemist named Angelo Mariani who read Mantegazza's paper became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing a wine called Vin Mariani, which had been treated with coca leaves, to become coca wine. The ethanol in wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink's effect. It contained 6 mg cocaine per ounce of wine, but Vin Mariani which was to be exported contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States.

A "pinch of coca leaves" was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906 when the Pure Food and Drug Act was passed. In 1879 cocaine began to be used to treat morphine addiction.

Cocaine was introduced into clinical use as a local anesthetic in Germany in 1884, about the same time as Sigmund Freud published his work Über Coca, in which he wrote that cocaine causes:

Exhilaration and lasting euphoria, which in no way differs from the normal euphoria of the healthy person. You perceive an increase of self-control and possess more vitality and capacity for work. In other words, you are simply normal, and it is soon hard to believe you are under the influence of any drug. Long intensive physical work is performed without any fatigue. This result is enjoyed without any of the unpleasant after-effects that follow exhilaration brought about by alcoholic beverages. No craving for the further use of cocaine appears after the first, or even after repeated taking of the drug.

By 1885 the U.S. manufacturer Parke-Davis sold coca-leaf cigarettes and cheroots, a cocaine inhalant, a Coca Cordial, cocaine crystals, and cocaine solution for intravenous injection. The company promised that its cocaine products would "supply the place of food, make the coward brave, the silent eloquent and render the sufferer insensitive to pain."

By the late Victorian era, cocaine use had appeared as a vice in literature. For example, it was injected by Arthur Conan Doyle's fictional Sherlock Holmes, generally to offset the boredom he felt when he was not working on a case.
In early 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful. Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers.

In 1909, Ernest Shackleton took "Forced March" brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole.

In the 1931 song "Minnie the Moocher", Cab Calloway heavily references cocaine use. He uses the phrase "kicking the gong around", slang for cocaine use; describes titular character Minnie as "tall and skinny"; and describes Smokey Joe as "cokey". In the 1932 comedy musical film The Big Broadcast, Cab Calloway performs the song with his orchestra and mimes snorting cocaine in between verses.

During the mid-1940s, amidst World War II, cocaine was considered for inclusion as an ingredient of a future generation of 'pep pills' for the German military, code named D-IX.

In modern popular culture, references to cocaine are common. The drug has a glamorous image associated with the wealthy, famous and powerful, and is said to make users "feel rich and beautiful". In addition, the pace of modern society, such as in finance, gives many the incentive to make use of the drug.

In many countries, cocaine is a popular recreational drug. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. The use of the powder form has stayed relatively constant, experiencing a new height of use during the late 1990s and early 2000s in the U.S., and has become much more popular in the last few years in the UK. Cocaine use is prevalent across all socioeconomic strata, spanning ages, demographics, and economic, social, political, and religious groups and livelihoods. The estimated U.S. cocaine market exceeded US$70 billion in street value for the year 2005, exceeding the revenues of corporations such as Starbucks. Cocaine's status as a club drug shows its immense popularity among the "party crowd".

In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. An American representative in the World Health Assembly blocked the publication of the study, because it seemed to make a case for the positive uses of cocaine. An excerpt of the report strongly conflicted with accepted paradigms, for example, "that occasional cocaine use does not typically lead to severe or even minor physical or social problems." In the sixth meeting of the B committee, the US representative threatened that "If World Health Organization activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study was recovered and published in 2010, including profiles of cocaine use in 20 countries, but the profiles are unavailable as of 2015.

In October 2010 it was reported that the use of cocaine in Australia had doubled since monitoring began in 2003.

A problem with illegal cocaine use, especially in the higher volumes used to combat fatigue (rather than increase euphoria) by long-term users, is the risk of ill effects or damage caused by the compounds used in adulteration.
Cutting or "stepping on" the drug is commonplace, using compounds that simulate ingestion effects, such as Novocain (procaine), which produces temporary anesthesia (many users believe a strong numbing effect is the sign of strong or pure cocaine), or ephedrine and similar stimulants, which produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine, or glucose; introducing active adulterants gives the illusion of purity and "stretches" the product, so a dealer can sell more than would otherwise be possible. Sugar adulterants also let the dealer sell the product at a higher price because of the illusion of purity, and sell more of it at that price, enabling dealers to significantly increase revenue with little additional cost for the adulterants. A 2007 study by the European Monitoring Centre for Drugs and Drug Addiction showed that purity levels for street-purchased cocaine were often under 5% and on average under 50%.

The production, distribution, and sale of cocaine products is restricted (and illegal in most contexts) in most countries, as regulated by the Single Convention on Narcotic Drugs and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States, the manufacture, importation, possession, and distribution of cocaine are additionally regulated by the 1970 Controlled Substances Act. Some countries, such as Peru and Bolivia, permit the cultivation of coca leaf for traditional consumption by the local indigenous population but nevertheless prohibit the production, sale, and consumption of cocaine. Provisions as to how much a coca farmer can yield annually are protected by laws such as the Bolivian Cato accord. In addition, some parts of Europe, the United States, and Australia allow processed cocaine for medicinal uses only.

Cocaine is a Schedule 8 prohibited substance in Australia under the Poisons Standard (July 2016). A Schedule 8 substance is a controlled drug: one that should be available for use but requires restriction of manufacture, supply, distribution, possession, and use to reduce abuse, misuse, and physical or psychological dependence. In Western Australia, under the Misuse of Drugs Act 1981, 4.0 g of cocaine is the amount of prohibited drug that determines the court of trial, 2.0 g is the amount required for the presumption of intention to sell or supply, and 28.0 g is the amount required for a presumption of drug trafficking.

The US federal government instituted a national labeling requirement for cocaine and cocaine-containing products through the Pure Food and Drug Act of 1906. The next important federal regulation was the Harrison Narcotics Tax Act of 1914. While this act is often seen as the start of prohibition, the act itself was not actually a prohibition on cocaine, but instead set up a regulatory and licensing regime. The Harrison Act did not recognize addiction as a treatable condition, and therefore the therapeutic use of cocaine, heroin, or morphine for such individuals was outlawed – leading a 1915 editorial in the journal American Medicine to remark that the addict "is denied the medical care he urgently needs, open, above-board sources from which he formerly obtained his drug supply are closed to him, and he is driven to the underworld where he can get his drug, but of course, surreptitiously and in violation of the law."
The Harrison Act left manufacturers of cocaine untouched so long as they met certain purity and labeling standards. Although cocaine was typically illegal to sell and legal outlets were rarer, the quantities of legal cocaine produced declined very little. Legal cocaine quantities did not decrease until the Jones–Miller Act of 1922 put serious restrictions on cocaine manufacturers.

Before the early 1900s, the primary problem caused by cocaine use was portrayed by newspapers to be addiction, not violence or crime, and the cocaine user was represented as an upper- or middle-class White person. In 1914, The New York Times published an article titled "Negro Cocaine 'Fiends' Are a New Southern Menace", portraying Black cocaine users as dangerous and able to withstand wounds that would normally be fatal. The Anti-Drug Abuse Act of 1986 mandated prison sentences for 500 grams of powdered cocaine and 5 grams of crack cocaine. In the National Survey on Drug Use and Health, Whites reported a higher rate of powdered cocaine use, and Blacks reported a higher rate of crack cocaine use.

In 2004, according to the United Nations, 589 tonnes of cocaine were seized globally by law enforcement authorities. Colombia seized 188 t, the United States 166 t, Europe 79 t, Peru 14 t, Bolivia 9 t, and the rest of the world 133 t.

Colombia is as of 2019 the world's largest cocaine producer, with production more than tripling since 2013. Three-quarters of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia and from locally grown coca. There was a 28% increase in the amount of potentially harvestable coca plants grown in Colombia in 1998. This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, only makes up a small fragment of total coca production, most of which is used for the illegal drug trade.

An interview with a coca farmer published in 2003 described a mode of production by acid-base extraction that has changed little since 1905. Roughly 625 pounds (283 kg) of leaves were harvested per hectare, six times per year. The leaves were dried for half a day, then chopped into small pieces with a string trimmer and sprinkled with a small amount of powdered cement (replacing sodium carbonate from former times). Several hundred pounds of this mixture were soaked in 50 US gallons (190 L) of gasoline for a day, then the gasoline was removed and the leaves were pressed for the remaining liquid, after which they could be discarded. Then battery acid (weak sulfuric acid) was used, one bucket per 55 lb (25 kg) of leaves, to create a phase separation in which the cocaine free base in the gasoline was acidified and extracted into a few buckets of "murky-looking smelly liquid". Once powdered caustic soda was added to this, the cocaine precipitated and could be removed by filtration through a cloth. The resulting material, when dried, was termed pasta and sold by the farmer. The 3,750 pounds (1,700 kg) yearly harvest of leaves from a hectare produced 6 lb (2.5 kg) of pasta, approximately 40–60% cocaine. Repeated recrystallizations from solvents, producing pasta lavada and eventually crystalline cocaine, were performed at specialized laboratories after the sale.
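The farmer's figures are internally consistent, as a few lines of arithmetic confirm; the sketch below simply recomputes the stated quantities (all inputs are from the interview, including the 40–60% cocaine content of pasta).

```python
LB_PER_KG = 2.2046

leaves_lb_per_harvest = 625        # per hectare, per harvest (from the text)
harvests_per_year = 6
yearly_lb = leaves_lb_per_harvest * harvests_per_year
yearly_kg = yearly_lb / LB_PER_KG
print(f"yearly leaf harvest: {yearly_lb} lb = {yearly_kg:,.0f} kg")  # 3750 lb ~ 1,701 kg

pasta_kg = 2.5                     # 6 lb of pasta per hectare-year (from the text)
print(f"leaf-to-pasta yield: {pasta_kg / yearly_kg:.2%}")            # ~0.15%

# At the stated 40-60% cocaine content, one hectare-year of leaves yields
# roughly 1.0-1.5 kg of cocaine before further refinement.
print(f"cocaine per hectare-year: {pasta_kg * 0.4:.1f}-{pasta_kg * 0.6:.1f} kg")
```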
Attempts to eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca-growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also been shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded in numerous smaller fields in Colombia, rather than the larger plantations.

The cultivation of coca has become an attractive economic decision for many growers due to the combination of several factors, including the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damages to non-drug farms, and the spread of new strains of the coca plant, together with persistent worldwide demand.

The latest estimate provided by the U.S. authorities on the annual production of cocaine in Colombia refers to 290 metric tons. As of the end of 2011, the seizure operations of Colombian cocaine carried out in different countries had totaled 351.8 metric tons of cocaine, i.e. 121.3% of Colombia's annual production according to the U.S. Department of State's estimates.

Synthesizing cocaine could eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine, but this is rarely done. Natural cocaine remains the lowest-cost and highest-quality supply of cocaine. Formation of inactive stereoisomers (cocaine has four chiral centres – 1R, 2R, 3S, and 5S – two of them mutually dependent, so only 2^3 = 8 stereoisomers are possible rather than the 2^4 = 16 that four independent centres would allow), plus synthetic by-products, limits the yield and purity.

Organized criminal gangs operating on a large scale dominate the cocaine trade. Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, and Peru, and smuggled into the United States and Europe, where it is sold at huge markups – in the US, usually $80–120 for 1 gram and $250–300 for 3.5 grams (1/8 of an ounce, or an "eight ball"); the United States is the world's largest consumer of cocaine.

The primary cocaine importation points in the United States have been in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.–Mexico border. Sixty-five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Florida. As of 2015, the Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like cocaine into the United States and trafficking them throughout the United States.

Cocaine traffickers from Colombia and Mexico have established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug using a variety of smuggling techniques to U.S. markets. These include airdrops of 500 to 700 kg (1,100 to 1,500 lb) in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers of 500 to 2,000 kg (1,100 to 4,400 lb), and the commercial shipment of tonnes of cocaine through the port of Miami.
Another route of cocaine traffic goes through Chile, which is primarily used for cocaine produced in Bolivia, since the nearest seaports lie in northern Chile. The arid Bolivia–Chile border is easily crossed by 4×4 vehicles that then head to the seaports of Iquique and Antofagasta. While the price of cocaine is higher in Chile than in Peru and Bolivia, the final destination is usually Europe, especially Spain, where drug dealing networks exist among South American immigrants.

Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as "mules" (or "mulas"), who cross a border either legally, for example, through a port or airport, or illegally elsewhere. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body. If the mule gets through without being caught, the gangs will reap most of the profits. If caught, gangs will sever all links and the mule will usually stand trial for trafficking alone.

Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels, such as go-fast boats, as those used by the local populations.

As reported on 20 March 2008, sophisticated drug subs are the latest tool drug runners are using to bring cocaine north from Colombia. Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them.

Cocaine is readily available in all major countries' metropolitan areas. According to the Summer 1998 Pulse Check, published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper, three and a half times more powerful, and lasts 12–24 times longer with each dose. Nevertheless, the number of cocaine users remains high, with a large concentration among urban youth.

In addition to the amounts previously mentioned, cocaine can be sold in "bill sizes": as of 2007, for example, $10 might purchase a "dime bag", a very small amount (0.1–0.15 g) of cocaine. These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic region. In 2008, the European Monitoring Centre for Drugs and Drug Addiction reported that the typical retail price of cocaine varied between €50 and €75 per gram in most European countries, although Cyprus, Romania, Sweden, and Turkey reported much higher values.

World annual cocaine consumption, as of 2000, stood at around 600 tonnes, with the United States consuming around 300 t (50% of the total), Europe about 150 t (25%), and the rest of the world the remaining 150 t (25%). It is estimated that 1.5 million people in the United States used cocaine in 2010, down from 2.4 million in 2006.
Conversely, cocaine use appears to be increasing in Europe with the highest prevalences in Spain, the United Kingdom, Italy, and Ireland. The 2010 UN World Drug Report concluded that "it appears that the North American cocaine market has declined in value from US$47 billion in 1998 to US$38 billion in 2008. Between 2006 and 2008, the value of the market remained basically stable".
[ { "paragraph_id": 0, "text": "Cocaine (from French: cocaïne, from Spanish: coca, ultimately from Quechua: kúka) is a tropane alkaloid that acts as a central nervous system (CNS) stimulant. As an extract, it is mainly used recreationally, and often illegally for its euphoric and rewarding effects. It is also used in medicine by Indigenous South Americans for various purposes and rarely, but more formally, as a local anaesthetic or diagnostic tool by medical practitioners in more developed countries. It is primarily obtained from the leaves of two Coca species native to South America: Erythroxylum coca and E. novogranatense. After extraction from the plant, and further processing into cocaine hydrochloride (powdered cocaine), the drug is administered by being either snorted, applied topically to the mouth, or dissolved and injected into a vein. It can also then be turned into free base form (typically crack cocaine), in which it can be heated until sublimated and then the vapours can be inhaled.", "title": "" }, { "paragraph_id": 1, "text": "Cocaine stimulates the reward pathway in the brain. Mental effects may include an intense feeling of happiness, sexual arousal, loss of contact with reality, or agitation. Physical effects may include a fast heart rate, sweating, and dilated pupils. High doses can result in high blood pressure or high body temperature. Onset of effects can begin within seconds to minutes of use, depending on method of delivery, and can last between five and ninety minutes. As cocaine also has numbing and blood vessel constriction properties, it is occasionally used during surgery on the throat or inside of the nose to control pain, bleeding, and vocal cord spasm.", "title": "" }, { "paragraph_id": 2, "text": "Cocaine crosses the blood–brain barrier via a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. Cocaine blocks the dopamine transporter, inhibiting reuptake of dopamine from the synaptic cleft into the pre-synaptic axon terminal; the higher dopamine levels in the synaptic cleft increase dopamine receptor activation in the post-synaptic neuron, causing euphoria and arousal. Cocaine also blocks the serotonin transporter and norepinephrine transporter, inhibiting reuptake of serotonin and norepinephrine from the synaptic cleft into the pre-synaptic axon terminal and increasing activation of serotonin receptors and norepinephrine receptors in the post-synaptic neuron, contributing to the mental and physical effects of cocaine exposure.", "title": "" }, { "paragraph_id": 3, "text": "A single dose of cocaine induces tolerance to the drug's effects. Repeated use is likely to result in addiction. Addicts who abstain from cocaine may experience craving and drug withdrawal symptoms, with depression, decreased libido, decreased ability to feel pleasure, and fatigue being most common. Use of cocaine increases the overall risk of death, and intravenous use potentially increases the risk of trauma and infectious diseases such as blood infections and HIV through the use of shared paraphernalia. It also increases risk of stroke, heart attack, cardiac arrhythmia, lung injury (when smoked), and sudden cardiac death. Illicitly sold cocaine can be adulterated with fentanyl, local anesthetics, levamisole, cornstarch, quinine, or sugar, which can result in additional toxicity. 
In 2017, the Global Burden of Disease study found that cocaine use caused around 7,300 deaths annually.

Uses

Coca leaves have been used by Andean civilizations since ancient times. In the ancient Wari culture, Incan culture, and modern successor indigenous cultures of the Andes mountains, coca leaves are chewed, taken orally in the form of a tea, or prepared in a sachet wrapped around alkaline burnt ashes and held in the mouth against the inner cheek; they have traditionally been used to combat the effects of cold, hunger, and altitude sickness. Cocaine was first isolated from the leaves in 1860.

Globally, in 2019, cocaine was used by an estimated 20 million people (0.4% of adults aged 15 to 64 years). The highest prevalence of cocaine use was in Australia and New Zealand (2.1%), followed by North America (2.1%), Western and Central Europe (1.4%), and South and Central America (1.0%). Since 1961, the Single Convention on Narcotic Drugs has required countries to make recreational use of cocaine a crime. In the United States, cocaine is regulated as a Schedule II drug under the Controlled Substances Act, meaning that it has a high potential for abuse but an accepted medical use. While rarely used medically today, its accepted uses are as a topical local anesthetic for the upper respiratory tract and to reduce bleeding in the mouth, throat, and nasal cavities.

Cocaine eye drops are frequently used by neurologists when examining patients suspected of having Horner syndrome, in which sympathetic innervation to the eye is blocked. In a healthy eye, cocaine will stimulate the sympathetic nerves by inhibiting norepinephrine reuptake, and the pupil will dilate; if the patient has Horner syndrome, the sympathetic nerves are blocked, and the affected eye will remain constricted or dilate to a lesser extent than the opposing (unaffected) eye, which also receives the eye-drop test. If both eyes dilate equally, the patient does not have Horner syndrome. (A schematic sketch of this decision rule follows below.)

Topical cocaine is sometimes used as a local numbing agent and vasoconstrictor to help control pain and bleeding in surgery of the nose, mouth, throat, or lacrimal duct. Although some absorption and systemic effects may occur, the use of cocaine as a topical anesthetic and vasoconstrictor is generally safe, only rarely causing cardiovascular toxicity, glaucoma, or pupil dilation. Occasionally, cocaine is mixed with adrenaline and sodium bicarbonate and used topically for surgery, a formulation called Moffett's solution.

Cocaine hydrochloride (Goprelto), an ester local anesthetic, was approved for medical use in the United States in December 2017, and is indicated for the introduction of local anesthesia of the mucous membranes for diagnostic procedures and surgeries on or through the nasal cavities of adults. Cocaine hydrochloride (Numbrino) was approved for medical use in the United States in January 2020.

The most common adverse reactions in people treated with Goprelto are headache and epistaxis. The most common adverse reactions in people treated with Numbrino are hypertension, tachycardia, and sinus tachycardia.
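The eye-drop comparison in the Horner-syndrome passage above amounts to a simple decision rule. The sketch below is a minimal illustration of that rule, not clinical software; the function name and the threshold value are invented for the example.

```python
def horner_screen(affected_dilation_mm: float,
                  control_dilation_mm: float,
                  threshold_mm: float = 0.5) -> str:
    """Illustrative reading of the cocaine eye-drop test described above.

    Both eyes receive the drop. In a healthy eye, blocked norepinephrine
    reuptake stimulates the sympathetic pathway and the pupil dilates.
    An eye with interrupted sympathetic innervation (Horner syndrome)
    dilates less or not at all. threshold_mm is an assumed example
    value, not a clinical cutoff.
    """
    if control_dilation_mm - affected_dilation_mm > threshold_mm:
        return "consistent with Horner syndrome in the affected eye"
    return "both pupils dilated comparably: Horner syndrome unlikely"

print(horner_screen(affected_dilation_mm=0.3, control_dilation_mm=2.0))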
Cocaine is a central nervous system stimulant. Its effects can last from 15 minutes to an hour; the duration depends on the amount taken and the route of administration. Cocaine can take the form of a fine white powder with a bitter taste. Crack cocaine is a smokeable form of cocaine made into small "rocks" by processing cocaine with sodium bicarbonate (baking soda) and water; it is called "crack" because of the crackling sound it makes when heated.

Cocaine use leads to increases in alertness, feelings of well-being and euphoria, increased energy and motor activity, and increased feelings of competence and sexuality.

Analysis of the correlations between the use of 18 psychoactive substances shows that cocaine use correlates both with other "party drugs" (such as ecstasy or amphetamines) and with heroin and benzodiazepine use, and can be considered a bridge between the use of different groups of drugs.

It is legal for people to use coca leaves in some Andean nations, such as Peru and Bolivia, where they are chewed, consumed in the form of tea, or sometimes incorporated into food products. Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the buccal pouch (between gum and cheek, much as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Coca tea, an infusion of coca leaves, is also a traditional method of consumption and has often been recommended for travelers in the Andes to prevent altitude sickness; its actual effectiveness has never been systematically studied.

In 1986 an article in the Journal of the American Medical Association revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion sold as "Health Inca Tea". While the packaging claimed it had been "decocainized", no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and mood elevation, and that the tea was essentially harmless.

Nasal insufflation (known colloquially as "snorting", "sniffing", or "blowing") is a common method of ingestion of recreational powdered cocaine. The drug coats and is absorbed through the mucous membranes lining the nasal passages. Cocaine's desired euphoric effects are delayed by about five minutes when snorted, because absorption is slowed by the drug's constricting effect on the blood vessels of the nose. Insufflation also gives the longest duration of effects (60–90 minutes). When cocaine is insufflated, absorption through the nasal membranes is approximately 30–60%. In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes.
Any damage to the inside of the nose is due to cocaine constricting blood vessels, and therefore restricting blood and oxygen/nutrient flow, to that area.

Rolled-up banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. The cocaine typically is poured onto a flat, hard surface (such as a mobile phone screen, mirror, CD case, or book), divided into "bumps", "lines", or "rails", and then insufflated. A 2001 study reported that the sharing of straws used to "snort" cocaine can spread blood diseases such as hepatitis C.

With injection, subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually with doses over 120 milligrams), lasting two to five minutes and including tinnitus and audio distortion; this is colloquially referred to as a "bell ringer". In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes. The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also the danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used.

An injected mixture of cocaine and heroin, known as a "speedball", is a particularly dangerous combination, as the converse effects of the drugs can complement each other while also masking the symptoms of an overdose. It has been responsible for numerous deaths, including celebrities such as John Belushi, Chris Farley, Mitch Hedberg, River Phoenix, grunge singer Layne Staley, and actor Philip Seymour Hoffman. Experimentally, cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction.

The onset of cocaine's euphoric effects is fastest with inhalation, beginning after 3–5 seconds; however, inhalation gives the shortest duration of euphoria (5–15 minutes). Cocaine is smoked by inhaling the vapor produced when free base cocaine is heated to the point of sublimation. In a 2000 Brookhaven National Laboratory medical department study, based on self-reports of 32 people who used cocaine, "peak high" was found at a mean of 1.4 ± 0.5 minutes. Pyrolysis products that occur only when cocaine is heated or smoked have been shown to change its effect profile: anhydroecgonine methyl ester, when co-administered with cocaine, increases dopamine in the CPu and NAc brain regions and has M1 and M3 receptor affinity.
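The onset, peak, and duration figures scattered through the passages above can be collected into one small table. The values below simply restate the numbers reported in the text; where the text gives no figure, a dash is shown. This is illustrative only, not dosing guidance.

```python
# Figures as reported in the text above (approximate; illustrative only).
routes = {
    # route: (onset of effects, peak subjective effects, duration)
    "insufflation": ("~5 min delay", "14.6 min", "60-90 min"),
    "injection":    ("-",            "3.1 min",  "passes quickly"),
    "inhalation":   ("3-5 s",        "1.4 min",  "5-15 min"),
}

for route, (onset, peak, duration) in routes.items():
    print(f"{route:>12}: onset {onset}, peak {peak}, duration {duration}")
```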
Smoking freebase cocaine is often accomplished using a pipe made from a small glass tube, often taken from "love roses", small glass tubes with a paper rose that are promoted as romantic gifts; these are sometimes called "stems", "horns", "blasters", and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad, often called a "brillo" (actual Brillo Pads contain soap and are not used) or "chore" (named for Chore Boy brand copper scouring pads), serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. The effects, felt almost immediately after smoking, are very intense and do not last long, usually 2 to 10 minutes. When smoked, cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt.

Effects

Acute exposure to cocaine has many effects on humans, including euphoria, increases in heart rate and blood pressure, and increased cortisol secretion from the adrenal gland. In humans with acute exposure followed by continuous exposure at a constant blood concentration, acute tolerance to the chronotropic cardiac effects of cocaine begins after about 10 minutes, while acute tolerance to the euphoric effects begins after about one hour. With excessive or prolonged use, the drug can cause itching, fast heart rate, and paranoid delusions or sensations of insects crawling on the skin. Intranasal cocaine and crack use are both associated with pharmacological violence, and aggressive behavior may be displayed by both addicts and casual users. Cocaine can induce psychosis characterized by paranoia, impaired reality testing, hallucinations, irritability, and physical aggression. Cocaine intoxication can cause hyperawareness, hypervigilance, psychomotor agitation, and delirium, and consumption of large doses can cause violent outbursts, especially in those with preexisting psychosis. Crack-related violence is also systemic, relating to disputes between crack dealers and users. Acute exposure may induce cardiac arrhythmias, including atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, and ventricular fibrillation, and may also lead to angina, heart attack, and congestive heart failure. Cocaine overdose may cause seizures, abnormally high body temperature, a marked and potentially life-threatening elevation of blood pressure, abnormal heart rhythms, and death. Anxiety, paranoia, and restlessness can also occur, especially during the comedown. With excessive dosage, tremors, convulsions, and increased body temperature are observed. Severe cardiac adverse events, particularly sudden cardiac death, become a serious risk at high doses due to cocaine's blocking effect on cardiac sodium channels. Incidental exposure of the eye to sublimated cocaine while smoking crack cocaine can cause serious injury to the cornea and long-term loss of visual acuity.

Although it has been commonly asserted, the available evidence does not show that chronic use of cocaine is associated with broad cognitive deficits. Research is inconclusive on age-related loss of striatal dopamine transporter (DAT) sites, leaving open whether cocaine has neuroprotective or neurodegenerative effects on dopamine neurons.
Exposure to cocaine may lead to the breakdown of the blood–brain barrier.

Physical side effects of chronic smoking of cocaine include coughing up blood, bronchospasm, itching, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. Cocaine constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can also cause headaches and gastrointestinal complications such as abdominal pain and nausea. A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay; however, cocaine can cause involuntary tooth grinding (bruxism), which can deteriorate tooth enamel and lead to gingivitis. Additionally, stimulants like cocaine, methamphetamine, and even caffeine cause dehydration and dry mouth. Since saliva is an important mechanism for maintaining the oral pH level, people who use cocaine over a long period and do not hydrate sufficiently may experience demineralization of their teeth due to the pH of the tooth surface dropping too low (below 5.5). Cocaine use also promotes the formation of blood clots; this increase in clot formation is attributed to cocaine-associated increases in the activity of plasminogen activator inhibitor and to an increase in the number, activation, and aggregation of platelets.

Chronic intranasal usage can degrade the cartilage separating the nostrils (the nasal septum), eventually leading to its complete disappearance. As the cocaine is absorbed from cocaine hydrochloride, the remaining hydrochloride forms a dilute hydrochloric acid.

Illicitly sold cocaine may be contaminated with levamisole, which may accentuate cocaine's effects. Levamisole-adulterated cocaine has been associated with autoimmune disease.

Cocaine use leads to an increased risk of hemorrhagic and ischemic strokes, and also increases the risk of having a heart attack.

Relatives of persons with cocaine addiction have an increased risk of cocaine addiction themselves. Cocaine addiction occurs through ΔFosB overexpression in the nucleus accumbens, which results in altered transcriptional regulation in neurons within the nucleus accumbens. ΔFosB levels have been found to increase upon the use of cocaine, and each subsequent dose continues to increase ΔFosB levels with no ceiling of tolerance. Elevated levels of ΔFosB lead to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increase the number of dendritic branches and spines present on neurons in the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly and may be sustained for weeks after the last dose of the drug.

Transgenic mice exhibiting inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum show sensitized behavioural responses to cocaine: they self-administer cocaine at lower doses than controls but have a greater likelihood of relapse when the drug is withheld.
ΔFosB also increases the expression of the AMPA receptor subunit GluR2 and decreases the expression of dynorphin, thereby enhancing sensitivity to reward.

DNA damage is increased in the brains of rodents by administration of cocaine. During DNA repair of such damage, persistent chromatin alterations may occur, such as methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in cocaine addiction.

In humans, cocaine abuse may cause structural changes in brain connectivity, though it is unclear to what extent these changes are permanent.

Cocaine dependence develops after even brief periods of regular cocaine use, and produces a withdrawal state with emotional-motivational deficits upon cessation of use.

"Crack baby" is a term for a child born to a mother who used crack cocaine during her pregnancy. The threat that cocaine use during pregnancy poses to the fetus is now considered exaggerated: studies show that prenatal cocaine exposure (independent of other factors such as alcohol, tobacco, or the physical environment) has no appreciable effect on childhood growth and development. However, the official opinion of the National Institute on Drug Abuse of the United States warns about health risks while cautioning against stereotyping:

Many recall that "crack babies", or babies born to mothers who used crack cocaine while pregnant, were at one time written off by many as a lost generation. They were predicted to suffer from severe, irreversible damage, including reduced intelligence and social skills. It was later found that this was a gross exaggeration. However, the fact that most of these children appear normal should not be over-interpreted as indicating that there is no cause for concern. Using sophisticated technologies, scientists are now finding that exposure to cocaine during fetal development may lead to subtle, yet significant, later deficits in some children, including deficits in some aspects of cognitive performance, information-processing, and attention to tasks—abilities that are important for success in school.

There are also warnings about the threat of breastfeeding: the March of Dimes said "it is likely that cocaine will reach the baby through breast milk", and advises the following regarding cocaine use during pregnancy:

Cocaine use during pregnancy can affect a pregnant woman and her unborn baby in many ways. During the early months of pregnancy, it may increase the risk of miscarriage. Later in pregnancy, it can trigger preterm labor (labor that occurs before 37 weeks of pregnancy) or cause the baby to grow poorly. As a result, cocaine-exposed babies are more likely than unexposed babies to be born with low birth weight (less than 5.5 lb or 2.5 kg). Low-birthweight babies are 20 times more likely to die in their first month of life than normal-weight babies, and face an increased risk of lifelong disabilities such as mental retardation and cerebral palsy.
Cocaine-exposed babies also tend to have smaller heads, which generally reflect smaller brains. Some studies suggest that cocaine-exposed babies are at increased risk of birth defects, including urinary tract defects and, possibly, heart defects. Cocaine also may cause an unborn baby to have a stroke, irreversible brain damage, or a heart attack.

Persons with regular or problematic use of cocaine have a significantly higher rate of death, and are specifically at higher risk of traumatic deaths and deaths attributable to infectious disease.

Pharmacology

The extent of absorption of cocaine into the systemic circulation after nasal insufflation is similar to that after oral ingestion. The rate of absorption after nasal insufflation is limited by cocaine-induced vasoconstriction of capillaries in the nasal mucosa. Onset of absorption after oral ingestion is delayed because cocaine is a weak base with a pKa of 8.6, and is thus in an ionized form that is poorly absorbed from the acidic stomach but easily absorbed from the alkaline duodenum. The rate and extent of absorption from inhalation of cocaine are similar to or greater than with intravenous injection, as inhalation provides access directly to the pulmonary capillary bed. The delay in absorption after oral ingestion may account for the popular belief that cocaine bioavailability from the stomach is lower than after insufflation. Compared with ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects: snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes, and both are sustained for approximately 40–60 minutes after the peaks are attained.

Cocaine crosses the blood–brain barrier via both a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. As of September 2022, the gene or genes encoding the human proton-organic cation antiporter had not been identified.

Cocaine has a short elimination half-life of 0.7–1.5 hours and is extensively metabolized by plasma esterases and also by liver cholinesterases, with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, with lesser amounts of others such as ecgonine methyl ester (EME) and ecgonine. Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine (pOHBE), and m-hydroxybenzoylecgonine. If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene; studies suggest cocaethylene is more euphoric and has a higher cardiovascular toxicity than cocaine by itself.

Depending on liver and kidney function, cocaine metabolites are detectable in urine. Benzoylecgonine can be detected in urine within four hours after cocaine intake and remains detectable in concentrations greater than 150 ng/mL typically for up to eight days after cocaine is used.
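With the 0.7–1.5 hour elimination half-life quoted above, a one-compartment, first-order decay model gives a rough feel for how quickly the parent drug disappears (which is why testing targets the longer-lived metabolite benzoylecgonine). This is a deliberately minimal sketch; real pharmacokinetics involve distribution and active metabolites, and the starting concentration below is an arbitrary example value.

```python
# Minimal one-compartment, first-order elimination sketch.
# half_life_h values span the 0.7-1.5 h range quoted in the text;
# c0 is an arbitrary illustrative starting plasma concentration.

def remaining(c0_ng_ml: float, half_life_h: float, t_h: float) -> float:
    """Concentration after t_h hours of first-order decay."""
    return c0_ng_ml * 0.5 ** (t_h / half_life_h)

c0 = 300.0  # ng/mL, example only
for half_life in (0.7, 1.5):
    trace = ", ".join(f"t={t}h: {remaining(c0, half_life, t):6.1f} ng/mL"
                      for t in (0, 1, 2, 4, 8))
    print(f"half-life {half_life} h -> {trace}")
```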
Detection of cocaine metabolites in hair is possible in regular users until the sections of hair grown during the period of cocaine use are cut or fall out.

The pharmacodynamics of cocaine involve complex relationships among neurotransmitters (in rats, it inhibits monoamine uptake with ratios of approximately serotonin:dopamine = 2:3 and serotonin:norepinephrine = 2:5). The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine released during neural signaling is normally recycled via this transporter, which binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter, forming a complex that blocks the transporter's function; dopamine can no longer be taken back up and thus accumulates in the synaptic cleft. The increased concentration of dopamine in the synapse activates post-synaptic dopamine receptors, which makes the drug rewarding and promotes compulsive use.

Cocaine affects certain serotonin (5-HT) receptors; in particular, it has been shown to antagonize the 5-HT3 receptor, a ligand-gated ion channel. An overabundance of 5-HT3 receptors is reported in cocaine-conditioned rats, though the receptor's role is unclear. The 5-HT2 receptors (particularly the subtypes 5-HT2A, 5-HT2B, and 5-HT2C) are involved in the locomotor-activating effects of cocaine.

Cocaine has been demonstrated to bind so as to directly stabilize the DAT transporter in its open, outward-facing conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT: the tightly locked orientation of the cocaine molecule prevents that bond from forming. Research suggests that what matters for habituation is not so much the affinity for the transporter as the conformation and binding properties, that is, where and how on the transporter the molecule binds.

Sigma receptors are affected by cocaine, which functions as a sigma ligand agonist. Further receptors on which it has been demonstrated to act are the NMDA receptor and the D1 dopamine receptor.

Cocaine also blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. It also acts on binding sites of the sodium-dependent dopamine and serotonin transporters that are separate from its reuptake-inhibition mechanism; this local anesthetic activity places it in a class of functionality different from its own derived phenyltropane analogues, in which that activity has been removed. In addition, cocaine shows some binding to the κ-opioid receptor. Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures.
Recent research points to an important role of circadian mechanisms and clock genes in the behavioral actions of cocaine.

Cocaine is known to suppress hunger and appetite by increasing co-localization of sigma σ1R receptors and ghrelin GHS-R1a receptors at the neuronal cell surface, thereby increasing ghrelin-mediated signaling of satiety, and possibly via other effects on appetitive hormones. Chronic users may lose their appetite and can experience severe malnutrition and significant weight loss.

Cocaine's effects are further shown to be potentiated when it is used in conjunction with new surroundings and stimuli, and otherwise novel environs.

Chemistry

Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride. Street cocaine is often adulterated or "cut" with talc, lactose, sucrose, glucose, mannitol, inositol, caffeine, procaine, phencyclidine, phenytoin, lignocaine, strychnine, levamisole, amphetamine, or heroin.

Crack cocaine looks like irregularly shaped white rocks.

Cocaine, a tropane alkaloid, is a weakly alkaline compound and can therefore combine with acidic compounds to form salts. The hydrochloride (HCl) salt is by far the most commonly encountered, although the sulfate (SO4) and nitrate (NO3) salts are occasionally seen. Different salts dissolve to a greater or lesser extent in various solvents; the hydrochloride salt is polar in character and is quite soluble in water.

As the name implies, "freebase" is the base form of cocaine, as opposed to the salt form. It is practically insoluble in water, whereas the hydrochloride salt is water-soluble.

Smoking freebase cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung and liver tissue.

Pure cocaine is prepared by neutralizing its compounding salt with an alkaline solution, which precipitates non-polar basic cocaine. It is further refined through aqueous-solvent liquid–liquid extraction.

Crack is usually smoked in a glass pipe, and once inhaled, it passes from the lungs directly to the central nervous system, producing an almost immediate "high" that can be very powerful; this initial crescendo of stimulation is known as a "rush". It is followed by an equally intense low, leaving the user craving more of the drug. Addiction to crack usually occurs within four to six weeks, much more rapidly than with regular cocaine.

Powder cocaine (cocaine hydrochloride) must be heated to a high temperature (about 197 °C) before it vaporizes, and considerable decomposition and burning occur at such temperatures. This effectively destroys some of the cocaine and yields a sharp, acrid, foul-tasting smoke. Cocaine base (crack) can be smoked because it vaporizes with little or no decomposition at 98 °C (208 °F), which is below the boiling point of water.
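A quick conversion check on the two vaporization temperatures just quoted (about 197 °C for the hydrochloride salt, 98 °C for the free base); the code is a trivial sanity check of the figures in the text, nothing more.

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

for label, temp_c in [("cocaine hydrochloride (powder)", 197.0),
                      ("free base / crack", 98.0)]:
    print(f"{label}: {temp_c} degC = {c_to_f(temp_c):.1f} degF")
# free base: 98 degC = 208.4 degF, matching the "208 °F" quoted above,
# and comfortably below the 100 degC boiling point of water.
```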
Crack is a lower-purity form of free-base cocaine, usually produced by neutralization of cocaine hydrochloride with a solution of baking soda (sodium bicarbonate, NaHCO3) and water, producing a very hard, brittle, off-white-to-brown amorphous material that contains sodium carbonate, entrapped water, and other by-products as the main impurities. The name "crack" comes from the crackling sound produced when the cocaine and its impurities (i.e., water and sodium bicarbonate) are heated past the point of vaporization.

Coca herbal infusion (also referred to as coca tea) is used in coca-leaf-producing countries much as any herbal medicinal infusion would be elsewhere in the world. The free and legal commercialization of dried coca leaves in the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink with medicinal powers. In Peru, the National Coca Company, a state-run corporation, sells cocaine-infused teas and other medicinal products and also exports leaves to the U.S. for medicinal use.

Visitors to the city of Cuzco in Peru and to La Paz in Bolivia are greeted with the offering of coca leaf infusions (prepared in teapots with whole coca leaves), purportedly to help the newly arrived traveler overcome the malaise of high-altitude sickness. The effects of drinking coca tea are mild stimulation and mood lift. It has also been promoted as an adjuvant for the treatment of cocaine dependence. One study of coca leaf infusion used with counseling in the treatment of 23 addicted coca-paste smokers in Lima, Peru, found that the relapse rate fell from an average of 4.35 times per month before coca tea treatment to one during treatment, and that the duration of abstinence increased from an average of 32 days before treatment to 217.2 days during treatment. This suggests that coca leaf infusion plus counseling may be effective at preventing relapse during cocaine addiction treatment.
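To put the Lima coca-tea study figures above in relative terms, the short calculation below restates them as a percentage drop and a multiplier. It simply re-expresses the numbers reported in the study; it adds no new data.

```python
# Figures reported for the 23-patient Lima coca-tea study quoted above.
relapses_before, relapses_during = 4.35, 1.0        # average relapses per month
abstinence_before, abstinence_during = 32.0, 217.2  # average days abstinent

relapse_drop = (1 - relapses_during / relapses_before) * 100
abstinence_gain = abstinence_during / abstinence_before

print(f"relapse rate fell by about {relapse_drop:.0f}%")         # ~77%
print(f"abstinence duration grew about {abstinence_gain:.1f}x")  # ~6.8x
```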
There is little information on the pharmacological and toxicological effects of consuming coca tea. A chemical analysis by solid-phase extraction and gas chromatography–mass spectrometry (SPE-GC/MS) of Peruvian and Bolivian tea bags indicated the presence of significant amounts of cocaine, the metabolite benzoylecgonine, ecgonine methyl ester, and trans-cinnamoylcocaine in coca tea bags and coca tea. Urine specimens from an individual who consumed one cup of coca tea were also analyzed, and it was determined that enough cocaine and cocaine-related metabolites were present to produce a positive drug test.

The first synthesis and elucidation of the cocaine molecule was by Richard Willstätter in 1898. Willstätter's synthesis derived cocaine from tropinone. Since then, Robert Robinson and Edward Leete have made significant contributions to the mechanism of the synthesis.

The additional carbon atoms required for the synthesis of cocaine are derived from acetyl-CoA, by addition of two acetyl-CoA units to the N-methyl-Δ-pyrrolinium cation. The first addition is a Mannich-like reaction, with the enolate anion from acetyl-CoA acting as a nucleophile toward the pyrrolinium cation. The second addition occurs through a Claisen condensation. This produces a racemic mixture of the 2-substituted pyrrolidine, with retention of the thioester from the Claisen condensation. In the formation of tropinone from racemic ethyl [2,3-13C2]-4-(N-methyl-2-pyrrolidinyl)-3-oxobutanoate, there is no preference for either stereoisomer. In cocaine biosynthesis, however, only the (S)-enantiomer can cyclize to form the tropane ring system of cocaine; this stereoselectivity, due to the extra chiral center at C-2, was further investigated through study of prochiral methylene hydrogen discrimination. The cyclization proceeds through an oxidation, which regenerates the pyrrolinium cation, formation of an enolate anion, and an intramolecular Mannich reaction. The tropane ring system then undergoes hydrolysis, SAM-dependent methylation, and reduction via NADPH to form methylecgonine. The benzoyl moiety required for the formation of the cocaine diester is synthesized from phenylalanine via cinnamic acid, and benzoyl-CoA then combines the two units to form cocaine.

The biosynthesis begins with L-glutamine, which is converted to L-ornithine in plants. The major contribution of L-ornithine and L-arginine as precursors to the tropane ring was confirmed by Edward Leete. Ornithine then undergoes a pyridoxal-phosphate-dependent decarboxylation to form putrescine. In some animals, the urea cycle derives putrescine from ornithine: L-ornithine is converted to L-arginine, which is then decarboxylated via PLP to form agmatine; hydrolysis of the imine gives N-carbamoylputrescine, and hydrolysis of the urea group then forms putrescine. The separate pathways by which plants and animals convert ornithine to putrescine have thus converged. A SAM-dependent N-methylation of putrescine gives N-methylputrescine, which then undergoes oxidative deamination by the action of diamine oxidase to yield the aminoaldehyde; intramolecular Schiff base formation then gives the N-methyl-Δ-pyrrolinium cation.

The biosynthesis of the tropane alkaloid itself is still not fully understood. Hemscheidt proposes that Robinson's acetonedicarboxylate emerges as a potential intermediate for this reaction: condensation of N-methylpyrrolinium and acetonedicarboxylate would generate the oxobutyrate, and decarboxylation would lead to tropane alkaloid formation.

The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These species all contain two types of reductase, tropinone reductase I and tropinone reductase II; TRI produces tropine and TRII produces pseudotropine. Owing to the enzymes' differing kinetic and pH/activity characteristics, and to the 25-fold higher activity of TRI over TRII, the majority of the tropinone reduction proceeds through TRI to form tropine.
In 2022, genetically modified N. benthamiana plants were created that were able to produce 25% of the amount of cocaine found in a coca plant.

Cocaine and its major metabolites may be quantified in blood, plasma, or urine to monitor for use, confirm a diagnosis of poisoning, or assist in the forensic investigation of a traffic or other criminal violation or a sudden death. Most commercial cocaine immunoassay screening tests cross-react appreciably with the major cocaine metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. When interpreting test results, it is important to consider the individual's history of cocaine use, since a chronic user can develop tolerance to doses that would incapacitate a cocaine-naive individual, and a chronic user often has high baseline levels of the metabolites in their system. Cautious interpretation of testing results may allow a distinction between passive and active usage, and between smoking and other routes of administration.

Cocaine may be detected by law enforcement using the Scott reagent. The test can easily generate false positives for common substances and must be confirmed with a laboratory test.

Approximate cocaine purity can be determined using 1 mL of 2% cupric sulfate pentahydrate in dilute HCl, 1 mL of 2% potassium thiocyanate, and 2 mL of chloroform: the shade of brown shown by the chloroform is proportional to the cocaine content. This test is not cross-sensitive to heroin, methamphetamine, benzocaine, procaine, and a number of other drugs, but other chemicals could cause false positives.

Usage

According to a 2016 United Nations report, England and Wales have the highest rate of cocaine usage (2.4% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are Spain and Scotland (2.2%), the United States (2.1%), Australia (2.1%), Uruguay (1.8%), Brazil (1.75%), Chile (1.73%), the Netherlands (1.5%), and Ireland (1.5%).

Cocaine is the second most popular illegal recreational drug in Europe (behind cannabis). Since the mid-1990s, overall cocaine usage in Europe has been on the rise, but usage rates and attitudes tend to vary between countries. The European countries with the highest usage rates are the United Kingdom, Spain, Italy, and the Republic of Ireland.

Approximately 17 million Europeans (5.1%) have used cocaine at least once, and 3.5 million (1.1%) in the last year. About 1.9% (2.3 million) of young adults (15–34 years old) have used cocaine in the last year (latest data available as of 2018).
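The European prevalence figures above pair absolute counts with percentages, so the implied population bases can be back-calculated as a consistency check. A small sketch, using only the numbers quoted in the text:

```python
# Back-calculate the population bases implied by the prevalence figures above.
figures = [
    ("lifetime use, all adults", 17_000_000, 5.1),
    ("last-year use, all adults", 3_500_000, 1.1),
    ("last-year use, young adults 15-34", 2_300_000, 1.9),
]

for label, users, pct in figures:
    base = users / (pct / 100)
    print(f"{label}: implied base ~{base / 1e6:.0f} million people")
# Implied bases of roughly 333, 318, and 121 million people respectively,
# mutually consistent orders of magnitude for Europe's adult population.
```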
Usage is particularly prevalent among young adult males: 4% to 7% of males in this age group have used cocaine in the last year in Spain, Denmark, the Republic of Ireland, Italy, and the United Kingdom. The ratio of male to female users is approximately 3.8:1, but this statistic varies from 1:1 to 13:1 depending on the country.

In 2014, London had the highest amount of cocaine in its sewage of 50 European cities surveyed.

Cocaine is the second most popular illegal recreational drug in the United States (behind cannabis), and the U.S. is the world's largest consumer of cocaine. Its users span all ages, races, and professions. In the 1970s and 1980s, the drug became particularly associated with disco culture, as cocaine usage was very common and popular in many discos, such as Studio 54.

History

Indigenous peoples of South America have chewed the leaves of Erythroxylon coca, a plant that contains vital nutrients as well as numerous alkaloids, including cocaine, for over a thousand years. The coca leaf was, and still is, chewed almost universally by some indigenous communities. The remains of coca leaves have been found with ancient Peruvian mummies, and pottery from the period depicts humans with bulging cheeks, indicating the presence of something on which they are chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation.

When the Spanish arrived in South America, the conquistadors at first banned coca as an "evil agent of the devil". But after discovering that without coca the locals were barely able to work, the conquistadors legalized and taxed the leaf, taking 10% of the value of each crop. In 1569, the Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment":

When they wished to make themselves drunk and out of judgment they chewed a mixture of tobacco and coca leaves which make them go as they were out of their wittes.

In 1609, Padre Blas Valera wrote:

Coca protects the body from many ailments, and our doctors use it in powdered form to reduce the swelling of wounds, to strengthen broken bones, to expel cold from the body or prevent it from entering, and to cure rotten wounds or sores that are full of maggots. And if it does so much for outward ailments, will not its singular virtue have even greater effect in the entrails of those who eat it?

Although the stimulant and hunger-suppressant properties of coca had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Various European scientists had attempted to isolate it, but none had been successful, for two reasons: the knowledge of chemistry required was insufficient at the time, and contemporary conditions of sea shipping from South America could degrade the cocaine in the plant samples available to European chemists.

The cocaine alkaloid was first isolated by the German chemist Friedrich Gaedcke in 1855. Gaedcke named the alkaloid "erythroxyline" and published a description in the journal Archiv der Pharmazie.
In 1856, Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the Novara (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed the leaves on to Albert Niemann, a PhD student at the University of Göttingen in Germany, who then developed an improved purification process.

Niemann described every step he took to isolate cocaine in his dissertation, titled Über eine neue organische Base in den Cocablättern (On a New Organic Base in the Coca Leaves), which was published in 1860 and earned him his PhD. He wrote of the alkaloid's "colourless transparent prisms" and said that "Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue." Niemann named the alkaloid "cocaine", from "coca" (from Quechua kúka) plus the suffix "-ine".

The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. It was the first biomimetic synthesis of an organic structure recorded in the academic chemical literature. The synthesis started from tropinone, a related natural product, and took five steps.

Because of the former use of cocaine as a local anesthetic, the suffix "-caine" was later extracted and used to form the names of synthetic local anesthetics.

With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant.

In 1879, Vassili von Anrep of the University of Würzburg devised an experiment to demonstrate the analgesic properties of the newly discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution and the other containing merely salt water. He then submerged a frog's legs into the two jars, one leg in the treatment solution and one in the control, and stimulated the legs in several different ways. The leg that had been immersed in the cocaine solution reacted very differently from the leg immersed in salt water.

Karl Koller, a close associate of Sigmund Freud (who would write about cocaine later), experimented with cocaine for ophthalmic use. In an infamous 1884 experiment, he applied a cocaine solution to his own eye and then pricked it with pins; his findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Leonard Corning demonstrated peridural anesthesia. In 1898, Heinrich Quincke used cocaine for spinal anesthesia.

In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the local indigenous peoples. He proceeded to experiment on himself, and upon his return to Milan he wrote a paper in which he described the effects.
In this paper he declared coca and cocaine (at the time they were assumed to be the same) to be useful medicinally, in the treatment of "a furred tongue in the morning, flatulence, and whitening of the teeth."

A chemist named Angelo Mariani, who read Mantegazza's paper, became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing Vin Mariani, a wine that had been treated with coca leaves to become coca wine. The ethanol in the wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink's effect. It contained 6 mg of cocaine per ounce of wine, but the Vin Mariani destined for export contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States. A "pinch of coca leaves" was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906, when the Pure Food and Drug Act was passed.

In 1879 cocaine began to be used to treat morphine addiction. Cocaine was introduced into clinical use as a local anesthetic in Germany in 1884, about the same time as Sigmund Freud published his work Über Coca, in which he wrote that cocaine causes:

Exhilaration and lasting euphoria, which in no way differs from the normal euphoria of the healthy person. You perceive an increase of self-control and possess more vitality and capacity for work. In other words, you are simply normal, and it is soon hard to believe you are under the influence of any drug. Long intensive physical work is performed without any fatigue. This result is enjoyed without any of the unpleasant after-effects that follow exhilaration brought about by alcoholic beverages. No craving for the further use of cocaine appears after the first, or even after repeated taking of the drug.

By 1885 the U.S. manufacturer Parke-Davis sold coca-leaf cigarettes and cheroots, a cocaine inhalant, a Coca Cordial, cocaine crystals, and cocaine solution for intravenous injection. The company promised that its cocaine products would "supply the place of food, make the coward brave, the silent eloquent and render the sufferer insensitive to pain."

By the late Victorian era, cocaine use had appeared as a vice in literature; for example, it was injected by Arthur Conan Doyle's fictional Sherlock Holmes, generally to offset the boredom he felt when he was not working on a case.

In early 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful. Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers.

In 1909, Ernest Shackleton took "Forced March" brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole.

In the 1931 song "Minnie the Moocher", Cab Calloway heavily references cocaine use.
He uses the phrase "kicking the gong around", slang for cocaine use; describes the titular character Minnie as "tall and skinny"; and describes Smokey Joe as "cokey". In the 1932 comedy musical film The Big Broadcast, Cab Calloway performs the song with his orchestra and mimes snorting cocaine between verses.

During the mid-1940s, amidst World War II, cocaine was considered for inclusion as an ingredient in a future generation of "pep pills" for the German military, code-named D-IX.

In modern popular culture, references to cocaine are common. The drug has a glamorous image associated with the wealthy, famous, and powerful, and is said to make users "feel rich and beautiful". In addition, the pace of modern society, such as in finance, gives many an incentive to use the drug.

In many countries, cocaine is a popular recreational drug. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. Use of the powder form has stayed relatively constant, experiencing a new height of use during the late 1990s and early 2000s in the U.S., and has become much more popular in the last few years in the UK.

Cocaine use is prevalent across all socioeconomic strata and across differences of age, economic status, social standing, political affiliation, religion, and livelihood.

The estimated U.S. cocaine market exceeded US$70 billion in street value for the year 2005, exceeding the revenues of corporations such as Starbucks. Cocaine's status as a club drug shows its immense popularity among the "party crowd".

In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. However, an American representative in the World Health Assembly blocked the publication of the study, because it seemed to make a case for the positive uses of cocaine. An excerpt of the report strongly conflicted with accepted paradigms, for example stating "that occasional cocaine use does not typically lead to severe or even minor physical or social problems." In the sixth meeting of the B committee, the U.S. representative threatened that "If World Health Organization activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study was recovered and published in 2010, including profiles of cocaine use in 20 countries, but the full study remains unavailable as of 2015.

In October 2010 it was reported that cocaine use in Australia had doubled since monitoring began in 2003.

A problem with illegal cocaine use, especially in the higher volumes used by long-term users to combat fatigue (rather than to increase euphoria), is the risk of ill effects or damage caused by the compounds used in adulteration.
Cutting or \"stepping on\" the drug is commonplace, using compounds which simulate ingestion effects, such as Novocain (procaine) producing temporary anesthesia, as many users believe a strong numbing effect is the result of strong and/or pure cocaine, ephedrine or similar stimulants that are to produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine, or glucose, so introducing active adulterants gives the illusion of purity and to 'stretch' or make it so a dealer can sell more product than without the adulterants. The adulterant of sugars allows the dealer to sell the product for a higher price because of the illusion of purity and allows the sale of more of the product at that higher price, enabling dealers to significantly increase revenue with little additional cost for the adulterants. A 2007 study by the European Monitoring Centre for Drugs and Drug Addiction showed that the purity levels for street purchased cocaine was often under 5% and on average under 50% pure.", "title": "History" }, { "paragraph_id": 107, "text": "The production, distribution, and sale of cocaine products is restricted (and illegal in most contexts) in most countries as regulated by the Single Convention on Narcotic Drugs, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States the manufacture, importation, possession, and distribution of cocaine are additionally regulated by the 1970 Controlled Substances Act.", "title": "Society and culture" }, { "paragraph_id": 108, "text": "Some countries, such as Peru and Bolivia, permit the cultivation of coca leaf for traditional consumption by the local indigenous population, but nevertheless, prohibit the production, sale, and consumption of cocaine. The provisions as to how much a coca farmer can yield annually is protected by laws such as the Bolivian Cato accord. In addition, some parts of Europe, the United States, and Australia allow processed cocaine for medicinal uses only.", "title": "Society and culture" }, { "paragraph_id": 109, "text": "Cocaine is a Schedule 8 prohibited substance in Australia under the Poisons Standard (July 2016). A schedule 8 substance is a controlled Drug – Substances which should be available for use but require the restriction of manufacture, supply, distribution, possession and use to reduce abuse, misuse, and physical or psychological dependence.", "title": "Society and culture" }, { "paragraph_id": 110, "text": "In Western Australia under the Misuse of Drugs Act 1981 4.0g of cocaine is the amount of prohibited drugs determining a court of trial, 2.0g is the amount of cocaine required for the presumption of intention to sell or supply and 28.0g is the amount of cocaine required for purposes of drug trafficking.", "title": "Society and culture" }, { "paragraph_id": 111, "text": "The US federal government instituted a national labeling requirement for cocaine and cocaine-containing products through the Pure Food and Drug Act of 1906. The next important federal regulation was the Harrison Narcotics Tax Act of 1914. While this act is often seen as the start of prohibition, the act itself was not actually a prohibition on cocaine, but instead set up a regulatory and licensing regime. 
The Harrison Act did not recognize addiction as a treatable condition, and therefore the therapeutic use of cocaine, heroin, or morphine for such individuals was outlawed, leading a 1915 editorial in the journal American Medicine to remark that the addict \"is denied the medical care he urgently needs, open, above-board sources from which he formerly obtained his drug supply are closed to him, and he is driven to the underworld where he can get his drug, but of course, surreptitiously and in violation of the law.\" The Harrison Act left manufacturers of cocaine untouched so long as they met certain purity and labeling standards. Although cocaine was typically illegal to sell and legal outlets were rarer, the quantities of legal cocaine produced declined very little. Legal cocaine quantities did not decrease until the Jones–Miller Act of 1922 put serious restrictions on cocaine manufacturers.", "title": "Society and culture" }, { "paragraph_id": 112, "text": "Before the early 1900s, the primary problem caused by cocaine use was portrayed by newspapers to be addiction, not violence or crime, and the cocaine user was represented as an upper- or middle-class White person. In 1914, The New York Times published an article titled \"Negro Cocaine 'Fiends' Are a New Southern Menace\", portraying Black cocaine users as dangerous and able to withstand wounds that would normally be fatal. The Anti-Drug Abuse Act of 1986 mandated the same prison sentences for offenses involving 500 grams of powdered cocaine as for only 5 grams of crack cocaine. In the National Survey on Drug Use and Health, Whites reported a higher rate of powdered cocaine use, and Blacks reported a higher rate of crack cocaine use.", "title": "Society and culture" }, { "paragraph_id": 113, "text": "In 2004, according to the United Nations, 589 tonnes of cocaine were seized globally by law enforcement authorities. Colombia seized 188 t, the United States 166 t, Europe 79 t, Peru 14 t, Bolivia 9 t, and the rest of the world 133 t.", "title": "Society and culture" }, { "paragraph_id": 114, "text": "As of 2019, Colombia is the world's largest cocaine producer, with production more than tripling since 2013. Three-quarters of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia and from locally grown coca. The amount of potentially harvestable coca grown in Colombia increased by 28% in 1998. This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, makes up only a small fraction of total coca production, most of which is used for the illegal drug trade.", "title": "Society and culture" }, { "paragraph_id": 115, "text": "An interview with a coca farmer published in 2003 described a mode of production by acid-base extraction that has changed little since 1905. Roughly 625 pounds (283 kg) of leaves were harvested per hectare, six times per year. The leaves were dried for half a day, then chopped into small pieces with a string trimmer and sprinkled with a small amount of powdered cement (replacing sodium carbonate from former times).
Several hundred pounds of this mixture were soaked in 50 US gallons (190 L) of gasoline for a day, then the gasoline was removed and the leaves were pressed for the remaining liquid, after which they could be discarded. Then battery acid (dilute sulfuric acid) was used, one bucket per 55 lb (25 kg) of leaves, to create a phase separation in which the cocaine free base in the gasoline was acidified and extracted into a few buckets of \"murky-looking smelly liquid\". Once powdered caustic soda was added to this, the cocaine precipitated and could be removed by filtration through a cloth. The resulting material, when dried, was termed pasta and sold by the farmer. The 3,750 pounds (1,700 kg) yearly harvest of leaves from a hectare produced 6 lb (2.5 kg) of pasta, approximately 40–60% cocaine. Repeated recrystallizations from solvents, producing pasta lavada and eventually crystalline cocaine, were performed at specialized laboratories after the sale.", "title": "Society and culture" }, { "paragraph_id": 116, "text": "Attempts to eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca-growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also been shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded in numerous smaller fields in Colombia, rather than the larger plantations.", "title": "Society and culture" }, { "paragraph_id": 117, "text": "The cultivation of coca has become an attractive economic decision for many growers due to the combination of several factors, including the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damages to non-drug farms, and the spread of new strains of the coca plant due to persistent worldwide demand.", "title": "Society and culture" }, { "paragraph_id": 118, "text": "The latest estimate provided by the U.S. authorities on the annual production of cocaine in Colombia is 290 metric tons. As of the end of 2011, the seizure operations of Colombian cocaine carried out in different countries had totaled 351.8 metric tons of cocaine, i.e. 121.3% of Colombia's annual production according to the U.S. Department of State's estimates.", "title": "Society and culture" }, { "paragraph_id": 119, "text": "Synthesizing cocaine could eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine, but this is rarely done. Natural cocaine remains the lowest-cost and highest-quality supply of cocaine. Formation of inactive stereoisomers (cocaine has four chiral centres: 1R, 2R, 3S, and 5S, two of them dependent, hence eight possible stereoisomers) plus synthetic by-products limits the yield and purity.", "title": "Society and culture" }, { "paragraph_id": 120, "text": "Organized criminal gangs operating on a large scale dominate the cocaine trade.
Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, and Peru, and smuggled into the United States and Europe, where it is sold at huge markups. The United States is the world's largest consumer of cocaine; prices there are usually $80–120 for 1 gram and $250–300 for 3.5 grams (1/8 of an ounce, or an \"eight ball\").", "title": "Society and culture" }, { "paragraph_id": 121, "text": "The primary cocaine importation points in the United States have been in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.–Mexico border. Sixty-five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Florida. As of 2015, the Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like cocaine into the United States and trafficking them throughout the United States.", "title": "Society and culture" }, { "paragraph_id": 122, "text": "Cocaine traffickers from Colombia and Mexico have established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug using a variety of smuggling techniques to U.S. markets. These include airdrops of 500 to 700 kg (1,100 to 1,500 lb) in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers of 500 to 2,000 kg (1,100 to 4,400 lb), and the commercial shipment of tonnes of cocaine through the port of Miami.", "title": "Society and culture" }, { "paragraph_id": 123, "text": "Another route of cocaine traffic goes through Chile, which is primarily used for cocaine produced in Bolivia since the nearest seaports lie in northern Chile. The arid Bolivia–Chile border is easily crossed by 4×4 vehicles that then head to the seaports of Iquique and Antofagasta. While the price of cocaine is higher in Chile than in Peru and Bolivia, the final destination is usually Europe, especially Spain, where drug dealing networks exist among South American immigrants.", "title": "Society and culture" }, { "paragraph_id": 124, "text": "Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as \"mules\" (or \"mulas\"), who cross a border either legally, for example, through a port or airport, or illegally elsewhere. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body. If the mule gets through without being caught, the gangs will reap most of the profits. If caught, gangs will sever all links and the mule will usually stand trial for trafficking alone.", "title": "Society and culture" }, { "paragraph_id": 125, "text": "Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels, such as go-fast boats, as those used by the local populations.", "title": "Society and culture" }, { "paragraph_id": 126, "text": "According to a report of 20 March 2008, sophisticated drug subs are the latest tool drug runners are using to bring cocaine north from Colombia.
Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them.", "title": "Society and culture" }, { "paragraph_id": 127, "text": "Cocaine is readily available in all major countries' metropolitan areas. According to the Summer 1998 Pulse Check, published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper, three and a half times more powerful, and lasts 12–24 times longer with each dose. Nevertheless, the number of cocaine users remains high, with a large concentration among urban youth.", "title": "Society and culture" }, { "paragraph_id": 128, "text": "In addition to the amounts previously mentioned, cocaine can be sold in \"bill sizes\": as of 2007, for example, $10 might purchase a \"dime bag\", a very small amount (0.1–0.15 g) of cocaine. These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic region.", "title": "Society and culture" }, { "paragraph_id": 129, "text": "In 2008, the European Monitoring Centre for Drugs and Drug Addiction reported that the typical retail price of cocaine varied between €50 and €75 per gram in most European countries, although Cyprus, Romania, Sweden, and Turkey reported much higher values.", "title": "Society and culture" }, { "paragraph_id": 130, "text": "World annual cocaine consumption, as of 2000, stood at around 600 tonnes, with the United States consuming around 300 t (50% of the total), Europe about 150 t (25%), and the rest of the world the remaining 150 t (25%). It is estimated that 1.5 million people in the United States used cocaine in 2010, down from 2.4 million in 2006. Conversely, cocaine use appears to be increasing in Europe, with the highest prevalences in Spain, the United Kingdom, Italy, and Ireland.", "title": "Society and culture" }, { "paragraph_id": 131, "text": "The 2010 UN World Drug Report concluded that \"it appears that the North American cocaine market has declined in value from US$47 billion in 1998 to US$38 billion in 2008. Between 2006 and 2008, the value of the market remained basically stable\".", "title": "Society and culture" } ]
Cocaine is a tropane alkaloid that acts as a central nervous system (CNS) stimulant. As an extract, it is mainly used recreationally, and often illegally for its euphoric and rewarding effects. It is also used in medicine by Indigenous South Americans for various purposes and rarely, but more formally, as a local anaesthetic or diagnostic tool by medical practitioners in more developed countries. It is primarily obtained from the leaves of two Coca species native to South America: Erythroxylum coca and E. novogranatense. After extraction from the plant, and further processing into cocaine hydrochloride, the drug is administered by being either snorted, applied topically to the mouth, or dissolved and injected into a vein. It can also then be turned into free base form, in which it can be heated until sublimated and then the vapours can be inhaled. Cocaine stimulates the reward pathway in the brain. Mental effects may include an intense feeling of happiness, sexual arousal, loss of contact with reality, or agitation. Physical effects may include a fast heart rate, sweating, and dilated pupils. High doses can result in high blood pressure or high body temperature. Onset of effects can begin within seconds to minutes of use, depending on method of delivery, and can last between five and ninety minutes. As cocaine also has numbing and blood vessel constriction properties, it is occasionally used during surgery on the throat or inside of the nose to control pain, bleeding, and vocal cord spasm. Cocaine crosses the blood–brain barrier via a proton-coupled organic cation antiporter and via passive diffusion across cell membranes. Cocaine blocks the dopamine transporter, inhibiting reuptake of dopamine from the synaptic cleft into the pre-synaptic axon terminal; the higher dopamine levels in the synaptic cleft increase dopamine receptor activation in the post-synaptic neuron, causing euphoria and arousal. Cocaine also blocks the serotonin transporter and norepinephrine transporter, inhibiting reuptake of serotonin and norepinephrine from the synaptic cleft into the pre-synaptic axon terminal and increasing activation of serotonin receptors and norepinephrine receptors in the post-synaptic neuron, contributing to the mental and physical effects of cocaine exposure. A single dose of cocaine induces tolerance to the drug's effects. Repeated use is likely to result in addiction. Addicts who abstain from cocaine may experience craving and drug withdrawal symptoms, with depression, decreased libido, decreased ability to feel pleasure, and fatigue being most common. Use of cocaine increases the overall risk of death, and intravenous use potentially increases the risk of trauma and infectious diseases such as blood infections and HIV through the use of shared paraphernalia. It also increases risk of stroke, heart attack, cardiac arrhythmia, lung injury, and sudden cardiac death. Illicitly sold cocaine can be adulterated with fentanyl, local anesthetics, levamisole, cornstarch, quinine, or sugar, which can result in additional toxicity. In 2017, the Global Burden of Disease study found that cocaine use caused around 7,300 deaths annually.
2002-02-25T15:43:11Z
2023-12-18T16:44:43Z
[ "Template:Div col end", "Template:Dubious", "Template:Reflist", "Template:Sigma receptor modulators", "Template:Sfrac", "Template:Cite journal", "Template:Drug use", "Template:Global estimates of illicit drug users", "Template:Citation", "Template:Stimulants", "Template:Monoamine reuptake inhibitors", "Template:Cite book", "Template:Cn", "Template:Spaced ndash", "Template:Infobox drug", "Template:Convert", "Template:Pp-move-indef", "Template:Lang-es", "Template:TOC limit", "Template:Page needed", "Template:Div col", "Template:Link note", "Template:OEtymD", "Template:Commons category", "Template:Cite magazine", "Template:Ion channel modulators", "Template:Citation needed", "Template:When", "Template:As of", "Template:Sfn", "Template:Wikiquote", "Template:Ancient anaesthesia-footer", "Template:Lang-fr", "Template:Main", "Template:Refbegin", "Template:Refend", "Template:Wiktionary", "Template:Authority control", "Template:Short description", "Template:Other uses", "Template:Pp-semi-indef", "Template:Use dmy dates", "Template:Blockquote", "Template:Cite web", "Template:Cite news", "Template:Local anesthetics", "Template:Use American English", "Template:Chem name", "Template:Section link", "Template:Euphoriants", "Template:See also", "Template:Portal", "Template:Citation-attribution" ]
https://en.wikipedia.org/wiki/Cocaine
7,706
Cartesian coordinate system
In geometry, a Cartesian coordinate system (UK: /kɑːrˈtiːzjən/, US: /kɑːrˈtiʒən/) in a plane is a coordinate system that specifies each point uniquely by a pair of real numbers called coordinates, which are the signed distances to the point from two fixed perpendicular oriented lines, called coordinate lines, coordinate axes or just axes (plural of axis) of the system. The point where they meet is called the origin and has (0, 0) as coordinates. Similarly, the position of any point in three-dimensional space can be specified by three Cartesian coordinates, which are the signed distances from the point to three mutually perpendicular planes. More generally, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n. These coordinates are the signed distances from the point to n mutually perpendicular fixed hyperplanes. Cartesian coordinates are named for René Descartes, whose invention of them in the 17th century revolutionized mathematics by allowing the expression of problems of geometry in terms of algebra and calculus. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by equations involving the coordinates of points of the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4; the area, the perimeter and the tangent line at any point can be computed from this equation by using integrals and derivatives, in a way that can be applied to any curve. Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637 while he was resident in the Netherlands. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. The French cleric Nicole Oresme used constructions similar to Cartesian coordinates well before the time of Descartes and Fermat. Both Descartes and Fermat used a single axis in their treatments and had a variable length measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes's work. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space.
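As a brief worked illustration of the circle example above (a sketch for illustration, using nothing beyond standard implicit differentiation): differentiating $x^2 + y^2 = 4$ with respect to x gives $2x + 2y y' = 0$, so the slope of the tangent at a point $(x_0, y_0)$ on the circle with $y_0 \neq 0$ is $y' = -x_0/y_0$, and the tangent line there is $x_0 x + y_0 y = 4$. The enclosed area likewise follows from an integral: $4\int_0^2 \sqrt{4 - x^2}\, dx = 4\pi$.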
Choosing a Cartesian coordinate system for a one-dimensional space—that is, for a straight line—involves choosing a point O of the line (the origin), a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is the positive and which is negative; we then say that the line "is oriented" (or "points") from the negative half towards the positive half. Then each point P of the line can be specified by its distance from O, taken with a + or − sign depending on which half-line contains P. A line with a chosen Cartesian system is called a number line. The choice of this Cartesian system induces a bijection between the line and the real numbers. A Cartesian coordinate system in two dimensions (also called a rectangular coordinate system or an orthogonal coordinate system) is defined by an ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. The point where the axes meet is taken as the origin for both, thus turning each axis into a number line. For any point P, a line is drawn through P perpendicular to each axis, and the position where it meets the axis is interpreted as a number. The two numbers, in that chosen order, are the Cartesian coordinates of P. The reverse construction allows one to determine the point P given its coordinates. The first and second coordinates are called the abscissa and the ordinate of P, respectively; and the point where the axes meet is called the origin of the coordinate system. The coordinates are usually written as two numbers in parentheses, in that order, separated by a comma, as in (3, −10.5). Thus the origin has coordinates (0, 0), and the points on the positive half-axes, one unit away from the origin, have coordinates (1, 0) and (0, 1). In mathematics, physics, and engineering, the first axis is usually defined or depicted as horizontal and oriented to the right, and the second axis is vertical and oriented upwards. (However, in some computer graphics contexts, the ordinate axis may be oriented downwards.) The origin is often labeled O, and the two coordinates are often denoted by the letters X and Y, or x and y. The axes may then be referred to as the X-axis and Y-axis. The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate unknown values. The first part of the alphabet was used to designate known values. A Euclidean plane with a chosen Cartesian coordinate system is called a Cartesian plane. In a Cartesian plane, one can define canonical representatives of certain geometric figures, such as the unit circle (with radius equal to the length unit, and center at the origin), the unit square (whose diagonal has endpoints at (0, 0) and (1, 1)), the unit hyperbola, and so on. The two axes divide the plane into four right angles, called quadrants. The quadrants may be named or numbered in various ways, but the quadrant where all coordinates are positive is usually called the first quadrant. If the coordinates of a point are (x, y), then its distances from the X-axis and from the Y-axis are |y| and |x|, respectively; where | · | denotes the absolute value of a number. A Cartesian coordinate system for a three-dimensional space consists of an ordered triplet of lines (the axes) that go through a common point (the origin), and are pair-wise perpendicular; an orientation for each axis; and a single unit of length for all three axes. 
As in the two-dimensional case, each axis becomes a number line. For any point P of space, one considers a hyperplane through P perpendicular to each coordinate axis, and interprets the point where that hyperplane cuts the axis as a number. The Cartesian coordinates of P are those three numbers, in the chosen order. The reverse construction determines the point P given its three coordinates. Alternatively, each coordinate of a point P can be taken as the distance from P to the hyperplane defined by the other two axes, with the sign determined by the orientation of the corresponding axis. Each pair of axes defines a coordinate hyperplane. These hyperplanes divide space into eight octants. The octants are the eight regions determined by the signs of the three coordinates, such as (+ + +) or (− + −). The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas, as in (3, −2.5, 1) or (t, u + v, π/2). Thus, the origin has coordinates (0, 0, 0), and the unit points on the three axes are (1, 0, 0), (0, 1, 0), and (0, 0, 1). There are no standard names for the coordinates in the three axes (however, the terms abscissa, ordinate and applicate are sometimes used). The coordinates are often denoted by the letters X, Y, and Z, or x, y, and z. The axes may then be referred to as the X-axis, Y-axis, and Z-axis, respectively. Then the coordinate hyperplanes can be referred to as the XY-plane, YZ-plane, and XZ-plane. In mathematics, physics, and engineering contexts, the first two axes are often defined or depicted as horizontal, with the third axis pointing up. In that case the third coordinate may be called height or altitude. The orientation is usually chosen so that the 90 degree angle from the first axis to the second axis looks counter-clockwise when seen from the point (0, 0, 1); a convention that is commonly called the right-hand rule. Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$, where $\mathbb{R}$ is the set of all real numbers. In the same way, the points in any Euclidean space of dimension n can be identified with the tuples (lists) of n real numbers; that is, with the Cartesian product $\mathbb{R}^n$. The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. In that case, each coordinate is obtained by projecting the point onto one axis along a direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). In such an oblique coordinate system the computations of distances and angles must be modified from that in standard Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold (see affine plane). For example, if the two axes of an oblique plane system meet at an angle ω, the law of cosines gives the distance from the origin to the point with coordinates (x, y) as $\sqrt{x^2 + y^2 + 2xy\cos\omega}$, which reduces to the Pythagorean formula only when ω is a right angle. The Cartesian coordinates of a point are usually written in parentheses and separated by commas, as in (10, 5) or (3, 5, 7). The origin is often labelled with the capital letter O. In analytic geometry, unknown or generic coordinates are often denoted by the letters (x, y) in the plane, and (x, y, z) in three-dimensional space. This custom comes from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the coordinates of points in many geometric problems), and letters near the beginning for given quantities.
These conventional names are often used in other domains, such as physics and engineering, although other letters may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted p and t. Each axis is usually named after the coordinate which is measured along it; so one says the x-axis, the y-axis, the t-axis, etc. Another common convention for coordinate naming is to use subscripts, as (x1, x2, ..., xn) for the n coordinates in an n-dimensional space, especially when n is greater than 3 or unspecified. Some authors prefer the numbering (x0, x1, ..., xn−1). These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates. In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is then measured along a vertical axis, usually oriented from bottom to top. Young children learning the Cartesian system, commonly learn the order to read the values before cementing the x-, y-, and z-axis concepts, by starting with 2D mnemonics (for example, 'Walk along the hall then up the stairs' akin to straight across the x-axis then up vertically along the y-axis). Computer graphics and image processing, however, often use a coordinate system with the y-axis oriented downwards on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally stored in display buffers. For three-dimensional systems, a convention is to portray the xy-plane horizontally, with the z-axis added to represent height (positive up). Furthermore, there is a convention to orient the x-axis toward the viewer, biased either to the right or left. If a diagram (3D projection or 2D perspective drawing) shows the x- and y-axis horizontally and vertically, respectively, then the z-axis should be shown pointing "out of the page" towards the viewer or camera. In such a 2D diagram of a 3D coordinate system, the z-axis would appear as a line or ray pointing down and to the left or down and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation of the three axes, as a whole, is arbitrary. However, the orientation of the axes relative to each other should always comply with the right-hand rule, unless specifically stated otherwise. All laws of physics and math assume this right-handedness, which ensures consistency. For 3D diagrams, the names "abscissa" and "ordinate" are rarely used for x and y, respectively. When they are, the z-coordinate is sometimes called the applicate. The words abscissa, ordinate and applicate are sometimes used to refer to coordinate axes rather than the coordinate values. The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called quadrants, each bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals: I (where the coordinates both have positive signs), II (where the abscissa is negative − and the ordinate is positive +), III (where both the abscissa and the ordinate are −), and IV (abscissa +, ordinate −). When the axes are drawn according to the mathematical custom, the numbering goes counter-clockwise starting from the upper right ("north-east") quadrant. 
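Because the quadrant of a point depends only on the signs of its coordinates, the numbering convention above is easy to express in code. The following Python sketch is purely illustrative (the function name quadrant is a hypothetical helper, not from any standard library); points lying on an axis are reported separately, since they belong to no quadrant.

def quadrant(x: float, y: float) -> str:
    """Return the conventional Roman-numeral quadrant of the point (x, y)."""
    if x == 0 or y == 0:
        return "on an axis"  # axis points belong to no quadrant
    if x > 0 and y > 0:
        return "I"    # both coordinates positive
    if x < 0 and y > 0:
        return "II"   # abscissa negative, ordinate positive
    if x < 0 and y < 0:
        return "III"  # both coordinates negative
    return "IV"       # abscissa positive, ordinate negative

print(quadrant(3, -10.5))  # prints IV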
Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs; for example, (+ + +) or (− + −). The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant, and a similar naming system applies. The Euclidean distance between two points of the plane with Cartesian coordinates $(x_1, y_1)$ and $(x_2, y_2)$ is $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$. This is the Cartesian version of Pythagoras's theorem. In three-dimensional space, the distance between points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ is $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$, which can be obtained by two consecutive applications of Pythagoras' theorem. The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to themselves which preserve distances between points. There are four types of these mappings (also called isometries): translations, rotations, reflections and glide reflections. Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding a fixed pair of numbers (a, b) to the Cartesian coordinates of every point in the set. That is, if the original coordinates of a point are (x, y), after the translation they will be $(x', y') = (x + a, y + b)$. To rotate a figure counterclockwise around the origin by some angle $\theta$ is equivalent to replacing every point with coordinates (x, y) by the point with coordinates (x', y'), where $x' = x\cos\theta - y\sin\theta$ and $y' = x\sin\theta + y\cos\theta$; thus $(x', y') = (x\cos\theta - y\sin\theta, x\sin\theta + y\cos\theta)$. If (x, y) are the Cartesian coordinates of a point, then (−x, y) are the coordinates of its reflection across the second coordinate axis (the y-axis), as if that line were a mirror. Likewise, (x, −y) are the coordinates of its reflection across the first coordinate axis (the x-axis). In more generality, reflection across a line through the origin making an angle $\theta$ with the x-axis is equivalent to replacing every point with coordinates (x, y) by the point with coordinates (x', y'), where $x' = x\cos 2\theta + y\sin 2\theta$ and $y' = x\sin 2\theta - y\cos 2\theta$; thus $(x', y') = (x\cos 2\theta + y\sin 2\theta, x\sin 2\theta - y\cos 2\theta)$. A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that line. It can be seen that the order of these operations does not matter (the translation can come first, followed by the reflection). All affine transformations of the plane can be described in a uniform way by using matrices. For this purpose, the coordinates $(x, y)$ of a point are commonly represented as the column matrix $\begin{pmatrix}x\\y\end{pmatrix}$. The result $(x', y')$ of applying an affine transformation to a point $(x, y)$ is given by the formula $\begin{pmatrix}x'\\y'\end{pmatrix} = A\begin{pmatrix}x\\y\end{pmatrix} + b$, where $A$ is a 2×2 matrix and $b = \begin{pmatrix}b_1\\b_2\end{pmatrix}$ is a column matrix. That is, $x' = A_{1,1}x + A_{1,2}y + b_1$ and $y' = A_{2,1}x + A_{2,2}y + b_2$. Among the affine transformations, the Euclidean transformations are characterized by the fact that the matrix $A$ is orthogonal; that is, its columns are orthogonal vectors of Euclidean norm one, or, explicitly, $A_{1,1}^2 + A_{2,1}^2 = A_{1,2}^2 + A_{2,2}^2 = 1$ and $A_{1,1}A_{1,2} + A_{2,1}A_{2,2} = 0$. This is equivalent to saying that A times its transpose is the identity matrix. If these conditions do not hold, the formula describes a more general affine transformation. The transformation is a translation if and only if A is the identity matrix.
The transformation is a rotation around some point if and only if A is a rotation matrix, meaning that it is orthogonal and $A_{1,1}A_{2,2} - A_{2,1}A_{1,2} = 1$. A reflection or glide reflection is obtained when $A_{1,1}A_{2,2} - A_{2,1}A_{1,2} = -1$. Assuming that translations are not used (that is, $b_1 = b_2 = 0$), transformations can be composed by simply multiplying the associated transformation matrices. In the general case, it is useful to use the augmented matrix of the transformation; that is, to rewrite the transformation formula as $\begin{pmatrix}x'\\y'\\1\end{pmatrix} = A'\begin{pmatrix}x\\y\\1\end{pmatrix}$, where $A' = \begin{pmatrix}A_{1,1} & A_{1,2} & b_1\\ A_{2,1} & A_{2,2} & b_2\\ 0 & 0 & 1\end{pmatrix}$. With this trick, the composition of affine transformations is obtained by multiplying the augmented matrices. Affine transformations of the Euclidean plane are transformations that map lines to lines, but may change distances and angles. As said in the preceding section, they can be represented with augmented matrices. The Euclidean transformations are the affine transformations such that the 2×2 matrix of the $A_{i,j}$ is orthogonal. The augmented matrix that represents the composition of two affine transformations is obtained by multiplying their augmented matrices. Some affine transformations that are not Euclidean transformations have received specific names. An example of an affine transformation which is not Euclidean is given by scaling. To make a figure larger or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number m. If (x, y) are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates $(x', y') = (mx, my)$. If m is greater than 1, the figure becomes larger; if m is between 0 and 1, it becomes smaller. A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing maps $(x, y)$ to $(x + ys, y)$ for some constant s; shearing can also be applied vertically, mapping $(x, y)$ to $(x, xs + y)$. Fixing or choosing the x-axis determines the y-axis up to direction. Namely, the y-axis is necessarily the perpendicular to the x-axis through the point marked 0 on the x-axis. But there is a choice of which of the two half lines on the perpendicular to designate as positive and which as negative. Each of these two choices determines a different orientation (also called handedness) of the Cartesian plane. The usual way of orienting the plane, with the positive x-axis pointing right and the positive y-axis pointing up (and the x-axis being the "first" and the y-axis the "second" axis), is considered the positive or standard orientation, also called the right-handed orientation. A commonly used mnemonic for defining the positive orientation is the right-hand rule. Placing a somewhat closed right hand on the plane with the thumb pointing up, the fingers point from the x-axis to the y-axis, in a positively oriented coordinate system. The other way of orienting the plane is following the left-hand rule, placing the left hand on the plane with the thumb pointing up. When pointing the thumb away from the origin along an axis towards positive, the curvature of the fingers indicates a positive rotation along that axis. Regardless of the rule used to orient the plane, rotating the coordinate system will preserve the orientation. Switching any one axis will reverse the orientation, but switching both will leave the orientation unchanged. Once the x- and y-axes are specified, they determine the line along which the z-axis should lie, but there are two possible orientations for this line. The two possible coordinate systems which result are called 'right-handed' and 'left-handed'.
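Returning to the affine transformations and augmented matrices discussed above: the augmented-matrix trick is easy to demonstrate in code. The following Python/NumPy sketch is purely illustrative (the helper names rotation and translation are hypothetical); it builds 3×3 augmented matrices and composes them by matrix multiplication.

import numpy as np

def rotation(theta: float) -> np.ndarray:
    """Augmented matrix of a counterclockwise rotation about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def translation(a: float, b: float) -> np.ndarray:
    """Augmented matrix of the translation (x, y) -> (x + a, y + b)."""
    return np.array([[1.0, 0.0, a],
                     [0.0, 1.0, b],
                     [0.0, 0.0, 1.0]])

# Compose: rotate by 90 degrees first, then translate by (1, 2).
# The transformation applied first stands rightmost in the product.
T = translation(1.0, 2.0) @ rotation(np.pi / 2)
p = np.array([1.0, 0.0, 1.0])  # the point (1, 0) in augmented form
print(T @ p)                   # [1. 3. 1.], i.e. the point (1, 3)

Writing points in augmented form makes translation, which is not a linear map, expressible as a matrix product, so an arbitrary chain of affine maps collapses into a single 3×3 matrix.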
The standard orientation, where the xy-plane is horizontal and the z-axis points up (and the x- and the y-axis form a positively oriented two-dimensional coordinate system in the xy-plane if observed from above the xy-plane) is called right-handed or positive. The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative orientation of the x-, y-, and z-axes in a right-handed system. The thumb indicates the x-axis, the index finger the y-axis and the middle finger the z-axis. Conversely, if the same is done with the left hand, a left-handed system results. Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also meant to point towards the observer, whereas the "middle"-axis is meant to point away from the observer. The red circle is parallel to the horizontal xy-plane and indicates rotation from the x-axis to the y-axis (in both cases). Hence the red arrow passes in front of the z-axis. Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as "flipping in and out" between a convex cube and a concave "corner". This corresponds to the two possible orientations of the space. Seeing the figure as convex gives a left-handed coordinate system. Thus the "correct" way to view Figure 8 is to imagine the x-axis as pointing towards the observer and thus seeing a concave corner. A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought of as an arrow pointing from the origin of the coordinate system to the point. If the coordinates represent spatial positions (displacements), it is common to represent the vector from the origin to the point of interest as $\mathbf{r}$. In two dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as $\mathbf{r} = x\mathbf{i} + y\mathbf{j}$, where $\mathbf{i} = \begin{pmatrix}1\\0\end{pmatrix}$ and $\mathbf{j} = \begin{pmatrix}0\\1\end{pmatrix}$ are unit vectors in the direction of the x-axis and y-axis respectively, generally referred to as the standard basis (in some application areas these may also be referred to as versors). Similarly, in three dimensions, the vector from the origin to the point with Cartesian coordinates $(x, y, z)$ can be written as $\mathbf{r} = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$, where $\mathbf{i} = \begin{pmatrix}1\\0\\0\end{pmatrix}$, $\mathbf{j} = \begin{pmatrix}0\\1\\0\end{pmatrix}$, and $\mathbf{k} = \begin{pmatrix}0\\0\\1\end{pmatrix}$. There is no natural interpretation of multiplying vectors to obtain another vector that works in all dimensions; however, there is a way to use complex numbers to provide such a multiplication. In a two-dimensional Cartesian plane, identify the point with coordinates (x, y) with the complex number z = x + iy. Here, i is the imaginary unit and is identified with the point with coordinates (0, 1), so it is not the unit vector in the direction of the x-axis.
Since the complex numbers can be multiplied, giving another complex number, this identification provides a means to "multiply" vectors. In a three-dimensional Cartesian space a similar identification can be made with a subset of the quaternions.
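A minimal Python sketch of this complex-number identification (illustrative only): multiplying by the unit complex number $e^{i\theta}$ rotates a point about the origin, while multiplying by a positive real number scales it.

import cmath

z = complex(1.0, 0.0)             # the point (1, 0)
w = cmath.exp(1j * cmath.pi / 2)  # unit complex number: 90-degree rotation
rotated = w * z                   # complex multiplication rotates the point
print(round(rotated.real, 12), round(rotated.imag, 12))  # 0.0 1.0

scaled = 2.0 * z                  # multiplying by a real number scales
print(scaled.real, scaled.imag)   # 2.0 0.0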
[ { "paragraph_id": 0, "text": "In geometry, a Cartesian coordinate system (UK: /kɑːrˈtiːzjən/, US: /kɑːrˈtiʒən/) in a plane is a coordinate system that specifies each point uniquely by a pair of real numbers called coordinates, which are the signed distances to the point from two fixed perpendicular oriented lines, called coordinate lines, coordinate axes or just axes (plural of axis) of the system. The point where they meet is called the origin and has (0, 0) as coordinates.", "title": "" }, { "paragraph_id": 1, "text": "Similarly, the position of any point in three-dimensional space can be specified by three Cartesian coordinates, which are the signed distances from the point to three mutually perpendicular planes. More generally, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n. These coordinates are the signed distances from the point to n mutually perpendicular fixed hyperplanes.", "title": "" }, { "paragraph_id": 2, "text": "Cartesian coordinates are named for René Descartes, whose invention of them in the 17th century revolutionized mathematics by allowing the expression of problems of geometry in terms of algebra and calculus. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by equations involving the coordinates of points of the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x + y = 4; the area, the perimeter and the tangent line at any point can be computed from this equation by using integrals and derivatives, in a way that can be applied to any curve.", "title": "" }, { "paragraph_id": 3, "text": "Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing.", "title": "" }, { "paragraph_id": 4, "text": "The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637 while he was resident in the Netherlands. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. The French cleric Nicole Oresme used constructions similar to Cartesian coordinates well before the time of Descartes and Fermat.", "title": "History" }, { "paragraph_id": 5, "text": "Both Descartes and Fermat used a single axis in their treatments and have a variable length measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes's work.", "title": "History" }, { "paragraph_id": 6, "text": "The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. 
The two-coordinate description of the plane was later generalized into the concept of vector spaces.", "title": "History" }, { "paragraph_id": 7, "text": "Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space.", "title": "History" }, { "paragraph_id": 8, "text": "", "title": "Description" }, { "paragraph_id": 9, "text": "Choosing a Cartesian coordinate system for a one-dimensional space—that is, for a straight line—involves choosing a point O of the line (the origin), a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is the positive and which is negative; we then say that the line \"is oriented\" (or \"points\") from the negative half towards the positive half. Then each point P of the line can be specified by its distance from O, taken with a + or − sign depending on which half-line contains P.", "title": "Description" }, { "paragraph_id": 10, "text": "A line with a chosen Cartesian system is called a number line. The choice of this Cartesian system induces a bijection between the line and the real numbers.", "title": "Description" }, { "paragraph_id": 11, "text": "", "title": "Description" }, { "paragraph_id": 12, "text": "A Cartesian coordinate system in two dimensions (also called a rectangular coordinate system or an orthogonal coordinate system) is defined by an ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. The point where the axes meet is taken as the origin for both, thus turning each axis into a number line. For any point P, a line is drawn through P perpendicular to each axis, and the position where it meets the axis is interpreted as a number. The two numbers, in that chosen order, are the Cartesian coordinates of P. The reverse construction allows one to determine the point P given its coordinates.", "title": "Description" }, { "paragraph_id": 13, "text": "The first and second coordinates are called the abscissa and the ordinate of P, respectively; and the point where the axes meet is called the origin of the coordinate system. The coordinates are usually written as two numbers in parentheses, in that order, separated by a comma, as in (3, −10.5). Thus the origin has coordinates (0, 0), and the points on the positive half-axes, one unit away from the origin, have coordinates (1, 0) and (0, 1).", "title": "Description" }, { "paragraph_id": 14, "text": "In mathematics, physics, and engineering, the first axis is usually defined or depicted as horizontal and oriented to the right, and the second axis is vertical and oriented upwards. (However, in some computer graphics contexts, the ordinate axis may be oriented downwards.) The origin is often labeled O, and the two coordinates are often denoted by the letters X and Y, or x and y. The axes may then be referred to as the X-axis and Y-axis. The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate unknown values. The first part of the alphabet was used to designate known values.", "title": "Description" }, { "paragraph_id": 15, "text": "A Euclidean plane with a chosen Cartesian coordinate system is called a Cartesian plane. 
In a Cartesian plane, one can define canonical representatives of certain geometric figures, such as the unit circle (with radius equal to the length unit, and center at the origin), the unit square (whose diagonal has endpoints at (0, 0) and (1, 1)), the unit hyperbola, and so on.", "title": "Description" }, { "paragraph_id": 16, "text": "The two axes divide the plane into four right angles, called quadrants. The quadrants may be named or numbered in various ways, but the quadrant where all coordinates are positive is usually called the first quadrant.", "title": "Description" }, { "paragraph_id": 17, "text": "If the coordinates of a point are (x, y), then its distances from the X-axis and from the Y-axis are |y| and |x|, respectively; where | · | denotes the absolute value of a number.", "title": "Description" }, { "paragraph_id": 18, "text": "", "title": "Description" }, { "paragraph_id": 19, "text": "A Cartesian coordinate system for a three-dimensional space consists of an ordered triplet of lines (the axes) that go through a common point (the origin), and are pair-wise perpendicular; an orientation for each axis; and a single unit of length for all three axes. As in the two-dimensional case, each axis becomes a number line. For any point P of space, one considers a hyperplane through P perpendicular to each coordinate axis, and interprets the point where that hyperplane cuts the axis as a number. The Cartesian coordinates of P are those three numbers, in the chosen order. The reverse construction determines the point P given its three coordinates.", "title": "Description" }, { "paragraph_id": 20, "text": "Alternatively, each coordinate of a point P can be taken as the distance from P to the hyperplane defined by the other two axes, with the sign determined by the orientation of the corresponding axis.", "title": "Description" }, { "paragraph_id": 21, "text": "Each pair of axes defines a coordinate hyperplane. These hyperplanes divide space into eight octants. The octants are:", "title": "Description" }, { "paragraph_id": 22, "text": "The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas, as in (3, −2.5, 1) or (t, u + v, π/2). Thus, the origin has coordinates (0, 0, 0), and the unit points on the three axes are (1, 0, 0), (0, 1, 0), and (0, 0, 1).", "title": "Description" }, { "paragraph_id": 23, "text": "There are no standard names for the coordinates in the three axes (however, the terms abscissa, ordinate and applicate are sometimes used). The coordinates are often denoted by the letters X, Y, and Z, or x, y, and z. The axes may then be referred to as the X-axis, Y-axis, and Z-axis, respectively. Then the coordinate hyperplanes can be referred to as the XY-plane, YZ-plane, and XZ-plane.", "title": "Description" }, { "paragraph_id": 24, "text": "In mathematics, physics, and engineering contexts, the first two axes are often defined or depicted as horizontal, with the third axis pointing up. In that case the third coordinate may be called height or altitude. 
The orientation is usually chosen so that the 90 degree angle from the first axis to the second axis looks counter-clockwise when seen from the point (0, 0, 1); a convention that is commonly called the right-hand rule.", "title": "Description" }, { "paragraph_id": 25, "text": "Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product $\\mathbb {R} ^{2}=\\mathbb {R} \\times \\mathbb {R} $, where $\\mathbb {R} $ is the set of all real numbers. In the same way, the points in any Euclidean space of dimension n can be identified with the tuples (lists) of n real numbers; that is, with the Cartesian product $\\mathbb {R} ^{n}$.", "title": "Description" }, { "paragraph_id": 26, "text": "The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. In that case, each coordinate is obtained by projecting the point onto one axis along a direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). In such an oblique coordinate system the computations of distances and angles must be modified from that in standard Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold (see affine plane).", "title": "Description" }, { "paragraph_id": 27, "text": "The Cartesian coordinates of a point are usually written in parentheses and separated by commas, as in (10, 5) or (3, 5, 7). The origin is often labelled with the capital letter O. In analytic geometry, unknown or generic coordinates are often denoted by the letters (x, y) in the plane, and (x, y, z) in three-dimensional space. This custom comes from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the coordinates of points in many geometric problems), and letters near the beginning for given quantities.", "title": "Notations and conventions" }, { "paragraph_id": 28, "text": "These conventional names are often used in other domains, such as physics and engineering, although other letters may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted p and t. Each axis is usually named after the coordinate which is measured along it; so one says the x-axis, the y-axis, the t-axis, etc.", "title": "Notations and conventions" }, { "paragraph_id": 29, "text": "Another common convention for coordinate naming is to use subscripts, as (x1, x2, ..., xn) for the n coordinates in an n-dimensional space, especially when n is greater than 3 or unspecified. Some authors prefer the numbering (x0, x1, ..., xn−1). These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates.", "title": "Notations and conventions" }, { "paragraph_id": 30, "text": "In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is then measured along a vertical axis, usually oriented from bottom to top.
{ "paragraph_id": 31, "text": "Computer graphics and image processing, however, often use a coordinate system with the y-axis oriented downwards on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally stored in display buffers.", "title": "Notations and conventions" }, { "paragraph_id": 32, "text": "For three-dimensional systems, a convention is to portray the xy-plane horizontally, with the z-axis added to represent height (positive up). Furthermore, there is a convention to orient the x-axis toward the viewer, biased either to the right or left. If a diagram (3D projection or 2D perspective drawing) shows the x- and y-axis horizontally and vertically, respectively, then the z-axis should be shown pointing \"out of the page\" towards the viewer or camera. In such a 2D diagram of a 3D coordinate system, the z-axis would appear as a line or ray pointing down and to the left or down and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation of the three axes, as a whole, is arbitrary. However, the orientation of the axes relative to each other should always comply with the right-hand rule, unless specifically stated otherwise. Formulas in physics and mathematics conventionally assume this right-handedness, which ensures consistency.", "title": "Notations and conventions" }, { "paragraph_id": 33, "text": "For 3D diagrams, the names \"abscissa\" and \"ordinate\" are rarely used for x and y, respectively. When they are, the z-coordinate is sometimes called the applicate. The words abscissa, ordinate and applicate are sometimes used to refer to coordinate axes rather than the coordinate values.", "title": "Notations and conventions" }, { "paragraph_id": 34, "text": "The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called quadrants, each bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals: I (where both coordinates are positive), II (where the abscissa is negative and the ordinate is positive), III (where both are negative), and IV (where the abscissa is positive and the ordinate is negative). When the axes are drawn according to the mathematical custom, the numbering goes counter-clockwise starting from the upper right (\"north-east\") quadrant.", "title": "Notations and conventions" }, { "paragraph_id": 35, "text": "Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs; for example, (+ + +) or (− + −). The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant, and a similar naming system applies.", "title": "Notations and conventions" },
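These numbering and sign conventions translate directly into code. A minimal Python sketch, assuming points that lie strictly inside a region (no zero coordinates; the function names are ours):

def quadrant(x, y):
    # Quadrants I-IV are numbered counter-clockwise, starting from the
    # upper-right region where both coordinates are positive.
    if x > 0 and y > 0:
        return "I"
    if x < 0 and y > 0:
        return "II"
    if x < 0 and y < 0:
        return "III"
    return "IV"   # x > 0 and y < 0

def orthant_signs(point):
    # Sign-listing convention; works for quadrants (n = 2),
    # octants (n = 3) and orthants in general.
    return "(" + " ".join("+" if c > 0 else "-" for c in point) + ")"

print(quadrant(3, -2.5))           # IV
print(orthant_signs((-1, 2, -3)))  # (- + -)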
{ "paragraph_id": 36, "text": "The Euclidean distance between two points of the plane with Cartesian coordinates (x₁, y₁) and (x₂, y₂) is d = √((x₂ − x₁)² + (y₂ − y₁)²).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 37, "text": "This is the Cartesian version of Pythagoras's theorem. In three-dimensional space, the distance between points (x₁, y₁, z₁) and (x₂, y₂, z₂) is d = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²),", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 38, "text": "which can be obtained by two consecutive applications of Pythagoras' theorem.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 39, "text": "The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to themselves which preserve distances between points. There are four types of these mappings (also called isometries): translations, rotations, reflections and glide reflections.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 40, "text": "Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding a fixed pair of numbers (a, b) to the Cartesian coordinates of every point in the set. That is, if the original coordinates of a point are (x, y), after the translation they will be (x + a, y + b).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 41, "text": "To rotate a figure counterclockwise around the origin by some angle θ is equivalent to replacing every point with coordinates (x, y) by the point with coordinates (x′, y′), where x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 42, "text": "Thus: (x′, y′) = (x cos θ − y sin θ, x sin θ + y cos θ).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 43, "text": "If (x, y) are the Cartesian coordinates of a point, then (−x, y) are the coordinates of its reflection across the second coordinate axis (the y-axis), as if that line were a mirror. Likewise, (x, −y) are the coordinates of its reflection across the first coordinate axis (the x-axis). More generally, reflection across a line through the origin making an angle θ with the x-axis is equivalent to replacing every point with coordinates (x, y) by the point with coordinates (x′, y′), where x′ = x cos 2θ + y sin 2θ and y′ = x sin 2θ − y cos 2θ.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 44, "text": "Thus: (x′, y′) = (x cos 2θ + y sin 2θ, x sin 2θ − y cos 2θ).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 45, "text": "A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that line. It can be seen that the order of these operations does not matter (the translation can come first, followed by the reflection).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 46, "text": "All affine transformations of the plane can be described in a uniform way by using matrices. For this purpose, the coordinates (x, y) of a point are commonly represented as the column matrix (x, y)ᵀ. The result (x′, y′) of applying an affine transformation to a point (x, y) is given by the formula (x′, y′)ᵀ = A (x, y)ᵀ + b,", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 47, "text": "where A is the 2×2 matrix with rows (a₁₁, a₁₂) and (a₂₁, a₂₂)", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 48, "text": "and b = (b₁, b₂)ᵀ is a column matrix. That is, x′ = a₁₁x + a₁₂y + b₁ and y′ = a₂₁x + a₂₂y + b₂.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 49, "text": "Among the affine transformations, the Euclidean transformations are characterized by the fact that the matrix A is orthogonal; that is, its columns are orthogonal vectors of Euclidean norm one, or, explicitly, a₁₁² + a₂₁² = 1 and a₁₂² + a₂₂² = 1,", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 50, "text": "and a₁₁a₁₂ + a₂₁a₂₂ = 0.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 51, "text": "This is equivalent to saying that A times its transpose is the identity matrix. If these conditions do not hold, the formula describes a more general affine transformation.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 52, "text": "The transformation is a translation if and only if A is the identity matrix. The transformation is a rotation around some point if and only if A is a rotation matrix, meaning that it is orthogonal and its determinant a₁₁a₂₂ − a₁₂a₂₁ equals 1.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 53, "text": "A reflection or glide reflection is obtained when the matrix is orthogonal and its determinant equals −1.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 54, "text": "Assuming that translations are not used (that is, b₁ = b₂ = 0), transformations can be composed by simply multiplying the associated transformation matrices. In the general case, it is useful to use the augmented matrix of the transformation; that is, to rewrite the transformation formula as (x′, y′, 1)ᵀ = A′ (x, y, 1)ᵀ,", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 55, "text": "where A′ is the 3×3 matrix obtained from A by adjoining the column b on the right and the row (0, 0, 1) at the bottom.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 56, "text": "With this trick, the composition of affine transformations is obtained by multiplying the augmented matrices.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 57, "text": "Affine transformations of the Euclidean plane are transformations that map lines to lines, but may change distances and angles. As said in the preceding section, they can be represented with augmented matrices.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 58, "text": "The Euclidean transformations are the affine transformations whose 2×2 matrix A is orthogonal.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 59, "text": "The augmented matrix that represents the composition of two affine transformations is obtained by multiplying their augmented matrices.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 60, "text": "Some affine transformations that are not Euclidean transformations have received specific names.", "title": "Cartesian formulae for the plane" },
{ "paragraph_id": 61, "text": "An example of an affine transformation which is not Euclidean is given by scaling. To make a figure larger or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number m. If (x, y) are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates (mx, my).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 62, "text": "If m is greater than 1, the figure becomes larger; if m is between 0 and 1, it becomes smaller.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 63, "text": "A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing with factor s is defined by (x, y) → (x + sy, y), which displaces each point horizontally by an amount proportional to its y-coordinate.", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 64, "text": "Shearing can also be applied vertically, as (x, y) → (x, y + sx).", "title": "Cartesian formulae for the plane" }, { "paragraph_id": 65, "text": "Fixing or choosing the x-axis determines the y-axis up to direction. Namely, the y-axis is necessarily the perpendicular to the x-axis through the point marked 0 on the x-axis. But there is a choice of which of the two half lines on the perpendicular to designate as positive and which as negative. Each of these two choices determines a different orientation (also called handedness) of the Cartesian plane.", "title": "Orientation and handedness" }, { "paragraph_id": 66, "text": "The usual way of orienting the plane, with the positive x-axis pointing right and the positive y-axis pointing up (and the x-axis being the \"first\" and the y-axis the \"second\" axis), is considered the positive or standard orientation, also called the right-handed orientation.", "title": "Orientation and handedness" }, { "paragraph_id": 67, "text": "A commonly used mnemonic for defining the positive orientation is the right-hand rule. Placing a somewhat closed right hand on the plane with the thumb pointing up, the fingers point from the x-axis to the y-axis, in a positively oriented coordinate system.", "title": "Orientation and handedness" }, { "paragraph_id": 68, "text": "The other way of orienting the plane is following the left-hand rule, placing the left hand on the plane with the thumb pointing up.", "title": "Orientation and handedness" }, { "paragraph_id": 69, "text": "When pointing the thumb away from the origin along an axis in the positive direction, the curvature of the fingers indicates a positive rotation along that axis.", "title": "Orientation and handedness" }, { "paragraph_id": 70, "text": "Regardless of the rule used to orient the plane, rotating the coordinate system will preserve the orientation. Switching any one axis will reverse the orientation, but switching both will leave the orientation unchanged.", "title": "Orientation and handedness" }, { "paragraph_id": 71, "text": "Once the x- and y-axes are specified, they determine the line along which the z-axis should lie, but there are two possible orientations for this line. The two possible coordinate systems which result are called 'right-handed' and 'left-handed'. The standard orientation, where the xy-plane is horizontal and the z-axis points up (and the x- and the y-axis form a positively oriented two-dimensional coordinate system in the xy-plane if observed from above the xy-plane) is called right-handed or positive.", "title": "Orientation and handedness" }, { "paragraph_id": 72, "text": "The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative orientation of the x-, y-, and z-axes in a right-handed system. The thumb indicates the x-axis, the index finger the y-axis and the middle finger the z-axis. Conversely, if the same is done with the left hand, a left-handed system results.", "title": "Orientation and handedness" },
{ "paragraph_id": 73, "text": "Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also meant to point towards the observer, whereas the \"middle\"-axis is meant to point away from the observer. The red circle is parallel to the horizontal xy-plane and indicates rotation from the x-axis to the y-axis (in both cases). Hence the red arrow passes in front of the z-axis.", "title": "Orientation and handedness" }, { "paragraph_id": 74, "text": "Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as \"flipping in and out\" between a convex cube and a concave \"corner\". This corresponds to the two possible orientations of the space. Seeing the figure as convex gives a left-handed coordinate system. Thus the \"correct\" way to view Figure 8 is to imagine the x-axis as pointing towards the observer and thus seeing a concave corner.", "title": "Orientation and handedness" }, { "paragraph_id": 75, "text": "A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought of as an arrow pointing from the origin of the coordinate system to the point. If the coordinates represent spatial positions (displacements), it is common to represent the vector from the origin to the point of interest as r. In two dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as r = x i + y j,", "title": "Representing a vector in the standard basis" }, { "paragraph_id": 76, "text": "where i = (1, 0)ᵀ and j = (0, 1)ᵀ are unit vectors in the direction of the x-axis and y-axis respectively, generally referred to as the standard basis (in some application areas these may also be referred to as versors). Similarly, in three dimensions, the vector from the origin to the point with Cartesian coordinates (x, y, z) can be written as r = x i + y j + z k,", "title": "Representing a vector in the standard basis" }, { "paragraph_id": 77, "text": "where i = (1, 0, 0)ᵀ, j = (0, 1, 0)ᵀ, and k = (0, 0, 1)ᵀ.", "title": "Representing a vector in the standard basis" }, { "paragraph_id": 78, "text": "There is no natural interpretation of multiplying vectors to obtain another vector that works in all dimensions; however, there is a way to use complex numbers to provide such a multiplication. In a two-dimensional Cartesian plane, identify the point with coordinates (x, y) with the complex number z = x + iy. Here, i is the imaginary unit and is identified with the point with coordinates (0, 1), so it is not the unit vector in the direction of the x-axis.
Since the complex numbers can be multiplied giving another complex number, this identification provides a means to \"multiply\" vectors. In a three-dimensional Cartesian space a similar identification can be made with a subset of the quaternions.", "title": "Representing a vector in the standard basis" } ]
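Python's built-in complex type implements exactly this identification, so the vector "multiplication" can be demonstrated directly (a small illustrative sketch; the helper names are ours):

def to_complex(point):
    # Identify the point (x, y) with the complex number x + iy.
    x, y = point
    return complex(x, y)

def to_point(z):
    return (z.real, z.imag)

# Multiplying by i = (0, 1) rotates a vector a quarter turn
# counter-clockwise, since i(x + iy) = -y + ix:
p = (3.0, 1.0)
print(to_point(1j * to_complex(p)))    # (-1.0, 3.0)

# In general the product multiplies lengths and adds the angles the
# factors make with the positive x-axis:
print(to_point(to_complex((0.0, 2.0)) * to_complex((0.0, 3.0))))   # (-6.0, 0.0)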
In geometry, a Cartesian coordinate system in a plane is a coordinate system that specifies each point uniquely by a pair of real numbers called coordinates, which are the signed distances to the point from two fixed perpendicular oriented lines, called coordinate lines, coordinate axes or just axes of the system. The point where they meet is called the origin and has (0, 0) as coordinates. Similarly, the position of any point in three-dimensional space can be specified by three Cartesian coordinates, which are the signed distances from the point to three mutually perpendicular planes. More generally, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n. These coordinates are the signed distances from the point to n mutually perpendicular fixed hyperplanes. Cartesian coordinates are named for René Descartes, whose invention of them in the 17th century revolutionized mathematics by allowing the expression of problems of geometry in terms of algebra and calculus. Using the Cartesian coordinate system, geometric shapes can be described by equations involving the coordinates of points of the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4; the area, the perimeter and the tangent line at any point can be computed from this equation by using integrals and derivatives, in a way that can be applied to any curve. Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing.
2002-01-10T18:08:12Z
2023-12-20T14:07:27Z
[ "Template:Cite book", "Template:Mathworld", "Template:Use dmy dates", "Template:IPAc-en", "Template:Harvnb", "Template:Main", "Template:See also", "Template:Clear", "Template:Reflist", "Template:Short description", "Template:Math", "Template:Citation needed", "Template:Abs", "Template:Cite web", "Template:Orthogonal coordinate systems", "Template:Nowrap", "Template:Anchor", "Template:Vanchor", "Template:Further", "Template:Citation", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Cartesian_coordinate_system
7,708
Commandant of the United States Marine Corps
The commandant of the Marine Corps (CMC) is normally the highest-ranking officer in the United States Marine Corps. It is a four-star general position and a member of the Joint Chiefs of Staff. The CMC reports directly to the secretary of the Navy and is responsible for ensuring the organization, policy, plans, and programs for the Marine Corps as well as advising the president, the secretary of defense, the National Security Council, the Homeland Security Council, and the secretary of the Navy on matters involving the Marine Corps. Under the authority of the secretary of the Navy, the CMC designates Marine personnel and resources to the commanders of unified combatant commands. The commandant performs all other functions prescribed in Section 8043 in Title 10 of the United States Code or delegates those duties and responsibilities to other officers in his administration in his name. As with the other joint chiefs, the commandant is an administrative position and has no operational command authority over United States Marine Corps forces. The commandant is nominated for appointment by the president, for a four-year term of office, and must be confirmed by the Senate. The commandant can be reappointed to serve one additional term, but only during times of war or national emergency declared by Congress. By statute, the commandant is appointed as a four-star general while serving in office. "The commandant is directly responsible to the Secretary of the Navy for the total performance of the Marine Corps. This includes the administration, discipline, internal organization, training, requirements, efficiency, and readiness of the service. The Commandant is also responsible for the operation of the Marine Corps material support system." Since 1806, the official residence of the commandant has been located in the Marine Barracks in Washington, D.C., and his main offices are in Arlington County, Virginia. The 39th and current commandant is General Eric M. Smith; due to a heart attack, assistant commandant Christopher J. Mahoney is currently performing Smith's duties as commandant. The responsibilities of the commandant are outlined in Title 10, Section 5043 of the United States Code, and the position is "subject to the authority, direction, and control of the Secretary of the Navy". As stated in the U.S. Code, the commandant "shall preside over the Headquarters, Marine Corps, transmit the plans and recommendations of the Headquarters, Marine Corps, to the Secretary and advise the Secretary with regard to such plans and recommendations, after approval of the plans or recommendations of the Headquarters, Marine Corps, by the Secretary, act as the agent of the Secretary in carrying them into effect, exercise supervision, consistent with the authority assigned to commanders of unified or specified combatant commands under chapter 6 of this title, over such of the members and organizations of the Marine Corps and the Navy as the Secretary determines, perform the duties prescribed for him by section 171 of this title and other provisions of law and perform such other military duties, not otherwise assigned by law, as are assigned to him by the President, the Secretary of Defense, or the Secretary of the Navy". Thirty-nine men have served as the commandant of the Marine Corps. The first commandant was Samuel Nicholas, who took office as a captain, though there was no office titled "Commandant" at the time, and the Second Continental Congress had authorized that the senior-most Marine could take a rank up to Colonel.
The longest-serving was Archibald Henderson, sometimes referred to as the "Grand old man of the Marine Corps" due to his thirty-nine-year tenure. In the history of the United States Marine Corps, only one Commandant has ever been fired from the job: Anthony Gale, as a result of a court-martial in 1820.
[ { "paragraph_id": 0, "text": "The commandant of the Marine Corps (CMC) is normally the highest-ranking officer in the United States Marine Corps. It is a four-star general position and a member of the Joint Chiefs of Staff. The CMC reports directly to the secretary of the Navy and is responsible for ensuring the organization, policy, plans, and programs for the Marine Corps as well as advising the president, the secretary of defense, the National Security Council, the Homeland Security Council, and the secretary of the Navy on matters involving the Marine Corps. Under the authority of the secretary of the Navy, the CMC designates Marine personnel and resources to the commanders of unified combatant commands. The commandant performs all other functions prescribed in Section 8043 in Title 10 of the United States Code or delegates those duties and responsibilities to other officers in his administration in his name. As with the other joint chiefs, the commandant is an administrative position and has no operational command authority over United States Marine Corps forces.", "title": "" }, { "paragraph_id": 1, "text": "The commandant is nominated for appointment by the president, for a four-year term of office, and must be confirmed by the Senate. The commandant can be reappointed to serve one additional term, but only during times of war or national emergency declared by Congress. By statute, the commandant is appointed as a four-star general while serving in office. \"The commandant is directly responsible to the Secretary of the Navy for the total performance of the Marine Corps. This includes the administration, discipline, internal organization, training, requirements, efficiency, and readiness of the service. The Commandant is also responsible for the operation of the Marine Corps material support system.\" Since 1806, the official residence of the commandant has been located in the Marine Barracks in Washington, D.C., and his main offices are in Arlington County, Virginia.", "title": "" }, { "paragraph_id": 2, "text": "The 39th and current commandant is General Eric M. Smith; due to a heart attack, assistant commandant Christopher J. Mahoney is currently performing Smith's duties as commandant.", "title": "" }, { "paragraph_id": 3, "text": "The responsibilities of the commandant are outlined in Title 10, Section 5043, the United States Code and the position is \"subject to the authority, direction, and control of the Secretary of the Navy\". As stated in the U.S. Code, the commandant \"shall preside over the Headquarters, Marine Corps, transmit the plans and recommendations of the Headquarters, Marine Corps, to the Secretary and advise the Secretary with regard to such plans and recommendations, after approval of the plans or recommendations of the Headquarters, Marine Corps, by the Secretary, act as the agent of the Secretary in carrying them into effect, exercise supervision, consistent with the authority assigned to commanders of unified or specified combatant commands under chapter 6 of this title, over such of the members and organizations of the Marine Corps and the Navy as the Secretary determines, perform the duties prescribed for him by section 171 of this title and other provisions of law and perform such other military duties, not otherwise assigned by law, as are assigned to him by the President, the Secretary of Defense, or the Secretary of the Navy\".", "title": "Responsibilities" }, { "paragraph_id": 4, "text": "39 men have served as the commandant of the Marine Corps. 
The first commandant was Samuel Nicholas, who took office as a captain, though there was no office titled \"Commandant\" at the time, and the Second Continental Congress had authorized that the senior-most Marine could take a rank up to Colonel. The longest-serving was Archibald Henderson, sometimes referred to as the \"Grand old man of the Marine Corps\" due to his thirty-nine-year tenure. In the history of the United States Marine Corps, only one Commandant has ever been fired from the job: Anthony Gale, as a result of a court-martial in 1820.", "title": "List of commandants" }, { "paragraph_id": 5, "text": "", "title": "External links" } ]
The commandant of the Marine Corps (CMC) is normally the highest-ranking officer in the United States Marine Corps. It is a four-star general position and a member of the Joint Chiefs of Staff. The CMC reports directly to the secretary of the Navy and is responsible for ensuring the organization, policy, plans, and programs for the Marine Corps as well as advising the president, the secretary of defense, the National Security Council, the Homeland Security Council, and the secretary of the Navy on matters involving the Marine Corps. Under the authority of the secretary of the Navy, the CMC designates Marine personnel and resources to the commanders of unified combatant commands. The commandant performs all other functions prescribed in Section 8043 in Title 10 of the United States Code or delegates those duties and responsibilities to other officers in his administration in his name. As with the other joint chiefs, the commandant is an administrative position and has no operational command authority over United States Marine Corps forces. The commandant is nominated for appointment by the president, for a four-year term of office, and must be confirmed by the Senate. The commandant can be reappointed to serve one additional term, but only during times of war or national emergency declared by Congress. By statute, the commandant is appointed as a four-star general while serving in office. "The commandant is directly responsible to the Secretary of the Navy for the total performance of the Marine Corps. This includes the administration, discipline, internal organization, training, requirements, efficiency, and readiness of the service. The Commandant is also responsible for the operation of the Marine Corps material support system." Since 1806, the official residence of the commandant has been located in the Marine Barracks in Washington, D.C., and his main offices are in Arlington County, Virginia. The 39th and current commandant is General Eric M. Smith; due to a heart attack, assistant commandant Christopher J. Mahoney is currently performing Smith's duties as commandant.
2002-01-10T23:41:12Z
2023-12-31T22:50:59Z
[ "Template:Small", "Template:UnitedStatesCode", "Template:Cite web", "Template:Official website", "Template:US Marine Corps navbox", "Template:Current JCS members", "Template:Current US Department of Defense Secretaries", "Template:Use dmy dates", "Template:US military navbox", "Template:Featured list", "Template:Short description", "Template:Officeholder table", "Template:Ayd", "Template:Commons category", "Template:CMC", "Template:Officeholder table start", "Template:Infobox official post", "Template:Reflist", "Template:Cite book", "Template:Marine Corps", "Template:Use American English" ]
https://en.wikipedia.org/wiki/Commandant_of_the_United_States_Marine_Corps
7,710
California Department of Transportation
The California Department of Transportation (Caltrans) is an executive department of the U.S. state of California. The department is part of the cabinet-level California State Transportation Agency (CalSTA). Caltrans is headquartered in Sacramento. Caltrans manages the state's highway system, which includes the California Freeway and Expressway System, supports public transportation systems throughout the state and provides funding and oversight for three state-supported Amtrak intercity rail routes (Capitol Corridor, Pacific Surfliner and San Joaquins), which are collectively branded as Amtrak California. In 2015, Caltrans released a new mission statement: "Provide a safe, sustainable, integrated and efficient transportation system to enhance California's economy and livability." The earliest predecessor of Caltrans was the Bureau of Highways, which was created by the California Legislature and signed into law by Governor James Budd in 1895. This agency consisted of three commissioners who were charged with analyzing the roads of the state and making recommendations for their improvement. At the time, there was no state highway system, since roads were purely a local responsibility. California's roads consisted of crude dirt roads maintained by county governments, as well as some paved streets in certain cities, and this ad hoc system was no longer adequate for the needs of the state's rapidly growing population. After the commissioners submitted their report to the governor on November 25, 1896, the legislature replaced the Bureau with the Department of Highways. Due to the state's weak fiscal condition and corrupt politics, little progress was made until 1907, when the legislature replaced the Department of Highways with the Department of Engineering, within which there was a Division of Highways. California voters approved an $18 million bond issue for the construction of a state highway system in 1910, and the first California Highway Commission was convened in 1911. On August 7, 1912, the department broke ground on its first construction project, the section of El Camino Real between South San Francisco and Burlingame, which later became part of California State Route 82. The year 1912 also saw the founding of the Transportation Laboratory and the creation of seven administrative divisions, which are the predecessors of the 12 district offices in use as of 2018. In 1913, the California State Legislature began requiring vehicle registration and allocated the resulting funds to support regular highway maintenance, which began the next year. In 1921, the state legislature turned the Department of Engineering into the Department of Public Works, which continued to have a Division of Highways. That same year, three additional divisions (now districts) were created, in Stockton, Bishop, and San Bernardino. In 1933, the state legislature enacted an amendment to the State Highway Classification Act of 1927, which added over 6,700 miles of county roads to the state highway system. To help manage all the additional work created by this massive expansion, an eleventh district office was founded that year in San Diego. The enactment of the Collier–Burns Highway Act of 1947 after "a lengthy and bitter legislative battle" was a watershed moment in Caltrans history.
The act "placed California highway's program on a sound financial basis" by doubling vehicle registration fees and raising gasoline and diesel fuel taxes from 3 cents to 4.5 cents per gallon. All these taxes were again raised further in 1953 and 1963. The state also obtained extensive federal funding from the Federal-Aid Highway Act of 1956 for the construction of its portion of the Interstate Highway System. Over the next two decades after Collier-Burns, the state "embarked on a massive highway construction program" in which nearly all of the now-extant state highway system was either constructed or upgraded. In hindsight, the period from 1940 to 1969 can be characterized as the "Golden Age" of California's state highway construction program. The history of Caltrans and its predecessor agencies during the 20th century was marked by many firsts. It was one of the first agencies in the United States to paint centerlines on highways statewide; the first to build a freeway west of the Mississippi River; the first to build a four-level stack interchange; the first to develop and deploy non-reflective raised pavement markers, better known as Botts' dots; and one of the first to implement dedicated freeway-to-freeway connector ramps for high-occupancy vehicle lanes. In 1967, Governor Ronald Reagan formed a Task Force Committee on Transportation to study the state transportation system and recommend major reforms. One of the proposals of the task force was the creation of a State Transportation Board as a permanent advisory board on state transportation policy; the board would later merge into the California Transportation Commission in 1978. In September 1971, the State Transportation Board proposed the creation of a state department of transportation charged with responsibility "for performing and integrating transportation planning for all modes." Governor Reagan mentioned this proposal in his 1972 State of the State address, and Assemblyman Wadie P. Deddeh introduced Assembly Bill 69 to that effect, which was duly passed by the state legislature and signed into law by Reagan later that same year. AB 69 merged three existing departments to create the Department of Transportation, of which the most important was the Department of Public Works and its Division of Highways. The California Department of Transportation began official operations on July 1, 1973. The new agency was organized into six divisions: Highways, Mass Transportation, Aeronautics, Transportation Planning, Legal, and Administrative Services. Caltrans went through a difficult period of transformation during the 1970s, as its institutional focus shifted from highway construction to highway maintenance. The agency was forced to contend with declining revenues, increasing construction and maintenance costs (especially the skyrocketing cost of maintaining the vast highway system built over the past three prior decades), widespread freeway revolts, and new environmental laws. In 1970, the enactment of the National Environmental Policy Act and the California Environmental Quality Act forced Caltrans to devote significant time, money, people, and other resources to confronting issues such as "air and water quality, hazardous waste, archaeology, historic preservation, and noise abatement." The devastating 1971 San Fernando earthquake compelled the agency to recognize that its existing design standards had not adequately accounted for earthquake stress and that numerous existing structures needed expensive seismic retrofitting. 
Maintenance and construction costs grew at twice the inflation rate in this era of high inflation; the reluctance of one governor after another to raise fuel taxes in accordance with inflation meant that California ranked dead last in the United States in per capita transportation spending by 1983. During the 1980s and 1990s, Caltrans concentrated on "the upgrading, rehabilitation, and maintenance of the existing system," plus occasional gap closure and realignment projects. In 2023, Caltrans demoted one of its top officials, Jeanie Ward-Waller, because she objected to highway expansions and alleged that permits for those expansions improperly understated their adverse environmental impact. For administrative purposes, Caltrans divides the State of California into 12 districts, supervised by district offices. Most districts cover multiple counties; District 12 (Orange County) is the only district with one county. The largest districts by population are District 4 (San Francisco Bay Area) and District 7 (Los Angeles and Ventura counties). Like many state agencies, Caltrans maintains its headquarters in Sacramento, which is covered by District 3.
[ { "paragraph_id": 0, "text": "The California Department of Transportation (Caltrans) is an executive department of the U.S. state of California. The department is part of the cabinet-level California State Transportation Agency (CalSTA). Caltrans is headquartered in Sacramento.", "title": "" }, { "paragraph_id": 1, "text": "Caltrans manages the state's highway system, which includes the California Freeway and Expressway System, supports public transportation systems throughout the state and provides funding and oversight for three state-supported Amtrak intercity rail routes (Capitol Corridor, Pacific Surfliner and San Joaquins) which are collectively branded as Amtrak California.", "title": "" }, { "paragraph_id": 2, "text": "In 2015, Caltrans released a new mission statement: \"Provide a safe, sustainable, integrated and efficient transportation system to enhance California's economy and livability.\"", "title": "" }, { "paragraph_id": 3, "text": "The earliest predecessor of Caltrans was the Bureau of Highways, which was created by the California Legislature and signed into law by Governor James Budd in 1895. This agency consisted of three commissioners who were charged with analyzing the roads of the state and making recommendations for their improvement. At the time, there was no state highway system, since roads were purely a local responsibility. California's roads consisted of crude dirt roads maintained by county governments, as well as some paved streets in certain cities, and this ad hoc system was no longer adequate for the needs of the state's rapidly growing population. After the commissioners submitted their report to the governor on November 25, 1896, the legislature replaced the Bureau with the Department of Highways.", "title": "History" }, { "paragraph_id": 4, "text": "Due to the state's weak fiscal condition and corrupt politics, little progress was made until 1907, when the legislature replaced the Department of Highways with the Department of Engineering, within which there was a Division of Highways. California voters approved an $18 million bond issue for the construction of a state highway system in 1910, and the first California Highway Commission was convened in 1911. On August 7, 1912, the department broke ground on its first construction project, the section of El Camino Real between South San Francisco and Burlingame, which later became part of California State Route 82. The year 1912 also saw the founding of the Transportation Laboratory and the creation of seven administrative divisions, which are the predecessors of the 12 district offices in use as of 2018. The original seven division headquarters were located in:", "title": "History" }, { "paragraph_id": 5, "text": "In 1913, the California State Legislature began requiring vehicle registration and allocated the resulting funds to support regular highway maintenance, which began the next year.", "title": "History" }, { "paragraph_id": 6, "text": "In 1921, the state legislature turned the Department of Engineering into the Department of Public Works, which continued to have a Division of Highways. That same year, three additional divisions (now districts) were created, in Stockton, Bishop, and San Bernardino.", "title": "History" }, { "paragraph_id": 7, "text": "In 1933, the state legislature enacted an amendment to the State Highway Classification Act of 1927, which added over 6,700 miles of county roads to the state highway system. 
To help manage all the additional work created by this massive expansion, an eleventh district office was founded that year in San Diego.", "title": "History" }, { "paragraph_id": 8, "text": "The enactment of the Collier–Burns Highway Act of 1947 after \"a lengthy and bitter legislative battle\" was a watershed moment in Caltrans history. The act \"placed California highway's program on a sound financial basis\" by doubling vehicle registration fees and raising gasoline and diesel fuel taxes from 3 cents to 4.5 cents per gallon. All these taxes were again raised further in 1953 and 1963. The state also obtained extensive federal funding from the Federal-Aid Highway Act of 1956 for the construction of its portion of the Interstate Highway System. Over the next two decades after Collier-Burns, the state \"embarked on a massive highway construction program\" in which nearly all of the now-extant state highway system was either constructed or upgraded. In hindsight, the period from 1940 to 1969 can be characterized as the \"Golden Age\" of California's state highway construction program.", "title": "History" }, { "paragraph_id": 9, "text": "The history of Caltrans and its predecessor agencies during the 20th century was marked by many firsts. It was one of the first agencies in the United States to paint centerlines on highways statewide; the first to build a freeway west of the Mississippi River; the first to build a four-level stack interchange; the first to develop and deploy non-reflective raised pavement markers, better known as Botts' dots; and one of the first to implement dedicated freeway-to-freeway connector ramps for high-occupancy vehicle lanes.", "title": "History" }, { "paragraph_id": 10, "text": "In 1967, Governor Ronald Reagan formed a Task Force Committee on Transportation to study the state transportation system and recommend major reforms. One of the proposals of the task force was the creation of a State Transportation Board as a permanent advisory board on state transportation policy; the board would later merge into the California Transportation Commission in 1978. In September 1971, the State Transportation Board proposed the creation of a state department of transportation charged with responsibility \"for performing and integrating transportation planning for all modes.\" Governor Reagan mentioned this proposal in his 1972 State of the State address, and Assemblyman Wadie P. Deddeh introduced Assembly Bill 69 to that effect, which was duly passed by the state legislature and signed into law by Reagan later that same year. AB 69 merged three existing departments to create the Department of Transportation, of which the most important was the Department of Public Works and its Division of Highways. The California Department of Transportation began official operations on July 1, 1973. The new agency was organized into six divisions: Highways, Mass Transportation, Aeronautics, Transportation Planning, Legal, and Administrative Services.", "title": "History" }, { "paragraph_id": 11, "text": "Caltrans went through a difficult period of transformation during the 1970s, as its institutional focus shifted from highway construction to highway maintenance. The agency was forced to contend with declining revenues, increasing construction and maintenance costs (especially the skyrocketing cost of maintaining the vast highway system built over the past three prior decades), widespread freeway revolts, and new environmental laws. 
In 1970, the enactment of the National Environmental Policy Act and the California Environmental Quality Act forced Caltrans to devote significant time, money, people, and other resources to confronting issues such as \"air and water quality, hazardous waste, archaeology, historic preservation, and noise abatement.\" The devastating 1971 San Fernando earthquake compelled the agency to recognize that its existing design standards had not adequately accounted for earthquake stress and that numerous existing structures needed expensive seismic retrofitting. Maintenance and construction costs grew at twice the inflation rate in this era of high inflation; the reluctance of one governor after another to raise fuel taxes in accordance with inflation meant that California ranked dead last in the United States in per capita transportation spending by 1983. During the 1980s and 1990s, Caltrans concentrated on \"the upgrading, rehabilitation, and maintenance of the existing system,\" plus occasional gap closure and realignment projects.", "title": "History" }, { "paragraph_id": 12, "text": "In 2023, Caltrans demoted one of its top officials, Jeanie Ward-Waller, because she objected to highway expansions and alleged that permits for those expansions improperly understated their adverse environmental impact.", "title": "History" }, { "paragraph_id": 13, "text": "For administrative purposes, Caltrans divides the State of California into 12 districts, supervised by district offices. Most districts cover multiple counties; District 12 (Orange County) is the only district with one county. The largest districts by population are District 4 (San Francisco Bay Area) and District 7 (Los Angeles and Ventura counties). Like many state agencies, Caltrans maintains its headquarters in Sacramento, which is covered by District 3.", "title": "Administration" } ]
The California Department of Transportation (Caltrans) is an executive department of the U.S. state of California. The department is part of the cabinet-level California State Transportation Agency (CalSTA). Caltrans is headquartered in Sacramento. Caltrans manages the state's highway system, which includes the California Freeway and Expressway System, supports public transportation systems throughout the state and provides funding and oversight for three state-supported Amtrak intercity rail routes which are collectively branded as Amtrak California. In 2015, Caltrans released a new mission statement: "Provide a safe, sustainable, integrated and efficient transportation system to enhance California's economy and livability."
2002-01-11T03:28:01Z
2023-12-28T19:08:08Z
[ "Template:Short description", "Template:Redirect-distinguish", "Template:Infobox government agency", "Template:Asof", "Template:Reflist", "Template:Cite web", "Template:Official website", "Template:U.S. State Departments of Transportation", "Template:Authority control", "Template:Portal", "Template:Cite journal", "Template:Commons category", "Template:California state agencies" ]
https://en.wikipedia.org/wiki/California_Department_of_Transportation
7,712
Continuation War
The Continuation War, also known as the Second Soviet-Finnish War, was a conflict fought by Finland and Nazi Germany against the Soviet Union during World War II. It began on 25 June 1941 with Soviet air raids and a Finnish declaration of war, followed by a Finnish invasion, and ended on 19 September 1944 with the Moscow Armistice. The Soviet Union and Finland had previously fought the Winter War from 1939 to 1940, which ended in the Moscow Peace Treaty after the Soviet failure to conquer Finland. Numerous reasons have been proposed for the Finnish decision to invade, with regaining territory lost during the Winter War regarded as the most common. Other justifications for the conflict include Finnish President Risto Ryti's vision of a Greater Finland and Commander-in-Chief Carl Gustaf Emil Mannerheim's desire to annex East Karelia. On 22 June 1941, the Axis invaded the Soviet Union. Three days later, the Soviet Union conducted air raids on Finnish cities, which prompted Finland to declare war and allow German troops in Finland to begin offensive warfare. By September 1941, Finland had regained its post–Winter War concessions to the Soviet Union in Karelia. The Finnish Army continued its offensive past the 1939 border during the invasion of East Karelia and halted it only around 30–32 km (19–20 mi) from the centre of Leningrad. It participated in besieging the city by cutting the northern supply routes and by digging in until 1944. In Lapland, joint German-Finnish forces failed to capture Murmansk or to cut the Kirov (Murmansk) Railway. The Soviet Vyborg–Petrozavodsk Offensive between June and August 1944 drove the Finns from most of the territories that they had gained during the war, but the Finnish Army halted the offensive in August 1944. Hostilities between Finland and the USSR ceased in September 1944 with the signing of the Moscow Armistice, in which Finland restored its borders per the 1940 Moscow Peace Treaty and additionally ceded Petsamo and leased the Porkkala Peninsula to the Soviets. Furthermore, Finland was required to pay war reparations to the Soviet Union, accept partial responsibility for the war, and acknowledge that it had been a German ally. Finland was also required by the agreement to expel German troops from Finnish territory, which led to the Lapland War between Finland and Germany. On 23 August 1939, the Soviet Union and Germany signed the Molotov–Ribbentrop Pact, in which both parties agreed to divide the independent countries of Finland, Estonia, Latvia, Lithuania, Poland, and Romania into spheres of interest, with Finland falling within the Soviet sphere. One week later, Germany invaded Poland, leading to the United Kingdom and France declaring war on Germany. The Soviet Union invaded eastern Poland on 17 September. The Soviet government turned its attention to the Baltic states of Estonia, Latvia, and Lithuania, demanding that they allow Soviet military bases to be established and troops stationed on their soil. The Baltic governments acquiesced to these demands and signed agreements in September and October. In October 1939, the Soviet Union attempted to negotiate with Finland to cede Finnish territory on the Karelian Isthmus and the islands of the Gulf of Finland, and to establish a Soviet military base near the Finnish capital of Helsinki. The Finnish government refused, and the Red Army invaded Finland on 30 November 1939. The same day of the invasion, Field Marshal C. G. E.
Mannerheim, who was chairman of Finland's Defence Council at the time, assumed the position of Commander-in-Chief of the Finnish Defence Forces. The USSR was expelled from the League of Nations and was condemned by the international community for the illegal attack. Foreign support for Finland was promised, but very little actual help materialised, except from Sweden. The Moscow Peace Treaty concluded the 105-day Winter War on 13 March 1940 and started the Interim Peace. By the terms of the treaty, Finland ceded 9% of its national territory and 13% of its economic capacity to the Soviet Union. Some 420,000 evacuees were resettled from the ceded territories. Finland avoided total conquest of the country by the Soviet Union and retained its sovereignty. Prior to the war, Finnish foreign policy had been based on multilateral guarantees of support from the League of Nations and Nordic countries, but this policy was considered a failure. After the war, Finnish public opinion favoured the reconquest of Finnish Karelia. The government declared national defence to be its first priority, and military expenditure rose to nearly half of public spending. Finland both received donations and purchased war materiel during and immediately after the Winter War. Likewise, the Finnish leadership wanted to preserve the spirit of unanimity that was felt throughout the country during the Winter War. The divisive White Guard tradition of the Finnish Civil War's 16 May victory-day celebration was therefore discontinued. The Soviet Union had received the Hanko Naval Base, on Finland's southern coast near the capital Helsinki, where it deployed over 30,000 Soviet military personnel. Relations between Finland and the Soviet Union remained strained after the signing of the one-sided peace treaty, and there were disputes regarding the implementation of the treaty. Finland sought security against further territorial depredations by the USSR and proposed mutual defence agreements with Norway and Sweden, but these initiatives were quashed by Moscow. After the Winter War, Germany was viewed with distrust by the Finns, as it was considered an ally of the Soviet Union. Nonetheless, the Finnish government sought to restore diplomatic relations with Germany, but also continued its Western-orientated policy and negotiated a war trade agreement with the United Kingdom. The agreement was renounced after the German invasion of Denmark and Norway on 9 April 1940 resulted in the UK cutting all trade and traffic communications with the Nordic countries. With the fall of France, a Western orientation was no longer considered a viable option in Finnish foreign policy. On 15 and 16 June, the Soviet Union occupied the Baltic states almost without any resistance and Soviet puppet regimes were installed. Within two months, Estonia, Latvia and Lithuania were incorporated into the USSR and by mid-1940, the two remaining northern democracies, Finland and Sweden, were encircled by the hostile states of Germany and the Soviet Union. On 23 June, shortly after the Soviet occupation of the Baltic states began, Soviet Foreign Minister Vyacheslav Molotov contacted the Finnish government to demand that a mining licence be issued to the Soviet Union for the nickel mines in Petsamo or, alternatively, that permission be granted for the establishment of a joint Soviet-Finnish company to operate there. A licence to mine the deposit had already been granted to a British-Canadian company and so the demand was rejected by Finland.
The following month, the Soviets demanded that Finland destroy the fortifications on the Åland Islands and grant the Soviets the right to use Finnish railways to transport Soviet troops to the newly acquired Soviet base at Hanko. The Finns very reluctantly agreed to those demands. On 24 July, Molotov accused the Finnish government of persecuting the communist Finland–Soviet Union Peace and Friendship Society and soon afterward publicly declared support for the group. The society organised demonstrations in Finland, some of which turned into riots. Russian-language sources from the post-Soviet era, such as the study Stalin's Missed Chance, maintain that Soviet policies leading up to the Continuation War were best explained as defensive measures by offensive means. The Soviet division of occupied Poland with Germany, the Soviet occupation of the Baltic states and the Soviet invasion of Finland during the Winter War are described as elements in the Soviet construction of a security zone or buffer region against the perceived threat from the capitalist powers of Western Europe. Other post-Soviet Russian-language sources consider the establishment of Soviet satellite states in the Warsaw Pact countries and the Finno-Soviet Treaty of 1948 as the culmination of the Soviet defence plan. Western historians, such as Norman Davies and John Lukacs, dispute this view and describe pre-war Soviet policy as an attempt to stay out of the war and regain the land lost due to the Treaty of Brest-Litovsk after the fall of the Russian Empire. On 31 July 1940, Adolf Hitler gave the order to plan an assault on the Soviet Union, meaning Germany had to reassess its position regarding both Finland and Romania. Until then, Germany had rejected Finnish requests to purchase arms, but with the prospect of an invasion of Russia, that policy was reversed, and in August, the secret sale of weapons to Finland was permitted. Military authorities signed an agreement on 12 September, and an official exchange of diplomatic notes was sent on 22 September. Meanwhile, German troops were allowed to transit through Sweden and Finland. This change in policy meant Germany had effectively redrawn the border of the German and Soviet spheres of influence, in violation of the Molotov–Ribbentrop Pact. In response to that new situation, Molotov visited Berlin on 12–13 November 1940. He requested that Germany withdraw its troops from Finland and stop enabling Finnish anti-Soviet sentiments. He also reminded the Germans of the 1939 pact. Hitler inquired how the Soviets planned to settle the "Finnish question", to which Molotov responded that it would mirror the events in Bessarabia and the Baltic states. Hitler rejected that course of action. In the Finnish presidential election of December 1940, Risto Ryti was elected president, largely owing to Molotov's interference in his favour, since Ryti had signed the Moscow Peace Treaty as prime minister. On 18 December 1940, Hitler officially approved Operation Barbarossa, paving the way for the German invasion of the Soviet Union, in which he expected both Finland and Romania to participate. Meanwhile, Finnish Major General Paavo Talvela met with German Colonel General Franz Halder and Reich Marshal Hermann Göring in Berlin, the first time that the Germans had advised the Finnish government, in carefully couched diplomatic terms, that they were preparing for war with the Soviet Union.
Outlines of the actual plan were revealed in January 1941, and regular contact between Finnish and German military leaders began in February. Also in January 1941, Moscow again demanded that Finland relinquish control of the Petsamo mining area to the Soviets, but Finland, emboldened by a rebuilt defence force and German support, rejected the proposition. In the late spring of 1941, the USSR made a number of goodwill gestures to prevent Finland from falling completely under German influence. Ambassador Ivan Stepanovich Zotov was replaced with the more conciliatory and passive Pavel Dmitrievich Orlov. Furthermore, the Soviet government announced that it no longer opposed a rapprochement between Finland and Sweden. Those conciliatory measures, however, did not have any effect on Finnish policy. Finland wished to re-enter the war mainly because of the Soviet invasion of Finland during the Winter War, which the League of Nations and Nordic neutrality had failed to prevent due to a lack of outside support. Finland primarily aimed to reverse its territorial losses from the 1940 Moscow Peace Treaty and, depending on the success of the German invasion of the Soviet Union, possibly to expand its borders, especially into East Karelia. Some right-wing groups, such as the Academic Karelia Society, supported a Greater Finland ideology. This ideology of a Greater Finland, mostly composed of Soviet territories, was reinforced by anti-Russian sentiment. The details of the Finnish preparations for war are still somewhat opaque. Historian William R. Trotter stated that "it has so far proven impossible to pinpoint the exact date on which Finland was taken into confidence about Operation Barbarossa" and that "neither the Finns nor the Germans were entirely candid with one another as to their national aims and methods. In any case, the step from contingency planning to actual operations, when it came, was little more than a formality". According to a meta-analysis by Finnish historian Olli-Pekka Vehviläinen, the inner circle of the Finnish leadership, led by Ryti and Mannerheim, actively planned joint operations with Germany under a veil of ambiguous neutrality and without formal agreements, after an alliance with Sweden had proved fruitless. He likewise rejected the so-called "driftwood theory", according to which Finland had been merely a piece of driftwood swept uncontrollably in the rapids of great-power politics. Even so, most historians conclude that Finland had no realistic alternative to co-operating with Germany. On 20 May, the Germans invited a number of Finnish officers to discuss the coordination of Operation Barbarossa. The participants met on 25–28 May in Salzburg and Berlin and continued their meeting in Helsinki from 3 to 6 June. They agreed upon Finnish mobilisation and a general division of operations. They also agreed that the Finnish Army would start mobilisation on 15 June, but the Germans did not reveal the actual date of the assault. The Finnish decisions were made by the inner circle of political and military leaders without the knowledge of the rest of the government. Due to tensions between Germany and the Soviet Union, the government was not informed until 9 June that mobilisation of reservists would be required. Finland never signed the Tripartite Pact.
The Finnish leadership stated that it would fight against the Soviets only to the extent needed to redress the balance of the 1940 treaty, though some historians consider that it had wider territorial goals under the slogan "shorter borders, longer peace" (Finnish: ”lyhyet rajat, pitkä rauha”). During the war, the Finnish leadership generally referred to the Germans as "brothers-in-arms" but also denied that they were allies of Germany, instead claiming to be "co-belligerents". For Hitler, the distinction was irrelevant since he saw Finland as an ally. The 1947 Paris Peace Treaty signed by Finland described Finland as having been "an ally of Hitlerite Germany" during the Continuation War. In a 2008 poll of 28 Finnish historians carried out by Helsingin Sanomat, 16 said that Finland had been an ally of Nazi Germany, six said it had not been and six did not take a position. The Northern Front (Russian: Северный фронт) of the Leningrad Military District was commanded by Lieutenant General Markian Popov and numbered around 450,000 soldiers in 18 divisions and 40 independent battalions in the Finnish region. During the Interim Peace, the Soviet military had redrawn its operational plans for the conquest of Finland, but with the German attack, Operation Barbarossa, begun on 22 June 1941, the Soviets required their best units and latest materiel to be deployed against the Germans and so abandoned plans for a renewed offensive against Finland. The 23rd Army was deployed on the Karelian Isthmus, the 7th Army to Ladoga Karelia and the 14th Army to the Murmansk–Salla area of Lapland. The Northern Front also commanded eight aviation divisions. As the initial German strike against the Soviet Air Forces had not affected air units located near Finland, the Soviets could deploy around 700 aircraft, supported by a number of Soviet Navy wings. The Red Banner Baltic Fleet, which outnumbered the navy of Germany (Kriegsmarine), comprised 2 battleships, 2 light cruisers, 47 destroyers or large torpedo boats, 75 submarines, over 200 smaller craft, and 682 aircraft (of which 595 were operational). The Finnish Army (Maavoimat) mobilised between 475,000 and 500,000 soldiers in 14 divisions and 3 brigades for the invasion, commanded by Field Marshal (sotamarsalkka) Mannerheim. Although initially deployed for a static defence, the Finnish Army was later to launch an attack to the south, on both sides of Lake Ladoga, putting pressure on Leningrad and thus supporting the advance of the German Army Group North through the Baltic states towards Leningrad. Finnish intelligence had overestimated the strength of the Red Army, which was in fact numerically inferior to the Finnish forces at various points along the border. The Finnish army, especially its artillery, was stronger than it had been during the Winter War, but it included only one armoured battalion and had a general lack of motorised transport; the army possessed 1,829 artillery pieces at the beginning of the invasion. The Finnish Air Force (Ilmavoimat) had received large donations from Germany prior to the Continuation War, including Curtiss Hawk 75s, Fokker D.XXIs, Dornier Do 22 flying boats, Morane-Saulnier M.S.406 fighters, and Focke-Wulf Fw 44 Stieglitz trainers; in total, the Finnish Air Force had 550 aircraft by June 1941, approximately half of which were combat aircraft. By September 1944, despite a considerable German supply of aircraft, the Finns had only 384 planes. Even with the increase in supplied aircraft, the air force was constantly outnumbered by the Soviets.
The Army of Norway, or AOK Norwegen, comprising four divisions totalling 67,000 German soldiers, held the arctic front, which stretched approximately 500 km (310 mi) through Finnish Lapland. The army was also tasked with striking Murmansk and the Kirov (Murmansk) Railway during Operation Silver Fox. The Army of Norway was under the direct command of the German Army High Command (OKH) and was organised into Mountain Corps Norway and XXXVI Mountain Corps, with the Finnish III Corps and 14th Division attached to it. The German Air Force High Command (OKL) assigned 60 aircraft from Luftflotte 5 (Air Fleet 5) to provide air support to the Army of Norway and the Finnish Army, in addition to its main responsibility of defending Norwegian air space. In contrast to the front in Finland, a total of 149 divisions and 3,050,000 soldiers were deployed for the rest of Operation Barbarossa. On the evening of 21 June 1941, German mine-layers hiding in the Archipelago Sea deployed two large minefields across the Gulf of Finland. Later that night, German bombers flew along the gulf to Leningrad, mining the harbour and the river Neva, and making a refuelling stop at Utti, Finland, on the return leg. In the early hours of 22 June, Finnish forces launched Operation Kilpapurjehdus ("Regatta"), deploying troops in the demilitarised Åland Islands. Although the 1921 Åland convention had clauses allowing Finland to defend the islands in the event of an attack, the coordination of this operation with the German invasion and the arrest of the Soviet consulate staff stationed on the islands meant that the deployment was a deliberate violation of the treaty, according to Finnish historian Mauno Jokipii. On the morning of 22 June, Hitler's proclamation read: "Together with their Finnish comrades in arms the heroes from Narvik stand at the edge of the Arctic Ocean. German troops under command of the conqueror of Norway, and the Finnish freedom fighters under their Marshal's command, are protecting Finnish territory." Following the launch of Operation Barbarossa at around 3:15 a.m. on 22 June 1941, the Soviet Union sent seven bombers on a retaliatory airstrike into Finland, hitting targets at 6:06 a.m. Helsinki time, as reported by the Finnish coastal defence ship Väinämöinen. On the morning of 25 June, the Soviet Union launched another air offensive, with 460 fighters and bombers targeting 19 airfields in Finland; however, inaccurate intelligence and poor bombing accuracy resulted in several raids hitting Finnish cities and municipalities, causing considerable damage. Twenty-three Soviet bombers were lost in this strike, while the Finnish forces lost no aircraft. Although the USSR claimed that the airstrikes were directed against German targets, particularly airfields in Finland, the Finnish Parliament used the attacks as justification for the approval of a "defensive war". According to historian David Kirby, the message was intended more for public opinion in Finland than abroad, where the country was viewed as an ally of the Axis powers. The Finnish plans for the offensive in Ladoga Karelia were finalised on 28 June 1941, and the first stages of the operation began on 10 July. By 16 July, the VI Corps had reached the northern shore of Lake Ladoga, dividing the Soviet 7th Army, which had been tasked with defending the area. The USSR struggled to contain the German assault, and soon the Soviet high command, Stavka (Russian: Ставка), pulled all available units stationed along the Finnish border into the beleaguered front line.
Additional reinforcements were drawn from the 237th Rifle Division and the Soviet 10th Mechanised Corps, excluding the 198th Motorised Division, both of which were stationed in Ladoga Karelia, but this stripped much of the reserve strength of the Soviet units defending that area. The Finnish II Corps started its offensive in the north of the Karelian Isthmus on 31 July. Other Finnish forces reached the shores of Lake Ladoga on 9 August, encircling most of the three defending Soviet divisions on the northwestern coast of the lake in a pocket (Finnish: motti); these divisions were later evacuated across the lake. On 22 August, the Finnish IV Corps began its offensive south of II Corps and advanced towards Vyborg (Finnish: Viipuri). By 23 August, II Corps had reached the Vuoksi River to the east and encircled the Soviet forces defending Vyborg. Finnish forces captured Vyborg on 29 August. The Soviet order to withdraw from Vyborg came too late, resulting in significant losses of materiel, although most of the troops were later evacuated via the Koivisto Islands. After suffering severe losses, the Soviet 23rd Army was unable to halt the offensive, and by 2 September the Finnish Army had reached the old 1939 border. The advance by Finnish and German forces split the Soviet Northern Front into the Leningrad Front and the Karelian Front on 23 August. On 31 August, Finnish Headquarters ordered II and IV Corps, which had advanced the furthest, to halt their advance along a line that ran from the Gulf of Finland via Beloostrov–Sestra–Okhta–Lembolovo to Lake Ladoga. The line ran past the former 1939 border and approximately 30–32 km (19–20 mi) from Leningrad; a defensive position was established along it. On 30 August, the IV Corps fought the Soviet 23rd Army in the Battle of Porlampi and defeated it on 1 September. Sporadic fighting continued around Beloostrov until the Soviets evicted the Finns on 5 September. The front on the Isthmus stabilised, and the siege of Leningrad began on 8 September. The Finnish Army of Karelia started its attack in East Karelia towards Petrozavodsk, Lake Onega and the Svir River on 9 September. German Army Group North advanced from the south of Leningrad towards the Svir River and captured Tikhvin but was forced to retreat to the Volkhov River by Soviet counterattacks. Soviet forces repeatedly attempted to expel the Finns from their bridgehead south of the Svir during October and December but were repulsed; Soviet units also attacked the German 163rd Infantry Division, which was operating under Finnish command across the Svir, in October 1941, but failed to dislodge it. Although these attacks failed, the Finnish advance in East Karelia had been blunted, and it halted by 6 December. During the five-month campaign, the Finns suffered 75,000 casualties, of whom 26,355 had died, while the Soviets had 230,000 casualties, of whom 50,000 became prisoners of war. The German objective in Finnish Lapland was to take Murmansk and cut the Kirov (Murmansk) Railway running from Murmansk to Leningrad by capturing Salla and Kandalaksha. Murmansk was the only year-round ice-free port in the north and a threat to the nickel mine at Petsamo. The joint Finnish–German Operation Silver Fox (German: Unternehmen Silberfuchs; Finnish: operaatio Hopeakettu) was started on 29 June 1941 by the German Army of Norway, which had the Finnish 3rd and 6th Divisions under its command, against the defending Soviet 14th Army and 54th Rifle Division.
By November, the operation had stalled 30 km (19 mi) from the Kirov Railway due to unacclimatised German troops, heavy Soviet resistance, poor terrain, arctic weather and diplomatic pressure on the Finns by the United States regarding the lend-lease deliveries to Murmansk. The offensive and its three sub-operations failed to achieve their objectives. Both sides dug in, and the arctic theatre remained stable, excluding minor skirmishes, until the Soviet Petsamo–Kirkenes Offensive in October 1944. The crucial arctic lend-lease convoys from the US and the UK via Murmansk and the Kirov Railway to the bulk of the Soviet forces continued throughout World War II. The US supplied almost $11 billion in materials: 400,000 jeeps and trucks; 12,000 armoured vehicles (including 7,000 tanks, enough to equip some 20 US armoured divisions); 11,400 aircraft; and 1.59 million t (1.75 million short tons) of food. Similarly, British shipments of Matilda, Valentine and Tetrarch tanks amounted to only 6 per cent of total Soviet tank production but over 25 per cent of the medium and heavy tanks produced for the Red Army. The Wehrmacht rapidly advanced deep into Soviet territory early in the Operation Barbarossa campaign, leading the Finnish government to believe that Germany would defeat the Soviet Union quickly. President Ryti envisioned a Greater Finland, in which Finns and other Finnic peoples would live inside a "natural defence borderline" incorporating the Kola Peninsula, East Karelia and perhaps even northern Ingria. In public, the proposed frontier was introduced with the slogan "shorter borders, longer peace". Some members of the Finnish Parliament, such as members of the Social Democratic Party and the Swedish People's Party, opposed the idea, arguing that maintaining the 1939 frontier would be enough. Mannerheim often called the war an anti-Communist crusade, hoping to defeat "Bolshevism once and for all". On 10 July, Mannerheim drafted his order of the day, the Sword Scabbard Declaration, in which he pledged to liberate Karelia; in private letters in December 1941, he made known his doubts about the need to push beyond the previous borders. The Finnish government assured the United States that it was unaware of the order. According to Vehviläinen, most Finns thought that the scope of the new offensive was only to regain what had been taken in the Winter War. He further stated that the term 'Continuation War' was created at the start of the conflict by the Finnish government to justify the invasion to the population as a continuation of the defensive Winter War. The government also wished to emphasise that it was not an official ally of Germany but a 'co-belligerent' fighting against a common enemy and with purely Finnish aims. Vehviläinen wrote that the credibility of the government's claim was undermined when the Finnish Army crossed the old frontier of 1939 and began to annex Soviet territory. British author Jonathan Clements asserted that by December 1941, Finnish soldiers had started questioning whether they were fighting a war of national defence or foreign conquest. By the autumn of 1941, the Finnish military leadership had started to doubt Germany's capability to finish the war quickly. The Finnish Defence Forces suffered relatively severe losses during their advance, and, overall, German victory became uncertain as German troops were halted near Moscow. German troops in northern Finland faced circumstances for which they were unprepared and failed to reach their targets.
As the front lines stabilised, Finland attempted to start peace negotiations with the USSR. Mannerheim refused to assault Leningrad, which would have inextricably tied Finland to Germany, as he regarded his objectives for the war as achieved; the decision angered the Germans. Due to the war effort, the Finnish economy suffered from a lack of labour, as well as from food shortages and increased prices. To combat this, the Finnish government demobilised part of the army to prevent industrial and agricultural production from collapsing. In October, Finland informed Germany that it would need 159,000 t (175,000 short tons) of grain to manage until next year's harvest. The German authorities would have rejected the request, but Hitler himself agreed. Annual grain deliveries of 180,000 t (200,000 short tons) equalled almost half of the Finnish domestic crop. On 25 November 1941, Finland signed the Anti-Comintern Pact, a less formal alliance, which the German leadership saw as a "litmus test of loyalty". Finland maintained good relations with a number of other Western powers. Volunteers from Sweden and Estonia were among the foreigners who joined the Finnish ranks: most of the Estonians served in Infantry Regiment 200, called soomepoisid ("Finnish boys"), while the Swedes mustered the Swedish Volunteer Battalion. The Finnish government stressed that Finland was fighting as a co-belligerent with Germany against the USSR only to protect itself and that it was still the same democratic country as it had been in the Winter War. For example, Finland maintained diplomatic relations with the exiled Norwegian government and more than once criticised German occupation policy in Norway. Relations between Finland and the United States were more complex, since the American public was sympathetic to the "brave little democracy" and had anticommunist sentiments. At first, the United States sympathised with the Finnish cause, but the situation became problematic after the Finnish Army had crossed the 1939 border, as Finnish and German troops were a threat to the Kirov Railway and the northern supply line between the Western Allies and the Soviet Union. On 25 October 1941, the US demanded that Finland cease all hostilities against the USSR and withdraw behind the 1939 border. In public, President Ryti rejected the demands, but in private, he wrote to Mannerheim on 5 November and asked him to halt the offensive. Mannerheim agreed and secretly instructed General Hjalmar Siilasvuo and his III Corps to end the assault on the Kirov Railway. Nevertheless, the United States never declared war on Finland during the entire conflict. On 12 July 1941, the United Kingdom signed an agreement of joint action with the Soviet Union. Under German pressure, Finland closed the British legation in Helsinki and cut diplomatic relations with Britain on 1 August. On 2 August 1941, Britain declared that Finland was under enemy occupation, which ended all economic transactions between Britain and Finland and led to a blockade of Finnish trade. The most sizable British action on Finnish soil was the Raid on Kirkenes and Petsamo, an aircraft-carrier strike on German and Finnish ships on 31 July 1941. The attack accomplished little except the loss of one Norwegian ship and three British aircraft, but it was intended to demonstrate British support for its Soviet ally. From September to October 1941, a total of 39 Hawker Hurricanes of No.
151 Wing RAF, based at Murmansk, reinforced the Soviet Air Forces and provided pilot training during Operation Benedict in order to protect the arctic convoys. On 28 November, the British government presented Finland with an ultimatum demanding that the Finns cease military operations by 3 December. Unofficially, Finland informed the Allies that Finnish troops would halt their advance in the next few days. The reply did not satisfy London, which declared war on Finland on 6 December. The Commonwealth nations of Canada, Australia, India and New Zealand soon followed suit. In private, British Prime Minister Winston Churchill had sent a letter to Mannerheim on 29 November in which he wrote that he was "deeply grieved" that the British would have to declare war on Finland because of the British alliance with the Soviets. Mannerheim repatriated the British volunteers under his command to the United Kingdom via Sweden. According to Clements, the declaration of war was mostly for appearance's sake. Unconventional warfare was fought in both the Finnish and Soviet wildernesses. Finnish long-range reconnaissance patrols, organised both by the Intelligence Division's Detached Battalion 4 and by local units, operated behind Soviet lines. Soviet partisans, both resistance fighters and regular long-range patrol detachments, conducted a number of operations in Finland and in East Karelia from 1941 to 1944. In summer 1942, the USSR formed the 1st Partisan Brigade. The unit was 'partisan' in name only, as it essentially comprised 600 men and women on long-range patrols intended to disrupt Finnish operations. The 1st Partisan Brigade was able to infiltrate beyond Finnish patrol lines but was intercepted and rendered ineffective at Lake Segozero in August 1942. Irregular partisans distributed propaganda newspapers, such as Finnish translations of the official Communist Party paper Pravda (Russian: Правда). The notable Soviet politician Yuri Andropov took part in these partisan actions. Finnish sources state that, although Soviet partisan activity in East Karelia disrupted Finnish military supply and communication assets, almost two thirds of the attacks targeted civilians, killing 200 and injuring 50, including children and the elderly. Between 1942 and 1943, military operations were limited, although the front did see some action. In January 1942, the Soviet Karelian Front attempted to retake Medvezhyegorsk (Finnish: Karhumäki), which had been lost to the Finns in late 1941. With the arrival of spring in April, Soviet forces went on the offensive on the Svir River front, in the Kestenga (Finnish: Kiestinki) region further north in Lapland, and in the far north at Petsamo, where the 14th Rifle Division made amphibious landings supported by the Northern Fleet. All of the Soviet offensives started promisingly, but they were repulsed, due either to the Soviets overextending their lines or to stubborn defensive resistance. After Finnish and German counterattacks in Kestenga, the front lines were generally stalemated. In September 1942, the USSR attacked again at Medvezhyegorsk, but despite five days of fighting, the Soviets managed to push the Finnish lines back only 500 m (550 yd) on a roughly 1 km (0.62 mi) stretch of the front. Later that month, a Soviet landing by two battalions at Petsamo was defeated by a German counterattack. In November 1941, Hitler decided to separate the German forces fighting in Lapland from the Army of Norway and create the Army of Lapland, commanded by Colonel General Eduard Dietl.
In June 1942, the Army of Lapland was redesignated the 20th Mountain Army. In the early stages of the war, the Finnish Army crossed the former 1939 border but halted its advance 30–32 km (19–20 mi) from the centre of Leningrad. Multiple authors have stated that Finland participated in the siege of Leningrad (Russian: Блокада Ленинграда), but the full extent and nature of its participation are debated, and a clear consensus has yet to emerge. American historian David Glantz writes that the Finnish Army generally maintained its lines and contributed little to the siege from 1941 to 1944, whereas Russian historian Nikolai Baryshnikov stated in 2002 that Finland tacitly supported Hitler's starvation policy for the city. However, in 2009, British historian Michael Jones disputed Baryshnikov's claim and asserted that the Finnish Army cut off the city's northern supply routes but did not take further military action. In 2006, American author Lisa Kirschenbaum wrote that the siege started "when German and Finnish troops severed all land routes in and out of Leningrad." According to Clements, Mannerheim personally refused Hitler's request to assault Leningrad during their meeting on 4 June 1942. Mannerheim explained to Hitler that "Finland had every reason to wish to stay out of any further provocation of the Soviet Union." In 2014, author Jeff Rutherford described the city as being "ensnared" between the German and Finnish armies. British historian John Barber described it as a "siege by the German and Finnish armies from 8 September 1941 to 27 January 1944 [...]" in his foreword in 2017. Likewise, in 2017, Alexis Peri wrote that the city was "completely cut off, save a heavily patrolled water passage over Lake Ladoga" by "Hitler's Army Group North and his Finnish allies." The 150 speedboats, two minelayers and four steamships of the Finnish Ladoga Naval Detachment, as well as numerous shore batteries, had been stationed on Lake Ladoga since August 1941. Finnish Lieutenant General Paavo Talvela proposed on 17 May 1942 to create a joint Finnish–German–Italian unit on the lake to disrupt Soviet supply convoys to Leningrad. The unit was named Naval Detachment K and comprised four Italian MAS torpedo motorboats of the XII Squadriglia MAS, four German KM-type minelayers and the Finnish torpedo motorboat Sisu. The detachment began operations in August 1942, sank numerous smaller Soviet watercraft and flatboats, and assaulted enemy bases and beachheads until it was dissolved in the winter of 1942–43. Twenty-three Siebel ferries and nine infantry transports of the German Einsatzstab Fähre Ost were also deployed to Lake Ladoga and, in October 1942, unsuccessfully assaulted the island of Sukho, which protected the main supply route to Leningrad. Despite the siege of the city, the Soviet Baltic Fleet was still able to operate from Leningrad. The Finnish Navy's flagship Ilmarinen had been sunk by mines in the gulf in September 1941 during the failed diversionary Operation North Wind. In early 1942, Soviet forces recaptured the island of Gogland but lost it, along with the island of Bolshoy Tyuters, to Finnish forces later in spring 1942. During the winter between 1941 and 1942, the Soviet Baltic Fleet decided to use its large submarine fleet in offensive operations. Though the initial submarine operations in the summer of 1942 were successful, the Kriegsmarine and the Finnish Navy soon intensified their anti-submarine efforts, making Soviet submarine operations later in 1942 costly.
The underwater offensive carried out by the Soviets convinced the Germans to lay anti-submarine nets, as well as supporting minefields, between the Porkkala Peninsula and Naissaar, which proved to be an insurmountable obstacle for Soviet submarines. On the Arctic Ocean, Finnish radio intelligence intercepted Allied messages on supply convoys to Murmansk, such as PQ 17 and PQ 18, and relayed the information to the Abwehr, the German military intelligence service. On 19 July 1941, the Finns created a military administration in occupied East Karelia with the goal of preparing the region for eventual incorporation into Finland. The Finns aimed to expel the Russian portion of the local population, about half of the total, who were deemed "non-national", from the area once the war was over, and to replace them with local Finnic peoples, such as Karelians, Finns, Ingrians and Vepsians. Most of the East Karelian population had already been evacuated before the Finnish forces arrived, but about 85,000 people, mostly the elderly, women and children, were left behind, less than half of whom were Karelians. A significant number of civilians, almost 30 per cent of the remaining Russians, were interned in concentration camps. The winter between 1941 and 1942 was particularly harsh for the Finnish urban population due to poor harvests and a shortage of agricultural labourers. However, conditions were much worse for the Russians in the Finnish concentration camps. More than 3,500 people died, mostly from starvation, amounting to 13.8 per cent of those detained, while the corresponding figure was 2.6 per cent for the free population of the occupied territories and 1.4 per cent for Finland. Conditions gradually improved the following year, after Commander-in-Chief Mannerheim had asked the International Committee of the Red Cross in Geneva to inspect the camps: ethnic discrimination in wage levels and food rations was terminated, and new schools were established for the Russian-speaking population. By the end of the occupation, mortality rates had dropped to the same levels as in Finland. In 1939, Finland had a small Jewish population of approximately 2,000 people, of whom 300 were refugees from Germany, Austria and Czechoslovakia. They had full civil rights and fought alongside other Finns in the ranks of the Finnish Army. The field synagogue in East Karelia was one of the very few functioning synagogues on the Axis side during the war. There were several cases of Jewish officers of the Finnish Army being awarded the German Iron Cross, which they declined. German soldiers were treated by Jewish medical officers, who sometimes saved the soldiers' lives. The German command mentioned Finnish Jews at the Wannsee Conference in January 1942, wishing to transport them to the Majdanek concentration camp in occupied Poland. SS leader Heinrich Himmler also raised the topic of Finnish Jews during his visit to Finland in the summer of 1942; Finnish Prime Minister Jukka Rangell replied that Finland did not have a Jewish question. In November 1942, Minister of the Interior Toivo Horelli and the head of the State Police, Arno Anthoni, secretly handed eight Jewish refugees over to the Gestapo, raising protests among ministers of the Finnish Social Democratic Party. Only one of the deportees survived. After the incident, the Finnish government refused to transfer any more Jews to German custody. Finland began to seek an exit from the war after the German defeat at the Battle of Stalingrad in February 1943.
Finnish Prime Minister Edwin Linkomies formed a new cabinet in March 1943 with peace as the top priority. The Finns were similarly distressed by the Allied invasion of Sicily in July and the German defeat in the Battle of Kursk in August. Negotiations were conducted intermittently in 1943 and 1944 between Finland, the Western Allies and the Soviets, but no agreement was reached. Stalin decided to force Finland to surrender with a bombing campaign against Helsinki. Starting in February 1944, it included three major air attacks totalling over 6,000 sorties. Finnish anti-aircraft defences repelled the raids, and only 5 per cent of the dropped bombs hit their planned targets. In Helsinki, decoy searchlights and fires were placed outside the city to deceive Soviet bombers into dropping their payloads on unpopulated areas. Major air attacks also hit Oulu and Kotka, but pre-emptive radio intelligence and effective defence kept the number of casualties low. The Soviet Leningrad–Novgorod Offensive finally lifted the siege of Leningrad on 27 January 1944, and Army Group North was pushed back to Ida-Viru County on the Estonian border. Stiff German and Estonian defence in Narva from February to August prevented the use of occupied Estonia as a favourable base for Soviet amphibious and air assaults against Helsinki and other Finnish coastal cities in support of a land offensive. Field Marshal Mannerheim had reminded the German command on numerous occasions that if the German troops withdrew from Estonia, Finland would be forced to make peace, even on extremely unfavourable terms. Finland abandoned the peace negotiations in April 1944 because of the unfavourable terms the USSR demanded. On 9 June 1944, the Soviet Leningrad Front launched an offensive against the Finnish positions on the Karelian Isthmus and in the area of Lake Ladoga, timed to coincide with Operation Overlord in Normandy, as agreed at the Tehran Conference. Along the 21.7 km (13.5 mi) wide breakthrough front, the Red Army concentrated 3,000 guns and mortars. In some places, the concentration of artillery pieces exceeded 200 guns for every kilometre of front, or one for every 5 m (5.5 yd). Soviet artillery fired over 80,000 rounds along the front on the Karelian Isthmus. On the second day of the offensive, the artillery barrages and the superior numbers of the Soviet forces crushed the main Finnish defence line. The Red Army penetrated the second line of defence, the Vammelsuu–Taipale line (VT line), by the sixth day and recaptured Vyborg, against insignificant resistance, on 20 June. The Soviet breakthrough on the Karelian Isthmus forced the Finns to reinforce the area, which allowed the concurrent Soviet offensive in East Karelia to meet less resistance and to recapture Petrozavodsk by 28 June 1944. On 25 June, the Red Army reached the third line of defence, the Viipuri–Kuparsaari–Taipale line (VKT line), and the decisive Battle of Tali-Ihantala began; it has been described as the largest battle in Nordic military history. By then, the Finnish Army had retreated around 100 km (62 mi), to approximately the same line of defence it had held at the end of the Winter War. Finland especially lacked modern anti-tank weaponry that could stop Soviet heavy armour, such as the KV-1 or IS-2. Thus, German Foreign Minister Joachim von Ribbentrop offered German hand-held Panzerfaust and Panzerschreck anti-tank weapons in exchange for a guarantee that Finland would not seek a separate peace with the Soviets.
On 26 June, President Risto Ryti gave the guarantee as a personal undertaking, which he, Field Marshal Mannerheim and Prime Minister Edwin Linkomies intended to be legally binding only for the remainder of Ryti's presidency. In addition to delivering thousands of anti-tank weapons, Hitler sent the 122nd Infantry Division and the half-strength 303rd Assault Gun Brigade, armed with Sturmgeschütz III assault guns, as well as the Luftwaffe's Detachment Kuhlmey, to provide temporary support in the most vulnerable sectors. With the new supplies and assistance from Germany, the Finnish Army halted the numerically and materially superior Soviet advance at Tali-Ihantala on 9 July 1944 and stabilised the front. More battles were fought toward the end of the war, the last of which was the Battle of Ilomantsi, fought between 26 July and 13 August 1944, which ended in a Finnish victory and the destruction of two Soviet divisions. Resisting the Soviet offensive had exhausted Finnish resources, and the Finnish leadership believed that, despite German support under the Ryti-Ribbentrop Agreement, the country would be unable to blunt another major offensive. Soviet victories against German Army Groups Centre and North during Operation Bagration made the situation even more dire for Finland. With no further Soviet offensives imminent, Finland sought to leave the war. On 1 August, Ryti resigned, and on 4 August, Field Marshal Mannerheim was sworn in as the new president. He annulled the agreement between Ryti and Ribbentrop on 17 August to allow Finland to sue for peace with the Soviets again, and peace terms from Moscow arrived on 29 August. Finland was required to return to the borders agreed to in the 1940 Moscow Peace Treaty, demobilise its armed forces, pay war reparations and cede the municipality of Petsamo. The Finns were also required to end all diplomatic relations with Germany immediately and to expel the Wehrmacht from Finnish territory by 15 September 1944; any troops remaining were to be disarmed, arrested and turned over to the Allies. The Finnish Parliament accepted those terms in a secret meeting on 2 September and requested that official negotiations for an armistice begin. The Finnish Army implemented a ceasefire at 8:00 a.m. Helsinki time on 4 September; the Red Army followed suit a day later. On 14 September, a delegation led by Finnish Prime Minister Antti Hackzell and Foreign Minister Carl Enckell began negotiating the final terms of the Moscow Armistice with the Soviet Union and the United Kingdom. The final agreement included additional stipulations from the Soviets, which were presented by Molotov on 18 September and accepted by the Finnish Parliament a day later. The motivations for the Soviet peace agreement with Finland are debated. Several Western historians have stated that the original Soviet designs for Finland were no different from those for the Baltic countries. American political scientist Dan Reiter asserted that Moscow regarded control of Finland as necessary. Reiter and British historian Victor Rothwell quoted Molotov as telling his Lithuanian counterpart in 1940, when the Soviets effectively annexed Lithuania, that minor states such as Finland "will be included within the honourable family of Soviet peoples". Reiter stated that concern over severe losses pushed Stalin into accepting a limited outcome in the war rather than pursuing annexation, although some Soviet documents called for the military occupation of Finland.
He also wrote that Stalin had described territorial concessions, reparations and military bases as his objectives regarding Finland to representatives of the UK in December 1941 and of the US in March 1943, as well as at the Tehran Conference. He believed that, in the end, "Stalin's desire to crush Hitler quickly and decisively without distraction from the Finnish sideshow" brought the war to an end. Red Army officers captured as prisoners of war during the Battle of Tali-Ihantala revealed that their intention had been to reach Helsinki and that they were to be strengthened with reinforcements for this task; this was confirmed by intercepted Soviet radio messages. Russian historian Nikolai Baryshnikov disputed the view that the Soviet Union sought to deprive Finland of its independence. He argued that there is no documentary evidence for such claims and that the Soviet government was always open to negotiations. Baryshnikov cited sources such as the public information chief of Finnish Headquarters, Major Kalle Lehmus, to show that the Finnish leadership had learned of the limited Soviet plans for Finland by at least July 1944, after intelligence revealed that some Soviet divisions were to be transferred to reserve in Leningrad. Finnish historian Heikki Ylikangas reported similar findings in 2009. According to him, the Soviets refocused their efforts in the summer of 1944 from the Finnish front to defeating Germany, and Mannerheim received intelligence from Colonel Aladár Paasonen in June 1944 that the Soviet Union was aiming for peace, not occupation. Evidence of the Soviet leadership's intentions regarding the occupation of Finland has since been uncovered: in 2018, it was revealed that the Soviets had designed and printed (at Goznak) new banknotes for Finland during the closing phases of the war, which were to be put into use after the planned occupation of the country. According to Finnish historians, the casualties of the Finnish Defence Forces amounted to 63,204 dead or missing and around 158,000 wounded. Officially, the Soviets captured 2,377 Finnish prisoners of war, but Finnish researchers estimated the number to be around 3,500. A total of 939 Finnish civilians died in air raids, and 190 civilians were killed by Soviet partisans. Germany suffered approximately 84,000 casualties on the Finnish front: 16,400 killed, 60,400 wounded and 6,800 missing. In addition to the original peace terms of restoring the 1940 border, Finland was required to pay war reparations to the USSR, conduct domestic war-responsibility trials, cede the municipality of Petsamo and lease the Porkkala Peninsula to the Soviets, as well as to ban fascist elements and to allow left-wing groups, such as the Communist Party of Finland. A Soviet-led Allied Control Commission was installed to enforce and monitor the peace agreement in Finland. The requirement to disarm or expel any German troops left on Finnish soil by 15 September 1944 eventually escalated into the Lapland War between Finland and Germany and the evacuation of the 200,000-strong 20th Mountain Army to Norway. The Soviet demand for $600 million in war indemnities was reduced to $300 million (equivalent to $6.2 billion in 2022), most likely because of pressure from the US and the UK. After the ceasefire, the Soviets insisted that the payments be based on 1938 prices, which doubled the de facto amount. The terms of the temporary Moscow Armistice were later finalised without change in the 1947 Paris Peace Treaties.
Henrik Lunde noted that Finland survived the war without losing its independence, unlike many of Germany's allies. Likewise, Helsinki and Moscow were the only capitals of combatant nations in continental Europe that were never occupied. In the longer term, Peter Provis argued that by following policies of self-censorship and limited appeasement, as well as by fulfilling the Soviet demands, Finland avoided the fate of the nations that were annexed by the Soviets. Because of Soviet pressure, Finland decided not to accept economic aid from the Marshall Plan. On 6 April 1948, Finland and the Soviet Union signed the Finno-Soviet Treaty of 1948, which was introduced because Finland wanted more political independence from the USSR and the Soviets sought to prevent Finland from being used by the Western powers as a base for an invasion of the USSR. On 19 September 1955, Finland and the Soviet Union agreed to extend the treaty, and the Soviets also agreed to return the Porkkala Peninsula to Finland. In January 1956, twelve years after the lease had begun in 1944, the Soviets withdrew from their naval base at Porkkala, and the peninsula was returned to Finnish sovereignty. Many civilians who had been displaced after the Winter War had moved back into Karelia during the Continuation War and so had to be evacuated from Karelia again. Of the 260,000 civilians who had returned to Karelia, only 19 chose to remain and become Soviet citizens. Most of the Ingrian Finns, together with the Votes and Izhorians living in German-occupied Ingria, had been evacuated to Finland in 1943–1944. After the armistice, Finland was forced to return the evacuees. Soviet authorities did not allow the 55,733 returnees to resettle in Ingria and deported the Ingrian Finns to central regions of the Soviet Union. The war is considered a Soviet victory. According to Finnish historians, Soviet casualties in the Continuation War were not accurately recorded, and various approximations have arisen. Russian historian Grigori Krivosheev estimated in 1997 that around 250,000 were killed or missing in action, while 575,000 were medical casualties (385,000 wounded and 190,000 sick). Finnish author Nenye and others stated in 2016 that, according to the latest research, at least 305,000 were confirmed dead or missing and that the number of wounded certainly exceeded 500,000. As for materiel losses, authors Jowett and Snodgrass state that 697 Soviet tanks were destroyed, 842 field artillery pieces were captured, and 1,600 aircraft were destroyed by Finnish fighter planes, with a further 1,030 downed by anti-aircraft fire and 75 by the Navy. The number of Soviet prisoners of war in Finland was estimated by Finnish historians to be around 64,000, 56,000 of whom were captured in 1941. Around 2,600 to 2,800 Soviet prisoners of war were handed over to Germany in exchange for roughly 2,200 Finnic prisoners of war. Of the Soviet prisoners, at least 18,318 were documented to have died in Finnish prisoner-of-war camps. Finnish archival sources indicate that mortality was highest in the largest camps, where the rate reached as high as 41 per cent; for small camps, the comparable rate was under 5 per cent. Nearly 85 per cent of the deaths occurred between November 1941 and September 1942, with the highest monthly number of deaths, 2,665, recorded in February 1942. For comparison, the number of deaths in February 1943 was 92.
Historian Oula Silvennoinen attributes the number of Soviet deaths to several factors: Finnish unpreparedness to handle unexpectedly large numbers of prisoners, which resulted in overcrowding; a lack of warm clothing among prisoners, who had been captured predominantly during the summer offensive; limited supplies of food, often made worse by camp personnel stealing food for themselves; and disease resulting from the previous factors. According to historian Antti Kujala, approximately 1,200 prisoners were shot, "most" of them illegally. The extent of Finland's participation in the siege of Leningrad, and whether Soviet civilian casualties during the siege should be attributed to the Continuation War, are debated and lack a consensus (estimates of civilian deaths during the siege range from 632,253 to 1,042,000). Several literary and cinematic works have been based on the Continuation War. The best-known story about the Continuation War is Väinö Linna's novel The Unknown Soldier (Finnish: Tuntematon sotilas), which was the basis for three films in 1955, 1985 and 2017. There is also the 1999 film Ambush, based on a novel by Antti Tuuri about the events in Rukajärvi, Karelia, and the 2007 film 1944: The Final Defence, based on the Battle of Tali-Ihantala. The final stages of the Continuation War were the primary focus of Soviet director Yuli Raizman's 1945 documentary A Propos of the Truce with Finland (Russian: К вопросу о перемирии с Финляндией). The documentary illustrates the strategic operations that led to the Soviet breakthrough on the Karelian Isthmus, as well as how Soviet propaganda presented the war overall. The film is titled Läpimurto Kannaksella ja rauhanneuvottelut in Finnish.
[ { "paragraph_id": 0, "text": "The Continuation War, also known as the Second Soviet-Finnish War, was a conflict fought by Finland and Nazi Germany against the Soviet Union during World War II. It began with a Finnish declaration of war and invasion on 25 June 1941 and ended on 19 September 1944 with the Moscow Armistice. The Soviet Union and Finland had previously fought the Winter War from 1939 to 1940, which ended with the Soviet failure to conquer Finland and the Moscow Peace Treaty. Numerous reasons have been proposed for the Finnish decision to invade, with regaining territory lost during the Winter War regarded as the most common. Other justifications for the conflict include Finnish President Risto Ryti's vision of a Greater Finland and Commander-in-Chief Carl Gustaf Emil Mannerheim's desire to annex East Karelia.", "title": "" }, { "paragraph_id": 1, "text": "On 22 June 1941, the Axis invaded the Soviet Union. Three days later, the Soviet Union conducted an air raid on Finnish cities which prompted Finland to declare war and allow German troops in Finland to begin offensive warfare. By September 1941, Finland had regained its post–Winter War concessions to the Soviet Union in Karelia. The Finnish Army continued its offensive past the 1939 border during the invasion of East Karelia and halted it only around 30–32 km (19–20 mi) from the centre of Leningrad. It participated in besieging the city by cutting the northern supply routes and by digging in until 1944. In Lapland, joint German-Finnish forces failed to capture Murmansk or to cut the Kirov (Murmansk) Railway. The Soviet Vyborg–Petrozavodsk Offensive in June and August 1944 drove the Finns from most of the territories that they had gained during the war, but the Finnish Army halted the offensive in August 1944.", "title": "" }, { "paragraph_id": 2, "text": "Hostilities between Finland and the USSR ceased in September 1944 with the signing of the Moscow Armistice in which Finland restored its borders per the 1940 Moscow Peace Treaty and additionally ceded Petsamo and leased the Porkkala Peninsula to the Soviets. Furthermore, Finland was required to pay war reparations to the Soviet Union, accept partial responsibility for the war, and acknowledge that it had been a German ally. Finland was also required by the agreement to expel German troops from Finnish territory, which led to the Lapland War between Finland and Germany.", "title": "" }, { "paragraph_id": 3, "text": "On 23 August 1939, the Soviet Union and Germany signed the Molotov–Ribbentrop Pact in which both parties agreed to divide the independent countries of Finland, Estonia, Latvia, Lithuania, Poland, and Romania into spheres of interest, with Finland falling within the Soviet sphere. One week later, Germany invaded Poland, leading to the United Kingdom and France declaring war on Germany. The Soviet Union invaded eastern Poland on 17 September. The Soviet government turned its attention to the Baltic states of Estonia, Latvia, and Lithuania, demanding that they allow Soviet military bases to be established and troops stationed on their soil. The Baltic governments acquiesced to these demands and signed agreements in September and October.", "title": "Background" }, { "paragraph_id": 4, "text": "In October 1939, the Soviet Union attempted to negotiate with Finland to cede Finnish territory on the Karelian Isthmus and the islands of the Gulf of Finland, and to establish a Soviet military base near the Finnish capital of Helsinki. 
The Finnish government refused, and the Red Army invaded Finland on 30 November 1939. The same day of the invasion, Field Marshal C. G. E. Mannerheim, who was chairman of Finland's Defence Council at the time, assumed the position of Commander-in-Chief of the Finnish Defence Forces. The USSR was expelled from the League of Nations and was condemned by the international community for the illegal attack. Foreign support for Finland was promised, but very little actual help materialised, except from Sweden. The Moscow Peace Treaty concluded the 105-day Winter War on 13 March 1940 and started the Interim Peace. By the terms of the treaty, Finland ceded 9% of its national territory and 13% of its economic capacity to the Soviet Union. Some 420,000 evacuees were resettled from the ceded territories. Finland avoided total conquest of the country by the Soviet Union and retained its sovereignty.", "title": "Background" }, { "paragraph_id": 5, "text": "Prior to the war, Finnish foreign policy had been based on multilateral guarantees of support from the League of Nations and Nordic countries, but this policy was considered a failure. After the war, Finnish public opinion favored the reconquest of Finnish Karelia. The government declared national defence to be its first priority, and military expenditure rose to nearly half of public spending. Finland both received donations and purchased war materiel during and immediately after the Winter War. Likewise, the Finnish leadership wanted to preserve the spirit of unanimity that was felt throughout the country during the Winter War. The divisive White Guard tradition of the Finnish Civil War's 16 May victory-day celebration was therefore discontinued.", "title": "Background" }, { "paragraph_id": 6, "text": "The Soviet Union had received the Hanko Naval Base, on Finland's southern coast near the capital Helsinki, where it deployed over 30,000 Soviet military personnel. Relations between Finland and the Soviet Union remained strained after the signing of the one-sided peace treaty, and there were disputes regarding the implementation of the treaty. Finland sought security against further territorial depredations by the USSR and proposed mutual defence agreements with Norway and Sweden, but these initiatives were quashed by Moscow.", "title": "Background" }, { "paragraph_id": 7, "text": "After the Winter War, Germany was viewed with distrust by the Finnish, as it was considered an ally of the Soviet Union. Nonetheless, the Finnish government sought to restore diplomatic relations with Germany, but also continued its Western-orientated policy and negotiated a war trade agreement with the United Kingdom. The agreement was renounced after the German invasion of Denmark and Norway on 9 April 1940 resulted in the UK cutting all trade and traffic communications with the Nordic countries. With the fall of France, a Western orientation was no longer considered a viable option in Finnish foreign policy. On 15 and 16 June, the Soviet Union occupied the Baltic states almost without any resistance and Soviet puppet regimes were installed. 
Within two months Estonia, Latvia and Lithuania were incorporated into the USSR and by mid–1940, the two remaining northern democracies, Finland and Sweden, were encircled by the hostile states of Germany and the Soviet Union.", "title": "Background" }, { "paragraph_id": 8, "text": "On 23 June, shortly after the Soviet occupation of the Baltic states began, Soviet Foreign Minister Vyacheslav Molotov contacted the Finnish government to demand that a mining licence be issued to the Soviet Union for the nickel mines in Petsamo or, alternatively, permission for the establishment of a joint Soviet-Finnish company to operate there. A licence to mine the deposit had already been granted to a British-Canadian company and so the demand was rejected by Finland. The following month, the Soviets demanded that Finland destroy the fortifications on the Åland Islands and to grant the Soviets the right to use Finnish railways to transport Soviet troops to the newly acquired Soviet base at Hanko. The Finns very reluctantly agreed to those demands. On 24 July, Molotov accused the Finnish government of persecuting the communist Finland–Soviet Union Peace and Friendship Society and soon afterward publicly declared support for the group. The society organised demonstrations in Finland, some of which turned into riots.", "title": "Background" }, { "paragraph_id": 9, "text": "Russian-language sources from the post-Soviet era, such as the study Stalin's Missed Chance, maintain that Soviet policies leading up to the Continuation War were best explained as defensive measures by offensive means. The Soviet division of occupied Poland with Germany, the Soviet occupation of the Baltic states and the Soviet invasion of Finland during the Winter War are described as elements in the Soviet construction of a security zone or buffer region from the perceived threat from the capitalist powers of Western Europe. Other post-Soviet Russian-language sources consider establishment of Soviet satellite states in the Warsaw Pact countries and the Finno-Soviet Treaty of 1948 as the culmination of the Soviet defence plan. Western historians, such as Norman Davies and John Lukacs, dispute this view and describe pre-war Soviet policy as an attempt to stay out of the war and regain the land lost due to the Treaty of Brest-Litovsk after the fall of the Russian Empire.", "title": "Background" }, { "paragraph_id": 10, "text": "On 31 July 1940, Adolf Hitler gave the order to plan an assault on the Soviet Union, meaning Germany had to reassess its position regarding both Finland and Romania. Until then, Germany had rejected Finnish requests to purchase arms, but with the prospect of an invasion of Russia, that policy was reversed, and in August, the secret sale of weapons to Finland was permitted. Military authorities signed an agreement on 12 September, and an official exchange of diplomatic notes was sent on 22 September. Meanwhile, German troops were allowed to transit through Sweden and Finland. This change in policy meant Germany had effectively redrawn the border of the German and Soviet spheres of influence, in violation of the Molotov-Ribbentrop Pact.", "title": "Background" }, { "paragraph_id": 11, "text": "In response to that new situation, Molotov visited Berlin on 12–13 November 1940. He requested for Germany to withdraw its troops from Finland and to stop enabling Finnish anti-Soviet sentiments. He also reminded the Germans of the 1939 pact. 
Hitler inquired how the Soviets planned to settle the \"Finnish question\", to which Molotov responded that it would mirror the events in Bessarabia and the Baltic states. Hitler rejected that course of action. In the Finnish presidential election of December 1940, Risto Ryti was elected president, largely due to Molotov's interference in his favour, as Ryti had signed the Moscow Peace Treaty as prime minister.", "title": "Background" }, { "paragraph_id": 12, "text": "On 18 December 1940, Hitler officially approved Operation Barbarossa, paving the way for the German invasion of the Soviet Union, in which he expected both Finland and Romania to participate. Meanwhile, Finnish Major General Paavo Talvela met with German Colonel General Franz Halder and Reich Marshal Hermann Göring in Berlin, the first time that the Germans had advised the Finnish government, in carefully couched diplomatic terms, that they were preparing for war with the Soviet Union. Outlines of the actual plan were revealed in January 1941, and regular contact between Finnish and German military leaders began in February. Additionally, in January 1941, Moscow again demanded that Finland relinquish control of the Petsamo mining area to the Soviets, but Finland, emboldened by a rebuilt defence force and German support, rejected the proposition.", "title": "Background" }, { "paragraph_id": 13, "text": "In the late spring of 1941, the USSR made a number of goodwill gestures to prevent Finland from completely falling under German influence. Ambassador Ivan Stepanovich Zotov was replaced with the more conciliatory and passive Pavel Dmitrievich Orlov. Furthermore, the Soviet government announced that it no longer opposed a rapprochement between Finland and Sweden. Those conciliatory measures, however, did not have any effect on Finnish policy. Finland wished to re-enter the war mainly because of the Soviet invasion of Finland during the Winter War, which the League of Nations and Nordic neutrality had failed to prevent due to a lack of outside support. Finland primarily aimed to reverse its territorial losses from the 1940 Moscow Peace Treaty and, depending on the success of the German invasion of the Soviet Union, to possibly expand its borders, especially into East Karelia. Some right-wing groups, such as the Academic Karelia Society, supported a Greater Finland ideology. This ideology of a Greater Finland, mostly composed of Soviet territories, was reinforced by anti-Russian sentiment.", "title": "Background" }, { "paragraph_id": 14, "text": "The details of the Finnish preparations for war are still somewhat opaque. Historian William R. Trotter stated that \"it has so far proven impossible to pinpoint the exact date on which Finland was taken into confidence about Operation Barbarossa\" and that \"neither the Finns nor the Germans were entirely candid with one another as to their national aims and methods. In any case, the step from contingency planning to actual operations, when it came, was little more than a formality\".", "title": "Background" }, { "paragraph_id": 15, "text": "The inner circle of Finnish leadership, led by Ryti and Mannerheim, actively planned joint operations with Germany under a veil of ambiguous neutrality and without formal agreements after an alliance with Sweden had proved fruitless, according to a meta-analysis by Finnish historian Olli-Pekka Vehviläinen.
He likewise refuted the so-called \"driftwood theory\", according to which Finland had been merely a piece of driftwood swept uncontrollably in the rapids of great-power politics. Even so, most historians conclude that Finland had no realistic alternative to co-operating with Germany. On 20 May, the Germans invited a number of Finnish officers to discuss the coordination of Operation Barbarossa. The participants met on 25–28 May in Salzburg and Berlin and continued their meeting in Helsinki from 3 to 6 June. They agreed upon Finnish mobilisation and a general division of operations. They also agreed that the Finnish Army would start mobilisation on 15 June, but the Germans did not reveal the actual date of the assault. The Finnish decisions were made by the inner circle of political and military leaders, without the knowledge of the rest of the government. Due to tensions between Germany and the Soviet Union, the government was not informed until 9 June that mobilisation of reservists would be required.", "title": "Background" }, { "paragraph_id": 16, "text": "Finland never signed the Tripartite Pact. The Finnish leadership stated they would fight against the Soviets only to the extent needed to redress the balance of the 1940 treaty, though some historians consider that it had wider territorial goals under the slogan \"shorter borders, longer peace\" (Finnish: ”lyhyet rajat, pitkä rauha”). During the war, the Finnish leadership generally referred to the Germans as \"brothers-in-arms\" but also denied that they were allies of Germany, instead claiming to be \"co-belligerents\". For Hitler, the distinction was irrelevant since he saw Finland as an ally. The 1947 Paris Peace Treaty signed by Finland described Finland as having been \"an ally of Hitlerite Germany\" during the Continuation War. In a 2008 poll of 28 Finnish historians carried out by Helsingin Sanomat, 16 said that Finland had been an ally of Nazi Germany, six said it had not been and six did not take a position.", "title": "Background" }, { "paragraph_id": 17, "text": "The Northern Front (Russian: Северный фронт) of the Leningrad Military District was commanded by Lieutenant General Markian Popov and numbered around 450,000 soldiers in 18 divisions and 40 independent battalions in the Finnish region. During the Interim Peace, the Soviet military had drawn up operational plans to conquer Finland, but with the German attack, Operation Barbarossa, beginning on 22 June 1941, the Soviets required their best units and latest materiel to be deployed against the Germans and so abandoned plans for a renewed offensive against Finland. The 23rd Army was deployed on the Karelian Isthmus, the 7th Army to Ladoga Karelia and the 14th Army to the Murmansk–Salla area of Lapland. The Northern Front also commanded eight aviation divisions. As the initial German strike against the Soviet Air Forces had not affected air units located near Finland, the Soviets could deploy around 700 aircraft supported by a number of Soviet Navy wings. The Red Banner Baltic Fleet, which outnumbered the navy of Germany (Kriegsmarine), comprised 2 battleships, 2 light cruisers, 47 destroyers or large torpedo boats, 75 submarines, over 200 smaller craft, and 682 aircraft (of which 595 were operational).", "title": "Order of battle and operational planning" }, { "paragraph_id": 18, "text": "The Finnish Army (Maavoimat) mobilised between 475,000 and 500,000 soldiers in 14 divisions and 3 brigades for the invasion, commanded by Field Marshal (sotamarsalkka) Mannerheim.
The army was organised as follows:", "title": "Order of battle and operational planning" }, { "paragraph_id": 19, "text": "Although initially deployed for a static defence, the Finnish Army was later to launch an attack to the south, on both sides of Lake Ladoga, putting pressure on Leningrad and thus supporting the advance of the German Army Group North through the Baltic states towards Leningrad. Finnish intelligence had overestimated the strength of the Red Army, which was in fact numerically inferior to Finnish forces at various points along the border. The army, especially its artillery, was stronger than it had been during the Winter War but included only one armoured battalion and had a general lack of motorised transportation; the army possessed 1,829 artillery pieces at the beginning of the invasion. The Finnish Air Force (Ilmavoimat) had received large donations from Germany prior to the Continuation War, including Curtiss Hawk 75s, Fokker D.XXIs, Dornier Do 22 floatplanes, Morane M.S. 406 fighters, and Focke-Wulf Fw 44 Stieglitz trainers; in total, the Finnish Air Force had 550 aircraft by June 1941, approximately half of them combat aircraft. By September 1944, despite considerable German supply of aircraft, the Finns had only 384 planes. Even with the increase in supplied aircraft, the air force was constantly outnumbered by the Soviets.", "title": "Order of battle and operational planning" }, { "paragraph_id": 20, "text": "The Army of Norway, or AOK Norwegen, comprising four divisions totalling 67,000 German soldiers, held the arctic front, which stretched approximately 500 km (310 mi) through Finnish Lapland. This army would also be tasked with striking Murmansk and the Kirov (Murmansk) Railway during Operation Silver Fox. The Army of Norway was under the direct command of the German Army High Command (OKH) and was organised into Mountain Corps Norway and XXXVI Mountain Corps, with the Finnish III Corps and 14th Division attached. The German Air Force High Command (OKL) assigned 60 aircraft from Luftflotte 5 (Air Fleet 5) to provide air support to the Army of Norway and the Finnish Army, in addition to its main responsibility of defending Norwegian air space. In contrast to the front in Finland, a total of 149 divisions and 3,050,000 soldiers were deployed for the rest of Operation Barbarossa.", "title": "Order of battle and operational planning" }, { "paragraph_id": 21, "text": "On the evening of 21 June 1941, German minelayers hiding in the Archipelago Sea deployed two large minefields across the Gulf of Finland. Later that night, German bombers flew along the gulf to Leningrad, mining the harbour and the river Neva, making a refuelling stop at Utti, Finland, on the return leg. In the early hours of 22 June, Finnish forces launched Operation Kilpapurjehdus (\"Regatta\"), deploying troops in the demilitarised Åland Islands. Although the 1921 Åland convention had clauses allowing Finland to defend the islands in the event of an attack, the coordination of this operation with the German invasion and the arrest of the Soviet consulate staff stationed on the islands meant that the deployment was a deliberate violation of the treaty, according to Finnish historian Mauno Jokipii.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 22, "text": "On the morning of 22 June, Hitler's proclamation read: \"Together with their Finnish comrades in arms the heroes from Narvik stand at the edge of the Arctic Ocean.
German troops under command of the conqueror of Norway, and the Finnish freedom fighters under their Marshal's command, are protecting Finnish territory.\"", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 23, "text": "Following the launch of Operation Barbarossa at around 3:15 a.m. on 22 June 1941, the Soviet Union sent seven bombers on a retaliatory airstrike into Finland, hitting targets at 6:06 a.m. Helsinki time, as reported by the Finnish coastal defence ship Väinämöinen. On the morning of 25 June, the Soviet Union launched another air offensive, with 460 fighters and bombers targeting 19 airfields in Finland; however, inaccurate intelligence and poor bombing accuracy resulted in several raids hitting Finnish cities and municipalities, causing considerable damage. Twenty-three Soviet bombers were lost in this strike, while the Finnish forces lost no aircraft. Although the USSR claimed that the airstrikes were directed against German targets, particularly airfields in Finland, the Finnish Parliament used the attacks as justification for the approval of a \"defensive war\". According to historian David Kirby, the message was intended more for public opinion in Finland than abroad, where the country was viewed as an ally of the Axis powers.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 24, "text": "The Finnish plans for the offensive in Ladoga Karelia were finalised on 28 June 1941, and the first stages of the operation began on 10 July. By 16 July, the VI Corps had reached the northern shore of Lake Ladoga, dividing the Soviet 7th Army, which had been tasked with defending the area. The USSR struggled to contain the German assault, and soon the Soviet high command, Stavka (Russian: Ставка), pulled all available units stationed along the Finnish border into the beleaguered front line. Additional reinforcements were drawn from the 237th Rifle Division and the Soviet 10th Mechanised Corps, excluding the 198th Motorised Division, both of which were stationed in Ladoga Karelia, but this stripped much of the reserve strength of the Soviet units defending that area.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 25, "text": "The Finnish II Corps started its offensive in the north of the Karelian Isthmus on 31 July. Other Finnish forces reached the shores of Lake Ladoga on 9 August, encircling most of the three defending Soviet divisions on the northwestern coast of the lake in a pocket (Finnish: motti); these divisions were later evacuated across the lake. On 22 August, the Finnish IV Corps began its offensive south of II Corps and advanced towards Vyborg (Finnish: Viipuri). By 23 August, II Corps had reached the Vuoksi River to the east and encircled the Soviet forces defending Vyborg. Finnish forces captured Vyborg on 29 August.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 26, "text": "The Soviet order to withdraw from Vyborg came too late, resulting in significant losses in materiel, although most of the troops were later evacuated via the Koivisto Islands. After suffering severe losses, the Soviet 23rd Army was unable to halt the offensive, and by 2 September the Finnish Army had reached the old 1939 border. The advance by Finnish and German forces split the Soviet Northern Front into the Leningrad Front and the Karelian Front on 23 August.
On 31 August, Finnish Headquarters ordered II and IV Corps, which had advanced the furthest, to halt their advance along a line that ran from the Gulf of Finland via Beloostrov–Sestra–Okhta–Lembolovo to Lake Ladoga. The line ran past the former 1939 border and lay approximately 30–32 km (19–20 mi) from Leningrad; a defensive position was established along it. On 30 August, the IV Corps fought the Soviet 23rd Army in the Battle of Porlampi and defeated it on 1 September. Sporadic fighting continued around Beloostrov until the Soviets evicted the Finns on 5 September. The front on the Isthmus stabilised, and the siege of Leningrad began on 8 September.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 27, "text": "The Finnish Army of Karelia started its attack in East Karelia towards Petrozavodsk, Lake Onega and the Svir River on 9 September. German Army Group North advanced from the south of Leningrad towards the Svir River and captured Tikhvin but was forced to retreat to the Volkhov River by Soviet counterattacks. Soviet forces repeatedly attempted to expel the Finns from their bridgehead south of the Svir during October and December but were repulsed; Soviet units attacked the German 163rd Infantry Division, which was operating under Finnish command across the Svir, in October 1941 but failed to dislodge it. Despite the failure of these attacks, the Finnish offensive in East Karelia had been blunted, and its advance had halted by 6 December. During the five-month campaign, the Finns suffered 75,000 casualties, of whom 26,355 had died, while the Soviets had 230,000 casualties, of whom 50,000 became prisoners of war.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 28, "text": "The German objective in Finnish Lapland was to take Murmansk and cut the Kirov (Murmansk) Railway running from Murmansk to Leningrad by capturing Salla and Kandalaksha. Murmansk was the only year-round ice-free port in the north and a threat to the nickel mine at Petsamo. The joint Finnish–German Operation Silver Fox (German: Unternehmen Silberfuchs; Finnish: operaatio Hopeakettu) was started on 29 June 1941 by the German Army of Norway, which had the Finnish 3rd and 6th Divisions under its command, against the defending Soviet 14th Army and 54th Rifle Division. By November, the operation had stalled 30 km (19 mi) from the Kirov Railway due to unacclimatised German troops, heavy Soviet resistance, poor terrain, arctic weather and diplomatic pressure by the United States on the Finns regarding the lend-lease deliveries to Murmansk. The offensive and its three sub-operations failed to achieve their objectives. Both sides dug in and the arctic theatre remained stable, excluding minor skirmishes, until the Soviet Petsamo–Kirkenes Offensive in October 1944.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 29, "text": "The crucial arctic lend-lease convoys from the US and the UK via Murmansk and Kirov Railway to the bulk of the Soviet forces continued throughout World War II. The US supplied almost $11 billion in materials: 400,000 jeeps and trucks; 12,000 armoured vehicles (including 7,000 tanks, which could equip some 20 US armoured divisions); 11,400 aircraft; and 1.59 million t (1.75 million short tons) of food.
Similarly, British shipments of Matilda, Valentine and Tetrarch tanks accounted for only 6 per cent of total Soviet tank production but over 25 per cent of medium and heavy tanks produced for the Red Army.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 30, "text": "The Wehrmacht rapidly advanced deep into Soviet territory early in the Operation Barbarossa campaign, leading the Finnish government to believe that Germany would defeat the Soviet Union quickly. President Ryti envisioned a Greater Finland, where Finns and other Finnic peoples would live inside a \"natural defence borderline\" incorporating the Kola Peninsula, East Karelia and perhaps even northern Ingria. In public, the proposed frontier was introduced with the slogan \"short border, long peace\". Some members of the Finnish Parliament, such as members of the Social Democratic Party and the Swedish People's Party, opposed the idea, arguing that maintaining the 1939 frontier would be enough. Mannerheim often called the war an anti-Communist crusade, hoping to defeat \"Bolshevism once and for all\". On 10 July, Mannerheim drafted his order of the day, the Sword Scabbard Declaration, in which he pledged to liberate Karelia; in private letters in December 1941, he made known his doubts about the need to push beyond the previous borders. The Finnish government assured the United States that it was unaware of the order.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 31, "text": "According to Vehviläinen, most Finns thought that the scope of the new offensive was only to regain what had been taken in the Winter War. He further stated that the term 'Continuation War' was created at the start of the conflict by the Finnish government to justify the invasion to the population as a continuation of the defensive Winter War. The government also wished to emphasise that it was not an official ally of Germany, but a 'co-belligerent' fighting against a common enemy and with purely Finnish aims. Vehviläinen wrote that the credibility of the government's claim changed when the Finnish Army crossed the old frontier of 1939 and began to annex Soviet territory. British author Jonathan Clements asserted that by December 1941, Finnish soldiers had started questioning whether they were fighting a war of national defence or foreign conquest.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 32, "text": "By the autumn of 1941, the Finnish military leadership started to doubt Germany's capability to finish the war quickly. The Finnish Defence Forces suffered relatively severe losses during their advance and, overall, German victory became uncertain as German troops were halted near Moscow. German troops in northern Finland faced circumstances they were unprepared for and failed to reach their targets. As the front lines stabilised, Finland attempted to start peace negotiations with the USSR. Mannerheim refused to assault Leningrad, which would have inextricably tied Finland to Germany; he regarded his objectives for the war as achieved, a decision that angered the Germans.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 33, "text": "Due to the war effort, the Finnish economy suffered from a lack of labour, as well as food shortages and increased prices. To combat this, the Finnish government demobilised part of the army to prevent industrial and agricultural production from collapsing.
In October, Finland informed Germany that it would need 159,000 t (175,000 short tons) of grain to manage until the following year's harvest. The German authorities would have rejected the request, but Hitler himself agreed to it. Annual grain deliveries of 180,000 t (200,000 short tons) equalled almost half of the Finnish domestic crop. On 25 November 1941, Finland signed the Anti-Comintern Pact, a less formal alliance, which the German leadership saw as a \"litmus test of loyalty\".", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 34, "text": "Finland maintained good relations with a number of other Western powers. Volunteers from Sweden and Estonia were among the foreigners who joined the Finnish ranks. Most of the Estonians served in Infantry Regiment 200, called soomepoisid (\"Finnish boys\"), while the Swedes mustered the Swedish Volunteer Battalion. The Finnish government stressed that Finland was fighting as a co-belligerent with Germany against the USSR only to protect itself and that it was still the same democratic country as it had been in the Winter War. For example, Finland maintained diplomatic relations with the exiled Norwegian government and more than once criticised German occupation policy in Norway. Relations between Finland and the United States were more complex, since the American public was sympathetic to the \"brave little democracy\" and had anticommunist sentiments. At first, the United States sympathised with the Finnish cause, but the situation became problematic after the Finnish Army had crossed the 1939 border. Finnish and German troops were a threat to the Kirov Railway and the northern supply line between the Western Allies and the Soviet Union. On 25 October 1941, the US demanded that Finland cease all hostilities against the USSR and withdraw behind the 1939 border. In public, President Ryti rejected the demands, but in private, he wrote to Mannerheim on 5 November and asked him to halt the offensive. Mannerheim agreed and secretly instructed General Hjalmar Siilasvuo and his III Corps to end the assault on the Kirov Railway. Nevertheless, the United States never declared war on Finland during the entire conflict.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 35, "text": "On 12 July 1941, the United Kingdom signed an agreement of joint action with the Soviet Union. Under German pressure, Finland closed the British legation in Helsinki and cut diplomatic relations with Britain on 1 August. On 2 August 1941, Britain declared that Finland was under enemy occupation, which ended all economic transactions between Britain and Finland and led to a blockade of Finnish trade. The most sizeable British action on Finnish soil was the Raid on Kirkenes and Petsamo, an aircraft-carrier strike on German and Finnish ships on 31 July 1941. The attack accomplished little except the loss of one Norwegian ship and three British aircraft, but it was intended to demonstrate British support for its Soviet ally. From September to October 1941, a total of 39 Hawker Hurricanes of No. 151 Wing RAF, based at Murmansk, reinforced the Soviet Air Forces and provided pilot training during Operation Benedict to protect arctic convoys. On 28 November, the British government presented Finland with an ultimatum demanding that the Finns cease military operations by 3 December. Unofficially, Finland informed the Allies that Finnish troops would halt their advance in the next few days. The reply did not satisfy London, which declared war on Finland on 6 December.
The Commonwealth nations of Canada, Australia, India and New Zealand soon followed suit. In private, British Prime Minister Winston Churchill had sent a letter to Mannerheim on 29 November, in which he wrote that he was \"deeply grieved\" that the British would have to declare war on Finland because of the British alliance with the Soviets. Mannerheim repatriated British volunteers under his command to the United Kingdom via Sweden. According to Clements, the declaration of war was mostly for appearance's sake.", "title": "Finnish offensive phase in 1941" }, { "paragraph_id": 36, "text": "Unconventional warfare was fought in both the Finnish and Soviet wildernesses. Finnish long-range reconnaissance patrols, organised both by the Intelligence Division's Detached Battalion 4 and by local units, patrolled behind Soviet lines. Soviet partisans, both resistance fighters and regular long-range patrol detachments, conducted a number of operations in Finland and in East Karelia from 1941 to 1944. In summer 1942, the USSR formed the 1st Partisan Brigade. The unit was 'partisan' in name only, as it was essentially a 600-strong long-range patrol detachment of men and women intended to disrupt Finnish operations. The 1st Partisan Brigade was able to infiltrate beyond Finnish patrol lines but was intercepted and rendered ineffective at Lake Segozero in August 1942. Irregular partisans distributed propaganda newspapers, such as Finnish translations of the official Communist Party paper Pravda (Russian: Правда). The notable Soviet politician Yuri Andropov took part in these partisan actions. Finnish sources state that, although Soviet partisan activity in East Karelia disrupted Finnish military supply and communication assets, almost two thirds of the attacks targeted civilians, killing 200 and injuring 50, including children and the elderly.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 37, "text": "Between 1942 and 1943, military operations were limited, although the front did see some action. In January 1942, the Soviet Karelian Front attempted to retake Medvezhyegorsk (Finnish: Karhumäki), which had been lost to the Finns in late 1941. With the arrival of spring in April, Soviet forces went on the offensive on the Svir River front, in the Kestenga (Finnish: Kiestinki) region further north in Lapland, as well as in the far north at Petsamo, where the 14th Rifle Division's amphibious landings were supported by the Northern Fleet. All the Soviet offensives started promisingly but were repulsed, due either to the Soviets overextending their lines or to stubborn defensive resistance. After Finnish and German counterattacks in Kestenga, the front lines were generally stalemated. In September 1942, the USSR attacked again at Medvezhyegorsk, but despite five days of fighting, the Soviets managed to push the Finnish lines back only 500 m (550 yd) on a roughly 1 km (0.62 mi) long stretch of the front. Later that month, a Soviet landing with two battalions in Petsamo was defeated by a German counterattack. In November 1941, Hitler had decided to separate the German forces fighting in Lapland from the Army of Norway and create the Army of Lapland, commanded by Colonel General Eduard Dietl. In June 1942, the Army of Lapland was redesignated the 20th Mountain Army.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 38, "text": "In the early stages of the war, the Finnish Army overran the former 1939 border but ceased its advance 30–32 km (19–20 mi) from the centre of Leningrad.
Multiple authors have stated that Finland participated in the siege of Leningrad (Russian: Блокада Ленинграда), but the full extent and nature of its participation are debated, and a clear consensus has yet to emerge. American historian David Glantz writes that the Finnish Army generally maintained its lines and contributed little to the siege from 1941 to 1944, whereas Russian historian Nikolai Baryshnikov stated in 2002 that Finland tacitly supported Hitler's starvation policy for the city. However, in 2009 British historian Michael Jones disputed Baryshnikov's claim and asserted that the Finnish Army cut off the city's northern supply routes but did not take further military action. In 2006, American author Lisa Kirschenbaum wrote that the siege started \"when German and Finnish troops severed all land routes in and out of Leningrad.\"", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 39, "text": "According to Clements, Mannerheim personally refused Hitler's request to assault Leningrad during their meeting on 4 June 1942. Mannerheim explained to Hitler that \"Finland had every reason to wish to stay out of any further provocation of the Soviet Union.\" In 2014, author Jeff Rutherford described the city as being \"ensnared\" between the German and Finnish armies. British historian John Barber described it as a \"siege by the German and Finnish armies from 8 September 1941 to 27 January 1944 [...]\" in his 2017 foreword. Likewise, in 2017, Alexis Peri wrote that the city was \"completely cut off, save a heavily patrolled water passage over Lake Ladoga\" by \"Hitler's Army Group North and his Finnish allies.\"", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 40, "text": "The 150 speedboats, two minelayers and four steamships of the Finnish Ladoga Naval Detachment, as well as numerous shore batteries, had been stationed on Lake Ladoga since August 1941. Finnish Lieutenant General Paavo Talvela proposed on 17 May 1942 to create a joint Finnish–German–Italian unit on the lake to disrupt Soviet supply convoys to Leningrad. The unit was named Naval Detachment K and comprised four Italian MAS torpedo motorboats of the XII Squadriglia MAS, four German KM-type minelayers and the Finnish torpedo motorboat Sisu. The detachment began operations in August 1942, sank numerous smaller Soviet watercraft and flatboats, and assaulted enemy bases and beach fronts until it was dissolved in the winter of 1942–43. Twenty-three Siebel ferries and nine infantry transports of the German Einsatzstab Fähre Ost were also deployed to Lake Ladoga and, in October 1942, unsuccessfully assaulted the island of Sukho, which protected the main supply route to Leningrad.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 41, "text": "Despite the siege of the city, the Soviet Baltic Fleet was still able to operate from Leningrad. The Finnish Navy's flagship Ilmarinen had been sunk by mines in the gulf in September 1941 during the failed diversionary Operation North Wind. In early 1942, Soviet forces recaptured the island of Gogland but lost it and the Bolshoy Tyuters islands to Finnish forces later that spring. During the winter of 1941–42, the Soviet Baltic Fleet decided to use its large submarine force in offensive operations.
Though initial submarine operations in the summer of 1942 were successful, the Kriegsmarine and Finnish Navy soon intensified their anti-submarine efforts, making Soviet submarine operations later in 1942 costly. The underwater offensive carried out by the Soviets convinced the Germans to lay anti-submarine nets, as well as supporting minefields, between the Porkkala Peninsula and Naissaar, which proved to be an insurmountable obstacle for Soviet submarines. In the Arctic Ocean, Finnish radio intelligence intercepted Allied messages on supply convoys to Murmansk, such as PQ 17 and PQ 18, and relayed the information to the Abwehr, German intelligence.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 42, "text": "On 19 July 1941, the Finns created a military administration in occupied East Karelia with the goal of preparing the region for eventual incorporation into Finland. The Finns aimed to expel the Russian portion of the local population (constituting about half), who were deemed \"non-national\", from the area once the war was over, and replace them with the local Finnic peoples, such as Karelians, Finns, Ingrians and Vepsians. Most of the East Karelian population had already been evacuated before the Finnish forces arrived, but about 85,000 people (mostly the elderly, women and children) were left behind, less than half of whom were Karelians. A significant number of civilians, almost 30 per cent of the remaining Russians, were interned in concentration camps.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 43, "text": "The winter of 1941–42 was particularly harsh for the Finnish urban population due to poor harvests and a shortage of agricultural labourers. However, conditions were much worse for Russians in Finnish concentration camps. More than 3,500 people died, mostly from starvation, amounting to 13.8 per cent of those detained, while the corresponding figure for the free population of the occupied territories was 2.6 per cent, and 1.4 per cent for Finland. Conditions gradually improved, ethnic discrimination in wage levels and food rations was terminated, and new schools were established for the Russian-speaking population the following year, after Commander-in-Chief Mannerheim asked the International Committee of the Red Cross in Geneva to inspect the camps. By the end of the occupation, mortality rates had dropped to the same levels as in Finland.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 44, "text": "In 1939, Finland had a small Jewish population of approximately 2,000 people, of whom 300 were refugees from Germany, Austria, and Czechoslovakia. They had full civil rights and fought with other Finns in the ranks of the Finnish Army. The field synagogue in East Karelia was one of the very few functioning synagogues on the Axis side during the war. There were several cases of Jewish officers of the Finnish Army being awarded the German Iron Cross, which they declined. German soldiers were treated by Jewish medical officers, who sometimes saved the soldiers' lives. The German command mentioned Finnish Jews at the Wannsee Conference in January 1942, wishing to transport them to the Majdanek concentration camp in occupied Poland. SS leader Heinrich Himmler also raised the topic of Finnish Jews during his visit to Finland in the summer of 1942; Finnish Prime Minister Jukka Rangell replied that Finland did not have a Jewish question.
In November 1942, Minister of the Interior Toivo Horelli and the head of the State Police, Arno Anthoni, secretly deported eight Jewish refugees to the Gestapo, raising protests among Finnish Social Democratic Party ministers. Only one of the deportees survived. After the incident, the Finnish government refused to transfer any more Jews to German detention.", "title": "Trench warfare from 1942 to 1944" }, { "paragraph_id": 45, "text": "Finland began to seek an exit from the war after the German defeat at the Battle of Stalingrad in February 1943. Finnish Prime Minister Edwin Linkomies formed a new cabinet in March 1943 with peace as the top priority. Similarly, the Finns were distressed by the Allied invasion of Sicily in July and the German defeat in the Battle of Kursk in August. Negotiations were conducted intermittently in 1943 and 1944 between Finland, the Western Allies and the Soviets, but no agreement was reached. Stalin decided to force Finland to surrender with a bombing campaign against Helsinki. Starting in February 1944, it included three major air attacks totalling over 6,000 sorties. Finnish anti-aircraft defence repelled the raids, and only 5 per cent of the dropped bombs hit their planned targets. In Helsinki, decoy searchlights and fires were placed outside the city to deceive Soviet bombers into dropping their payloads on unpopulated areas. Major air attacks also hit Oulu and Kotka, but pre-emptive radio intelligence and effective defence kept the number of casualties low.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 46, "text": "The Soviet Leningrad–Novgorod Offensive finally lifted the siege of Leningrad on 27 January 1944. Army Group North was pushed back to Ida-Viru County on the Estonian border. Stiff German and Estonian defence in Narva from February to August prevented the use of occupied Estonia as a favourable base for Soviet amphibious and air assaults against Helsinki and other Finnish coastal cities in support of a land offensive. Field Marshal Mannerheim had reminded the German command on numerous occasions that if German troops withdrew from Estonia, Finland would be forced to make peace, even on extremely unfavourable terms. Finland abandoned peace negotiations in April 1944 because of the unfavourable terms the USSR demanded.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 47, "text": "On 9 June 1944, the Soviet Leningrad Front launched an offensive against Finnish positions on the Karelian Isthmus and in the area of Lake Ladoga, timed to coincide with Operation Overlord in Normandy, as agreed at the Tehran Conference. Along the 21.7 km (13.5 mi) wide breakthrough front, the Red Army concentrated 3,000 guns and mortars. In some places, the concentration of artillery pieces exceeded 200 guns for every kilometre of front, or one for every 5 m (5.5 yd). Soviet artillery fired over 80,000 rounds along the front on the Karelian Isthmus. On the second day of the offensive, the artillery barrages and the superior numbers of Soviet forces crushed the main Finnish defence line. The Red Army penetrated the second line of defence, the Vammelsuu–Taipale line (VT line), by the sixth day and recaptured Vyborg against insignificant resistance on 20 June.
The Soviet breakthrough on the Karelian Isthmus forced the Finns to reinforce the area, thus allowing the concurrent Soviet offensive in East Karelia to meet less resistance and to recapture Petrozavodsk by 28 June 1944.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 48, "text": "On 25 June, the Red Army reached the third line of defence, the Viipuri–Kuparsaari–Taipale line (VKT line), and the decisive Battle of Tali-Ihantala began, which has been described as the largest battle in Nordic military history. By then, the Finnish Army had retreated around 100 km (62 mi) to approximately the same line of defence it had held at the end of the Winter War. Finland especially lacked modern anti-tank weaponry that could stop Soviet heavy armour, such as the KV-1 or IS-2. Thus, German Foreign Minister Joachim von Ribbentrop offered German hand-held Panzerfaust and Panzerschreck anti-tank weapons in exchange for a guarantee that Finland would not seek a separate peace with the Soviets. On 26 June, President Risto Ryti gave the guarantee as a personal undertaking, which he, Field Marshal Mannerheim and Prime Minister Edwin Linkomies intended to be legally binding only for the remainder of Ryti's presidency. In addition to delivering thousands of anti-tank weapons, Hitler sent the 122nd Infantry Division and the half-strength 303rd Assault Gun Brigade, armed with Sturmgeschütz III tank destroyers, as well as the Luftwaffe's Detachment Kuhlmey, to provide temporary support in the most vulnerable sectors. With the new supplies and assistance from Germany, the Finnish Army halted the numerically and materially superior Soviet advance at Tali-Ihantala on 9 July 1944 and stabilised the front.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 49, "text": "More battles were fought toward the end of the war, the last of which was the Battle of Ilomantsi, fought between 26 July and 13 August 1944, which resulted in a Finnish victory with the destruction of two Soviet divisions. Resisting the Soviet offensive had exhausted Finnish resources, and despite German support under the Ryti-Ribbentrop Agreement, Finland believed it would be unable to blunt another major offensive. Soviet victories against German Army Groups Centre and North during Operation Bagration made the situation even more dire for Finland. With no imminent further Soviet offensives, Finland sought to leave the war. On 1 August, Ryti resigned, and on 4 August, Field Marshal Mannerheim was sworn in as the new president. He annulled the agreement between Ryti and Ribbentrop on 17 August to allow Finland to sue for peace with the Soviets again, and peace terms from Moscow arrived on 29 August.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 50, "text": "Finland was required to return to the borders agreed to in the 1940 Moscow Peace Treaty, demobilise its armed forces, fulfil its war reparations and cede the municipality of Petsamo. The Finns were also required to end any diplomatic relations with Germany immediately and to expel the Wehrmacht from Finnish territory by 15 September 1944; any troops remaining were to be disarmed, arrested and turned over to the Allies. The Finnish Parliament accepted those terms in a secret meeting on 2 September and requested that official negotiations for an armistice begin. The Finnish Army implemented a ceasefire at 8:00 a.m. Helsinki time on 4 September. The Red Army followed suit a day later.
On 14 September, a delegation led by Finnish Prime Minister Antti Hackzell and Foreign Minister Carl Enckell began negotiating the final terms of the Moscow Armistice with the Soviet Union and the United Kingdom; the armistice eventually included additional stipulations from the Soviets. The terms were presented by Molotov on 18 September and accepted by the Finnish Parliament a day later.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 51, "text": "The motivations for the Soviet peace agreement with Finland are debated. Several Western historians stated that the original Soviet designs for Finland were no different from those for the Baltic countries. American political scientist Dan Reiter asserted that for Moscow, control of Finland was necessary. Reiter and the British historian Victor Rothwell quoted Molotov as telling his Lithuanian counterpart in 1940, when the Soviets effectively annexed Lithuania, that minor states such as Finland \"will be included within the honourable family of Soviet peoples\". Reiter stated that concern over severe losses pushed Stalin into accepting a limited outcome in the war rather than pursuing annexation, although some Soviet documents called for military occupation of Finland. He also wrote that Stalin had described territorial concessions, reparations and military bases as his objectives with Finland to representatives of the UK in December 1941 and of the US in March 1943, as well as at the Tehran Conference. He believed that in the end, \"Stalin's desire to crush Hitler quickly and decisively without distraction from the Finnish sideshow\" concluded the war. Red Army officers captured as prisoners of war during the Battle of Tali-Ihantala revealed that their intention was to reach Helsinki and that they were to be strengthened with reinforcements for this task. This was confirmed by intercepted Soviet radio messages.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 52, "text": "Russian historian Nikolai Baryshnikov disputed the view that the Soviet Union sought to deprive Finland of its independence. He argued that there is no documentary evidence for such claims and that the Soviet government was always open to negotiations. Baryshnikov cited sources such as the public information chief of Finnish Headquarters, Major Kalle Lehmus, to show that the Finnish leadership had learned of the limited Soviet plans for Finland by at least July 1944, after intelligence revealed that some Soviet divisions were to be transferred to reserve in Leningrad. Finnish historian Heikki Ylikangas reported similar findings in 2009. According to him, the Soviets refocused their efforts in the summer of 1944 from the Finnish Front to defeating Germany, and Mannerheim received intelligence from Colonel Aladár Paasonen in June 1944 that the Soviet Union was aiming for peace, not occupation. Evidence of the Soviet leadership's intentions to occupy Finland has since been uncovered. In 2018, it was revealed that the Soviets had designed and printed (at Goznak) new banknotes for Finland during the closing phases of the war, which were to be put into use after the planned occupation of the country.", "title": "Soviet offensive in 1944" }, { "paragraph_id": 53, "text": "According to Finnish historians, the casualties of the Finnish Defence Forces amounted to 63,204 dead or missing and around 158,000 wounded. Officially, the Soviets captured 2,377 Finnish prisoners of war, but Finnish researchers estimated the number to be around 3,500.
A total of 939 Finnish civilians died in air raids, and 190 civilians were killed by Soviet partisans. Germany suffered approximately 84,000 casualties on the Finnish front: 16,400 killed, 60,400 wounded and 6,800 missing. In addition to the original peace terms of restoring the 1940 border, Finland was required to pay war reparations to the USSR, conduct domestic war-responsibility trials, cede the municipality of Petsamo and lease the Porkkala Peninsula to the Soviets, as well as ban fascist elements and allow left-wing groups, such as the Communist Party of Finland. A Soviet-led Allied Control Commission was installed to enforce and monitor the peace agreement in Finland. The requirement to disarm or expel any German troops left on Finnish soil by 15 September 1944 eventually escalated into the Lapland War between Finland and Germany and led to the evacuation of the 200,000-strong 20th Mountain Army to Norway.", "title": "Aftermath and casualties" }, { "paragraph_id": 54, "text": "The Soviet demand for $600 million in war indemnities was reduced to $300 million (equivalent to $6.2 billion in 2022), most likely because of pressure from the US and the UK. After the ceasefire, the Soviets insisted that the payments be based on 1938 prices, which doubled the de facto amount. The temporary Moscow Armistice was later finalised without changes in the 1947 Paris Peace Treaties. Henrik Lunde noted that Finland survived the war without losing its independence, unlike many of Germany's allies. Likewise, Helsinki and Moscow were the only capitals of combatant nations in continental Europe that were never occupied. In the longer term, Peter Provis argued that by following self-censorship and limited appeasement policies, as well as by fulfilling the Soviet demands, Finland avoided the fate of other nations that were annexed by the Soviets. Because of Soviet pressure, Finland decided not to accept economic aid from the Marshall Plan. On 6 April 1948, Finland and the Soviet Union signed the Finno-Soviet Treaty of 1948, which was introduced because Finland wanted more political independence from the USSR and the Soviets sought to prevent Finland from being used by Western powers to invade the USSR. On 19 September 1955, Finland and the Soviet Union agreed to extend the treaty, and the Soviets also agreed to return the Porkkala Peninsula to Finland. In January 1956, twelve years after the lease began, the Soviets withdrew from their naval base at Porkkala, and the peninsula was returned to Finnish sovereignty.", "title": "Aftermath and casualties" }, { "paragraph_id": 55, "text": "Many civilians who had been displaced after the Winter War had moved back into Karelia during the Continuation War and so had to be evacuated again. Of the 260,000 civilians who had returned to Karelia, only 19 chose to remain and become Soviet citizens. Most of the Ingrian Finns, together with Votes and Izhorians living in German-occupied Ingria, had been evacuated to Finland in 1943–1944. After the armistice, Finland was forced to return the evacuees. Soviet authorities did not allow the 55,733 returnees to resettle in Ingria and deported the Ingrian Finns to central regions of the Soviet Union.", "title": "Aftermath and casualties" }, { "paragraph_id": 56, "text": "The war is considered a Soviet victory. According to Finnish historians, Soviet casualties in the Continuation War were not accurately recorded, and various approximations have arisen.
Russian historian Grigori Krivosheev estimated in 1997 that around 250,000 were killed or missing in action, while 575,000 were medical casualties (385,000 wounded and 190,000 sick). Finnish author Nenye and others stated in 2016 that, according to the latest research, at least 305,000 were confirmed dead or missing, and the number of wounded certainly exceeded 500,000. As for material losses, authors Jowett and Snodgrass state that 697 Soviet tanks were destroyed, 842 field artillery pieces were captured, and 1,600 aircraft were destroyed by Finnish fighter planes, with a further 1,030 downed by anti-aircraft fire and 75 by the Navy.", "title": "Aftermath and casualties" }, { "paragraph_id": 57, "text": "The number of Soviet prisoners of war in Finland was estimated by Finnish historians to be around 64,000, 56,000 of whom were captured in 1941. Around 2,600 to 2,800 Soviet prisoners of war were handed over to Germany in exchange for roughly 2,200 Finnic prisoners of war. Of the Soviet prisoners, at least 18,318 were documented to have died in Finnish prisoner of war camps. Finnish archival sources indicate that the highest mortality rates were observed in the largest prisoner of war camps, with rates as high as 41 per cent. For small camps, the comparable rate was under 5 per cent. Nearly 85 per cent of the deaths occurred between November 1941 and September 1942, with the highest monthly number of deaths, 2,665, recorded in February 1942. For comparison, the number of deaths in February 1943 was 92. Historian Oula Silvennoinen attributes the number of Soviet deaths to several factors: Finnish unpreparedness to handle unexpectedly large numbers of prisoners, which resulted in overcrowding; a lack of warm clothing among prisoners captured predominantly during the summer offensive; limited supplies of food (often made worse by camp personnel stealing food for themselves); and disease as a result of the previous factors. According to historian Antti Kujala, approximately 1,200 prisoners were shot, \"most\" of them illegally.", "title": "Aftermath and casualties" }, { "paragraph_id": 58, "text": "The extent of Finland's participation in the siege of Leningrad, and whether Soviet civilian casualties during the siege should be attributed to the Continuation War, are debated and lack a consensus (estimates of civilian deaths during the siege range from 632,253 to 1,042,000).", "title": "Aftermath and casualties" }, { "paragraph_id": 59, "text": "Several literary and cinematic works have been based on the Continuation War. The best-known story about the Continuation War is Väinö Linna's novel The Unknown Soldier (Finnish: Tuntematon sotilas), which was the basis for three films in 1955, 1985, and 2017. There is also the 1999 film Ambush, based on a novel by Antti Tuuri about the events in Rukajärvi, Karelia, and the 2007 film 1944: The Final Defence, based on the Battle of Tali-Ihantala. The final stages of the Continuation War were the primary focus of Soviet director Yuli Raizman's 1945 documentary entitled A Propos of the Truce with Finland (Russian: К вопросу о перемирии с Финляндией). The documentary illustrates the strategic operations that led to the breakthrough on the Karelian Isthmus by the Soviets, as well as how Soviet propaganda presented the war overall. The film is titled Läpimurto Kannaksella ja rauhanneuvottelut in Finnish.", "title": "In film and literature" }, { "paragraph_id": 60, "text": "", "title": "External links" } ]
The Continuation War, also known as the Second Soviet-Finnish War, was a conflict fought by Finland and Nazi Germany against the Soviet Union during World War II. It began with a Finnish declaration of war and invasion on 25 June 1941 and ended on 19 September 1944 with the Moscow Armistice. The Soviet Union and Finland had previously fought the Winter War from 1939 to 1940, which ended with the Soviet failure to conquer Finland and the Moscow Peace Treaty. Numerous reasons have been proposed for the Finnish decision to invade, with the regaining of territory lost during the Winter War regarded as the most common explanation. Other justifications for the conflict include Finnish President Risto Ryti's vision of a Greater Finland and Commander-in-Chief Carl Gustaf Emil Mannerheim's desire to annex East Karelia. On 22 June 1941, the Axis invaded the Soviet Union. Three days later, the Soviet Union conducted an air raid on Finnish cities, which prompted Finland to declare war and allow German troops in Finland to begin offensive warfare. By September 1941, Finland had regained its post–Winter War concessions to the Soviet Union in Karelia. The Finnish Army continued its offensive past the 1939 border during the invasion of East Karelia and halted it only around 30–32 km (19–20 mi) from the centre of Leningrad. It participated in besieging the city by cutting the northern supply routes and by digging in until 1944. In Lapland, joint German-Finnish forces failed to capture Murmansk or to cut the Kirov (Murmansk) Railway. The Soviet Vyborg–Petrozavodsk Offensive of June–August 1944 drove the Finns from most of the territories that they had gained during the war, but the Finnish Army halted the offensive in August 1944. Hostilities between Finland and the USSR ceased in September 1944 with the signing of the Moscow Armistice, in which Finland restored its borders per the 1940 Moscow Peace Treaty and additionally ceded Petsamo and leased the Porkkala Peninsula to the Soviets. Furthermore, Finland was required to pay war reparations to the Soviet Union, accept partial responsibility for the war, and acknowledge that it had been a German ally. Finland was also required by the agreement to expel German troops from Finnish territory, which led to the Lapland War between Finland and Germany.
2002-01-11T12:53:38Z
2023-12-18T05:46:24Z
[ "Template:Inflation-year", "Template:Ill", "Template:Lang-de", "Template:Cite book", "Template:Finland topics", "Template:Main", "Template:Use dmy dates", "Template:Lang-fi", "Template:Main articles", "Template:Reflist", "Template:Refbegin", "Template:Good article", "Template:Infobox military conflict", "Template:Sfn", "Template:See also", "Template:Legend", "Template:Lang-ru", "Template:Div col", "Template:Refend", "Template:Short description", "Template:Finnish Defence Forces", "Template:Div col end", "Template:Commons category", "Template:Russian Conflicts", "Template:Use British English", "Template:Ship", "Template:Cite journal", "Template:Cite magazine", "Template:Joseph Stalin", "Template:Authority control", "Template:Lang", "Template:Convert", "Template:TOC limit", "Template:Cite news", "Template:Cite web", "Template:World War II", "Template:Use shortened footnotes", "Template:Portal", "Template:Notelist-la", "Template:Harvnb", "Template:Refn" ]
https://en.wikipedia.org/wiki/Continuation_War
7,713
Chinese remainder theorem
In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime (no two divisors share a common factor other than 1). For example, if we know that the remainder of n divided by 3 is 2, the remainder of n divided by 5 is 3, and the remainder of n divided by 7 is 2, then without knowing the value of n, we can determine that the remainder of n divided by 105 (the product of 3, 5, and 7) is 23. Importantly, this tells us that if n is a natural number less than 105, then 23 is the only possible value of n. The earliest known statement of the theorem is by the Chinese mathematician Sunzi in the Sunzi Suanjing in the 3rd century CE. The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers. The Chinese remainder theorem (expressed in terms of congruences) is true over every principal ideal domain. It has been generalized to any ring, with a formulation involving two-sided ideals. The earliest known statement of the theorem, as a problem with specific numbers, appears in the 3rd-century book Sunzi Suanjing by the Chinese mathematician Sunzi: There are certain things whose number is unknown. If we count them by threes, we have two left over; by fives, we have three left over; and by sevens, two are left over. How many things are there? Sunzi's work contains neither a proof nor a full algorithm. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century), and appear in Fibonacci's Liber Abaci (1202). The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early 19th century by the British missionary Alexander Wylie. The notion of congruences was first introduced and used by Carl Friedrich Gauss in his Disquisitiones Arithmeticae of 1801. Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction." Gauss introduces a procedure for solving the problem that had already been used by Leonhard Euler but was in fact an ancient method that had appeared several times. Let n1, ..., nk be integers greater than 1, which are often called moduli or divisors. Let us denote by N the product of the ni. The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x such that 0 ≤ x < N and the remainder of the Euclidean division of x by ni is ai for every i. This may be restated as follows in terms of congruences: if the ni are pairwise coprime, and if a1, ..., ak are any integers, then the system x ≡ a1 (mod n1), ..., x ≡ ak (mod nk) has a solution, and any two solutions, say x1 and x2, are congruent modulo N, that is, x1 ≡ x2 (mod N).
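The congruence formulation can be checked directly for small moduli. As a minimal Python sketch (the function name crt_bruteforce and its interface are illustrative, not from any standard library), the following recovers Sunzi's answer by brute force:

    def crt_bruteforce(remainders, moduli):
        # Find the unique 0 <= x < N (N = product of the moduli) with
        # x % m == a for every pair (a, m); assumes pairwise coprime moduli.
        n = 1
        for m in moduli:
            n *= m
        for x in range(n):
            if all(x % m == a for a, m in zip(remainders, moduli)):
                return x

    print(crt_bruteforce([2, 3, 2], [3, 5, 7]))  # prints 23

As the theorem guarantees, the search finds exactly one solution below 105.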
In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map

    x mod N ↦ (x mod n1, ..., x mod nk)

defines a ring isomorphism

    Z/NZ ≅ Z/n1Z × ⋯ × Z/nkZ

between the ring of integers modulo N and the direct product of the rings of integers modulo the ni. This means that for doing a sequence of arithmetic operations in Z/NZ, one may do the same computation independently in each Z/niZ and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if N and the number of operations are large. This is widely used, under the name multi-modular computation, for linear algebra over the integers or the rational numbers.

The theorem can also be restated in the language of combinatorics as the fact that the infinite arithmetic progressions of integers form a Helly family.

The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness.

Suppose that x and y are both solutions to all the congruences. As x and y give the same remainder when divided by ni, their difference x − y is a multiple of each ni. As the ni are pairwise coprime, their product N also divides x − y, and thus x and y are congruent modulo N. If x and y are supposed to be non-negative and less than N (as in the first statement of the theorem), then their difference may be a multiple of N only if x = y.

The map

    x mod N ↦ (x mod n1, ..., x mod nk)

maps congruence classes modulo N to sequences of congruence classes modulo the ni. The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution.

This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can.

Existence may be established by an explicit construction of x. This construction may be split into two steps: first solving the problem in the case of two moduli, and then extending this solution to the general case by induction on the number of moduli.

We want to solve the system

    x ≡ a1 (mod n1)
    x ≡ a2 (mod n2),

where n1 and n2 are coprime. Bézout's identity asserts the existence of two integers m1 and m2 such that

    m1 n1 + m2 n2 = 1.

The integers m1 and m2 may be computed by the extended Euclidean algorithm. A solution is given by

    x = a1 m2 n2 + a2 m1 n1.

Indeed, since m2 n2 = 1 − m1 n1, we have x = a1 + (a2 − a1) m1 n1, implying that x ≡ a1 (mod n1). The second congruence is proved similarly, by exchanging the subscripts 1 and 2.

Consider now a sequence of congruence equations

    x ≡ a1 (mod n1)
    ...
    x ≡ ak (mod nk),

where the ni are pairwise coprime. The first two equations have a solution a1,2 provided by the method of the previous paragraph, and the set of solutions of these first two equations is the set of all solutions of the single equation

    x ≡ a1,2 (mod n1 n2).

As the other ni are coprime with n1 n2, this reduces solving the initial problem of k equations to a similar problem with k − 1 equations. Iterating the process, one eventually gets the solution of the initial problem.
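The two-moduli step just described translates directly into code. The following hedged Python sketch computes the Bézout coefficients m1, m2 with the extended Euclidean algorithm and applies the formula x = a1 m2 n2 + a2 m1 n1; the function names are ours.

    def extended_gcd(a, b):
        """Return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, u, v = extended_gcd(b, a % b)
        return g, v, u - (a // b) * v

    def crt_two(a1, n1, a2, n2):
        """Solve x = a1 (mod n1), x = a2 (mod n2) for coprime n1, n2."""
        g, m1, m2 = extended_gcd(n1, n2)   # m1*n1 + m2*n2 == 1
        assert g == 1, "moduli must be coprime"
        return (a1 * m2 * n2 + a2 * m1 * n1) % (n1 * n2)

    print(crt_two(2, 3, 3, 5))  # -> 8, and 8 % 3 == 2, 8 % 5 == 3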
For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless, Lagrange interpolation is a special case of this construction, applied to polynomials instead of integers.

Let Ni = N/ni be the product of all moduli but the ith one. As the ni are pairwise coprime, Ni and ni are coprime. Thus Bézout's identity applies, and there exist integers Mi and mi such that

    Mi Ni + mi ni = 1.

A solution of the system of congruences is

    x = a1 M1 N1 + ⋯ + ak Mk Nk.

In fact, as Nj is a multiple of ni for i ≠ j, we have

    x ≡ ai Mi Ni ≡ ai (1 − mi ni) ≡ ai (mod ni)

for every i.
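As a sketch of this direct construction in Python (assuming Python 3.8+, where pow(Ni, -1, ni) returns the modular inverse, which coincides with Mi modulo ni):

    from math import prod

    def crt_direct(residues, moduli):
        """x = sum of a_i * M_i * N_i, with N_i = N/n_i and M_i*N_i = 1 (mod n_i)."""
        N = prod(moduli)
        x = 0
        for a, n in zip(residues, moduli):
            Ni = N // n
            Mi = pow(Ni, -1, n)   # inverse exists since gcd(Ni, n) == 1
            x += a * Mi * Ni
        return x % N

    print(crt_direct([2, 3, 2], [3, 5, 7]))  # -> 23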
Consider a system of congruences

    x ≡ a1 (mod n1)
    ...
    x ≡ ak (mod nk),

where the ni are pairwise coprime, and let N = n1 n2 ⋯ nk. In this section several methods are described for computing the unique solution x such that 0 ≤ x < N, and these methods are applied on the example

    x ≡ 0 (mod 3)
    x ≡ 3 (mod 4)
    x ≡ 4 (mod 5).

Several methods of computation are presented. The first two are useful for small examples, but become very inefficient when the product n1 ⋯ nk is large. The third one uses the existence proof given in § Existence (constructive proof). It is the most convenient when the product n1 ⋯ nk is large, or for computer computation.

It is easy to check whether a value of x is a solution: it suffices to compute the remainder of the Euclidean division of x by each ni. Thus, to find the solution, it suffices to check successively the integers from 0 to N until finding the solution. Although very simple, this method is very inefficient. For the simple example considered here, 40 integers (including 0) have to be checked for finding the solution, which is 39. This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of N, and the average number of operations is of the order of N. Therefore, this method is rarely used, whether for hand-written computation or on computers.

The search for the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that 0 ≤ ai < ni (if it were not the case, it would suffice to replace each ai by the remainder of its division by ni). This implies that the solution belongs to the arithmetic progression

    a1, a1 + n1, a1 + 2n1, ...

By testing the values of these numbers modulo n2, one eventually finds a solution x2 of the first two congruences. Then the solution belongs to the arithmetic progression

    x2, x2 + n1 n2, x2 + 2 n1 n2, ...

Testing the values of these numbers modulo n3, and continuing until every modulus has been tested, eventually yields the solution. This method is faster if the moduli have been ordered by decreasing value, that is if n1 > n2 > ⋯ > nk. For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, 9 = 4 + 5, 14 = 9 + 5, ... For each of them, compute the remainder modulo 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding 20 = 5 × 4 at each step, and computing only the remainders modulo 3. This gives

    4 mod 4 = 0;  9 mod 4 = 1;  14 mod 4 = 2;  19 mod 4 = 3, so x2 = 19;
    19 mod 3 = 1;  39 = 19 + 20 and 39 mod 3 = 0, so x = 39.

This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods for very large products of moduli. Although dramatically faster than the systematic search, this method also has an exponential time complexity and is therefore not used on computers.

The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo n1 n2 (for getting a result in the interval [0, n1 n2 − 1]). As the Bézout coefficients may be computed with the extended Euclidean algorithm, the whole computation has, at most, a quadratic time complexity of O((s1 + s2)^2), where si denotes the number of digits of ni.

For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process eventually provides the solution with a complexity that is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the first two moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers.

Another strategy consists in partitioning the moduli in pairs whose products have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximately divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time.

On the current example (which has only three moduli), both strategies are identical and work as follows. Bézout's identity for 3 and 4 is

    (−1) × 3 + 1 × 4 = 1.

Putting this in the formula given for proving the existence gives

    0 × 1 × 4 + 3 × (−1) × 3 = −9

for a solution of the first two congruences, the other solutions being obtained by adding to −9 any multiple of 3 × 4 = 12. One may continue with any of these solutions, but the solution 3 = −9 + 12 is smaller (in absolute value) and thus probably leads to an easier computation. Bézout's identity for 5 and 3 × 4 = 12 is

    5 × 5 + (−2) × 12 = 1.

Applying the same formula again, we get a solution of the problem:

    x = 4 × (−2) × 12 + 3 × 5 × 5 = −21.

The other solutions are obtained by adding any multiple of 3 × 4 × 5 = 60, and the smallest positive solution is −21 + 60 = 39.
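The first (sequential) regrouping strategy is easy to express with the two-moduli routine crt_two sketched earlier; the following lines (ours) reproduce the worked example, including the final answer 39.

    def crt_iterative(residues, moduli):
        """Combine the congruences pairwise, left to right, using crt_two."""
        a, n = residues[0], moduli[0]
        for a2, n2 in zip(residues[1:], moduli[1:]):
            a, n = crt_two(a, n, a2, n2), n * n2
        return a

    print(crt_iterative([0, 3, 4], [3, 4, 5]))  # -> 39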
The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of linear Diophantine equations

    x = a1 + x1 n1
    ...
    x = ak + xk nk,

where the unknown integers are x and the xi. Therefore, every general method for solving such systems may be used for finding the solution of the Chinese remainder theorem, such as the reduction of the matrix of the system to Smith normal form or Hermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of Bézout's identity.

In § Statement, the Chinese remainder theorem has been stated in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain R: it suffices to replace "integer" by "element of the domain" and Z by R. These two versions of the theorem are true in this context, because the proofs (except for the first existence proof) are based on Euclid's lemma and Bézout's identity, which are true over every principal ideal domain. However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity.

The statement in terms of remainders given in § Statement cannot be generalized to any principal ideal domain, but its generalization to Euclidean domains is straightforward. The univariate polynomials over a field provide the typical example of a Euclidean domain which is not the integers. Therefore, we state the theorem for the case of the ring R = K[X] for a field K. For getting the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain.

The Chinese remainder theorem for polynomials is thus: Let Pi(X) (the moduli) be, for i = 1, ..., k, pairwise coprime polynomials in R = K[X]. Let di = deg Pi be the degree of Pi(X), and let D be the sum of the di. If A1(X), ..., Ak(X) are polynomials such that Ai(X) = 0 or deg Ai < di for every i, then there is one and only one polynomial P(X) such that deg P < D and the remainder of the Euclidean division of P(X) by Pi(X) is Ai(X) for every i.

The construction of the solution may be done as in § Existence (constructive proof) or § Existence (direct proof). However, the latter construction may be simplified by using, as follows, partial fraction decomposition instead of the extended Euclidean algorithm. Thus, we want to find a polynomial P(X) which satisfies the congruences

    P(X) ≡ Ai(X) (mod Pi(X))

for i = 1, ..., k. Consider the polynomials

    Q(X) = P1(X) ⋯ Pk(X)   and   Qi(X) = Q(X)/Pi(X).

The partial fraction decomposition of 1/Q(X) gives k polynomials Si(X) with degrees deg Si(X) < di such that

    1/Q(X) = S1(X)/P1(X) + ⋯ + Sk(X)/Pk(X),

and thus

    1 = S1(X) Q1(X) + ⋯ + Sk(X) Qk(X).

Then a solution of the simultaneous congruence system is given by the polynomial

    P(X) = A1(X) S1(X) Q1(X) + ⋯ + Ak(X) Sk(X) Qk(X).

In fact, as Qj(X) is a multiple of Pi(X) for i ≠ j, and Si(X) Qi(X) ≡ 1 (mod Pi(X)), we have

    P(X) ≡ Ai(X) Si(X) Qi(X) ≡ Ai(X) (mod Pi(X))

for 1 ≤ i ≤ k.
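As an illustration of this partial-fraction construction, here is a small Python sketch; it assumes the sympy library and the toy moduli P1 = X, P2 = X − 1 with target remainders A1 = 1, A2 = 2 (all names ours).

    from sympy import symbols, apart, expand, rem

    X = symbols('X')
    P1, P2 = X, X - 1          # the moduli
    A1, A2 = 1, 2              # the target remainders
    # the decomposition is 1/(P1*P2) = -1/X + 1/(X - 1), so S1 = -1 and S2 = 1
    print(apart(1 / (P1 * P2), X))
    S1, S2 = -1, 1
    # P = A1*S1*Q1 + A2*S2*Q2, where Q1 = P2 and Q2 = P1
    sol = expand(A1 * S1 * P2 + A2 * S2 * P1)
    print(sol)                               # -> X + 1
    print(rem(sol, P1, X), rem(sol, P2, X))  # -> 1 2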
This solution may have a degree larger than D = d1 + ⋯ + dk. The unique solution of degree less than D may be deduced by considering the remainder Bi(X) of the Euclidean division of Ai(X) Si(X) by Pi(X). This solution is

    P(X) = B1(X) Q1(X) + ⋯ + Bk(X) Qk(X).

A special case of the Chinese remainder theorem for polynomials is Lagrange interpolation. For this, consider k monic polynomials of degree one:

    Pi(X) = X − xi.

They are pairwise coprime if the xi are all different. The remainder of the division of a polynomial P(X) by Pi(X) is P(xi), by the polynomial remainder theorem.

Now, let A1, ..., Ak be constants (polynomials of degree 0) in K. Both Lagrange interpolation and the Chinese remainder theorem assert the existence of a unique polynomial P(X) of degree less than k such that

    P(xi) = Ai

for every i. The Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution. More precisely, let

    Q(X) = (X − x1) ⋯ (X − xk)   and   Qi(X) = Q(X)/(X − xi).

The partial fraction decomposition of 1/Q(X) is

    1/Q(X) = Σi 1/(Qi(xi) (X − xi)).

In fact, reducing the right-hand side to a common denominator, one gets

    Σi 1/(Qi(xi) (X − xi)) = (1/Q(X)) Σi Qi(X)/Qi(xi),

and the numerator is equal to one, as being a polynomial of degree less than k which takes the value one for k different values of X. Using the above general formula, we get the Lagrange interpolation formula:

    P(X) = Σi Ai Qi(X)/Qi(xi).
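A small self-contained Python sketch of Lagrange interpolation, seen here as the CRT solution for the moduli X − xi (function and variable names are ours):

    def lagrange(points):
        """Return a function evaluating the interpolant of the given (x, y) points."""
        def p(t):
            total = 0.0
            for i, (xi, yi) in enumerate(points):
                term = yi
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        term *= (t - xj) / (xi - xj)   # Q_i(t)/Q_i(x_i), factor by factor
                total += term
            return total
        return p

    p = lagrange([(0, 1), (1, 3), (2, 9)])
    print(p(0), p(1), p(2))  # -> 1.0 3.0 9.0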
Hermite interpolation is an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one). The problem consists of finding a polynomial of the least possible degree such that the polynomial and its first derivatives take given values at some fixed points.

More precisely, let x1, ..., xk be k elements of the ground field K, and, for i = 1, ..., k, let ai,0, ai,1, ..., ai,ri−1 be the values of the first ri derivatives of the sought polynomial at xi (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial P(X) such that its jth derivative takes the value ai,j at xi, for i = 1, ..., k and j = 0, ..., ri − 1.

Consider the polynomial

    Pi(X) = Σj<ri (ai,j/j!) (X − xi)^j.

This is the Taylor polynomial of order ri − 1 at xi of the unknown polynomial P(X). Therefore, we must have

    P(X) ≡ Pi(X) (mod (X − xi)^ri).

Conversely, any polynomial P(X) that satisfies these k congruences differs from Pi(X) by a multiple of (X − xi)^ri, for any i = 1, ..., k; therefore Pi(X) is its Taylor polynomial of order ri − 1 at xi, that is, P(X) solves the initial Hermite interpolation problem. The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of the ri which satisfies these k congruences.

There are several ways for computing the solution P(X). One may use the method described at the beginning of § Over univariate polynomial rings and Euclidean domains. One may also use the constructions given in § Existence (constructive proof) or § Existence (direct proof).

The Chinese remainder theorem can be generalized to non-coprime moduli. Let m, n, a, b be any integers, let g = gcd(m, n) and M = lcm(m, n), and consider the system of congruences

    x ≡ a (mod m)
    x ≡ b (mod n).

If a ≡ b (mod g), then this system has a unique solution modulo M = mn/g. Otherwise, it has no solutions. If one uses Bézout's identity to write g = um + vn, then the solution is given by

    x = (a v n + b u m)/g.

This defines an integer, as g divides both m and n. Otherwise, the proof is very similar to that for coprime moduli.
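A hedged Python sketch of this non-coprime case, reusing the extended_gcd helper defined earlier (the solvability test and the formula x = (a v n + b u m)/g follow the statement above):

    from math import gcd

    def crt_general(a, m, b, n):
        """Solve x = a (mod m), x = b (mod n); return None when unsolvable."""
        g = gcd(m, n)
        if (a - b) % g != 0:
            return None                 # solvable only when a = b (mod g)
        _, u, v = extended_gcd(m, n)    # g == u*m + v*n
        M = m * n // g                  # lcm(m, n)
        return ((a * v * n + b * u * m) // g) % M

    print(crt_general(1, 4, 3, 6))  # -> 9, since 9 % 4 == 1 and 9 % 6 == 3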
The Chinese remainder theorem can be generalized to any ring, by using coprime ideals (also called comaximal ideals). Two ideals I and J are coprime if there are elements i ∈ I and j ∈ J such that i + j = 1. This relation plays the role of Bézout's identity in the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows.

Let I1, ..., Ik be two-sided ideals of a ring R and let I be their intersection. If the ideals are pairwise coprime, we have the isomorphism

    R/I ≅ (R/I1) × ⋯ × (R/Ik),   x mod I ↦ (x mod I1, ..., x mod Ik),

between the quotient ring R/I and the direct product of the R/Ii, where "x mod I" denotes the image of the element x in the quotient ring defined by the ideal I. Moreover, if R is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product; that is

    I = I1 ∩ I2 ∩ ⋯ ∩ Ik = I1 I2 ⋯ Ik,

if Ii and Ij are coprime for all i ≠ j.

Let I1, I2, ..., Ik be pairwise coprime two-sided ideals with ⋂i Ii = 0, and let

    φ : R → (R/I1) × ⋯ × (R/Ik)

be the isomorphism defined above. Let fi = (0, ..., 1, ..., 0) be the element of (R/I1) × ⋯ × (R/Ik) whose components are all 0 except the ith, which is 1, and let ei = φ⁻¹(fi).

The ei are central idempotents that are pairwise orthogonal; this means, in particular, that ei² = ei for every i and ei ej = ej ei = 0 for i ≠ j. Moreover, one has e1 + ⋯ + ek = 1 and Ii = R(1 − ei). In summary, this generalized Chinese remainder theorem is the equivalence between giving pairwise coprime two-sided ideals with a zero intersection, and giving central and pairwise orthogonal idempotents that sum to 1.

The Chinese remainder theorem has been used to construct a Gödel numbering for sequences, which is involved in the proof of Gödel's incompleteness theorems.

The prime-factor FFT algorithm (also called the Good–Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of a fast Fourier transform of size n1 n2 to the computation of two fast Fourier transforms of smaller sizes n1 and n2 (provided that n1 and n2 are coprime).

Most implementations of RSA use the Chinese remainder theorem during signing of HTTPS certificates and during decryption.
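As a toy illustration of CRT-accelerated RSA decryption (tiny numbers, purely a sketch, not a real implementation; the recombination step is the classical Garner formula):

    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # toy private exponent (Python 3.8+)
    c = pow(42, e, n)                   # encrypt the message 42

    mp = pow(c, d % (p - 1), p)         # decrypt modulo p
    mq = pow(c, d % (q - 1), q)         # decrypt modulo q
    q_inv = pow(q, -1, p)
    m = mq + q * ((q_inv * (mp - mq)) % p)   # recombine with the CRT
    print(m)                            # -> 42

The two modular exponentiations use exponents roughly half the size of d, which is where the speedup of real implementations comes from.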
The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certain cardinality.

The range ambiguity resolution techniques used with medium pulse repetition frequency radar can be seen as a special case of the Chinese remainder theorem.

Given a surjection Z/n → Z/m of finite abelian groups, we can use the Chinese remainder theorem to give a complete description of any such map. First of all, the theorem gives isomorphisms

    Z/n ≅ Z/p_{n1}^{a1} × ⋯ × Z/p_{ni}^{ai}   and   Z/m ≅ Z/p_{m1}^{b1} × ⋯ × Z/p_{mj}^{bj},

where {p_{m1}, ..., p_{mj}} ⊆ {p_{n1}, ..., p_{ni}}. In addition, for any induced map

    Z/p_{nk}^{ak} → Z/p_{ml}^{bl}

from the original surjection, we have ak ≥ bl and p_{nk} = p_{ml}, since for a pair of primes p, q, the only non-zero surjections

    Z/p^a → Z/q^b

can be defined if p = q and a ≥ b. These observations are pivotal for constructing the ring of profinite integers, which is given as an inverse limit of all such maps.

Dedekind's theorem on the linear independence of characters. Let M be a monoid and k an integral domain, viewed as a monoid by considering the multiplication on k. Then any finite family (fi)i∈I of distinct monoid homomorphisms fi : M → k is linearly independent. In other words, every family (αi)i∈I of elements αi ∈ k satisfying

    Σi∈I αi fi = 0

must be equal to the family (0)i∈I.

Proof. First assume that k is a field; otherwise, replace the integral domain k by its quotient field, and nothing will change. We can linearly extend the monoid homomorphisms fi : M → k to k-algebra homomorphisms Fi : k[M] → k, where k[M] is the monoid ring of M over k. Then, by linearity, the condition

    Σi∈I αi fi = 0

yields

    Σi∈I αi Fi = 0.

Next, for i, j ∈ I with i ≠ j, the two k-linear maps Fi : k[M] → k and Fj : k[M] → k are not proportional to each other. Otherwise fi and fj would also be proportional, and thus equal, since as monoid homomorphisms they satisfy fi(1) = 1 = fj(1), which contradicts the assumption that they are distinct.

Therefore, the kernels Ker Fi and Ker Fj are distinct. Since k[M]/Ker Fi ≅ Fi(k[M]) = k is a field, Ker Fi is a maximal ideal of k[M] for every i in I. Because they are distinct and maximal, the ideals Ker Fi and Ker Fj are coprime whenever i ≠ j. The Chinese remainder theorem (for general rings) yields an isomorphism

    k[M]/K ≅ ∏i∈I k[M]/Ker Fi,

where

    K = ⋂i∈I Ker Fi.

Consequently, the map

    Φ : k[M] → ∏i∈I k[M]/Ker Fi,   x ↦ (x mod Ker Fi)i∈I,

is surjective. Under the isomorphisms k[M]/Ker Fi → Fi(k[M]) = k, the map Φ corresponds to

    ψ : k[M] → ∏i∈I k,   x ↦ (Fi(x))i∈I.

Now,

    Σi∈I αi Fi = 0

yields

    Σi∈I αi ui = 0

for every vector (ui)i∈I in the image of the map ψ. Since ψ is surjective, this means that

    Σi∈I αi ui = 0

for every vector (ui)i∈I ∈ ∏i∈I k. Consequently, (αi)i∈I = (0)i∈I. QED.
[ { "paragraph_id": 0, "text": "In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime (no two divisors share a common factor other than 1).", "title": "" }, { "paragraph_id": 1, "text": "For example, if we know that the remainder of n divided by 3 is 2, the remainder of n divided by 5 is 3, and the remainder of n divided by 7 is 2, then without knowing the value of n, we can determine that the remainder of n divided by 105 (the product of 3, 5, and 7) is 23. Importantly, this tells us that if n is a natural number less than 105, then 23 is the only possible value of n.", "title": "" }, { "paragraph_id": 2, "text": "The earliest known statement of the theorem is by the Chinese mathematician Sunzi in the Sunzi Suanjing in the 3rd century CE.", "title": "" }, { "paragraph_id": 3, "text": "The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers.", "title": "" }, { "paragraph_id": 4, "text": "The Chinese remainder theorem (expressed in terms of congruences) is true over every principal ideal domain. It has been generalized to any ring, with a formulation involving two-sided ideals.", "title": "" }, { "paragraph_id": 5, "text": "The earliest known statement of the theorem, as a problem with specific numbers, appears in the 3rd-century book Sunzi Suanjing by the Chinese mathematician Sunzi:", "title": "History" }, { "paragraph_id": 6, "text": "There are certain things whose number is unknown. If we count them by threes, we have two left over; by fives, we have three left over; and by sevens, two are left over. How many things are there?", "title": "History" }, { "paragraph_id": 7, "text": "Sunzi's work contains neither a proof nor a full algorithm. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century), and appear in Fibonacci's Liber Abaci (1202). The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections which was translated into English in early 19th century by British missionary Alexander Wylie.", "title": "History" }, { "paragraph_id": 8, "text": "The notion of congruences was first introduced and used by Carl Friedrich Gauss in his Disquisitiones Arithmeticae of 1801. Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, \"to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction.\" Gauss introduces a procedure for solving the problem that had already been used by Leonhard Euler but was in fact an ancient method that had appeared several times.", "title": "History" }, { "paragraph_id": 9, "text": "Let n1, ..., nk be integers greater than 1, which are often called moduli or divisors. 
Let us denote by N the product of the ni.", "title": "Statement" }, { "paragraph_id": 10, "text": "The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x, such that 0 ≤ x < N and the remainder of the Euclidean division of x by ni is ai for every i.", "title": "Statement" }, { "paragraph_id": 11, "text": "This may be restated as follows in terms of congruences: If the n i {\\displaystyle n_{i}} are pairwise coprime, and if a1, ..., ak are any integers, then the system", "title": "Statement" }, { "paragraph_id": 12, "text": "has a solution, and any two solutions, say x1 and x2, are congruent modulo N, that is, x1 ≡ x2 (mod N ).", "title": "Statement" }, { "paragraph_id": 13, "text": "In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map", "title": "Statement" }, { "paragraph_id": 14, "text": "defines a ring isomorphism", "title": "Statement" }, { "paragraph_id": 15, "text": "between the ring of integers modulo N and the direct product of the rings of integers modulo the ni. This means that for doing a sequence of arithmetic operations in Z / N Z , {\\displaystyle \\mathbb {Z} /N\\mathbb {Z} ,} one may do the same computation independently in each Z / n i Z {\\displaystyle \\mathbb {Z} /n_{i}\\mathbb {Z} } and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if N and the number of operations are large. This is widely used, under the name multi-modular computation, for linear algebra over the integers or the rational numbers.", "title": "Statement" }, { "paragraph_id": 16, "text": "The theorem can also be restated in the language of combinatorics as the fact that the infinite arithmetic progressions of integers form a Helly family.", "title": "Statement" }, { "paragraph_id": 17, "text": "The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness.", "title": "Proof" }, { "paragraph_id": 18, "text": "Suppose that x and y are both solutions to all the congruences. As x and y give the same remainder, when divided by ni, their difference x − y is a multiple of each ni. As the ni are pairwise coprime, their product N also divides x − y, and thus x and y are congruent modulo N. If x and y are supposed to be non-negative and less than N (as in the first statement of the theorem), then their difference may be a multiple of N only if x = y.", "title": "Proof" }, { "paragraph_id": 19, "text": "The map", "title": "Proof" }, { "paragraph_id": 20, "text": "maps congruence classes modulo N to sequences of congruence classes modulo ni. The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution.", "title": "Proof" }, { "paragraph_id": 21, "text": "This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can.", "title": "Proof" }, { "paragraph_id": 22, "text": "Existence may be established by an explicit construction of x. 
This construction may be split into two steps, first solving the problem in the case of two moduli, and then extending this solution to the general case by induction on the number of moduli.", "title": "Proof" }, { "paragraph_id": 23, "text": "We want to solve the system:", "title": "Proof" }, { "paragraph_id": 24, "text": "where n 1 {\\displaystyle n_{1}} and n 2 {\\displaystyle n_{2}} are coprime.", "title": "Proof" }, { "paragraph_id": 25, "text": "Bézout's identity asserts the existence of two integers m 1 {\\displaystyle m_{1}} and m 2 {\\displaystyle m_{2}} such that", "title": "Proof" }, { "paragraph_id": 26, "text": "The integers m 1 {\\displaystyle m_{1}} and m 2 {\\displaystyle m_{2}} may be computed by the extended Euclidean algorithm.", "title": "Proof" }, { "paragraph_id": 27, "text": "A solution is given by", "title": "Proof" }, { "paragraph_id": 28, "text": "Indeed,", "title": "Proof" }, { "paragraph_id": 29, "text": "implying that x ≡ a 1 ( mod n 1 ) . {\\displaystyle x\\equiv a_{1}{\\pmod {n_{1}}}.} The second congruence is proved similarly, by exchanging the subscripts 1 and 2.", "title": "Proof" }, { "paragraph_id": 30, "text": "Consider a sequence of congruence equations:", "title": "Proof" }, { "paragraph_id": 31, "text": "where the n i {\\displaystyle n_{i}} are pairwise coprime. The two first equations have a solution a 1 , 2 {\\displaystyle a_{1,2}} provided by the method of the previous section. The set of the solutions of these two first equations is the set of all solutions of the equation", "title": "Proof" }, { "paragraph_id": 32, "text": "As the other n i {\\displaystyle n_{i}} are coprime with n 1 n 2 , {\\displaystyle n_{1}n_{2},} this reduces solving the initial problem of k equations to a similar problem with k − 1 {\\displaystyle k-1} equations. Iterating the process, one gets eventually the solutions of the initial problem.", "title": "Proof" }, { "paragraph_id": 33, "text": "For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless, Lagrange interpolation is a special case of this construction, applied to polynomials instead of integers.", "title": "Proof" }, { "paragraph_id": 34, "text": "Let N i = N / n i {\\displaystyle N_{i}=N/n_{i}} be the product of all moduli but one. As the n i {\\displaystyle n_{i}} are pairwise coprime, N i {\\displaystyle N_{i}} and n i {\\displaystyle n_{i}} are coprime. Thus Bézout's identity applies, and there exist integers M i {\\displaystyle M_{i}} and m i {\\displaystyle m_{i}} such that", "title": "Proof" }, { "paragraph_id": 35, "text": "A solution of the system of congruences is", "title": "Proof" }, { "paragraph_id": 36, "text": "In fact, as N j {\\displaystyle N_{j}} is a multiple of n i {\\displaystyle n_{i}} for i ≠ j , {\\displaystyle i\\neq j,} we have", "title": "Proof" }, { "paragraph_id": 37, "text": "for every i . {\\displaystyle i.}", "title": "Proof" }, { "paragraph_id": 38, "text": "Consider a system of congruences:", "title": "Computation" }, { "paragraph_id": 39, "text": "where the n i {\\displaystyle n_{i}} are pairwise coprime, and let N = n 1 n 2 ⋯ n k . 
{\\displaystyle N=n_{1}n_{2}\\cdots n_{k}.} In this section several methods are described for computing the unique solution for x {\\displaystyle x} , such that 0 ≤ x < N , {\\displaystyle 0\\leq x<N,} and these methods are applied on the example", "title": "Computation" }, { "paragraph_id": 40, "text": "Several methods of computation are presented. The two first ones are useful for small examples, but become very inefficient when the product n 1 ⋯ n k {\\displaystyle n_{1}\\cdots n_{k}} is large. The third one uses the existence proof given in § Existence (constructive proof). It is the most convenient when the product n 1 ⋯ n k {\\displaystyle n_{1}\\cdots n_{k}} is large, or for computer computation.", "title": "Computation" }, { "paragraph_id": 41, "text": "It is easy to check whether a value of x is a solution: it suffices to compute the remainder of the Euclidean division of x by each ni. Thus, to find the solution, it suffices to check successively the integers from 0 to N until finding the solution.", "title": "Computation" }, { "paragraph_id": 42, "text": "Although very simple, this method is very inefficient. For the simple example considered here, 40 integers (including 0) have to be checked for finding the solution, which is 39. This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of N, and the average number of operations is of the order of N.", "title": "Computation" }, { "paragraph_id": 43, "text": "Therefore, this method is rarely used, neither for hand-written computation nor on computers.", "title": "Computation" }, { "paragraph_id": 44, "text": "The search of the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that 0 ≤ a i < n i {\\displaystyle 0\\leq a_{i}<n_{i}} (if it were not the case, it would suffice to replace each a i {\\displaystyle a_{i}} by the remainder of its division by n i {\\displaystyle n_{i}} ). This implies that the solution belongs to the arithmetic progression", "title": "Computation" }, { "paragraph_id": 45, "text": "By testing the values of these numbers modulo n 2 , {\\displaystyle n_{2},} one eventually finds a solution x 2 {\\displaystyle x_{2}} of the two first congruences. Then the solution belongs to the arithmetic progression", "title": "Computation" }, { "paragraph_id": 46, "text": "Testing the values of these numbers modulo n 3 , {\\displaystyle n_{3},} and continuing until every modulus has been tested eventually yields the solution.", "title": "Computation" }, { "paragraph_id": 47, "text": "This method is faster if the moduli have been ordered by decreasing value, that is if n 1 > n 2 > ⋯ > n k . {\\displaystyle n_{1}>n_{2}>\\cdots >n_{k}.} For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, 9 = 4 + 5, 14 = 9 + 5, ... For each of them, compute the remainder by 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding 20 = 5 × 4 at each step, and computing only the remainders by 3. This gives", "title": "Computation" }, { "paragraph_id": 48, "text": "This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods, for very large products of moduli. 
Although dramatically faster than the systematic search, this method also has an exponential time complexity and is therefore not used on computers.", "title": "Computation" }, { "paragraph_id": 49, "text": "The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo n 1 n 2 {\\displaystyle n_{1}n_{2}} (for getting a result in the interval ( 0 , n 1 n 2 − 1 ) {\\displaystyle (0,n_{1}n_{2}-1)} ). As the Bézout's coefficients may be computed with the extended Euclidean algorithm, the whole computation, at most, has a quadratic time complexity of O ( ( s 1 + s 2 ) 2 ) , {\\displaystyle O((s_{1}+s_{2})^{2}),} where s i {\\displaystyle s_{i}} denotes the number of digits of n i . {\\displaystyle n_{i}.}", "title": "Computation" }, { "paragraph_id": 50, "text": "For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process provides eventually the solution with a complexity, which is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the two first moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers.", "title": "Computation" }, { "paragraph_id": 51, "text": "Another strategy consists in partitioning the moduli in pairs whose product have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximatively divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time.", "title": "Computation" }, { "paragraph_id": 52, "text": "On the current example (which has only three moduli), both strategies are identical and work as follows.", "title": "Computation" }, { "paragraph_id": 53, "text": "Bézout's identity for 3 and 4 is", "title": "Computation" }, { "paragraph_id": 54, "text": "Putting this in the formula given for proving the existence gives", "title": "Computation" }, { "paragraph_id": 55, "text": "for a solution of the two first congruences, the other solutions being obtained by adding to −9 any multiple of 3 × 4 = 12. 
One may continue with any of these solutions, but the solution 3 = −9 +12 is smaller (in absolute value) and thus leads probably to an easier computation", "title": "Computation" }, { "paragraph_id": 56, "text": "Bézout identity for 5 and 3 × 4 = 12 is", "title": "Computation" }, { "paragraph_id": 57, "text": "Applying the same formula again, we get a solution of the problem:", "title": "Computation" }, { "paragraph_id": 58, "text": "The other solutions are obtained by adding any multiple of 3 × 4 × 5 = 60, and the smallest positive solution is −21 + 60 = 39.", "title": "Computation" }, { "paragraph_id": 59, "text": "The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of linear Diophantine equations:", "title": "Computation" }, { "paragraph_id": 60, "text": "where the unknown integers are x {\\displaystyle x} and the x i . {\\displaystyle x_{i}.} Therefore, every general method for solving such systems may be used for finding the solution of Chinese remainder theorem, such as the reduction of the matrix of the system to Smith normal form or Hermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of Bézout's identity.", "title": "Computation" }, { "paragraph_id": 61, "text": "In § Statement, the Chinese remainder theorem has been stated in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain R: it suffices to replace \"integer\" by \"element of the domain\" and Z {\\displaystyle \\mathbb {Z} } by R. These two versions of the theorem are true in this context, because the proofs (except for the first existence proof), are based on Euclid's lemma and Bézout's identity, which are true over every principal domain.", "title": "Over principal ideal domains" }, { "paragraph_id": 62, "text": "However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity.", "title": "Over principal ideal domains" }, { "paragraph_id": 63, "text": "The statement in terms of remainders given in § Theorem statement cannot be generalized to any principal ideal domain, but its generalization to Euclidean domains is straightforward. The univariate polynomials over a field is the typical example of a Euclidean domain which is not the integers. Therefore, we state the theorem for the case of the ring R = K [ X ] {\\displaystyle R=K[X]} for a field K . {\\displaystyle K.} For getting the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain.", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 64, "text": "The Chinese remainder theorem for polynomials is thus: Let P i ( X ) {\\displaystyle P_{i}(X)} (the moduli) be, for i = 1 , … , k {\\displaystyle i=1,\\dots ,k} , pairwise coprime polynomials in R = K [ X ] {\\displaystyle R=K[X]} . Let d i = deg P i {\\displaystyle d_{i}=\\deg P_{i}} be the degree of P i ( X ) {\\displaystyle P_{i}(X)} , and D {\\displaystyle D} be the sum of the d i . 
{\\displaystyle d_{i}.} If A i ( X ) , … , A k ( X ) {\\displaystyle A_{i}(X),\\ldots ,A_{k}(X)} are polynomials such that A i ( X ) = 0 {\\displaystyle A_{i}(X)=0} or deg A i < d i {\\displaystyle \\deg A_{i}<d_{i}} for every i, then, there is one and only one polynomial P ( X ) {\\displaystyle P(X)} , such that deg P < D {\\displaystyle \\deg P<D} and the remainder of the Euclidean division of P ( X ) {\\displaystyle P(X)} by P i ( X ) {\\displaystyle P_{i}(X)} is A i ( X ) {\\displaystyle A_{i}(X)} for every i.", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 65, "text": "The construction of the solution may be done as in § Existence (constructive proof) or § Existence (direct proof). However, the latter construction may be simplified by using, as follows, partial fraction decomposition instead of the extended Euclidean algorithm.", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 66, "text": "Thus, we want to find a polynomial P ( X ) {\\displaystyle P(X)} , which satisfies the congruences", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 67, "text": "for i = 1 , … , k . {\\displaystyle i=1,\\ldots ,k.}", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 68, "text": "Consider the polynomials", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 69, "text": "The partial fraction decomposition of 1 / Q ( X ) {\\displaystyle 1/Q(X)} gives k polynomials S i ( X ) {\\displaystyle S_{i}(X)} with degrees deg S i ( X ) < d i , {\\displaystyle \\deg S_{i}(X)<d_{i},} such that", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 70, "text": "and thus", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 71, "text": "Then a solution of the simultaneous congruence system is given by the polynomial", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 72, "text": "In fact, we have", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 73, "text": "for 1 ≤ i ≤ k . {\\displaystyle 1\\leq i\\leq k.}", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 74, "text": "This solution may have a degree larger than D = ∑ i = 1 k d i . {\\displaystyle D=\\sum _{i=1}^{k}d_{i}.} The unique solution of degree less than D {\\displaystyle D} may be deduced by considering the remainder B i ( X ) {\\displaystyle B_{i}(X)} of the Euclidean division of A i ( X ) S i ( X ) {\\displaystyle A_{i}(X)S_{i}(X)} by P i ( X ) . {\\displaystyle P_{i}(X).} This solution is", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 75, "text": "A special case of Chinese remainder theorem for polynomials is Lagrange interpolation. For this, consider k monic polynomials of degree one:", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 76, "text": "They are pairwise coprime if the x i {\\displaystyle x_{i}} are all different. 
The remainder of the division by P i ( X ) {\\displaystyle P_{i}(X)} of a polynomial P ( X ) {\\displaystyle P(X)} is P ( x i ) {\\displaystyle P(x_{i})} , by the polynomial remainder theorem.", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 77, "text": "Now, let A 1 , … , A k {\\displaystyle A_{1},\\ldots ,A_{k}} be constants (polynomials of degree 0) in K . {\\displaystyle K.} Both Lagrange interpolation and Chinese remainder theorem assert the existence of a unique polynomial P ( X ) , {\\displaystyle P(X),} of degree less than k {\\displaystyle k} such that", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 78, "text": "for every i . {\\displaystyle i.}", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 79, "text": "Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution. More precisely, let", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 80, "text": "The partial fraction decomposition of 1 Q ( X ) {\\displaystyle {\\frac {1}{Q(X)}}} is", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 81, "text": "In fact, reducing the right-hand side to a common denominator one gets", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 82, "text": "and the numerator is equal to one, as being a polynomial of degree less than k , {\\displaystyle k,} which takes the value one for k {\\displaystyle k} different values of X . {\\displaystyle X.}", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 83, "text": "Using the above general formula, we get the Lagrange interpolation formula:", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 84, "text": "Hermite interpolation is an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one).", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 85, "text": "The problem consists of finding a polynomial of the least possible degree, such that the polynomial and its first derivatives take given values at some fixed points.", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 86, "text": "More precisely, let x 1 , … , x k {\\displaystyle x_{1},\\ldots ,x_{k}} be k {\\displaystyle k} elements of the ground field K , {\\displaystyle K,} and, for i = 1 , … , k , {\\displaystyle i=1,\\ldots ,k,} let a i , 0 , a i , 1 , … , a i , r i − 1 {\\displaystyle a_{i,0},a_{i,1},\\ldots ,a_{i,r_{i}-1}} be the values of the first r i {\\displaystyle r_{i}} derivatives of the sought polynomial at x i {\\displaystyle x_{i}} (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial P ( X ) {\\displaystyle P(X)} such that its j th derivative takes the value a i , j {\\displaystyle a_{i,j}} at x i , {\\displaystyle x_{i},} for i = 1 , … , k {\\displaystyle i=1,\\ldots ,k} and j = 0 , … , r j . 
{\\displaystyle j=0,\\ldots ,r_{j}.}", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 87, "text": "Consider the polynomial", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 88, "text": "This is the Taylor polynomial of order r i − 1 {\\displaystyle r_{i}-1} at x i {\\displaystyle x_{i}} , of the unknown polynomial P ( X ) . {\\displaystyle P(X).} Therefore, we must have", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 89, "text": "Conversely, any polynomial P ( X ) {\\displaystyle P(X)} that satisfies these k {\\displaystyle k} congruences, in particular verifies, for any i = 1 , … , k {\\displaystyle i=1,\\ldots ,k}", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 90, "text": "therefore P i ( X ) {\\displaystyle P_{i}(X)} is its Taylor polynomial of order r i − 1 {\\displaystyle r_{i}-1} at x i {\\displaystyle x_{i}} , that is, P ( X ) {\\displaystyle P(X)} solves the initial Hermite interpolation problem. The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of the r i , {\\displaystyle r_{i},} which satisfies these k {\\displaystyle k} congruences.", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 91, "text": "There are several ways for computing the solution P ( X ) . {\\displaystyle P(X).} One may use the method described at the beginning of § Over univariate polynomial rings and Euclidean domains. One may also use the constructions given in § Existence (constructive proof) or § Existence (direct proof).", "title": "Over univariate polynomial rings and Euclidean domains" }, { "paragraph_id": 92, "text": "The Chinese remainder theorem can be generalized to non-coprime moduli. Let m , n , a , b {\\displaystyle m,n,a,b} be any integers, let g = gcd ( m , n ) {\\displaystyle g=\\gcd(m,n)} ; M = lcm ( m , n ) {\\displaystyle M=\\operatorname {lcm} (m,n)} , and consider the system of congruences:", "title": "Generalization to non-coprime moduli" }, { "paragraph_id": 93, "text": "If a ≡ b ( mod g ) {\\displaystyle a\\equiv b{\\pmod {g}}} , then this system has a unique solution modulo M = m n / g {\\displaystyle M=mn/g} . Otherwise, it has no solutions.", "title": "Generalization to non-coprime moduli" }, { "paragraph_id": 94, "text": "If one uses Bézout's identity to write g = u m + v n {\\displaystyle g=um+vn} , then the solution is given by", "title": "Generalization to non-coprime moduli" }, { "paragraph_id": 95, "text": "This defines an integer, as g divides both m and n. Otherwise, the proof is very similar to that for coprime moduli.", "title": "Generalization to non-coprime moduli" }, { "paragraph_id": 96, "text": "The Chinese remainder theorem can be generalized to any ring, by using coprime ideals (also called comaximal ideals). Two ideals I and J are coprime if there are elements i ∈ I {\\displaystyle i\\in I} and j ∈ J {\\displaystyle j\\in J} such that i + j = 1. {\\displaystyle i+j=1.} This relation plays the role of Bézout's identity in the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows.", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 97, "text": "Let I1, ..., Ik be two-sided ideals of a ring R {\\displaystyle R} and let I be their intersection. 
If the ideals are pairwise coprime, we have the isomorphism:", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 98, "text": "between the quotient ring R / I {\\displaystyle R/I} and the direct product of the R / I i , {\\displaystyle R/I_{i},} where \" x mod I {\\displaystyle x{\\bmod {I}}} \" denotes the image of the element x {\\displaystyle x} in the quotient ring defined by the ideal I . {\\displaystyle I.} Moreover, if R {\\displaystyle R} is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product; that is", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 99, "text": "if Ii and Ij are coprime for all i ≠ j.", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 100, "text": "Let I 1 , I 2 , … , I k {\\displaystyle I_{1},I_{2},\\dots ,I_{k}} be pairwise coprime two-sided ideals with ⋂ i = 1 k I i = 0 , {\\displaystyle \\bigcap _{i=1}^{k}I_{i}=0,} and", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 101, "text": "be the isomorphism defined above. Let f i = ( 0 , … , 1 , … , 0 ) {\\displaystyle f_{i}=(0,\\ldots ,1,\\ldots ,0)} be the element of ( R / I 1 ) × ⋯ × ( R / I k ) {\\displaystyle (R/I_{1})\\times \\cdots \\times (R/I_{k})} whose components are all 0 except the i th which is 1, and e i = φ − 1 ( f i ) . {\\displaystyle e_{i}=\\varphi ^{-1}(f_{i}).}", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 102, "text": "The e i {\\displaystyle e_{i}} are central idempotents that are pairwise orthogonal; this means, in particular, that e i 2 = e i {\\displaystyle e_{i}^{2}=e_{i}} and e i e j = e j e i = 0 {\\displaystyle e_{i}e_{j}=e_{j}e_{i}=0} for every i and j. Moreover, one has e 1 + ⋯ + e n = 1 , {\\textstyle e_{1}+\\cdots +e_{n}=1,} and I i = R ( 1 − e i ) . {\\displaystyle I_{i}=R(1-e_{i}).}", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 103, "text": "In summary, this generalized Chinese remainder theorem is the equivalence between giving pairwise coprime two-sided ideals with a zero intersection, and giving central and pairwise orthogonal idempotents that sum to 1.", "title": "Generalization to arbitrary rings" }, { "paragraph_id": 104, "text": "The Chinese remainder theorem has been used to construct a Gödel numbering for sequences, which is involved in the proof of Gödel's incompleteness theorems.", "title": "Applications" }, { "paragraph_id": 105, "text": "The prime-factor FFT algorithm (also called Good-Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of a fast Fourier transform of size n 1 n 2 {\\displaystyle n_{1}n_{2}} to the computation of two fast Fourier transforms of smaller sizes n 1 {\\displaystyle n_{1}} and n 2 {\\displaystyle n_{2}} (providing that n 1 {\\displaystyle n_{1}} and n 2 {\\displaystyle n_{2}} are coprime).", "title": "Applications" }, { "paragraph_id": 106, "text": "Most implementations of RSA use the Chinese remainder theorem during signing of HTTPS certificates and during decryption.", "title": "Applications" }, { "paragraph_id": 107, "text": "The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. 
{ "paragraph_id": 107, "text": "The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares of less than a certain cardinality.", "title": "Applications" }, { "paragraph_id": 108, "text": "The range ambiguity resolution techniques used with medium pulse repetition frequency radar can be seen as a special case of the Chinese remainder theorem.", "title": "Applications" }, { "paragraph_id": 109, "text": "Given a surjection {\\displaystyle \\mathbb {Z} /n\\to \\mathbb {Z} /m} of finite abelian groups, we can use the Chinese remainder theorem to give a complete description of any such map. First of all, the theorem gives isomorphisms {\\displaystyle \\mathbb {Z} /n\\cong \\mathbb {Z} /p_{n_{1}}^{a_{1}}\\times \\cdots \\times \\mathbb {Z} /p_{n_{i}}^{a_{i}},\\qquad \\mathbb {Z} /m\\cong \\mathbb {Z} /p_{m_{1}}^{b_{1}}\\times \\cdots \\times \\mathbb {Z} /p_{m_{j}}^{b_{j}},}", "title": "Applications" }, { "paragraph_id": 110, "text": "where {\\displaystyle \\{p_{m_{1}},\\ldots ,p_{m_{j}}\\}\\subseteq \\{p_{n_{1}},\\ldots ,p_{n_{i}}\\}}. In addition, for any induced map {\\displaystyle \\mathbb {Z} /p_{n_{k}}^{a_{k}}\\to \\mathbb {Z} /p_{m_{l}}^{b_{l}}}", "title": "Applications" }, { "paragraph_id": 111, "text": "from the original surjection, we have {\\displaystyle a_{k}\\geq b_{l}} and {\\displaystyle p_{n_{k}}=p_{m_{l}},} since for a pair of primes {\\displaystyle p,q}, the only non-zero surjections {\\displaystyle \\mathbb {Z} /p^{a}\\to \\mathbb {Z} /q^{b}}", "title": "Applications" }, { "paragraph_id": 112, "text": "can be defined if {\\displaystyle p=q} and {\\displaystyle a\\geq b}.", "title": "Applications" }, { "paragraph_id": 113, "text": "These observations are pivotal for constructing the ring of profinite integers, which is given as an inverse limit of all such maps.", "title": "Applications" }, { "paragraph_id": 114, "text": "Dedekind's theorem on the linear independence of characters. Let M be a monoid and k an integral domain, viewed as a monoid by considering the multiplication on k. Then any finite family ( fi )i∈I of distinct monoid homomorphisms fi : M → k is linearly independent. In other words, every family (αi)i∈I of elements αi ∈ k satisfying {\\displaystyle \\sum _{i\\in I}\\alpha _{i}f_{i}=0}", "title": "Applications" }, { "paragraph_id": 115, "text": "must be equal to the family (0)i∈I.", "title": "Applications" }, { "paragraph_id": 116, "text": "Proof. First assume that k is a field; otherwise, replace the integral domain k by its quotient field, and nothing will change. We can linearly extend the monoid homomorphisms fi : M → k to k-algebra homomorphisms Fi : k[M] → k, where k[M] is the monoid ring of M over k. Then, by linearity, the condition {\\displaystyle \\sum _{i\\in I}\\alpha _{i}f_{i}=0}", "title": "Applications" }, { "paragraph_id": 117, "text": "yields {\\displaystyle \\sum _{i\\in I}\\alpha _{i}F_{i}=0.}", "title": "Applications" }, { "paragraph_id": 118, "text": "Next, for i, j ∈ I with i ≠ j, the two k-linear maps Fi : k[M] → k and Fj : k[M] → k are not proportional to each other. Otherwise, fi and fj would also be proportional, and thus equal, since as monoid homomorphisms they satisfy fi (1) = 1 = fj (1); this contradicts the assumption that they are distinct.", "title": "Applications" }, { "paragraph_id": 119, "text": "Therefore, the kernels Ker Fi and Ker Fj are distinct. Since k[M]/Ker Fi ≅ Fi (k[M]) = k is a field, Ker Fi is a maximal ideal of k[M] for every i in I. Because they are distinct and maximal, the ideals Ker Fi and Ker Fj are coprime whenever i ≠ j. The Chinese Remainder Theorem (for general rings) yields an isomorphism:", "title": "Applications" }, { "paragraph_id": 120, "text": "where", "title": "Applications" }, { "paragraph_id": 121, "text": "Consequently, the map", "title": "Applications" }, { "paragraph_id": 122, "text": "is surjective. 
Under the isomorphisms k[M]/Ker Fi → Fi (k[M]) = k, the map Φ corresponds to:", "title": "Applications" }, { "paragraph_id": 123, "text": "Now,", "title": "Applications" }, { "paragraph_id": 124, "text": "yields", "title": "Applications" }, { "paragraph_id": 125, "text": "for every vector (ui)i∈I in the image of the map ψ. Since ψ is surjective, this means that", "title": "Applications" }, { "paragraph_id": 126, "text": "for every vector", "title": "Applications" }, { "paragraph_id": 127, "text": "Consequently, (αi)i∈I = (0)i∈I. QED.", "title": "Applications" } ]
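A concrete instance of the secret-sharing application in paragraph 107 is Mignotte's threshold scheme, one of the "special sequences of integers" alluded to there. A minimal sketch with illustrative parameters and no cryptographic hardening (Python 3.8+ assumed):

```python
from math import prod

# Pairwise coprime moduli in increasing order. For threshold k, the secret must
# lie strictly between the product of the (k-1) largest moduli and the product
# of the k smallest, so that k shares determine it and k-1 shares do not.
moduli = [11, 13, 17, 19]
k = 3
lo = prod(sorted(moduli)[-(k - 1):])      # 17 * 19 = 323
hi = prod(sorted(moduli)[:k])             # 11 * 13 * 17 = 2431
secret = 1000
assert lo < secret < hi

shares = [(secret % m, m) for m in moduli]

def recover(subset):
    # Ordinary CRT over the chosen shares; with >= k shares the combined
    # modulus exceeds the secret, so the CRT residue is the secret itself.
    x, N = 0, 1
    for a, m in subset:
        x += N * ((a - x) * pow(N, -1, m) % m)
        N *= m
    return x % N

assert recover(shares[:3]) == secret      # any three shares suffice
assert recover(shares[:2]) != secret      # two shares only fix the secret mod 143
```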
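Paragraphs 109 to 113 rest on the primary decomposition Z/n ≅ ∏ Z/p^a, and a surjection of cyclic groups Z/n → Z/m exists precisely when m divides n, which is the prime-by-prime condition a ≥ b stated above. A small sketch assuming SymPy for the factorization:

```python
from sympy import factorint

def primary_decomposition(n):
    # Z/n ≅ product of Z/p**a over the prime-power factors of n,
    # the isomorphism the discussion above starts from.
    return {p: p**a for p, a in factorint(n).items()}

def surjection_exists(n, m):
    # Z/n surjects onto Z/m iff m divides n: each p**b dividing m
    # must be matched by p**a dividing n with a >= b.
    return n % m == 0

print(primary_decomposition(360))      # -> {2: 8, 3: 9, 5: 5}, i.e. Z/8 x Z/9 x Z/5
print(surjection_exists(360, 12))      # -> True
print(surjection_exists(360, 16))      # -> False (2**4 does not divide 360)
```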
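Dedekind's theorem (paragraphs 114 onward) can be sanity-checked numerically. For the distinct monoid homomorphisms n ↦ c^n from (N, +) to the integral domain Q, any vanishing linear combination would make the evaluation matrix at n = 0, 1, 2 (a Vandermonde matrix) singular. A tiny sketch assuming SymPy; the choice of bases 2, 3, 5 is ours:

```python
import sympy as sp

cs = [2, 3, 5]                      # pairwise distinct, so the maps n -> c**n differ
V = sp.Matrix([[c**n for c in cs] for n in range(len(cs))])
# If sum(alpha_c * c**n) = 0 for all n, then V @ alpha = 0; a nonzero
# (Vandermonde) determinant forces alpha = 0, i.e. linear independence.
assert V.det() != 0
```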
In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime. For example, if we know that the remainder of n divided by 3 is 2, the remainder of n divided by 5 is 3, and the remainder of n divided by 7 is 2, then without knowing the value of n, we can determine that the remainder of n divided by 105 is 23. Importantly, this tells us that if n is a natural number less than 105, then 23 is the only possible value of n. The earliest known statement of the theorem is by the Chinese mathematician Sunzi in the Sunzi Suanjing in the 3rd century CE. The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers. The Chinese remainder theorem is true over every principal ideal domain. It has been generalized to any ring, with a formulation involving two-sided ideals.
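The worked example in the abstract (remainders 2, 3, 2 modulo 3, 5, 7 giving 23 modulo 105) can be checked in a few lines of Python; the incremental combination below is the standard constructive proof in code form. A sketch only: pairwise coprime moduli and Python 3.8+ are assumed.

```python
def crt(remainders, moduli):
    # Fold the congruences in one at a time: extend x ≡ x (mod N)
    # by x ≡ a (mod m), using the inverse of N modulo m.
    x, N = 0, 1
    for a, m in zip(remainders, moduli):
        x += N * ((a - x) * pow(N, -1, m) % m)
        N *= m
    return x % N

print(crt([2, 3, 2], [3, 5, 7]))   # -> 23, as in the example above
```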
2002-01-11T20:57:05Z
2023-11-30T11:03:17Z
[ "Template:Blockquote", "Template:Lang", "Template:Sfn", "Template:Harvnb", "Template:Snd", "Template:Nowrap", "Template:Slink", "Template:Main article", "Template:MathWorld", "Template:Number theory", "Template:Authority control", "Template:Mvar", "Template:Planetmath", "Template:Short description", "Template:Math", "Template:Section link", "Template:Reflist", "Template:Citation", "Template:Springer" ]
https://en.wikipedia.org/wiki/Chinese_remainder_theorem
7,716
Cyril M. Kornbluth
Cyril M. Kornbluth (July 2, 1923 – March 21, 1958) was an American science fiction author and a member of the Futurians. He used a variety of pen-names, including Cecil Corwin, S. D. Gottesman, Edward J. Bellin, Kenneth Falconer, Walter C. Davies, Simon Eisner, Jordan Park, Arthur Cooke, Paul Dennis Lavond, and Scott Mariner. Kornbluth was born and grew up in the uptown Manhattan neighborhood of Inwood, in New York City. He was of Polish Jewish descent, the son of a World War I veteran and grandson of a tailor, a Jewish immigrant from Galicia. The "M" in Kornbluth's name may have been in tribute to his wife, Mary Byers; Kornbluth's colleague and collaborator Frederik Pohl confirmed Kornbluth's lack of any actual middle name in at least one interview. According to his widow, Kornbluth was a "precocious child", learning to read by the age of three and writing his own stories by the time he was seven. He graduated from high school at thirteen, received a CCNY scholarship at fourteen, and was "thrown out for leading a student strike" without graduating. As a teenager, he became a member of the Futurians, an influential group of science fiction fans and writers. While a member of the Futurians, he met and became friends with Frederik Pohl, Donald A. Wollheim, Robert A. W. Lowndes, and his future wife Mary Byers. He also participated in the Fantasy Amateur Press Association. Kornbluth served in the US Army during World War II (European 'Theatre'). He received a Bronze Star for his service in the Battle of the Bulge, where he served as a member of a heavy machine gun crew. Upon his discharge, he returned to finish his education at the University of Chicago under the G.I. Bill. While living in Chicago he also worked at Trans-Radio Press, a news wire service. In 1951 he started writing full-time, returning to the East Coast where he collaborated on novels with his old Futurian friends Frederik Pohl and Judith Merril. Kornbluth began writing at 15. His first solo story, "The Rocket of 1955", was published in Richard Wilson's fanzine Escape (Vol. 1, No 2, August 1939); his first collaboration, "Stepsons of Mars," written with Richard Wilson and published under the name "Ivar Towers", appeared in the April 1940 Astonishing. His other short fiction includes "The Little Black Bag", "The Marching Morons", "The Altar at Midnight", "MS. Found in a Chinese Fortune Cookie", "Gomez" and "The Advent on Channel Twelve". "The Little Black Bag" was first adapted for television live on the television show Tales of Tomorrow on May 30, 1952. It was later adapted for television by the BBC in 1969 for its Out of the Unknown series. In 1970, the same story was adapted by Rod Serling for an episode of his Night Gallery series. This dramatization starred Burgess Meredith as the alcoholic Dr. William Fall, who had long lost his doctor's license and become a homeless alcoholic. He finds a bag containing advanced medical technology from the future, which, after an unsuccessful attempt to pawn it, he uses benevolently. "The Marching Morons" is a look at a far future in which the world's population consists of five billion idiots and a few million geniuses – the precarious minority of the "elite" working desperately to keep things running behind the scenes. In his introduction to The Best of C. M. 
Kornbluth, Pohl states that "The Marching Morons" is a direct sequel to "The Little Black Bag": it is easy to miss this, as "Bag" is set in the contemporary present while "Morons" takes place several centuries from now, and there is no character who appears in both stories. The titular black bag in the first story is actually an artifact from the time period of "The Marching Morons": a medical kit filled with self-driven instruments enabling a far-future moron to "play doctor". A future Earth similar to "The Marching Morons" – a civilisation of morons protected by a small minority of hidden geniuses – is used again in the final stages of Kornbluth & Pohl's Search the Sky. "MS. Found in a Chinese Fortune Cookie" (1957) is supposedly written by Kornbluth using notes by "Cecil Corwin", who has been declared insane and incarcerated, and who smuggles out in fortune cookies the ultimate secret of life. This fate is said to be Kornbluth's response to the unauthorized publication of "Mask of Demeter" (as by "Corwin" and "Martin Pearson" (Donald A. Wollheim)) in Wollheim's anthology Prize Science Fiction in 1953. Biographer Mark Rich describes the 1958 story "Two Dooms" as one of several stories which are "concern[ed] with the ethics of theoretical science" and which "explore moral quandaries of the atomic age": "Two Dooms" follows atomic physicist Edward Royland on his accidental journey into an alternative universe where the Nazis and Japanese rule a divided United States. In his own world, Royland debated whether to delay progress at the Los Alamos nuclear research site or to help the atomic bomb achieve its terrifying result. Encountering both a slave village and a concentration camp in the alternative America, he comes to grips with the idea of life under bondage. Many of Kornbluth's novels were written as collaborations: either with Judith Merril (using the pseudonym Cyril Judd), or with Frederik Pohl. These include Gladiator-At-Law and The Space Merchants. The Space Merchants contributed significantly to the maturing and to the wider academic respectability of the science fiction genre, not only in America but also in Europe. Kornbluth also wrote several novels under his own name, including The Syndic and Not This August. Kornbluth died at age 34 in Levittown, New York. On a day when he was due to meet with Bob Mills in New York City to interview for the position of editor of The Magazine of Fantasy & Science Fiction, he was delayed because he had to shovel snow from his driveway. After running to meet his train following this delay, Kornbluth suffered a fatal heart attack on the platform of the station. A number of short stories remained unfinished at Kornbluth's death; these were eventually completed and published by Pohl. One of these stories, "The Meeting" (The Magazine of Fantasy & Science Fiction, November 1972), was the co-winner of the 1973 Hugo Award for Best Short Story; it tied with R. A. Lafferty's "Eurema's Dam." Almost all of Kornbluth's solo SF stories have been collected as His Share of Glory: The Complete Short Science Fiction of C. M. Kornbluth (NESFA Press, 1997). Frederik Pohl, in his autobiography The Way the Future Was, Damon Knight, in his memoir The Futurians, and Isaac Asimov, in his memoirs In Memory Yet Green and I. Asimov: A Memoir, all give descriptions of Kornbluth as a man of odd personal habits and eccentricities. 
Kornbluth, for example, decided to educate himself by reading his way through an entire encyclopedia from A to Z; in the course of this effort, he acquired a great deal of esoteric knowledge that found its way into his stories, in alphabetical order by subject. When Kornbluth wrote a story that mentioned the ballista, an Ancient Roman weapon, Pohl knew that Kornbluth had finished the 'A's and had started on the 'B's. According to Pohl, Kornbluth never brushed his teeth, and they were literally green. Deeply embarrassed by this, Kornbluth developed the habit of holding his hand in front of his mouth when speaking. Spider Robinson praised this collection, saying "I haven't enjoyed a book so much in years." Mark Rich wrote, "Critics judging Kornbluth by this anthology, edited by Pohl, have seen a growing bitterness in his later stories. This reflects editorial choice more than reality, because Kornbluth also wrote delightful humor in his last years, in stories not collected here. These tales demonstrate Kornbluth's effective use of everyday individuals from a variety of ethnic backgrounds as well as his well-tuned ear for dialect." Kornbluth's name is mentioned in Lemony Snicket's Series of Unfortunate Events as a member of V.F.D., a secret organization dedicated to the promotion of literacy, classical learning, and crime prevention.
[ { "paragraph_id": 0, "text": "Cyril M. Kornbluth (July 2, 1923 – March 21, 1958) was an American science fiction author and a member of the Futurians. He used a variety of pen-names, including Cecil Corwin, S. D. Gottesman, Edward J. Bellin, Kenneth Falconer, Walter C. Davies, Simon Eisner, Jordan Park, Arthur Cooke, Paul Dennis Lavond, and Scott Mariner.", "title": "" }, { "paragraph_id": 1, "text": "Kornbluth was born and grew up in the uptown Manhattan neighborhood of Inwood, in New York City. He was of Polish Jewish descent, the son of a World War I veteran and grandson of a tailor, a Jewish immigrant from Galicia.", "title": "Biography" }, { "paragraph_id": 2, "text": "The \"M\" in Kornbluth's name may have been in tribute to his wife, Mary Byers; Kornbluth's colleague and collaborator Frederik Pohl confirmed Kornbluth's lack of any actual middle name in at least one interview.", "title": "Biography" }, { "paragraph_id": 3, "text": "According to his widow, Kornbluth was a \"precocious child\", learning to read by the age of three and writing his own stories by the time he was seven. He graduated from high school at thirteen, received a CCNY scholarship at fourteen, and was \"thrown out for leading a student strike\" without graduating.", "title": "Biography" }, { "paragraph_id": 4, "text": "As a teenager, he became a member of the Futurians, an influential group of science fiction fans and writers. While a member of the Futurians, he met and became friends with Frederik Pohl, Donald A. Wollheim, Robert A. W. Lowndes, and his future wife Mary Byers. He also participated in the Fantasy Amateur Press Association.", "title": "Biography" }, { "paragraph_id": 5, "text": "Kornbluth served in the US Army during World War II (European 'Theatre'). He received a Bronze Star for his service in the Battle of the Bulge, where he served as a member of a heavy machine gun crew. Upon his discharge, he returned to finish his education at the University of Chicago under the G.I. Bill. While living in Chicago he also worked at Trans-Radio Press, a news wire service. In 1951 he started writing full-time, returning to the East Coast where he collaborated on novels with his old Futurian friends Frederik Pohl and Judith Merril.", "title": "Biography" }, { "paragraph_id": 6, "text": "Kornbluth began writing at 15. His first solo story, \"The Rocket of 1955\", was published in Richard Wilson's fanzine Escape (Vol. 1, No 2, August 1939); his first collaboration, \"Stepsons of Mars,\" written with Richard Wilson and published under the name \"Ivar Towers\", appeared in the April 1940 Astonishing. His other short fiction includes \"The Little Black Bag\", \"The Marching Morons\", \"The Altar at Midnight\", \"MS. Found in a Chinese Fortune Cookie\", \"Gomez\" and \"The Advent on Channel Twelve\".", "title": "Work" }, { "paragraph_id": 7, "text": "\"The Little Black Bag\" was first adapted for television live on the television show Tales of Tomorrow on May 30, 1952. It was later adapted for television by the BBC in 1969 for its Out of the Unknown series. In 1970, the same story was adapted by Rod Serling for an episode of his Night Gallery series. This dramatization starred Burgess Meredith as the alcoholic Dr. William Fall, who had long lost his doctor's license and become a homeless alcoholic. 
He finds a bag containing advanced medical technology from the future, which, after an unsuccessful attempt to pawn it, he uses benevolently.", "title": "Work" }, { "paragraph_id": 8, "text": "\"The Marching Morons\" is a look at a far future in which the world's population consists of five billion idiots and a few million geniuses – the precarious minority of the \"elite\" working desperately to keep things running behind the scenes. In his introduction to The Best of C. M. Kornbluth, Pohl states that \"The Marching Morons\" is a direct sequel to \"The Little Black Bag\": it is easy to miss this, as \"Bag\" is set in the contemporary present while \"Morons\" takes place several centuries from now, and there is no character who appears in both stories. The titular black bag in the first story is actually an artifact from the time period of \"The Marching Morons\": a medical kit filled with self-driven instruments enabling a far-future moron to \"play doctor\". A future Earth similar to \"The Marching Morons\" – a civilisation of morons protected by a small minority of hidden geniuses – is used again in the final stages of Kornbluth & Pohl's Search the Sky.", "title": "Work" }, { "paragraph_id": 9, "text": "\"MS. Found in a Chinese Fortune Cookie\" (1957) is supposedly written by Kornbluth using notes by \"Cecil Corwin\", who has been declared insane and incarcerated, and who smuggles out in fortune cookies the ultimate secret of life. This fate is said to be Kornbluth's response to the unauthorized publication of \"Mask of Demeter\" (as by \"Corwin\" and \"Martin Pearson\" (Donald A. Wollheim)) in Wollheim's anthology Prize Science Fiction in 1953.", "title": "Work" }, { "paragraph_id": 10, "text": "Biographer Mark Rich describes the 1958 story \"Two Dooms\" as one of several stories which are \"concern[ed] with the ethics of theoretical science\" and which \"explore moral quandaries of the atomic age\":", "title": "Work" }, { "paragraph_id": 11, "text": "\"Two Dooms\" follows atomic physicist Edward Royland on his accidental journey into an alternative universe where the Nazis and Japanese rule a divided United States. In his own world, Royland debated whether to delay progress at the Los Alamos nuclear research site or to help the atomic bomb achieve its terrifying result. Encountering both a slave village and a concentration camp in the alternative America, he comes to grips with the idea of life under bondage.", "title": "Work" }, { "paragraph_id": 12, "text": "Many of Kornbluth's novels were written as collaborations: either with Judith Merril (using the pseudonym Cyril Judd), or with Frederik Pohl. These include Gladiator-At-Law and The Space Merchants. The Space Merchants contributed significantly to the maturing and to the wider academic respectability of the science fiction genre, not only in America but also in Europe. Kornbluth also wrote several novels under his own name, including The Syndic and Not This August.", "title": "Work" }, { "paragraph_id": 13, "text": "Kornbluth died at age 34 in Levittown, New York. On a day when he was due to meet with Bob Mills in New York City to interview for the position of editor of The Magazine of Fantasy & Science Fiction, he was delayed because he had to shovel snow from his driveway. 
After running to meet his train following this delay, Kornbluth suffered a fatal heart attack on the platform of the station.", "title": "Death" }, { "paragraph_id": 14, "text": "A number of short stories remained unfinished at Kornbluth's death; these were eventually completed and published by Pohl. One of these stories, \"The Meeting\" (The Magazine of Fantasy & Science Fiction, November 1972), was the co-winner of the 1973 Hugo Award for Best Short Story; it tied with R. A. Lafferty's \"Eurema's Dam.\" Almost all of Kornbluth's solo SF stories have been collected as His Share of Glory: The Complete Short Science Fiction of C. M. Kornbluth (NESFA Press, 1997).", "title": "Death" }, { "paragraph_id": 15, "text": "Frederik Pohl, in his autobiography The Way the Future Was, Damon Knight, in his memoir The Futurians, and Isaac Asimov, in his memoirs In Memory Yet Green and I. Asimov: A Memoir, all give descriptions of Kornbluth as a man of odd personal habits and eccentricities.", "title": "Personality and habits" }, { "paragraph_id": 16, "text": "Kornbluth, for example, decided to educate himself by reading his way through an entire encyclopedia from A to Z; in the course of this effort, he acquired a great deal of esoteric knowledge that found its way into his stories, in alphabetical order by subject. When Kornbluth wrote a story that mentioned the ballista, an Ancient Roman weapon, Pohl knew that Kornbluth had finished the 'A's and had started on the 'B's.", "title": "Personality and habits" }, { "paragraph_id": 17, "text": "According to Pohl, Kornbluth never brushed his teeth, and they were literally green. Deeply embarrassed by this, Kornbluth developed the habit of holding his hand in front of his mouth when speaking.", "title": "Personality and habits" }, { "paragraph_id": 18, "text": "Spider Robinson praised this collection, saying \"I haven't enjoyed a book so much in years.\" Mark Rich wrote, \"Critics judging Kornbluth by this anthology, edited by Pohl, have seen a growing bitterness in his later stories. This reflects editorial choice more than reality, because Kornbluth also wrote delightful humor in his last years, in stories not collected here. These tales demonstrate Kornbluth's effective use of everyday individuals from a variety of ethnic backgrounds as well as his well-tuned ear for dialect.\"", "title": "Bibliography" }, { "paragraph_id": 19, "text": "Kornbluth's name is mentioned in Lemony Snicket's Series of Unfortunate Events as a member of V.F.D., a secret organization dedicated to the promotion of literacy, classical learning, and crime prevention.", "title": "Trivia" } ]
Cyril M. Kornbluth was an American science fiction author and a member of the Futurians. He used a variety of pen-names, including Cecil Corwin, S. D. Gottesman, Edward J. Bellin, Kenneth Falconer, Walter C. Davies, Simon Eisner, Jordan Park, Arthur Cooke, Paul Dennis Lavond, and Scott Mariner.
2002-01-11T22:13:57Z
2023-08-18T03:39:32Z
[ "Template:More citations needed", "Template:Reflist", "Template:Cite encyclopedia", "Template:ISBN", "Template:Internet Archive author", "Template:Librivox author", "Template:Unreferenced section", "Template:Cite web", "Template:Webarchive", "Template:Wikiquote", "Template:Bquote", "Template:Citation needed", "Template:Cite book", "Template:FadedPage", "Template:Authority control", "Template:Short description", "Template:Redirect", "Template:Use mdy dates", "Template:Infobox writer", "Template:Gutenberg author", "Template:Isfdb name" ]
https://en.wikipedia.org/wiki/Cyril_M._Kornbluth
7,720
Coprophagia
Coprophagia (/ˌkɒprəˈfeɪdʒiə/ KOP-rə-FAY-jee-ə) or coprophagy (/kəˈprɒfədʒi/ kə-PROF-ə-jee) is the consumption of feces. The word is derived from the Ancient Greek κόπρος kópros "feces" and φαγεῖν phageîn "to eat". Coprophagy refers to many kinds of feces-eating, including eating feces of other species (heterospecifics), of other individuals (allocoprophagy), or one's own (autocoprophagy) – those once deposited or taken directly from the anus. In humans, coprophagia has been described since the late 19th century in individuals with mental illnesses and in some sexual acts, such as the practices of anilingus and felching where sex partners insert their tongue into each other's anus and ingest biologically significant amounts of feces. Some animal species eat feces as a normal behavior, in particular lagomorphs, which do so to allow tough plant materials to be digested more thoroughly by passing twice through the digestive tract. Other species may eat feces under certain conditions. The feces of the rock ptarmigan are used in Urumiit, which is a delicacy in some Inuit cuisine. Several beverages are made using the feces of animals, including but not limited to Kopi luwak, panda tea, insect tea, and Black Ivory Coffee. Casu martzu is a cheese that uses the digestive processes of live maggots to help ferment and break down the cheese's fats. In fecal microbiota transplant (FMT), also known as a stool transplant, fecal bacteria and other microbes from a healthy individual are transferred into a patient as an effective treatment for Clostridioides difficile infection (CDI). This treatment has also been used to try to cure other conditions, with varying results. Ayurveda and Siddha medicine use various animal excreta in various forms; the dung and urine of the Zebu are especially important in the list. Centuries ago (in the mid-16th century), physicians tasted their patients' feces to better judge their state and condition, according to François Rabelais, who studied medicine but was also a writer of satirical and grotesque fiction; further information is needed to confirm the accuracy and context of this statement. Lewin reported, "... consumption of fresh, warm camel feces has been recommended by Bedouins as a remedy for bacterial dysentery; its efficacy (probably attributable to the antibiotic subtilisin from Bacillus subtilis) was anecdotally confirmed by German soldiers in Africa during World War II". However, this story is likely a myth; independent research was not able to verify any of these claims. Members of a religious cult in Thailand routinely ate the feces and dead skin of their leader, whom they considered to be a holy man with healing powers. Coprophilia is a paraphilia (DSM-5) in which the object of sexual interest is feces; it may be associated with coprophagia. Coprophagia is sometimes depicted in pornography, usually under the term "scat" (from scatology). A notorious example of this is the pornographic shock video 2 Girls 1 Cup. The 120 Days of Sodom, a 1785 novel by Marquis de Sade, is full of detailed descriptions of erotic sadomasochistic coprophagia. The film of the same name also contains scenes of coprophilia and coprophagia. Coprophagia has also been observed in some people with schizophrenia and pica. Coprophagous insects consume and redigest the feces of large animals. These feces contain substantial amounts of semidigested food, particularly in the case of herbivores, owing to the inefficiency of the large animals' digestive systems. 
Thousands of species of coprophagous insects are known, especially among the orders Diptera and Coleoptera. Examples of such flies are Scathophaga stercoraria and Sepsis cynipsea, dung flies commonly found in Europe around cattle droppings. Among beetles, dung beetles are a diverse lineage, many of which feed on the microorganism-rich liquid component of mammals' dung, and lay their eggs in balls composed mainly of the remaining fibrous material. Termites eat one another's feces as a means of obtaining their hindgut protists. Termites and protists have a symbiotic relationship (e.g. with the protozoan that allows the termites to digest the cellulose in their diet). For example, in one group of termites, a three-way symbiotic relationship exists; termites of the family Rhinotermitidae, cellulolytic protists of the genus Pseudotrichonympha in the guts of these termites, and intracellular bacterial symbionts of the protists. Domesticated and wild mammals are sometimes coprophagic, and in some species, this forms an essential part of their method of digesting tough plant material. Some dogs may lack critical digestive enzymes when they are only eating processed dried foods, so they gain these from consuming fecal matter. They only consume fecal matter that is less than two days old which supports this theory. Species within the Lagomorpha (rabbits, hares, and pikas) produce two types of fecal pellets: hard ones, and soft ones called cecotropes. Animals in these species reingest their cecotropes, to extract further nutrients. Cecotropes derive from chewed plant material that collects in the cecum, a chamber between the large and small intestine, containing large quantities of symbiotic bacteria that help with the digestion of cellulose and also produce certain B vitamins. After excretion of the soft cecotrope, it is again eaten whole by the animal and redigested in a special part of the stomach. The pellets remain intact for up to six hours in the stomach; the bacteria within continue to digest the plant carbohydrates. This double-digestion process enables these animals to extract nutrients that they may have missed during the first passage through the gut, as well as the nutrients formed by the microbial activity. This process serves the same purpose within these animals as rumination (cud-chewing) does in cattle and sheep. Cattle in the United States are often fed chicken litter. Concerns have arisen that the practice of feeding chicken litter to cattle could lead to bovine spongiform encephalopathy (mad-cow disease) because of the crushed bone meal in chicken feed. The U.S. Food and Drug Administration regulates this practice by attempting to prevent the introduction of any part of cattle brain or spinal cord into livestock feed. Chickens also eat their own feces. Other countries, such as Canada, have banned chicken litter for use as a livestock feed. The young of elephants, giant pandas, koalas, and hippos eat the feces of their mothers or other animals in the herd, to obtain the bacteria required to properly digest vegetation found in their ecosystems. When such animals are born, their intestines are sterile and do not contain these bacteria. Without doing this, they would be unable to obtain any nutritional value from plants. Piglets with access to maternal feces early in life exhibited better performance. Hamsters, guinea pigs, chinchillas, hedgehogs, and pigs eat their own droppings, which are thought to be a source of vitamins B and K, produced by gut bacteria. 
Sometimes, there is also the aspect of self-anointment while these creatures eat their droppings. On rare occasions gorillas have been observed consuming their feces, possibly out of boredom, a desire for warm food, or to reingest seeds contained in the feces. Some carnivorous plants, such as pitcher plants of the genus Nepenthes, obtain nourishment from the feces of commensal animals. Notable examples include Nepenthes jamban, whose specific name is the Indonesian word for toilet. Manure is organic matter, mostly animal feces, that is used as organic fertilizer for plants in agriculture.
[ { "paragraph_id": 0, "text": "Coprophagia (/ˌkɒprəˈfeɪdʒiə/ KOP-rə-FAY-jee-ə) or coprophagy (/kəˈprɒfədʒi/ kə-PROF-ə-jee) is the consumption of feces. The word is derived from the Ancient Greek κόπρος kópros \"feces\" and φαγεῖν phageîn \"to eat\". Coprophagy refers to many kinds of feces-eating, including eating feces of other species (heterospecifics), of other individuals (allocoprophagy), or one's own (autocoprophagy) – those once deposited or taken directly from the anus.", "title": "" }, { "paragraph_id": 1, "text": "In humans, coprophagia has been described since the late 19th century in individuals with mental illnesses and in some sexual acts, such as the practices of anilingus and felching where sex partners insert their tongue into each other's anus and ingest biologically significant amounts of feces. Some animal species eat feces as a normal behavior, in particular lagomorphs, which do so to allow tough plant materials to be digested more thoroughly by passing twice through the digestive tract. Other species may eat feces under certain conditions.", "title": "" }, { "paragraph_id": 2, "text": "The feces of the rock ptarmigan is used in Urumiit, which is a delicacy in some Inuit cuisine. Several beverages are made using the feces of animals, including but not limited to Kopi luwak, panda tea, insect tea, and Black Ivory Coffee. Casu martzu is a cheese that uses the digestive processes of live maggots to help ferment and break down the cheese's fats.", "title": "Coprophagia by humans" }, { "paragraph_id": 3, "text": "In Fecal microbiota transplant (FMT), also known as a stool transplant, fecal bacteria and other microbes from a healthy individual are transferred into a patient as an effective treatment for Clostridioides difficile infection (CDI). This treatment has also been used to try to cure other conditions with various results. See: Fecal microbiota transplant.", "title": "Coprophagia by humans" }, { "paragraph_id": 4, "text": "Ayurveda and Siddha medicine use various animal excreta in various forms. The dung and urine of the Zebu is especially important in the list.", "title": "Coprophagia by humans" }, { "paragraph_id": 5, "text": "Centuries ago (mid 16th century) physicians tasted their patients' feces, to better judge their state and condition, according to François Rabelais, who studied medicine but was also a writer of satirical and grotesque fiction. Further information is needed to confirm the accuracy and context of statement.", "title": "Coprophagia by humans" }, { "paragraph_id": 6, "text": "Lewin reported, \"... consumption of fresh, warm camel feces has been recommended by Bedouins as a remedy for bacterial dysentery; its efficacy (probably attributable to the antibiotic subtilisin from Bacillus subtilis) was anecdotally confirmed by German soldiers in Africa during World War II\". However, this story is likely a myth, independent research was not able to verify any of these claims.", "title": "Coprophagia by humans" }, { "paragraph_id": 7, "text": "Members of a religious cult in Thailand routinely ate the feces and dead skin of their leader, whom they considered to be a holy man with healing powers.", "title": "Coprophagia by humans" }, { "paragraph_id": 8, "text": "Coprophilia is a paraphilia (DSM-5), where the object of sexual interest is feces, and may be associated with coprophagia. Coprophagia is sometimes depicted in pornography, usually under the term \"scat\" (from scatology). 
A notorious example of this is the pornographic shock video 2 Girls 1 Cup. The 120 Days of Sodom, a 1785 novel by Marquis de Sade, is full of detailed descriptions of erotic sadomasochistic coprophagia. The film of the same name also contains scenes of coprophilia and coprophagia.", "title": "Coprophagia by humans" }, { "paragraph_id": 9, "text": "Coprophagia has also been observed in some people with schizophrenia and pica.", "title": "Coprophagia by humans" }, { "paragraph_id": 10, "text": "Coprophagous insects consume and redigest the feces of large animals. These feces contain substantial amounts of semidigested food, particularly in the case of herbivores, owing to the inefficiency of the large animals' digestive systems. Thousands of species of coprophagous insects are known, especially among the orders Diptera and Coleoptera. Examples of such flies are Scathophaga stercoraria and Sepsis cynipsea, dung flies commonly found in Europe around cattle droppings. Among beetles, dung beetles are a diverse lineage, many of which feed on the microorganism-rich liquid component of mammals' dung, and lay their eggs in balls composed mainly of the remaining fibrous material.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 11, "text": "Termites eat one another's feces as a means of obtaining their hindgut protists. Termites and protists have a symbiotic relationship (e.g. with the protozoan that allows the termites to digest the cellulose in their diet). For example, in one group of termites, a three-way symbiotic relationship exists; termites of the family Rhinotermitidae, cellulolytic protists of the genus Pseudotrichonympha in the guts of these termites, and intracellular bacterial symbionts of the protists.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 12, "text": "Domesticated and wild mammals are sometimes coprophagic, and in some species, this forms an essential part of their method of digesting tough plant material.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 13, "text": "Some dogs may lack critical digestive enzymes when they are only eating processed dried foods, so they gain these from consuming fecal matter. They only consume fecal matter that is less than two days old which supports this theory.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 14, "text": "Species within the Lagomorpha (rabbits, hares, and pikas) produce two types of fecal pellets: hard ones, and soft ones called cecotropes. Animals in these species reingest their cecotropes, to extract further nutrients. Cecotropes derive from chewed plant material that collects in the cecum, a chamber between the large and small intestine, containing large quantities of symbiotic bacteria that help with the digestion of cellulose and also produce certain B vitamins. After excretion of the soft cecotrope, it is again eaten whole by the animal and redigested in a special part of the stomach. The pellets remain intact for up to six hours in the stomach; the bacteria within continue to digest the plant carbohydrates. This double-digestion process enables these animals to extract nutrients that they may have missed during the first passage through the gut, as well as the nutrients formed by the microbial activity. This process serves the same purpose within these animals as rumination (cud-chewing) does in cattle and sheep.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 15, "text": "Cattle in the United States are often fed chicken litter. 
Concerns have arisen that the practice of feeding chicken litter to cattle could lead to bovine spongiform encephalopathy (mad-cow disease) because of the crushed bone meal in chicken feed. The U.S. Food and Drug Administration regulates this practice by attempting to prevent the introduction of any part of cattle brain or spinal cord into livestock feed. Chickens also eat their own feces. Other countries, such as Canada, have banned chicken litter for use as a livestock feed.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 16, "text": "The young of elephants, giant pandas, koalas, and hippos eat the feces of their mothers or other animals in the herd, to obtain the bacteria required to properly digest vegetation found in their ecosystems. When such animals are born, their intestines are sterile and do not contain these bacteria. Without doing this, they would be unable to obtain any nutritional value from plants. Piglets with access to maternal feces early in life exhibited better performance.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 17, "text": "Hamsters, guinea pigs, chinchillas, hedgehogs, and pigs eat their own droppings, which are thought to be a source of vitamins B and K, produced by gut bacteria. Sometimes, there is also the aspect of self-anointment while these creatures eat their droppings. On rare occasions gorillas have been observed consuming their feces, possibly out of boredom, a desire for warm food, or to reingest seeds contained in the feces.", "title": "Coprophagia by nonhuman animals" }, { "paragraph_id": 18, "text": "Some carnivorous plants, such as pitcher plants of the genus Nepenthes, obtain nourishment from the feces of commensal animals. Notable examples include Nepenthes jamban, whose specific name is the Indonesian word for toilet. Manure is organic matter, mostly animal feces, that is used as organic fertilizer for plants in agriculture.", "title": "Coprophagia by plants" } ]
Coprophagia or coprophagy is the consumption of feces. The word is derived from the Ancient Greek κόπρος kópros "feces" and φαγεῖν phageîn "to eat". Coprophagy refers to many kinds of feces-eating, including eating feces of other species (heterospecifics), of other individuals (allocoprophagy), or one's own (autocoprophagy) – those once deposited or taken directly from the anus. In humans, coprophagia has been described since the late 19th century in individuals with mental illnesses and in some sexual acts, such as the practices of anilingus and felching where sex partners insert their tongue into each other's anus and ingest biologically significant amounts of feces. Some animal species eat feces as a normal behavior, in particular lagomorphs, which do so to allow tough plant materials to be digested more thoroughly by passing twice through the digestive tract. Other species may eat feces under certain conditions.
2002-01-12T01:47:01Z
2023-12-01T17:28:11Z
[ "Template:Reflist", "Template:Cite journal", "Template:IPAc-en", "Template:Cite web", "Template:Cite news", "Template:Cite book", "Template:Cite encyclopedia", "Template:Refend", "Template:Feeding", "Template:Transliteration", "Template:Distinguish", "Template:Refbegin", "Template:Commons category", "Template:Short description", "Template:Lang", "Template:Respell" ]
https://en.wikipedia.org/wiki/Coprophagia
7,721
C. L. Moore
Catherine Lucille Moore (January 24, 1911 – April 4, 1987) was an American science fiction and fantasy writer, who first came to prominence in the 1930s writing as C. L. Moore. She was among the first women to write in the science fiction and fantasy genres (though earlier women writers in these genres include Clare Winger Harris, Greye La Spina, and Francis Stevens, among others). Moore's work paved the way for many other female speculative fiction writers. Moore married her first husband, Henry Kuttner, in 1940, and most of her work from 1940 to 1958 (the year of Kuttner's death) was written by the couple collaboratively. They were prolific co-authors under their own names, although more often under any one of several pseudonyms. As "Catherine Kuttner", she had a brief career as a television scriptwriter from 1958 to 1962. She retired from writing in 1963. Moore was born on January 24, 1911, in Indianapolis, Indiana. She was chronically ill as a child and spent much of her time reading literature of the fantastic. She left college during the Great Depression to work as a secretary at the Fletcher Trust Company in Indianapolis. The Vagabond, a student-run magazine at Indiana University, published three of her stories when she was a student there. The three short stories, all with a fantasy theme and all credited to "Catherine Moore", appeared in 1930/31. Her first professional sales appeared in pulp magazines beginning in 1933. Her decision to publish under the name "C. L. Moore" stemmed not from a desire to hide her gender, but from a wish to keep her employers at Fletcher Trust from knowing that she was working as a writer on the side. Her early work included two significant series in Weird Tales, then edited by Farnsworth Wright. One features the rogue and adventurer Northwest Smith wandering through the Solar System; the other features the swordswoman/warrior Jirel of Joiry, one of the first female protagonists in sword-and-sorcery fiction. Both series are sometimes named for their lead characters. One of the Northwest Smith stories, "Nymph of Darkness" (Fantasy Magazine (April 1935); expurgated version, Weird Tales (Dec 1939)), was written in collaboration with Forrest J Ackerman. The most famous Northwest Smith story is "Shambleau", which was also Moore's first professional sale. It originally appeared in the November 1933 issue of Weird Tales, netting her $100 and later becoming a popular anthology reprint. Her most famous Jirel story is also the first one, "Black God's Kiss", which was the cover story in the October 1934 issue of Weird Tales, subtitled "the weirdest story ever told". Moore's early stories were notable for their emphasis on the senses and emotions, which was unusual in genre fiction at the time. Moore's work also appeared in Astounding Science Fiction magazine throughout the 1940s. Several stories written for that magazine were later collected in her first published book, Judgment Night (1952). One of them, the novella "No Woman Born" (1944), went on to be included in more than 10 different science fiction anthologies, including The Best of C. L. Moore. 
Included in that collection were "Judgment Night" (first published in August and September 1943), the lush rendering of a future galactic empire with a sober meditation on the nature of power and its inevitable loss; "The Code" (July 1945), an homage to the classic Faust with modern theories and Lovecraftian dread; "Promised Land" (February 1950) and "Heir Apparent" (July 1950), both documenting the grim twisting that mankind must undergo in order to spread into the Solar System; and "Paradise Street" (September 1950), a futuristic take on the Old West conflict between lone hunter and wilderness-taming settlers. Moore met Henry Kuttner, also a science fiction writer, in 1936 when he wrote her a fan letter under the impression that "C. L. Moore" was a man. They soon collaborated on a story that combined Moore's signature characters, Northwest Smith and Jirel of Joiry: "Quest of the Starstone" (1937). Moore and Kuttner married in 1940 and thereafter wrote many of their stories in collaboration, sometimes under their own names, but more often using the joint pseudonyms C. H. Liddell, Lawrence O'Donnell, or Lewis Padgett, most commonly the latter, a combination of their mothers' maiden names. Moore still occasionally wrote solo work during this period, including the frequently anthologized "No Woman Born" (1944). A selection of Moore's solo short fiction from 1942 through 1950 was collected in 1952's Judgment Night. Moore's only solo novel, Doomsday Morning, appeared in 1957. The vast majority of Moore's work in the period, though, was written as part of a very prolific partnership. Working together, the couple managed to combine Moore's style with Kuttner's more cerebral storytelling. They continued to work in science fiction and fantasy, and their works include two frequently anthologized sci-fi classics: "Mimsy Were the Borogoves" (February 1943), the basis for the film The Last Mimzy (2007), and Vintage Season (September 1946), the basis for the film Timescape (1992). As "Lewis Padgett" they also penned two mystery novels: The Brass Ring (1946) and The Day He Died (1947). After Kuttner's death in 1958, Moore continued teaching her writing course at the University of Southern California, but permanently retired from writing any further literary fiction. Instead, working as "Catherine Kuttner", she carved out a short-lived career as a scriptwriter for Warner Bros. television, writing episodes of the westerns Sugarfoot, Maverick, and The Alaskans, as well as the detective series 77 Sunset Strip, all between 1958 and 1962. 
(Thus she became the eighth and final Grand Master of Fantasy, sponsored by the Swordsmen and Sorcerers' Guild of America, in partial analogy to the Grand Master of Science Fiction sponsored by the Science Fiction Writers of America.) Moore was an active member of the Tom and Terri Pinckard Science Fiction literary salon and a frequent contributor to literary discussions with the regular membership, including Robert Bloch, George Clayton Johnson, Larry Niven, Jerry Pournelle, Norman Spinrad, A. E. van Vogt, and others, as well as many visiting writers and speakers. Moore developed Alzheimer's disease, but that was not obvious for several years. She had ceased to attend the meetings when she was nominated to be the first woman Grand Master of the Science Fiction Writers of America; the nomination was withdrawn at the request of her husband, Thomas Reggie, who said the award and ceremony would be at best confusing and likely upsetting to her, given the progress of her disease. She died on April 4, 1987, at her home in Hollywood, California.
[ { "paragraph_id": 0, "text": "Catherine Lucille Moore (January 24, 1911 – April 4, 1987) was an American science fiction and fantasy writer, who first came to prominence in the 1930s writing as C. L. Moore. She was among the first women to write in the science fiction and fantasy genres (though earlier woman writers in these genres include Clare Winger Harris, Greye La Spina, and Francis Stevens, among others). Moore's work paved the way for many other female speculative fiction writers.", "title": "" }, { "paragraph_id": 1, "text": "Moore married her first husband Henry Kuttner in 1940, and most of her work from 1940 to 1958 (Kuttner's death) was written by the couple collaboratively. They were prolific co-authors under their own names, although more often under any one of several pseudonyms.", "title": "" }, { "paragraph_id": 2, "text": "As \"Catherine Kuttner\", she had a brief career as a television scriptwriter from 1958 to 1962. She retired from writing in 1963.", "title": "" }, { "paragraph_id": 3, "text": "Moore was born on January 24, 1911, in Indianapolis, Indiana. She was chronically ill as a child and spent much of her time reading literature of the fantastic. She left college during the Great Depression to work as a secretary at the Fletcher Trust Company in Indianapolis.", "title": "Early life" }, { "paragraph_id": 4, "text": "The Vagabond, a student-run magazine at Indiana University, published three of her stories when she was a student there. The three short stories, all with a fantasy theme and all credited to \"Catherine Moore\", appeared in 1930/31. Her first professional sales appeared in pulp magazines beginning in 1933. Her decision to publish under the name \"C. L. Moore\" stemmed not from a desire to hide her gender, but to keep her employers at Fletcher Trust from knowing that she was working as a writer on the side.", "title": "Early career" }, { "paragraph_id": 5, "text": "Her early work included two significant series in Weird Tales, then edited by Farnsworth Wright. One features the rogue and adventurer Northwest Smith wandering through the Solar System; the other features the swordswoman/warrior Jirel of Joiry, one of the first female protagonists in sword-and-sorcery fiction. Both series are sometimes named for their lead characters. One of the Northwest Smith stories, \"Nymph of Darkness\" (Fantasy Magazine (April 1935); expurgated version, Weird Tales (Dec 1939)) was written in collaboration with Forrest J Ackerman.", "title": "Early career" }, { "paragraph_id": 6, "text": "The most famous Northwest Smith story is \"Shambleau\", which was also Moore's first professional sale. It originally appeared in the November 1933 issue of Weird Tales, netting her $100, and later becoming a popular anthology reprint.", "title": "Early career" }, { "paragraph_id": 7, "text": "Her most famous Jirel story is also the first one, \"Black God's Kiss\", which was the cover story in the October 1934 issue of Weird Tales, subtitled \"the weirdest story ever told\" (see figure). Moore's early stories were notable for their emphasis on the senses and emotions, which was unusual in genre fiction at the time.", "title": "Early career" }, { "paragraph_id": 8, "text": "Moore's work also appeared in Astounding Science Fiction magazine throughout the 1940s. 
Several stories written for that magazine were later collected in her first published book, Judgment Night (1952) One of them, the novella \"No Woman Born\" (1944), was to be included in more than 10 different science fiction anthologies including The Best of C. L. Moore.", "title": "Early career" }, { "paragraph_id": 9, "text": "Included in that collection were \"Judgment Night\" (first published in August and September 1943), the lush rendering of a future galactic empire with a sober meditation on the nature of power and its inevitable loss; \"The Code\" (July 1945), an homage to the classic Faust with modern theories and Lovecraftian dread; \"Promised Land\" (February 1950) and \"Heir Apparent\" (July 1950), both documenting the grim twisting that mankind must undergo in order to spread into the Solar System; and \"Paradise Street\" (September 1950), a futuristic take on the Old West conflict between lone hunter and wilderness-taming settlers.", "title": "Early career" }, { "paragraph_id": 10, "text": "Moore met Henry Kuttner, also a science fiction writer, in 1936 when he wrote her a fan letter under the impression that \"C. L. Moore\" was a man. They soon collaborated on a story that combined Moore's signature characters, Northwest Smith and Jirel of Joiry: \"Quest of the Starstone\" (1937).", "title": "Marriage to Henry Kuttner and literary collaborations" }, { "paragraph_id": 11, "text": "Moore and Kuttner married in 1940 and thereafter wrote many of their stories in collaboration, sometimes under their own names, but more often using the joint pseudonyms C. H. Liddell, Lawrence O'Donnell, or Lewis Padgett — most commonly the latter, a combination of their mothers' maiden names. Moore still occasionally wrote solo work during this period, including the frequently anthologized \"No Woman Born\" (1944). A selection of Moore's solo short fiction work from 1942 through 1950 was collected in 1952's Judgement Night. Moore's only solo novel, Doomsday Morning, appeared in 1957.", "title": "Marriage to Henry Kuttner and literary collaborations" }, { "paragraph_id": 12, "text": "The vast majority of Moore's work in the period, though, was written as part of a very prolific partnership. Working together, the couple managed to combine Moore's style with Kuttner's more cerebral storytelling. They continued to work in science fiction and fantasy, and their works include two frequently anthologized sci-fi classics: \"Mimsy Were the Borogoves\" (February 1943), the basis for the film The Last Mimzy (2007), and Vintage Season (September 1946), the basis for the film Timescape (1992). As \"Lewis Padgett\" they also penned two mystery novels: The Brass Ring (1946) and The Day He Died (1947).", "title": "Marriage to Henry Kuttner and literary collaborations" }, { "paragraph_id": 13, "text": "After Kuttner's death in 1958, Moore continued teaching her writing course at the University of Southern California, but permanently retired from writing any further literary fiction. Instead, working as \"Catherine Kuttner\", she carved out a short-lived career as a scriptwriter for Warner Bros. television, writing episodes of the westerns Sugarfoot, Maverick, and The Alaskans, as well as the detective series 77 Sunset Strip, all between 1958 and 1962. 
However, upon marrying Thomas Reggie (who was not a writer) in 1963, she ceased writing entirely.", "title": "Later career" }, { "paragraph_id": 14, "text": "Moore was the author guest of honor at Kansas City, Missouri's fantasy and science fiction convention BYOB-Con 6, held over the U.S. Memorial Day weekend in May 1976. She was a pro guest of honor at Denvention II (the 39th World Science Fiction Convention) in 1981.", "title": "Later career" }, { "paragraph_id": 15, "text": "In a 1979 interview, she said that she and a writer friend were collaborating on a fantasy story, and how it could possibly form the basis of a new series. But nothing was ever published.", "title": "Later career" }, { "paragraph_id": 16, "text": "In 1981, Moore received two annual awards for her career in fantasy literature: the World Fantasy Award for Life Achievement, chosen by a panel of judges at the World Fantasy Convention, and the Gandalf Grand Master Award, chosen by vote of participants in the World Science Fiction Convention. (Thus she became the eighth and final Grand Master of Fantasy, sponsored by the Swordsmen and Sorcerers' Guild of America, in partial analogy to the Grand Master of Science Fiction sponsored by the Science Fiction Writers of America.)", "title": "Later career" }, { "paragraph_id": 17, "text": "Moore was an active member of the Tom and Terri Pinckard Science Fiction literary salon and a frequent contributor to literary discussions with the regular membership, including Robert Bloch, George Clayton Johnson, Larry Niven, Jerry Pournelle, Norman Spinrad, A. E. van Vogt, and others, as well as many visiting writers and speakers.", "title": "Later career" }, { "paragraph_id": 18, "text": "Moore developed Alzheimer's disease, but that was not obvious for several years. She had ceased to attend the meetings when she was nominated to be the first woman Grand Master of the Science Fiction Writers of America; the nomination was withdrawn at the request of her husband, Thomas Reggie, who said the award and ceremony would be at best confusing and likely upsetting to her, given the progress of her disease. She died on April 4, 1987, at her home in Hollywood, California.", "title": "Later life" } ]
Catherine Lucille Moore was an American science fiction and fantasy writer who first came to prominence in the 1930s writing as C. L. Moore. She was among the first women to write in the science fiction and fantasy genres, and her work paved the way for many other female speculative fiction writers. Moore married her first husband, Henry Kuttner, in 1940, and most of her work from 1940 to 1958 was written by the couple collaboratively. They were prolific co-authors, writing sometimes under their own names but more often under one of several joint pseudonyms. As "Catherine Kuttner", she had a brief career as a television scriptwriter from 1958 to 1962. She retired from writing in 1963.
2002-01-12T02:20:11Z
2023-10-13T21:47:43Z
[ "Template:More citations needed", "Template:Notelist", "Template:Short description", "Template:Cite book", "Template:IMDb name", "Template:Commons category", "Template:Wikisource author", "Template:Internet Archive author", "Template:Isfdb name", "Template:World Fantasy Award Life Achievement", "Template:Efn", "Template:Isfdb title", "Template:Cite news", "Template:Gutenberg author", "Template:Sfhof", "Template:Infobox writer", "Template:LCAuth", "Template:ISBN", "Template:Reflist", "Template:Cite web", "Template:Clear", "Template:Cite encyclopedia", "Template:Library resources box", "Template:Wikiquote", "Template:Authority control", "Template:Librivox author", "Template:Lewis Padgett" ]
https://en.wikipedia.org/wiki/C._L._Moore
7,722
Compactron
Compactrons are a type of thermionic valve, or vacuum tube, which contain multiple electrode structures packed into a single enclosure. They were designed to compete with early transistor electronics and were used in televisions, radios, and similar roles. The Compactron was a trade name applied to multi-electrode tubes specifically constructed on a 12-pin Duodecar base. This vacuum tube family was introduced in 1961 by General Electric in Owensboro, Kentucky, to compete with transistorized electronics during the solid-state transition. Television sets were a primary application. The idea of multi-electrode tubes was itself far from new: the Loewe company of Germany had been producing multi-electrode tubes as far back as 1926, and they even included all of the required passive components as well. Use was prevalent in televisions because transistors were slow to achieve the high-power and high-frequency capabilities needed, particularly in color television sets. The first portable color television, the General Electric Porta-Color, was designed using 13 tubes, 10 of which were Compactrons. Even before the Compactron design was unveiled, nearly all tube-based electronic equipment used multi-electrode tubes of one type or another. Virtually every AM/FM radio receiver of the 1950s and '60s used a 6AK8 (EABC80) tube (or equivalent), designed in 1954 and consisting of three diodes and a triode. The Compactron's integrated valve design helped lower power consumption and heat generation (Compactrons were to tubes what integrated circuits were to transistors). Compactrons were also used in a few high-end Hi-Fi stereos, and by Ampeg and Fender in some of their guitar amplifiers. Few modern tube-based Hi-Fi systems use this tube type, as simpler and more readily available tubes have again filled this niche. One related tube, the 7868, is used in some Hi-Fi systems made today. This tube is a Novar tube: it has the same physical dimensions as the Compactron, but a 9-pin base. The exhaust tip is on the top or bottom of the tube, depending on the manufacturer's preference. It is currently in production by Electro-Harmonix. (One recent power amplifier, Linear Tube Audio's Ultralinear, uses four 17JN6 Compactron tubes as its output tubes, generating 20 watts of power from these inexpensive TV tubes.) A distinguishing feature of most Compactrons is the placement of the evacuation tip on the bottom end, rather than the top end as was customary with "miniature" tubes, and a characteristic 3/4-inch-diameter circular pin pattern. Examples of Compactron types include: Due to their specific applications in television circuits, many different Compactron types were produced. Almost all were assigned standard US tube numbers. Integrated circuits (both analog and digital) gradually took over all of the functions that the Compactron was designed for. "Hybrid" television sets produced in the early to mid-1970s made use of a combination of tubes (typically Compactrons), transistors, and integrated circuits in the same set. By the mid-1980s this type of tube was functionally obsolete; Compactrons do not appear in TV sets designed after 1986. Other specialist uses of the tube declined in parallel with television set manufacture. Manufacture of Compactrons ceased in the early 1990s. New old stock replacements for almost all Compactron types produced are easily found for sale on the Internet.
[ { "paragraph_id": 0, "text": "Compactrons are a type of thermionic valve, or vacuum tube, which contain multiple electrode structures packed into a single enclosure. They were designed to compete with early transistor electronics and were used in televisions, radios, and similar roles.", "title": "" }, { "paragraph_id": 1, "text": "The Compactron was a trade name applied to multi-electrode structure tubes specifically constructed on a 12-pin Duodecar base. This vacuum tube family was introduced in 1961 by General Electric in Owensboro, Kentucky to compete with transistorized electronics during the solid state transition. Television sets were a primary application. The idea of multi-electrode tubes itself was far from new and indeed the Loewe company of Germany was producing multi-electrode tubes as far back as 1926, and they even included all of the required passive components as well.", "title": "History" }, { "paragraph_id": 2, "text": "Use was prevalent in televisions because transistors were slow to achieve the high power and frequency capabilities needed particularly in color television sets. The first portable color television, the General Electric Porta-Color, was designed using 13 tubes, 10 of which were Compactrons. Even before the compactron design was unveiled, nearly all tube based electronic equipment used multi-electrode tubes of one type or another. Virtually every AM/FM radio receiver of the 1950s and 60's used a 6AK8 (EABC80) tube (or equivalent) consisting of three diodes and a triode which was designed in 1954.", "title": "History" }, { "paragraph_id": 3, "text": "Compactron's integrated valve design helped lower power consumption and heat generation (they were to tubes what integrated circuits were to transistors). Compactrons were also used in a few high end Hi-Fi stereos. They were also used by Ampeg and Fender in some of their guitar amplifiers. No modern tube based Hi-Fi systems are known to use this tube type, as simpler and more readily available tubes have again filled this niche. One tube, the 7868, is used in some Hi-Fi systems made today. This tube is a Novar tube. It has the same physical dimensions as the compactron, but a 9 pin base. The exhaust tip is on the top or bottom of the tube, depending on the manufacturer's preference. It is currently in production by Electro-Harmonix.(The new power amp, Linear Tube Audio's Ultralinear, uses 4 17JN6 compactron tubes as the power tube in the amp.) The amp generates 20 watts of power with these inexpensive TV tubes.", "title": "History" }, { "paragraph_id": 4, "text": "A distinguishing feature of most Compactrons is the placement of the evacuation tip on the bottom end, rather than the top end as was customary with \"miniature\" tubes, and a characteristic 3/4\" diameter circle pin pattern.", "title": "Notable features" }, { "paragraph_id": 5, "text": "Examples of Compactrons type types include:", "title": "Examples" }, { "paragraph_id": 6, "text": "Due to their specific applications in television circuits, many different Compactron types were produced. Almost all were assigned using standard US tube numbers.", "title": "Examples" }, { "paragraph_id": 7, "text": "Integrated circuits (of the analogue and digital type) gradually took over all of the functions that the Compactron was designed for. \"Hybrid\" television sets produced in the early to mid-1970s made use of a combination of tubes (typically Compactrons), transistors, and integrated circuits in the same set. 
By the mid-1980s this type of tube was functionally obsolete; Compactrons do not appear in TV sets designed after 1986. Other specialist uses of the tube declined in parallel with television set manufacture. Manufacture of Compactrons ceased in the early 1990s. New old stock replacements for almost all Compactron types produced are easily found for sale on the Internet.", "title": "Technological obsolescence" } ]
Compactrons are a type of thermionic valve, or vacuum tube, which contain multiple electrode structures packed into a single enclosure. They were designed to compete with early transistor electronics and were used in televisions, radios, and similar roles.
2002-02-25T15:43:11Z
2023-11-01T12:31:12Z
[ "Template:Electronic component", "Template:Short description", "Template:Original research", "Template:Reflist", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Compactron
7,723
Carmichael number
In number theory, a Carmichael number is a composite number n which in modular arithmetic satisfies the congruence relation b^n ≡ b (mod n) for all integers b. The relation may also be expressed in the form b^(n−1) ≡ 1 (mod n) for all integers b which are relatively prime to n. Carmichael numbers are named after American mathematician Robert Carmichael, the term having been introduced by Nicolaas Beeger in 1950 (Øystein Ore had referred to them in 1948 as numbers with the "Fermat property", or "F numbers" for short). They are infinite in number. They constitute the comparatively rare instances where the strict converse of Fermat's little theorem does not hold. This fact precludes the use of that theorem as an absolute test of primality. The Carmichael numbers form the subset K_1 of the Knödel numbers. Fermat's little theorem states that if p is a prime number, then for any integer b, the number b^p − b is an integer multiple of p. Carmichael numbers are composite numbers which have the same property. Carmichael numbers are also called Fermat pseudoprimes or absolute Fermat pseudoprimes. A Carmichael number will pass a Fermat primality test to every base b relatively prime to the number, even though it is not actually prime. This makes tests based on Fermat's little theorem less effective than strong probable prime tests such as the Baillie–PSW primality test and the Miller–Rabin primality test. However, no Carmichael number is either an Euler–Jacobi pseudoprime or a strong pseudoprime to every base relatively prime to it, so, in theory, either an Euler or a strong probable prime test could prove that a Carmichael number is, in fact, composite. Arnault gives a 397-digit Carmichael number N that is a strong pseudoprime to all prime bases less than 307; its smallest prime factor p is a 131-digit prime, so this Carmichael number is also a (not necessarily strong) pseudoprime to all bases less than p. As numbers become larger, Carmichael numbers become increasingly rare. For example, there are 20,138,200 Carmichael numbers between 1 and 10^21 (approximately one in 50 trillion (5·10^13) numbers). An alternative and equivalent definition of Carmichael numbers is given by Korselt's criterion: a positive composite integer n is a Carmichael number if and only if n is square-free and, for every prime divisor p of n, p − 1 divides n − 1. It follows from this theorem that all Carmichael numbers are odd, since any even composite number that is square-free (and hence has only one prime factor of two) will have at least one odd prime factor, and thus p − 1 ∣ n − 1 results in an even number dividing an odd number, a contradiction. (The oddness of Carmichael numbers also follows from the fact that −1 is a Fermat witness for any even composite number.) From the criterion it also follows that Carmichael numbers are cyclic. Additionally, it follows that there are no Carmichael numbers with exactly two prime divisors. Korselt was the first who observed the basic properties of Carmichael numbers, but he did not give any examples. In 1910, Carmichael found the first and smallest such number, 561, which explains the name "Carmichael number". That 561 is a Carmichael number can be seen with Korselt's criterion. Indeed, 561 = 3 · 11 · 17 is square-free, and 2 ∣ 560, 10 ∣ 560 and 16 ∣ 560.
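Both the defining congruence and Korselt's criterion are easy to check by machine. The following is a minimal Python sketch (the helper names are ours for illustration, and trial-division factorization is assumed adequate for small n; this is not an optimized implementation):

def prime_factors(n):
    """Trial-division factorization; returns a {prime: exponent} map."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_carmichael(n):
    """Korselt's criterion: n is Carmichael iff it is composite,
    square-free, and p - 1 divides n - 1 for every prime p dividing n."""
    f = prime_factors(n)
    if len(f) < 2:                       # excludes 1, primes, and prime powers
        return False
    if any(e > 1 for e in f.values()):   # n must be square-free
        return False
    return all((n - 1) % (p - 1) == 0 for p in f)

print([n for n in range(2, 10000) if is_carmichael(n)])
# -> [561, 1105, 1729, 2465, 2821, 6601, 8911]

# 561 passes the Fermat congruence to every base, as the definition requires:
assert all(pow(b, 561, 561) == b % 561 for b in range(561))

The printed list agrees with the first seven Carmichael numbers discussed below.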
The next six Carmichael numbers are (sequence A002997 in the OEIS): These first seven Carmichael numbers, from 561 to 8911, were all found by the Czech mathematician Václav Šimerka in 1885 (thus preceding not just Carmichael but also Korselt, although Šimerka did not find anything like Korselt's criterion). His work, published in the Czech scientific journal Časopis pro pěstování matematiky a fysiky, however, remained unnoticed. Jack Chernick proved a theorem in 1939 which can be used to construct a subset of Carmichael numbers: the number (6k + 1)(12k + 1)(18k + 1) is a Carmichael number if its three factors are all prime. Whether this formula produces an infinite quantity of Carmichael numbers is an open question (though it is implied by Dickson's conjecture); a direct computer search over k, sketched at the end of this entry, recovers the first several examples. Paul Erdős heuristically argued there should be infinitely many Carmichael numbers. In 1994 W. R. (Red) Alford, Andrew Granville and Carl Pomerance used a bound on Olson's constant to show that there really do exist infinitely many Carmichael numbers. Specifically, they showed that for sufficiently large n, there are at least n^(2/7) Carmichael numbers between 1 and n. Thomas Wright proved that if a and m are relatively prime, then there are infinitely many Carmichael numbers in the arithmetic progression a + k·m, where k = 1, 2, …. Löh and Niebuhr in 1992 found some very large Carmichael numbers, including one with 1,101,518 factors and over 16 million digits. This has since been improved to 10,333,229,505 prime factors and 295,486,761,787 digits, so the largest known Carmichael number is much greater than the largest known prime. Carmichael numbers have at least three positive prime factors. The first Carmichael numbers with k = 3, 4, 5, … prime factors are (sequence A006931 in the OEIS): The first Carmichael numbers with 4 prime factors are (sequence A074379 in the OEIS): The second Carmichael number (1105) can be expressed as the sum of two squares in more ways than any smaller number. The third Carmichael number (1729) is the Hardy–Ramanujan number: the smallest number that can be expressed as the sum of two cubes (of positive numbers) in two different ways. Let C(X) denote the number of Carmichael numbers less than or equal to X. The distribution of Carmichael numbers by powers of 10 (sequence A055553 in the OEIS): In 1953, Knödel proved the upper bound C(X) < X exp(−k_1 (log X log log X)^(1/2)) for some constant k_1. In 1956, Erdős improved the bound to C(X) < X exp(−k_2 log X log log log X / log log X) for some constant k_2. He further gave a heuristic argument suggesting that this upper bound should be close to the true growth rate of C(X). In the other direction, Alford, Granville and Pomerance proved in 1994 that, for sufficiently large X, C(X) > X^(2/7). In 2005, this bound was further improved by Harman to C(X) > X^0.332; he subsequently improved the exponent to 0.7039 · 0.4736 = 0.33336704 > 1/3. Regarding the asymptotic distribution of Carmichael numbers, there have been several conjectures. In 1956, Erdős conjectured that there were X^(1−o(1)) Carmichael numbers for X sufficiently large.
In 1981, Pomerance sharpened Erdős' heuristic arguments to conjecture that there are at least X · L(X)^(−1+o(1)) Carmichael numbers up to X, where L(x) = exp(log x log log log x / log log x). However, inside current computational ranges (such as the counts of Carmichael numbers performed by Pinch up to 10^21), these conjectures are not yet borne out by the data. In 2021, Daniel Larsen proved an analogue of Bertrand's postulate for Carmichael numbers, first conjectured by Alford, Granville, and Pomerance in 1994. Using techniques developed by Yitang Zhang and James Maynard to establish results concerning small gaps between primes, his work yielded the much stronger statement that, for any δ > 0 and sufficiently large x in terms of δ, there will always be at least exp(log x / (log log x)^(2+δ)) Carmichael numbers between x and 2x. The notion of Carmichael number generalizes to a Carmichael ideal in any number field K. For any nonzero prime ideal 𝔭 in O_K, we have α^N(𝔭) ≡ α mod 𝔭 for all α in O_K, where N(𝔭) is the norm of the ideal 𝔭. (This generalizes Fermat's little theorem, that m^p ≡ m mod p for all integers m when p is prime.) Call a nonzero ideal 𝔞 in O_K Carmichael if it is not a prime ideal and α^N(𝔞) ≡ α mod 𝔞 for all α ∈ O_K, where N(𝔞) is the norm of the ideal 𝔞. When K is Q, the ideal 𝔞 is principal, and if we let a be its positive generator then the ideal 𝔞 = (a) is Carmichael exactly when a is a Carmichael number in the usual sense. When K is larger than the rationals it is easy to write down Carmichael ideals in O_K: for any prime number p that splits completely in K, the principal ideal pO_K is a Carmichael ideal. Since infinitely many prime numbers split completely in any number field, there are infinitely many Carmichael ideals in O_K. For example, if p is any prime number that is 1 mod 4, the ideal (p) in the Gaussian integers Z[i] is a Carmichael ideal. Both prime and Carmichael numbers satisfy the following equality: A positive composite integer n is a Lucas–Carmichael number if and only if n is square-free and, for all prime divisors p of n, it is true that p + 1 ∣ n + 1. The first Lucas–Carmichael numbers are: Quasi–Carmichael numbers are squarefree composite numbers n with the property that for every prime factor p of n, p + b divides n + b positively, with b being any integer besides 0. If b = −1, these are Carmichael numbers, and if b = 1, these are Lucas–Carmichael numbers.
The first Quasi–Carmichael numbers are: An n-Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies i^(m−n) ≡ 1 (mod m). The n = 1 case are the Carmichael numbers. Carmichael numbers can be generalized using concepts of abstract algebra. The above definition states that a composite integer n is Carmichael precisely when the nth-power-raising function p_n from the ring Z_n of integers modulo n to itself is the identity function. The identity is the only Z_n-algebra endomorphism on Z_n, so we can restate the definition as asking that p_n be an algebra endomorphism of Z_n. As above, p_n satisfies the same property whenever n is prime. The nth-power-raising function p_n is also defined on any Z_n-algebra A. A theorem states that n is prime if and only if all such functions p_n are algebra endomorphisms. In between these two conditions lies the definition of a Carmichael number of order m for any positive integer m: any composite number n such that p_n is an endomorphism on every Z_n-algebra that can be generated as a Z_n-module by m elements. Carmichael numbers of order 1 are just the ordinary Carmichael numbers. According to Howe, 17 · 31 · 41 · 43 · 89 · 97 · 167 · 331 is an order-2 Carmichael number. This product is equal to 443,372,888,629,441. Korselt's criterion can be generalized to higher-order Carmichael numbers, as shown by Howe. A heuristic argument, given in the same paper, appears to suggest that there are infinitely many Carmichael numbers of order m, for any m. However, not a single Carmichael number of order 3 or above is known.
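As a companion to Chernick's construction described under Discovery, a short computer search over k recovers his Carmichael numbers directly. The sketch below is illustrative only; the fixed Miller–Rabin base set used in is_prime (a helper name of ours) is a standard deterministic choice for numbers below roughly 3.3 · 10^24, which comfortably covers this search range:

def is_prime(n):
    """Miller-Rabin with a fixed base set; deterministic for n < ~3.3e24."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def chernick(k):
    """Chernick's U3(k) = (6k+1)(12k+1)(18k+1), a Carmichael number
    whenever all three factors are prime."""
    return (6 * k + 1) * (12 * k + 1) * (18 * k + 1)

hits = [(k, chernick(k)) for k in range(1, 200)
        if all(is_prime(f) for f in (6 * k + 1, 12 * k + 1, 18 * k + 1))]
print(hits[:5])   # begins with (1, 1729) and (6, 294409)

For k = 1 this recovers 1729, the Hardy–Ramanujan number mentioned under Properties.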
[ { "paragraph_id": 0, "text": "In number theory, a Carmichael number is a composite number n {\\displaystyle n} , which in modular arithmetic satisfies the congruence relation:", "title": "" }, { "paragraph_id": 1, "text": "for all integers b {\\displaystyle b} . The relation may also be expressed in the form:", "title": "" }, { "paragraph_id": 2, "text": "for all integers b {\\displaystyle b} which are relatively prime to n {\\displaystyle n} . Carmichael numbers are named after American mathematician Robert Carmichael, the term having been introduced by Nicolaas Beeger in 1950 (Øystein Ore had referred to them in 1948 as numbers with the \"Fermat property\", or \"F numbers\" for short). They are infinite in number.", "title": "" }, { "paragraph_id": 3, "text": "They constitute the comparatively rare instances where the strict converse of Fermat's Little Theorem does not hold. This fact precludes the use of that theorem as an absolute test of primality.", "title": "" }, { "paragraph_id": 4, "text": "The Carmichael numbers form the subset K1 of the Knödel numbers.", "title": "" }, { "paragraph_id": 5, "text": "Fermat's little theorem states that if p {\\displaystyle p} is a prime number, then for any integer b {\\displaystyle b} , the number b p − b {\\displaystyle b^{p}-b} is an integer multiple of p {\\displaystyle p} . Carmichael numbers are composite numbers which have the same property. Carmichael numbers are also called Fermat pseudoprimes or absolute Fermat pseudoprimes. A Carmichael number will pass a Fermat primality test to every base b {\\displaystyle b} relatively prime to the number, even though it is not actually prime. This makes tests based on Fermat's Little Theorem less effective than strong probable prime tests such as the Baillie–PSW primality test and the Miller–Rabin primality test.", "title": "Overview" }, { "paragraph_id": 6, "text": "However, no Carmichael number is either an Euler–Jacobi pseudoprime or a strong pseudoprime to every base relatively prime to it so, in theory, either an Euler or a strong probable prime test could prove that a Carmichael number is, in fact, composite.", "title": "Overview" }, { "paragraph_id": 7, "text": "Arnault gives a 397-digit Carmichael number N {\\displaystyle N} that is a strong pseudoprime to all prime bases less than 307:", "title": "Overview" }, { "paragraph_id": 8, "text": "where", "title": "Overview" }, { "paragraph_id": 9, "text": "is a 131-digit prime. p {\\displaystyle p} is the smallest prime factor of N {\\displaystyle N} , so this Carmichael number is also a (not necessarily strong) pseudoprime to all bases less than p {\\displaystyle p} .", "title": "Overview" }, { "paragraph_id": 10, "text": "As numbers become larger, Carmichael numbers become increasingly rare. For example, there are 20,138,200 Carmichael numbers between 1 and 10 (approximately one in 50 trillion (5·10) numbers).", "title": "Overview" }, { "paragraph_id": 11, "text": "An alternative and equivalent definition of Carmichael numbers is given by Korselt's criterion.", "title": "Overview" }, { "paragraph_id": 12, "text": "It follows from this theorem that all Carmichael numbers are odd, since any even composite number that is square-free (and hence has only one prime factor of two) will have at least one odd prime factor, and thus p − 1 ∣ n − 1 {\\displaystyle p-1\\mid n-1} results in an even dividing an odd, a contradiction. 
(The oddness of Carmichael numbers also follows from the fact that − 1 {\\displaystyle -1} is a Fermat witness for any even composite number.) From the criterion it also follows that Carmichael numbers are cyclic. Additionally, it follows that there are no Carmichael numbers with exactly two prime divisors.", "title": "Overview" }, { "paragraph_id": 13, "text": "Korselt was the first who observed the basic properties of Carmichael numbers, but he did not give any examples. In 1910, Carmichael found the first and smallest such number, 561, which explains the name \"Carmichael number\".", "title": "Discovery" }, { "paragraph_id": 14, "text": "That 561 is a Carmichael number can be seen with Korselt's criterion. Indeed, 561 = 3 ⋅ 11 ⋅ 17 {\\displaystyle 561=3\\cdot 11\\cdot 17} is square-free and 2 ∣ 560 {\\displaystyle 2\\mid 560} , 10 ∣ 560 {\\displaystyle 10\\mid 560} and 16 ∣ 560 {\\displaystyle 16\\mid 560} .", "title": "Discovery" }, { "paragraph_id": 15, "text": "The next six Carmichael numbers are (sequence A002997 in the OEIS):", "title": "Discovery" }, { "paragraph_id": 16, "text": "These first seven Carmichael numbers, from 561 to 8911, were all found by the Czech mathematician Václav Šimerka in 1885 (thus preceding not just Carmichael but also Korselt, although Šimerka did not find anything like Korselt's criterion). His work, published in Czech scientific journal Časopis pro pěstování matematiky a fysiky, however, remained unnoticed.", "title": "Discovery" }, { "paragraph_id": 17, "text": "Jack Chernick proved a theorem in 1939 which can be used to construct a subset of Carmichael numbers. The number ( 6 k + 1 ) ( 12 k + 1 ) ( 18 k + 1 ) {\\displaystyle (6k+1)(12k+1)(18k+1)} is a Carmichael number if its three factors are all prime. Whether this formula produces an infinite quantity of Carmichael numbers is an open question (though it is implied by Dickson's conjecture).", "title": "Discovery" }, { "paragraph_id": 18, "text": "Paul Erdős heuristically argued there should be infinitely many Carmichael numbers. In 1994 W. R. (Red) Alford, Andrew Granville and Carl Pomerance used a bound on Olson's constant to show that there really do exist infinitely many Carmichael numbers. Specifically, they showed that for sufficiently large n {\\displaystyle n} , there are at least n 2 / 7 {\\displaystyle n^{2/7}} Carmichael numbers between 1 and n . {\\displaystyle n.}", "title": "Discovery" }, { "paragraph_id": 19, "text": "Thomas Wright proved that if a {\\displaystyle a} and m {\\displaystyle m} are relatively prime, then there are infinitely many Carmichael numbers in the arithmetic progression a + k ⋅ m {\\displaystyle a+k\\cdot m} , where k = 1 , 2 , … {\\displaystyle k=1,2,\\ldots } .", "title": "Discovery" }, { "paragraph_id": 20, "text": "Löh and Niebuhr in 1992 found some very large Carmichael numbers, including one with 1,101,518 factors and over 16 million digits. This has been improved to 10,333,229,505 prime factors and 295,486,761,787 digits, so the largest known Carmichael number is much greater than the largest known prime.", "title": "Discovery" }, { "paragraph_id": 21, "text": "Carmichael numbers have at least three positive prime factors. 
The first Carmichael numbers with k = 3 , 4 , 5 , … {\\displaystyle k=3,4,5,\\ldots } prime factors are (sequence A006931 in the OEIS):", "title": "Properties" }, { "paragraph_id": 22, "text": "The first Carmichael numbers with 4 prime factors are (sequence A074379 in the OEIS):", "title": "Properties" }, { "paragraph_id": 23, "text": "The second Carmichael number (1105) can be expressed as the sum of two squares in more ways than any smaller number. The third Carmichael number (1729) is the Hardy-Ramanujan Number: the smallest number that can be expressed as the sum of two cubes (of positive numbers) in two different ways.", "title": "Properties" }, { "paragraph_id": 24, "text": "Let C ( X ) {\\displaystyle C(X)} denote the number of Carmichael numbers less than or equal to X {\\displaystyle X} . The distribution of Carmichael numbers by powers of 10 (sequence A055553 in the OEIS):", "title": "Properties" }, { "paragraph_id": 25, "text": "In 1953, Knödel proved the upper bound:", "title": "Properties" }, { "paragraph_id": 26, "text": "for some constant k 1 {\\displaystyle k_{1}} .", "title": "Properties" }, { "paragraph_id": 27, "text": "In 1956, Erdős improved the bound to", "title": "Properties" }, { "paragraph_id": 28, "text": "for some constant k 2 {\\displaystyle k_{2}} . He further gave a heuristic argument suggesting that this upper bound should be close to the true growth rate of C ( X ) {\\displaystyle C(X)} .", "title": "Properties" }, { "paragraph_id": 29, "text": "In the other direction, Alford, Granville and Pomerance proved in 1994 that for sufficiently large X,", "title": "Properties" }, { "paragraph_id": 30, "text": "In 2005, this bound was further improved by Harman to", "title": "Properties" }, { "paragraph_id": 31, "text": "who subsequently improved the exponent to 0.7039 ⋅ 0.4736 = 0.33336704 > 1 / 3 {\\displaystyle 0.7039\\cdot 0.4736=0.33336704>1/3} .", "title": "Properties" }, { "paragraph_id": 32, "text": "Regarding the asymptotic distribution of Carmichael numbers, there have been several conjectures. In 1956, Erdős conjectured that there were X 1 − o ( 1 ) {\\displaystyle X^{1-o(1)}} Carmichael numbers for X sufficiently large. In 1981, Pomerance sharpened Erdős' heuristic arguments to conjecture that there are at least", "title": "Properties" }, { "paragraph_id": 33, "text": "Carmichael numbers up to X {\\displaystyle X} , where L ( x ) = exp ( log x log log log x log log x ) {\\displaystyle L(x)=\\exp {\\left({\\frac {\\log x\\log \\log \\log x}{\\log \\log x}}\\right)}} .", "title": "Properties" }, { "paragraph_id": 34, "text": "However, inside current computational ranges (such as the counts of Carmichael numbers performed by Pinch up to 10), these conjectures are not yet borne out by the data.", "title": "Properties" }, { "paragraph_id": 35, "text": "In 2021, Daniel Larsen proved an analogue of Bertrand's postulate for Carmichael numbers first conjectured by Alford, Granville, and Pomerance in 1994. 
Using techniques developed by Yitang Zhang and James Maynard to establish results concerning small gaps between primes, his work yielded the much stronger statement that, for any δ > 0 {\\displaystyle \\delta >0} and sufficiently large x {\\displaystyle x} in terms of δ {\\displaystyle \\delta } , there will always be at least", "title": "Properties" }, { "paragraph_id": 36, "text": "Carmichael numbers between x {\\displaystyle x} and", "title": "Properties" }, { "paragraph_id": 37, "text": "The notion of Carmichael number generalizes to a Carmichael ideal in any number field K. For any nonzero prime ideal p {\\displaystyle {\\mathfrak {p}}} in O K {\\displaystyle {\\mathcal {O}}_{K}} , we have α N ( p ) ≡ α mod p {\\displaystyle \\alpha ^{{\\rm {N}}({\\mathfrak {p}})}\\equiv \\alpha {\\bmod {\\mathfrak {p}}}} for all α {\\displaystyle \\alpha } in O K {\\displaystyle {\\mathcal {O}}_{K}} , where N ( p ) {\\displaystyle {\\rm {N}}({\\mathfrak {p}})} is the norm of the ideal p {\\displaystyle {\\mathfrak {p}}} . (This generalizes Fermat's little theorem, that m p ≡ m mod p {\\displaystyle m^{p}\\equiv m{\\bmod {p}}} for all integers m when p is prime.) Call a nonzero ideal a {\\displaystyle {\\mathfrak {a}}} in O K {\\displaystyle {\\mathcal {O}}_{K}} Carmichael if it is not a prime ideal and α N ( a ) ≡ α mod a {\\displaystyle \\alpha ^{{\\rm {N}}({\\mathfrak {a}})}\\equiv \\alpha {\\bmod {\\mathfrak {a}}}} for all α ∈ O K {\\displaystyle \\alpha \\in {\\mathcal {O}}_{K}} , where N ( a ) {\\displaystyle {\\rm {N}}({\\mathfrak {a}})} is the norm of the ideal a {\\displaystyle {\\mathfrak {a}}} . When K is Q {\\displaystyle \\mathbf {Q} } , the ideal a {\\displaystyle {\\mathfrak {a}}} is principal, and if we let a be its positive generator then the ideal a = ( a ) {\\displaystyle {\\mathfrak {a}}=(a)} is Carmichael exactly when a is a Carmichael number in the usual sense.", "title": "Generalizations" }, { "paragraph_id": 38, "text": "When K is larger than the rationals it is easy to write down Carmichael ideals in O K {\\displaystyle {\\mathcal {O}}_{K}} : for any prime number p that splits completely in K, the principal ideal p O K {\\displaystyle p{\\mathcal {O}}_{K}} is a Carmichael ideal. Since infinitely many prime numbers split completely in any number field, there are infinitely many Carmichael ideals in O K {\\displaystyle {\\mathcal {O}}_{K}} . For example, if p is any prime number that is 1 mod 4, the ideal (p) in the Gaussian integers Z[ i ] is a Carmichael ideal.", "title": "Generalizations" }, { "paragraph_id": 39, "text": "Both prime and Carmichael numbers satisfy the following equality:", "title": "Generalizations" }, { "paragraph_id": 40, "text": "A positive composite integer n {\\displaystyle n} is a Lucas–Carmichael number if and only if n {\\displaystyle n} is square-free, and for all prime divisors p {\\displaystyle p} of n {\\displaystyle n} , it is true that p + 1 ∣ n + 1 {\\displaystyle p+1\\mid n+1} . The first Lucas–Carmichael numbers are:", "title": "Lucas–Carmichael number" }, { "paragraph_id": 41, "text": "Quasi–Carmichael numbers are squarefree composite numbers n with the property that for every prime factor p of n, p + b divides n + b positively with b being any integer besides 0. If b = −1, these are Carmichael numbers, and if b = 1, these are Lucas–Carmichael numbers. 
The first Quasi–Carmichael numbers are:", "title": "Quasi–Carmichael number" }, { "paragraph_id": 42, "text": "An n-Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies i m − n ≡ 1 ( mod m ) {\\displaystyle i^{m-n}\\equiv 1{\\pmod {m}}} . The n = 1 case are Carmichael numbers.", "title": "Knödel number" }, { "paragraph_id": 43, "text": "Carmichael numbers can be generalized using concepts of abstract algebra.", "title": "Higher-order Carmichael numbers" }, { "paragraph_id": 44, "text": "The above definition states that a composite integer n is Carmichael precisely when the nth-power-raising function pn from the ring Zn of integers modulo n to itself is the identity function. The identity is the only Zn-algebra endomorphism on Zn so we can restate the definition as asking that pn be an algebra endomorphism of Zn. As above, pn satisfies the same property whenever n is prime.", "title": "Higher-order Carmichael numbers" }, { "paragraph_id": 45, "text": "The nth-power-raising function pn is also defined on any Zn-algebra A. A theorem states that n is prime if and only if all such functions pn are algebra endomorphisms.", "title": "Higher-order Carmichael numbers" }, { "paragraph_id": 46, "text": "In-between these two conditions lies the definition of Carmichael number of order m for any positive integer m as any composite number n such that pn is an endomorphism on every Zn-algebra that can be generated as Zn-module by m elements. Carmichael numbers of order 1 are just the ordinary Carmichael numbers.", "title": "Higher-order Carmichael numbers" }, { "paragraph_id": 47, "text": "According to Howe, 17 · 31 · 41 · 43 · 89 · 97 · 167 · 331 is an order 2 Carmichael number. This product is equal to 443,372,888,629,441.", "title": "Higher-order Carmichael numbers" }, { "paragraph_id": 48, "text": "Korselt's criterion can be generalized to higher-order Carmichael numbers, as shown by Howe.", "title": "Higher-order Carmichael numbers" }, { "paragraph_id": 49, "text": "A heuristic argument, given in the same paper, appears to suggest that there are infinitely many Carmichael numbers of order m, for any m. However, not a single Carmichael number of order 3 or above is known.", "title": "Higher-order Carmichael numbers" } ]
In number theory, a Carmichael number is a composite number n which in modular arithmetic satisfies the congruence relation b^n ≡ b (mod n) for all integers b. The relation may also be expressed in the form b^(n−1) ≡ 1 (mod n) for all integers b which are relatively prime to n. Carmichael numbers are named after American mathematician Robert Carmichael, the term having been introduced by Nicolaas Beeger in 1950. They are infinite in number. They constitute the comparatively rare instances where the strict converse of Fermat's little theorem does not hold. This fact precludes the use of that theorem as an absolute test of primality. The Carmichael numbers form the subset K_1 of the Knödel numbers.
2002-01-12T12:11:16Z
2023-11-20T20:58:20Z
[ "Template:Main", "Template:Reflist", "Template:Springer", "Template:MathWorld", "Template:Short description", "Template:Cite book", "Template:Cite journal", "Template:Failed verification", "Template:Space", "Template:Cite web", "Template:Classes of natural numbers", "Template:OEIS", "Template:Cite conference", "Template:MathPages" ]
https://en.wikipedia.org/wiki/Carmichael_number
7,727
Controlled Substances Act
The Controlled Substances Act (CSA) is the statute establishing federal U.S. drug policy under which the manufacture, importation, possession, use, and distribution of certain substances is regulated. It was passed by the 91st United States Congress as Title II of the Comprehensive Drug Abuse Prevention and Control Act of 1970 and signed into law by President Richard Nixon. The Act also served as the national implementing legislation for the Single Convention on Narcotic Drugs. The legislation created five schedules (classifications), with varying qualifications for a substance to be included in each. Two federal agencies, the Drug Enforcement Administration (DEA) and the Food and Drug Administration (FDA), determine which substances are added to or removed from the various schedules, although the statute passed by Congress created the initial listing. Congress has sometimes scheduled other substances through legislation such as the Hillory J. Farias and Samantha Reid Date-Rape Prevention Act of 2000, which placed gamma hydroxybutyrate (GHB) in Schedule I and sodium oxybate (the isolated sodium salt of GHB) in Schedule III when used under an FDA New Drug Application (NDA) or Investigational New Drug (IND). Classification decisions are required to be made on criteria including potential for abuse (an undefined term), currently accepted medical use in treatment in the United States, and international treaties. The nation first outlawed addictive drugs in the early 1900s, and the International Opium Convention helped lead to international agreements regulating trade. The Food and Drugs Act of 1906 was the beginning of over 200 laws concerning public health and consumer protections. Others were the Federal Food, Drug, and Cosmetic Act (1938) and the Kefauver–Harris Amendment of 1962. In 1969, President Richard Nixon announced that the Attorney General, John N. Mitchell, was preparing a comprehensive new measure to more effectively meet the narcotic and dangerous drug problems at the federal level by combining all existing federal laws into a single new statute. With the help of John Dean, head of the White House Counsel's office; Michael Sonnenreich, Executive Director of the Shafer Commission; and John Ingersoll, Director of the BNDD, who together created and wrote the legislation, Mitchell was able to present Nixon with the bill. The CSA not only combined existing federal drug laws and expanded their scope, but it also changed the nature of federal drug law policies and expanded federal law enforcement pertaining to controlled substances. Title II, Part F of the Comprehensive Drug Abuse Prevention and Control Act of 1970 established the National Commission on Marijuana and Drug Abuse—known as the Shafer Commission after its chairman, Raymond P. Shafer—to study cannabis abuse in the United States. During his presentation of the commission's First Report to Congress, Sonnenreich and Shafer recommended the decriminalization of marijuana in small amounts, with Shafer stating, [T]he criminal law is too harsh a tool to apply to personal possession even in the effort to discourage use. It implies an overwhelming indictment of the behavior which we believe is not appropriate. The actual and potential harm of use of the drug is not great enough to justify intrusion by the criminal law into private behavior, a step which our society takes only with the greatest reluctance.
Rufus King notes that this stratagem was similar to that used by Harry Anslinger when he consolidated the previous anti-drug treaties into the Single Convention and took the opportunity to add new provisions that otherwise might have been unpalatable to the international community. According to David T. Courtwright, "the Act was part of an omnibus reform package designed to rationalize, and in some respects to liberalize, American drug policy." (Courtwright noted, however, that in its intent the Act became not libertarian but repressive to the point of tyranny: a cruel and arbitrary exercise of power.) It eliminated mandatory minimum sentences and provided support for drug treatment and research. King notes that the rehabilitation clauses were added as a compromise to Senator Jim Hughes, who favored a moderate approach. The bill, as introduced by Senator Everett Dirksen, ran to 91 pages. While it was being drafted, the Uniform Controlled Substances Act, to be passed by state legislatures, was also being drafted by the Department of Justice; its wording closely mirrored the Controlled Substances Act. Since its enactment in 1970, the Act has been amended numerous times: The Controlled Substances Act consists of two subchapters. Subchapter I defines Schedules I–V, lists chemicals used in the manufacture of controlled substances, and differentiates lawful and unlawful manufacturing, distribution, and possession of controlled substances, including possession of Schedule I drugs for personal use; this subchapter also specifies the dollar amounts of fines and durations of prison terms for violations. Subchapter II describes the laws for exportation and importation of controlled substances, again specifying fines and prison terms for violations. The Drug Enforcement Administration was established in 1973, combining the Bureau of Narcotics and Dangerous Drugs (BNDD) and Customs' drug agents. Proceedings to add, delete, or change the schedule of a drug or other substance may be initiated by the DEA, the Department of Health and Human Services (HHS), or by petition from any interested party, including the manufacturer of a drug, a medical society or association, a pharmacy association, a public interest group concerned with drug abuse, a state or local government agency, or an individual citizen. When a petition is received by the DEA, the agency begins its own investigation of the drug. The DEA may begin an investigation of a drug at any time based upon information received from laboratories, state and local law enforcement and regulatory agencies, or other sources of information. Once the DEA has collected the necessary data, the Deputy Administrator of the DEA requests from HHS a scientific and medical evaluation and recommendation as to whether the drug or other substance should be controlled or removed from control. This request is sent to the Assistant Secretary of Health of HHS. Then, HHS solicits information from the Commissioner of the Food and Drug Administration and evaluations and recommendations from the National Institute on Drug Abuse and, on occasion, from the scientific and medical community at large. The Assistant Secretary, by authority of the Secretary, compiles the information and transmits back to the DEA a medical and scientific evaluation regarding the drug or other substance, a recommendation as to whether the drug should be controlled, and in what schedule it should be placed.
The HHS recommendation on scheduling is binding to the extent that if HHS recommends, based on its medical and scientific evaluation, that the substance not be controlled, then the DEA may not control the substance. Once the DEA has received the scientific and medical evaluation from HHS, the DEA Administrator evaluates all available data and makes a final decision whether to propose that a drug or other substance be controlled and into which schedule it should be placed. Under certain circumstances, the Government may temporarily schedule a drug without following the normal procedure. An example is when international treaties require control of a substance. 21 U.S.C. § 811(h) allows the Attorney General to temporarily place a substance in Schedule I "to avoid an imminent hazard to the public safety". Thirty days' notice is required before the order can be issued, and the scheduling expires after a year. The period may be extended six months if rulemaking proceedings to permanently schedule the drug are in progress. In any case, once these proceedings are complete, the temporary order is automatically vacated. Unlike ordinary scheduling proceedings, such temporary orders are not subject to judicial review. The CSA creates a closed system of distribution for those authorized to handle controlled substances. The cornerstone of this system is the registration of all those authorized by the DEA to handle controlled substances. All individuals and firms that are registered are required to maintain complete and accurate inventories and records of all transactions involving controlled substances, as well as security for the storage of controlled substances. The Congressional findings in 21 USC §§ 801(7), 801a(2), and 801a(3) state that a major purpose of the CSA is to "enable the United States to meet all of its obligations" under international treaties. The CSA bears many resemblances to these Conventions. Both the CSA and the treaties set out a system for classifying controlled substances in several schedules in accordance with the binding scientific and medical findings of a public health authority. Under 21 U.S.C. § 811 of the CSA, that authority is the Secretary of Health and Human Services (HHS). Under Article 3 of the Single Convention and Article 2 of the Convention on Psychotropic Substances, the World Health Organization is that authority. The domestic and international legal nature of these treaty obligations must be considered in light of the supremacy of the United States Constitution over treaties or acts and the equality of treaties and Congressional acts. In Reid v. Covert the Supreme Court of the United States addressed both these issues directly and clearly holding: [N]o agreement with a foreign nation can confer power on the Congress, or on any other branch of Government, which is free from the restraints of the Constitution. Article VI, the Supremacy Clause of the Constitution, declares: "This Constitution, and the Laws of the United States which shall be made in Pursuance thereof, and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; . . ." There is nothing in this language which intimates that treaties and laws enacted pursuant to them do not have to comply with the provisions of the Constitution. Nor is there anything in the debates which accompanied the drafting and ratification of the Constitution which even suggests such a result. 
These debates, as well as the history that surrounds the adoption of the treaty provision in Article VI, make it clear that the reason treaties were not limited to those made in "pursuance" of the Constitution was so that agreements made by the United States under the Articles of Confederation, including the important peace treaties which concluded the Revolutionary War, would remain in effect. It would be manifestly contrary to the objectives of those who created the Constitution, as well as those who were responsible for the Bill of Rights—let alone alien to our entire constitutional history and tradition—to construe Article VI as permitting the United States to exercise power under an international agreement without observing constitutional prohibitions. In effect, such construction would permit amendment of that document in a manner not sanctioned by Article V. The prohibitions of the Constitution were designed to apply to all branches of the National Government, and they cannot be nullified by the Executive or by the Executive and the Senate combined. There is nothing new or unique about what we say here. This Court has regularly and uniformly recognized the supremacy of the Constitution over a treaty. For example, in Geofroy v. Riggs, 133 U. S. 258, 133 U. S. 267, it declared: "The treaty power, as expressed in the Constitution, is in terms unlimited except by those restraints which are found in that instrument against the action of the government or of its departments, and those arising from the nature of the government itself and of that of the States. It would not be contended that it extends so far as to authorize what the Constitution forbids, or a change in the character of the government, or in that of one of the States, or a cession of any portion of the territory of the latter, without its consent." This Court has repeatedly taken the position that an Act of Congress, which must comply with the Constitution, is on a full parity with a treaty, and that, when a statute which is subsequent in time is inconsistent with a treaty, the statute to the extent of conflict renders the treaty null. It would be completely anomalous to say that a treaty need not comply with the Constitution when such an agreement can be overridden by a statute that must conform to that instrument. According to the Cato Institute, these treaties only bind (legally obligate) the United States to comply with them as long as that nation agrees to remain a state party to these treaties. The U.S. Congress and the President of the United States have the absolute sovereign right to withdraw from or abrogate at any time these two instruments, in accordance with said nation's Constitution, at which point these treaties will cease to bind that nation in any way, shape, or form. A provision for automatic compliance with treaty obligations is found at 21 U.S.C. § 811(d), which also establishes mechanisms for amending international drug control regulations to correspond with HHS findings on scientific and medical issues. If control of a substance is mandated by the Single Convention, the Attorney General is required to "issue an order controlling such drug under the schedule he deems most appropriate to carry out such obligations," without regard to the normal scheduling procedure or the findings of the HHS Secretary. However, the Secretary has great influence over any drug scheduling proposal under the Single Convention, because 21 U.S.C. 
§ 811(d)(2)(B) gives the Secretary the power to "evaluate the proposal and furnish a recommendation to the Secretary of State which shall be binding on the representative of the United States in discussions and negotiations relating to the proposal." Similarly, if the United Nations Commission on Narcotic Drugs adds or transfers a substance to a schedule established by the Convention on Psychotropic Substances, so that current U.S. regulations on the drug do not meet the treaty's requirements, the Secretary is required to issue a recommendation on how the substance should be scheduled under the CSA. If the Secretary agrees with the Commission's scheduling decision, he can recommend that the Attorney General initiate proceedings to reschedule the drug accordingly. If the HHS Secretary disagrees with the UN controls, the Attorney General must temporarily place the drug in Schedule IV or V (whichever meets the minimum requirements of the treaty) and exclude the substance from any regulations not mandated by the treaty. The Secretary is required to request that the Secretary of State take action, through the Commission or the UN Economic and Social Council, to remove the drug from international control or transfer it to a different schedule under the Convention. The temporary scheduling expires as soon as control is no longer needed to meet international treaty obligations. This provision was invoked in 1984 to place Rohypnol (flunitrazepam) in Schedule IV. The drug did not then meet the Controlled Substances Act's criteria for scheduling; however, control was required by the Convention on Psychotropic Substances. In 1999, an FDA official explained to Congress: Rohypnol is not approved or available for medical use in the United States, but it is temporarily controlled in Schedule IV pursuant to a treaty obligation under the 1971 Convention on Psychotropic Substances. At the time flunitrazepam was placed temporarily in Schedule IV (November 5, 1984), there was no evidence of abuse or trafficking of the drug in the United States. The Cato Institute's Handbook for Congress calls for repealing the CSA, an action that would likely bring the United States into conflict with international law, were the United States not to exercise its sovereign right to withdraw from and/or abrogate the Single Convention on Narcotic Drugs and/or the 1971 Convention on Psychotropic Substances prior to repealing the Controlled Substances Act. The exception would be if the U.S. were to claim that the treaty obligations violate the United States Constitution. Many articles in these treaties—such as Article 35 and Article 36 of the Single Convention—are prefaced with phrases such as "Having due regard to their constitutional, legal and administrative systems, the Parties shall . . ." or "Subject to its constitutional limitations, each Party shall . . ." According to former United Nations Drug Control Programme Chief of Demand Reduction Cindy Fazey, "This has been used by the USA not to implement part of article 3 of the 1988 Convention, which prevents inciting others to use narcotic or psychotropic drugs, on the basis that this would be in contravention of their constitutional amendment guaranteeing freedom of speech". There are five different schedules of controlled substances, numbered I–V. The CSA describes the different schedules based on three factors: The following table gives a summary of the different schedules.
Placing a drug or other substance in a certain schedule or removing it from a certain schedule is primarily based on 21 USC §§ 801, 801a, 802, 811, 812, 813, and 814. Every schedule otherwise requires finding and specifying the "potential for abuse" before a substance can be placed in that schedule. The specific classification of any given drug or other substance is usually a source of controversy, as is the purpose and effectiveness of the entire regulatory scheme. The term "controlled substance" means a drug or other substance, or immediate precursor, included in schedule I, II, III, IV, or V of part B of this subchapter. The term does not include distilled spirits, wine, absinthe, malt beverages, nicotine or tobacco, as those terms are defined or used in subtitle E of the Internal Revenue Code of 1986. Some have argued that this is an important exemption, since alcohol and tobacco are two of the most widely used drugs in the United States. Schedule I substances are described as those that have all of the following findings: No prescriptions may be written for Schedule I substances, and such substances are subject to production quotas imposed by the DEA. Under the DEA's interpretation of the CSA, a drug does not necessarily have to have the same "high potential for abuse" as heroin, for example, to merit placement in Schedule I: [W]hen it comes to a drug that is currently listed in schedule I, if it is undisputed that such drug has no currently accepted medical use in treatment in the United States and a lack of accepted safety for use under medical supervision, and it is further undisputed that the drug has at least some potential for abuse sufficient to warrant control under the CSA, the drug must remain in schedule I. In such circumstances, placement of the drug in schedules II through V would conflict with the CSA since such drug would not meet the criterion of "a currently accepted medical use in treatment in the United States." 21 USC 812(b). (emphasis added) Drugs listed in this control schedule include: In addition to the named substance, usually all possible ethers, esters, salts and stereoisomers of these substances are also controlled, as are "analogues", which are chemically similar substances. Schedule II substances are those that have the following findings: Except when dispensed directly to an ultimate user by a practitioner other than a pharmacist, no controlled substance in Schedule II, which is a prescription drug as determined under the Federal Food, Drug, and Cosmetic Act (21 USC 301 et seq.), may be dispensed without the written or electronically transmitted (21 CFR 1306.08) prescription of a practitioner, except that in emergency situations, as prescribed by the Secretary by regulation after consultation with the Attorney General, such drug may be dispensed upon oral prescription in accordance with section 503(b) of that Act (21 USC 353 (b)). With exceptions, an original prescription is always required, even though faxing in a prescription in advance to a pharmacy by a prescriber is allowed. Prescriptions shall be retained in conformity with the requirements of section 827 of this title. No prescription for a controlled substance in Schedule II may be refilled. Notably, no emergency-situation provisions exist outside the Controlled Substances Act's "closed system", although this closed system may be unavailable or nonfunctioning in the event of accidents in remote areas or disasters such as hurricanes and earthquakes.
Acts that would widely be considered morally imperative in such circumstances remain offenses subject to heavy penalties.

These drugs vary in potency: for example, fentanyl is about 80 times as potent as morphine (heroin is roughly two times as potent). More significantly, they vary in nature; pharmacology and CSA scheduling have only a weak relationship.

Because refills of prescriptions for Schedule II substances are not allowed, long-term use of such a substance can be burdensome for both the practitioner and the patient. To provide relief, in 2007, 21 CFR 1306.12 was amended (at 72 FR 64921) to allow practitioners to write up to three prescriptions at once, providing up to a 90-day supply, with each prescription specifying the earliest date on which it may be filled.

Drugs in this schedule include cocaine, methamphetamine, methadone, oxycodone, fentanyl, morphine, and amphetamine.

Schedule III substances are those that have the following findings: a potential for abuse less than the substances in Schedules I and II; a currently accepted medical use in treatment in the United States; and a risk that abuse may lead to moderate or low physical dependence or high psychological dependence.

Except when dispensed directly by a practitioner, other than a pharmacist, to an ultimate user, no controlled substance in Schedule III or IV, which is a prescription drug as determined under the Federal Food, Drug, and Cosmetic Act (21 USC 301 et seq.), may be dispensed without a written, electronically transmitted, or oral prescription in conformity with section 503(b) of that Act (21 USC 353(b)). Such prescriptions may not be filled or refilled more than six months after the date thereof or be refilled more than five times after the date of the prescription unless renewed by the practitioner.

A prescription for controlled substances in Schedules III, IV, and V issued by a practitioner may be communicated orally, in writing, electronically, or by facsimile to the pharmacist, and may be refilled if so authorized on the prescription or by call-in. Control of wholesale distribution is somewhat less stringent than for Schedule II drugs. Provisions for emergency situations are less restrictive within the "closed system" of the Controlled Substances Act than for Schedule II, though no schedule has provisions to address circumstances where the closed system is unavailable, nonfunctioning, or otherwise inadequate.

Drugs in this schedule include ketamine, anabolic steroids, buprenorphine, and preparations containing limited quantities of codeine.

Schedule IV substances are those that have the following findings: a low potential for abuse relative to the substances in Schedule III; a currently accepted medical use in treatment in the United States; and a risk that abuse may lead to limited physical or psychological dependence relative to the substances in Schedule III. Control measures are similar to Schedule III. Prescriptions for Schedule IV drugs may be refilled up to five times within a six-month period. A prescription for controlled substances in Schedules III, IV, and V issued by a practitioner may be communicated orally, in writing, electronically, or by facsimile to the pharmacist, and may be refilled if so authorized on the prescription or by call-in.

Drugs in this schedule include benzodiazepines such as alprazolam and diazepam, as well as zolpidem, tramadol, and carisoprodol.

Schedule V substances are those that have the following findings: a low potential for abuse relative to the substances in Schedule IV; a currently accepted medical use in treatment in the United States; and a risk that abuse may lead to limited physical or psychological dependence relative to the substances in Schedule IV. No controlled substance in Schedule V which is a drug may be distributed or dispensed other than for a medical purpose. A prescription for controlled substances in Schedules III, IV, and V issued by a practitioner may be communicated orally, in writing, electronically, or by facsimile to the pharmacist, and may be refilled if so authorized on the prescription or by call-in.
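Taken together, Schedules II through V impose progressively looser refill rules: none at all for Schedule II, at most five refills within six months for Schedules III and IV, and refills as authorized for Schedule V. The following is a minimal sketch of these constraints in code; the field names and the simplified six-month window are assumptions made for illustration, not features of any real pharmacy system.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Prescription:
    schedule: int            # CSA schedule, 1-5
    written_on: date
    refills_authorized: int
    refills_used: int = 0

def may_refill(rx: Prescription, today: date) -> bool:
    """Apply the schedule-dependent refill limits described in the text."""
    if rx.schedule in (1, 2):
        # Schedule I substances cannot be prescribed at all, and
        # Schedule II prescriptions may never be refilled.
        return False
    if rx.schedule in (3, 4):
        # Schedules III-IV: at most five refills, none later than six
        # months after the prescription date (approximated as 183 days).
        within_window = today <= rx.written_on + timedelta(days=183)
        return within_window and rx.refills_used < min(rx.refills_authorized, 5)
    # Schedule V: refills as authorized on the prescription or by call-in.
    return rx.refills_used < rx.refills_authorized

# Example: a Schedule III prescription with three authorized refills.
rx = Prescription(schedule=3, written_on=date(2024, 1, 10), refills_authorized=3)
print(may_refill(rx, date(2024, 4, 1)))   # True: within window, refills left
print(may_refill(rx, date(2024, 9, 1)))   # False: more than six months out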
Drugs in Schedule V include cough preparations containing limited quantities of codeine, as well as pregabalin and lacosamide.

Some psychoactive drugs are not controlled by the act at all and are allowed for sale, including sale intended for recreational use, at the federal level; these include alcohol, caffeine, and nicotine (tobacco). Others are allowed for sale as dietary supplements, but are not specifically regulated or intended for recreational use.

The Controlled Substances Act also provides for federal regulation of precursors used to manufacture some of the controlled substances. The DEA's list of regulated chemicals is modified when the United States Attorney General determines that illegal manufacturing processes have changed.

In addition to the CSA, because pseudoephedrine (PSE) and ephedrine are widely used in the manufacture of methamphetamine, the U.S. Congress passed the Methamphetamine Precursor Control Act, which places restrictions on the sale of any medicine containing pseudoephedrine. That bill was then superseded by the Combat Methamphetamine Epidemic Act of 2005, which was passed as an amendment to the Patriot Act renewal and included wider and more comprehensive restrictions on the sale of PSE-containing products. This law requires a customer signature in a "log-book" and presentation of valid photo ID in order to purchase PSE-containing products from any retailer. Additionally, the law restricts an individual to the retail purchase of no more than three packages or 3.6 grams of such product per day, and no more than 9 grams in a single month. A violation of this statute constitutes a misdemeanor. Retailers now commonly require PSE-containing products to be sold from behind the pharmacy or service counter. This affects many preparations which were previously available over-the-counter without restriction, such as Actifed and its generic equivalents.

A common misunderstanding amongst researchers is that most national laws (including the Controlled Substances Act) allow the supply and use of small amounts of a controlled substance for non-clinical, non-in vivo research without a licence. A typical use case might be having a few milligrams or microlitres of a controlled substance within a larger chemical collection (often tens of thousands of chemicals) for in vitro screening or sale. Researchers often believe that there is some form of "research exemption" for such small amounts. This incorrect view may be further reinforced by R&D chemical suppliers stating, and asking scientists to confirm, that anything bought is for research use only.

A further misconception is that the Controlled Substances Act simply lists a few hundred substances (e.g. MDMA, fentanyl, amphetamine) and that compliance can be achieved by checking a CAS number, chemical name, or similar identifier. In reality, in most cases all ethers, esters, salts, and stereoisomers of a listed substance are also controlled, and it is impossible to simply list all of these. The act contains several "generic statements" or "chemical space" laws, which aim to control all chemicals similar to the "named" substance; these provide detailed descriptions similar to Markush structures, including ones for fentanyl and for synthetic cannabinoids.

Due to this complexity in the legislation, the identification of controlled chemicals in research or chemical supply is often carried out computationally on the chemical structure, either by in-house systems maintained by a company or by the use of commercial software solutions, as sketched below.
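As an illustration of what such structure-based screening can look like, here is a minimal sketch using the open-source RDKit toolkit. The SMARTS pattern is a deliberately simplified, hypothetical stand-in for a statutory "chemical space" definition (real Markush-style definitions, such as the CSA's fentanyl-related language, are far more detailed), and the function and variable names are invented for this example.

from rdkit import Chem

# Hypothetical generic pattern: a 4-anilidopiperidine core, the scaffold
# shared by fentanyl and many of its analogues. Illustrative only; not
# the statutory definition.
FENTANYL_CORE = Chem.MolFromSmarts("O=C(N(c1ccccc1)C2CCNCC2)")

# Exact-match list of named substances, keyed by canonical SMILES.
# A real system would hold thousands of entries plus salt/isomer logic.
NAMED_SUBSTANCES = {
    Chem.CanonSmiles("CCC(=O)N(c1ccccc1)C2CCN(CCc3ccccc3)CC2"): "fentanyl",
}

def flag_compound(smiles: str) -> list[str]:
    """Return the reasons a structure may be controlled (empty if none)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return ["unparseable structure - manual review required"]
    flags = []
    canonical = Chem.MolToSmiles(mol)
    if canonical in NAMED_SUBSTANCES:
        flags.append("named substance: " + NAMED_SUBSTANCES[canonical])
    if mol.HasSubstructMatch(FENTANYL_CORE):
        flags.append("matches generic (Markush-style) fentanyl pattern")
    return flags

# Screening a tiny collection: ethanol passes, fentanyl is flagged twice.
for smi in ("CCO", "CCC(=O)N(c1ccccc1)C2CCN(CCc3ccccc3)CC2"):
    print(smi, "->", flag_compound(smi) or "no flags")

In practice such checks are combined with salt stripping, stereochemistry handling, and jurisdiction-specific rule sets, which is part of why commercial compliance software is common.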
Automated systems are often required because many research operations have chemical collections running into tens of thousands of molecules at the 1–5 mg scale, and these are likely to include controlled substances, especially within medicinal chemistry research, even if the company's core research does not concern narcotic or psychotropic drugs. Such molecules may not have been controlled when they were created, but may subsequently have been declared controlled, may fall within chemical space close to known controlled substances, or may be used as tool compounds, precursors, or synthetic intermediates.

Historically, in an attempt to capture psychoactive chemicals that are chemically similar to a controlled substance but not specifically listed, the CSA has also controlled "analogues" of many listed controlled substances. The definition of "analogue" is kept deliberately vague, presumably to make the rule harder to circumvent: because it is not clear exactly what is and is not controlled, an element of risk and deterrence falls on those performing the supply. It is then up to the courts to decide whether a specific chemical is an analogue, often via a "battle of experts" for the defense and prosecution, which can lead to extended and more uncertain prosecutions. The use of the "analogue" definition also makes it more difficult for companies involved in the legitimate supply of chemicals for research and industrial purposes to know whether a chemical is regulated under the CSA.

Starting in 2012, with the Synthetic Drug Abuse Prevention Act, and later with a 2018 amendment to the CSA defining fentanyl-related chemical space, the CSA began to use Markush descriptions to define clearly which analogues, or which regions of chemical space, are controlled. These "chemical space", "chemical family", "generic", or "Markush" statements (depending on the legislation's terminology) have been used for many years by other countries, notably the UK in the Misuse of Drugs Act. They have the advantage of clearly defining what is controlled, making prosecutions easier and compliance by legitimate companies simpler. The downside is that they tend to be harder for non-chemists to understand, and they give those wishing to supply chemicals for illegitimate reasons something to "aim" for in terms of non-controlled chemical space. For both Markush- and analogue-type approaches, computational systems are typically used to flag likely regulated chemicals.

The CSA does not include a definition of "drug abuse". In addition, research shows that certain substances on Schedule I, the schedule reserved for drugs with no accepted medical use and a high potential for abuse, actually have accepted medical uses, have low potential for abuse, or both. One of those substances is cannabis, which has been either decriminalized or legalized in 33 states of the United States.

Similar legislation outside of the United States includes the United Kingdom's Misuse of Drugs Act 1971 and Canada's Controlled Drugs and Substances Act.
[ { "paragraph_id": 0, "text": "The Controlled Substances Act (CSA) is the statute establishing federal U.S. drug policy under which the manufacture, importation, possession, use, and distribution of certain substances is regulated. It was passed by the 91st United States Congress as Title II of the Comprehensive Drug Abuse Prevention and Control Act of 1970 and signed into law by President Richard Nixon. The Act also served as the national implementing legislation for the Single Convention on Narcotic Drugs.", "title": "" }, { "paragraph_id": 1, "text": "The legislation created five schedules (classifications), with varying qualifications for a substance to be included in each. Two federal agencies, the Drug Enforcement Administration (DEA) and the Food and Drug Administration (FDA), determine which substances are added to or removed from the various schedules, although the statute passed by Congress created the initial listing. Congress has sometimes scheduled other substances through legislation such as the Hillory J. Farias and Samantha Reid Date-Rape Prevention Act of 2000, which placed gamma hydroxybutyrate (GHB) in Schedule I and sodium oxybate (the isolated sodium salt in GHB) in Schedule III when used under an FDA New Drug Application (NDA) or Investigational New Drug (IND). Classification decisions are required to be made on criteria including potential for abuse (an undefined term), currently accepted medical use in treatment in the United States, and international treaties.", "title": "" }, { "paragraph_id": 2, "text": "The nation first outlawed addictive drugs in the early 1900s and the International Opium Convention helped lead international agreements regulating trade. The Food and Drugs Act of 1906 was the beginning of over 200 laws concerning public health and consumer protections. Others were the Federal Food, Drug, and Cosmetic Act (1938), and the Kefauver Harris Amendment of 1962.", "title": "History" }, { "paragraph_id": 3, "text": "In 1969, President Richard Nixon announced that the Attorney General, John N. Mitchell, was preparing a comprehensive new measure to more effectively meet the narcotic and dangerous drug problems at the federal level by combining all existing federal laws into a single new statute. With the help of White House Counsel head, John Dean; the Executive Director of the Shafer Commission, Michael Sonnenreich; and the Director of the BNDD, John Ingersoll creating and writing the legislation, Mitchell was able to present Nixon with the bill.", "title": "History" }, { "paragraph_id": 4, "text": "The CSA not only combined existing federal drug laws and expanded their scope, but it also changed the nature of federal drug law policies and expanded federal law enforcement pertaining to controlled substances. Title II, Part F of the Comprehensive Drug Abuse Prevention and Control Act of 1970 established the National Commission on Marijuana and Drug Abuse—known as the Shafer Commission after its chairman, Raymond P. Shafer—to study cannabis abuse in the United States. During his presentation of the commission's First Report to Congress, Sonnenreich and Shafer recommended the decriminalization of marijuana in small amounts, with Shafer stating,", "title": "History" }, { "paragraph_id": 5, "text": "[T]he criminal law is too harsh a tool to apply to personal possession even in the effort to discourage use. It implies an overwhelming indictment of the behavior which we believe is not appropriate. 
The actual and potential harm of use of the drug is not great enough to justify intrusion by the criminal law into private behavior, a step which our society takes only with the greatest reluctance.", "title": "History" }, { "paragraph_id": 6, "text": "Rufus King notes that this stratagem was similar to that used by Harry Anslinger when he consolidated the previous anti-drug treaties into the Single Convention and took the opportunity to add new provisions that otherwise might have been unpalatable to the international community. According to David T. Courtwright, \"the Act was part of an omnibus reform package designed to rationalize, and in some respects to liberalize, American drug policy.\" (Courtwright noted that the Act became, not libertarian, but instead repressionistic to the point of tyrannical in its intent; a cruel and/or arbitrary exercise of power). It eliminated mandatory minimum sentences and provided support for drug treatment and research.", "title": "History" }, { "paragraph_id": 7, "text": "King notes that the rehabilitation clauses were added as a compromise to Senator Jim Hughes, who favored a moderate approach. The bill, as introduced by Senator Everett Dirksen, ran to 91 pages. While it was being drafted, the Uniform Controlled Substances Act, to be passed by state legislatures, was also being drafted by the Department of Justice; its wording closely mirrored the Controlled Substances Act.", "title": "History" }, { "paragraph_id": 8, "text": "Since its enactment in 1970, the Act has been amended numerous times:", "title": "History" }, { "paragraph_id": 9, "text": "The Controlled Substances Act consists of two subchapters. Subchapter I defines Schedules I–V, lists chemicals used in the manufacture of controlled substances, and differentiates lawful and unlawful manufacturing, distribution, and possession of controlled substances, including possession of Schedule I drugs for personal use; this subchapter also specifies the dollar amounts of fines and durations of prison terms for violations. Subchapter II describes the laws for exportation and importation of controlled substances, again specifying fines and prison terms for violations.", "title": "Statute content" }, { "paragraph_id": 10, "text": "The Drug Enforcement Administration was established in 1973, combining the Bureau of Narcotics and Dangerous Drugs (BNDD) and Customs' drug agents. Proceedings to add, delete, or change the schedule of a drug or other substance may be initiated by the DEA, the Department of Health and Human Services (HHS), or by petition from any interested party, including the manufacturer of a drug, a medical society or association, a pharmacy association, a public interest group concerned with drug abuse, a state or local government agency, or an individual citizen. When a petition is received by the DEA, the agency begins its own investigation of the drug.", "title": "Enforcement authority" }, { "paragraph_id": 11, "text": "The DEA may begin an investigation of a drug at any time based upon information received from laboratories, state and local law enforcement and regulatory agencies, or other sources of information. Once the DEA has collected the necessary data, the Deputy Administrator of DEA, requests from HHS a scientific and medical evaluation and recommendation as to whether the drug or other substance should be controlled or removed from control.", "title": "Enforcement authority" }, { "paragraph_id": 12, "text": "This request is sent to the Assistant Secretary of Health of HHS. 
Then, HHS solicits information from the Commissioner of the Food and Drug Administration and evaluations and recommendations from the National Institute on Drug Abuse and, on occasion, from the scientific and medical community at large. The Assistant Secretary, by authority of the Secretary, compiles the information and transmits back to the DEA a medical and scientific evaluation regarding the drug or other substance, a recommendation as to whether the drug should be controlled, and in what schedule it should be placed.", "title": "Enforcement authority" }, { "paragraph_id": 13, "text": "The HHS recommendation on scheduling is binding to the extent that if HHS recommends, based on its medical and scientific evaluation, that the substance not be controlled, then the DEA may not control the substance. Once the DEA has received the scientific and medical evaluation from HHS, the DEA Administrator evaluates all available data and makes a final decision whether to propose that a drug or other substance be controlled and into which schedule it should be placed. Under certain circumstances, the Government may temporarily schedule a drug without following the normal procedure.", "title": "Enforcement authority" }, { "paragraph_id": 14, "text": "An example is when international treaties require control of a substance. 21 U.S.C. § 811(h) allows the Attorney General to temporarily place a substance in Schedule I \"to avoid an imminent hazard to the public safety\". Thirty days' notice is required before the order can be issued, and the scheduling expires after a year. The period may be extended six months if rulemaking proceedings to permanently schedule the drug are in progress. In any case, once these proceedings are complete, the temporary order is automatically vacated. Unlike ordinary scheduling proceedings, such temporary orders are not subject to judicial review.", "title": "Enforcement authority" }, { "paragraph_id": 15, "text": "The CSA creates a closed system of distribution for those authorized to handle controlled substances. The cornerstone of this system is the registration of all those authorized by the DEA to handle controlled substances. All individuals and firms that are registered are required to maintain complete and accurate inventories and records of all transactions involving controlled substances, as well as security for the storage of controlled substances.", "title": "Enforcement authority" }, { "paragraph_id": 16, "text": "The Congressional findings in 21 USC §§ 801(7), 801a(2), and 801a(3) state that a major purpose of the CSA is to \"enable the United States to meet all of its obligations\" under international treaties. The CSA bears many resemblances to these Conventions. Both the CSA and the treaties set out a system for classifying controlled substances in several schedules in accordance with the binding scientific and medical findings of a public health authority. Under 21 U.S.C. § 811 of the CSA, that authority is the Secretary of Health and Human Services (HHS). Under Article 3 of the Single Convention and Article 2 of the Convention on Psychotropic Substances, the World Health Organization is that authority.", "title": "Treaty obligations" }, { "paragraph_id": 17, "text": "The domestic and international legal nature of these treaty obligations must be considered in light of the supremacy of the United States Constitution over treaties or acts and the equality of treaties and Congressional acts. In Reid v. 
Covert the Supreme Court of the United States addressed both these issues directly and clearly holding:", "title": "Treaty obligations" }, { "paragraph_id": 18, "text": "[N]o agreement with a foreign nation can confer power on the Congress, or on any other branch of Government, which is free from the restraints of the Constitution.", "title": "Treaty obligations" }, { "paragraph_id": 19, "text": "Article VI, the Supremacy Clause of the Constitution, declares:", "title": "Treaty obligations" }, { "paragraph_id": 20, "text": "\"This Constitution, and the Laws of the United States which shall be made in Pursuance thereof, and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; . . .\"", "title": "Treaty obligations" }, { "paragraph_id": 21, "text": "There is nothing in this language which intimates that treaties and laws enacted pursuant to them do not have to comply with the provisions of the Constitution. Nor is there anything in the debates which accompanied the drafting and ratification of the Constitution which even suggests such a result. These debates, as well as the history that surrounds the adoption of the treaty provision in Article VI, make it clear that the reason treaties were not limited to those made in \"pursuance\" of the Constitution was so that agreements made by the United States under the Articles of Confederation, including the important peace treaties which concluded the Revolutionary War, would remain in effect. It would be manifestly contrary to the objectives of those who created the Constitution, as well as those who were responsible for the Bill of Rights—let alone alien to our entire constitutional history and tradition—to construe Article VI as permitting the United States to exercise power under an international agreement without observing constitutional prohibitions. In effect, such construction would permit amendment of that document in a manner not sanctioned by Article V. The prohibitions of the Constitution were designed to apply to all branches of the National Government, and they cannot be nullified by the Executive or by the Executive and the Senate combined.", "title": "Treaty obligations" }, { "paragraph_id": 22, "text": "There is nothing new or unique about what we say here. This Court has regularly and uniformly recognized the supremacy of the Constitution over a treaty. For example, in Geofroy v. Riggs, 133 U. S. 258, 133 U. S. 267, it declared:", "title": "Treaty obligations" }, { "paragraph_id": 23, "text": "\"The treaty power, as expressed in the Constitution, is in terms unlimited except by those restraints which are found in that instrument against the action of the government or of its departments, and those arising from the nature of the government itself and of that of the States. It would not be contended that it extends so far as to authorize what the Constitution forbids, or a change in the character of the government, or in that of one of the States, or a cession of any portion of the territory of the latter, without its consent.\"", "title": "Treaty obligations" }, { "paragraph_id": 24, "text": "This Court has repeatedly taken the position that an Act of Congress, which must comply with the Constitution, is on a full parity with a treaty, and that, when a statute which is subsequent in time is inconsistent with a treaty, the statute to the extent of conflict renders the treaty null. 
It would be completely anomalous to say that a treaty need not comply with the Constitution when such an agreement can be overridden by a statute that must conform to that instrument.", "title": "Treaty obligations" }, { "paragraph_id": 25, "text": "According to the Cato Institute, these treaties only bind (legally obligate) the United States to comply with them as long as that nation agrees to remain a state party to these treaties. The U.S. Congress and the President of the United States have the absolute sovereign right to withdraw from or abrogate at any time these two instruments, in accordance with said nation's Constitution, at which point these treaties will cease to bind that nation in any way, shape, or form.", "title": "Treaty obligations" }, { "paragraph_id": 26, "text": "A provision for automatic compliance with treaty obligations is found at 21 U.S.C. § 811(d), which also establishes mechanisms for amending international drug control regulations to correspond with HHS findings on scientific and medical issues. If control of a substance is mandated by the Single Convention, the Attorney General is required to \"issue an order controlling such drug under the schedule he deems most appropriate to carry out such obligations,\" without regard to the normal scheduling procedure or the findings of the HHS Secretary. However, the Secretary has great influence over any drug scheduling proposal under the Single Convention, because 21 U.S.C. § 811(d)(2)(B) requires the Secretary the power to \"evaluate the proposal and furnish a recommendation to the Secretary of State which shall be binding on the representative of the United States in discussions and negotiations relating to the proposal.\"", "title": "Treaty obligations" }, { "paragraph_id": 27, "text": "Similarly, if the United Nations Commission on Narcotic Drugs adds or transfers a substance to a schedule established by the Convention on Psychotropic Substances, so that current U.S. regulations on the drug do not meet the treaty's requirements, the Secretary is required to issue a recommendation on how the substance should be scheduled under the CSA. If the Secretary agrees with the Commission's scheduling decision, he can recommend that the Attorney General initiate proceedings to reschedule the drug accordingly.", "title": "Treaty obligations" }, { "paragraph_id": 28, "text": "If the HHS Secretary disagrees with the UN controls, the Attorney General must temporarily place the drug in Schedule IV or V (whichever meets the minimum requirements of the treaty) and exclude the substance from any regulations not mandated by the treaty. The Secretary is required to request that the Secretary of State take action, through the Commission or the UN Economic and Social Council, to remove the drug from international control or transfer it to a different schedule under the Convention. The temporary scheduling expires as soon as control is no longer needed to meet international treaty obligations.", "title": "Treaty obligations" }, { "paragraph_id": 29, "text": "This provision was invoked in 1984 to place Rohypnol (flunitrazepam) in Schedule IV. The drug did not then meet the Controlled Substances Act's criteria for scheduling; however, control was required by the Convention on Psychotropic Substances. 
In 1999, an FDA official explained to Congress:", "title": "Treaty obligations" }, { "paragraph_id": 30, "text": "Rohypnol is not approved or available for medical use in the United States, but it is temporarily controlled in Schedule IV pursuant to a treaty obligation under the 1971 Convention on Psychotropic Substances. At the time flunitrazepam was placed temporarily in Schedule IV (November 5, 1984), there was no evidence of abuse or trafficking of the drug in the United States.", "title": "Treaty obligations" }, { "paragraph_id": 31, "text": "The Cato Institute's Handbook for Congress calls for repealing the CSA, an action that would likely bring the United States into conflict with international law, were the United States not to exercise its sovereign right to withdraw from and/or abrogate the Single Convention on Narcotic Drugs and/or the 1971 Convention on Psychotropic Substances prior to repealing the Controlled Substances Act. The exception would be if the U.S. were to claim that the treaty obligations violate the United States Constitution. Many articles in these treaties—such as Article 35 and Article 36 of the Single Convention—are prefaced with phrases such as \"Having due regard to their constitutional, legal and administrative systems, the Parties shall . . .\" or \"Subject to its constitutional limitations, each Party shall . . .\" According to former United Nations Drug Control Programme Chief of Demand Reduction Cindy Fazey, \"This has been used by the USA not to implement part of article 3 of the 1988 Convention, which prevents inciting others to use narcotic or psychotropic drugs, on the basis that this would be in contravention of their constitutional amendment guaranteeing freedom of speech\".", "title": "Treaty obligations" }, { "paragraph_id": 32, "text": "There are five different schedules of controlled substances, numbered I–V. The CSA describes the different schedules based on three factors:", "title": "Schedules of controlled substances" }, { "paragraph_id": 33, "text": "The following table gives a summary of the different schedules.", "title": "Schedules of controlled substances" }, { "paragraph_id": 34, "text": "Placing a drug or other substance in a certain schedule or removing it from a certain schedule is primarily based on 21 USC §§ 801, 801a, 802, 811, 812, 813, and 814. Every schedule otherwise requires finding and specifying the \"potential for abuse\" before a substance can be placed in that schedule. The specific classification of any given drug or other substance is usually a source of controversy, as is the purpose and effectiveness of the entire regulatory scheme.", "title": "Schedules of controlled substances" }, { "paragraph_id": 35, "text": "The term \"controlled substance\" means a drug or other substance, or immediate precursor, included in schedule I, II, III, IV, or V of part B of this subchapter. 
The term does not include distilled spirits, wine, absinthe, malt beverages, nicotine or tobacco, as those terms are defined or used in subtitle E of the Internal Revenue Code of 1986.", "title": "Schedules of controlled substances" }, { "paragraph_id": 36, "text": "Some have argued that this is an important exemption, since alcohol and tobacco are two of the most widely used drugs in the United States.", "title": "Schedules of controlled substances" }, { "paragraph_id": 37, "text": "", "title": "Schedules of controlled substances" }, { "paragraph_id": 38, "text": "Schedule I substances are described as those that have all of the following findings:", "title": "Schedules of controlled substances" }, { "paragraph_id": 39, "text": "No prescriptions may be written for Schedule I substances, and such substances are subject to production quotas which the DEA imposes.", "title": "Schedules of controlled substances" }, { "paragraph_id": 40, "text": "Under the DEA's interpretation of the CSA, a drug does not necessarily have to have the same \"high potential for abuse\" as heroin, for example, to merit placement in Schedule I:", "title": "Schedules of controlled substances" }, { "paragraph_id": 41, "text": "[W]hen it comes to a drug that is currently listed in schedule I, if it is undisputed that such drug has no currently accepted medical use in treatment in the United States and a lack of accepted safety for use under medical supervision, and it is further undisputed that the drug has at least some potential for abuse sufficient to warrant control under the CSA, the drug must remain in schedule I. In such circumstances, placement of the drug in schedules II through V would conflict with the CSA since such drug would not meet the criterion of \"a currently accepted medical use in treatment in the United States.\" 21 USC 812(b). (emphasis added)", "title": "Schedules of controlled substances" }, { "paragraph_id": 42, "text": "Drugs listed in this control schedule include:", "title": "Schedules of controlled substances" }, { "paragraph_id": 43, "text": "In addition to the named substance, usually all possible ethers, esters, salts and stereo isomers of these substances are also controlled and also 'analogues', which are chemically similar chemicals.", "title": "Schedules of controlled substances" }, { "paragraph_id": 44, "text": "", "title": "Schedules of controlled substances" }, { "paragraph_id": 45, "text": "Schedule II substances are those that have the following findings:", "title": "Schedules of controlled substances" }, { "paragraph_id": 46, "text": "Except when dispensed directly to an ultimate user by a practitioner other than a pharmacist, no controlled substance in Schedule II, which is a prescription drug as determined under the Federal Food, Drug, and Cosmetic Act (21 USC 301 et seq.), may be dispensed without the written or electronically transmitted (21 CFR 1306.08) prescription of a practitioner, except that in emergency situations, as prescribed by the Secretary by regulation after consultation with the Attorney General, such drug may be dispensed upon oral prescription in accordance with section 503(b) of that Act (21 USC 353 (b)). With exceptions, an original prescription is always required even though faxing in a prescription in advance to a pharmacy by a prescriber is allowed.", "title": "Schedules of controlled substances" }, { "paragraph_id": 47, "text": "Prescriptions shall be retained in conformity with the requirements of section 827 of this title. 
No prescription for a controlled substance in Schedule II may be refilled. Notably no emergency situation provisions exist outside the Controlled Substances Act's \"closed system\" although this closed system may be unavailable or nonfunctioning in the event of accidents in remote areas or disasters such as hurricanes and earthquakes. Acts which would widely be considered morally imperative remain offenses subject to heavy penalties.", "title": "Schedules of controlled substances" }, { "paragraph_id": 48, "text": "These drugs vary in potency: for example fentanyl is about 80 times as potent as morphine (heroin is roughly two times as potent). More significantly, they vary in nature. Pharmacology and CSA scheduling have a weak relationship.", "title": "Schedules of controlled substances" }, { "paragraph_id": 49, "text": "Because refills of prescriptions for Schedule II substances are not allowed, it can be burdensome to both the practitioner and the patient if the substances are to be used on a long-term basis. To provide relief, in 2007, 21 CFR 1306.12 was amended (at 72 FR 64921) to allow practitioners to write up to three prescriptions at once, to provide up to a 90-day supply, specifying on each the earliest date on which it may be filled.", "title": "Schedules of controlled substances" }, { "paragraph_id": 50, "text": "Drugs in this schedule include:", "title": "Schedules of controlled substances" }, { "paragraph_id": 51, "text": "", "title": "Schedules of controlled substances" }, { "paragraph_id": 52, "text": "Schedule III substances are those that have the following findings:", "title": "Schedules of controlled substances" }, { "paragraph_id": 53, "text": "Except when dispensed directly by a practitioner, other than a pharmacist, to an ultimate user, no controlled substance in Schedule III or IV, which is a prescription drug as determined under the Federal Food, Drug, and Cosmetic Act (21 USC 301 et seq.), may be dispensed without a written, electronically transmitted, or oral prescription in conformity with section 503(b) of that Act (21 USC 353 (b)). Such prescriptions may not be filled or refilled more than six months after the date thereof or be refilled more than five times after the date of the prescription unless renewed by the practitioner.", "title": "Schedules of controlled substances" }, { "paragraph_id": 54, "text": "A prescription for controlled substances in Schedules III, IV, and V issued by a practitioner, may be communicated either orally, in writing, electronically transmitted or by facsimile to the pharmacist, and may be refilled if so authorized on the prescription or by call-in. Control of wholesale distribution is somewhat less stringent than Schedule II drugs. Provisions for emergency situations are less restrictive within the \"closed system\" of the Controlled Substances Act than for Schedule II though no schedule has provisions to address circumstances where the closed system is unavailable, nonfunctioning or otherwise inadequate.", "title": "Schedules of controlled substances" }, { "paragraph_id": 55, "text": "Drugs in this schedule include:", "title": "Schedules of controlled substances" }, { "paragraph_id": 56, "text": "", "title": "Schedules of controlled substances" }, { "paragraph_id": 57, "text": "Placement on schedules; findings required Schedule IV substances are those that have the following findings:", "title": "Schedules of controlled substances" }, { "paragraph_id": 58, "text": "Control measures are similar to Schedule III. 
Prescriptions for Schedule IV drugs may be refilled up to five times within a six-month period. A prescription for controlled substances in Schedules III, IV, and V issued by a practitioner, may be communicated either orally, in writing, electronically transmitted or by facsimile to the pharmacist, and may be refilled if so authorized on the prescription or by call-in.", "title": "Schedules of controlled substances" }, { "paragraph_id": 59, "text": "Drugs in this schedule include:", "title": "Schedules of controlled substances" }, { "paragraph_id": 60, "text": "", "title": "Schedules of controlled substances" }, { "paragraph_id": 61, "text": "Schedule V substances are those that have the following findings:", "title": "Schedules of controlled substances" }, { "paragraph_id": 62, "text": "No controlled substance in Schedule V which is a drug may be distributed or dispensed other than for a medical purpose. A prescription for controlled substances in Schedules III, IV, and V issued by a practitioner, may be communicated either orally, in writing, electronically transmitted or by facsimile to the pharmacist, and may be refilled if so authorized on the prescription or by call-in.", "title": "Schedules of controlled substances" }, { "paragraph_id": 63, "text": "Drugs in this schedule include:", "title": "Schedules of controlled substances" }, { "paragraph_id": 64, "text": "These psychoactive drugs are not controlled by the act, and are also allowed for sale intended for recreational use at the federal level (others are allowed for sale as dietary supplements, but not specifically regulated or intended for recreational use):", "title": "Schedules of controlled substances" }, { "paragraph_id": 65, "text": "The Controlled Substances Act also provides for federal regulation of precursors used to manufacture some of the controlled substances. The DEA list of chemicals is actually modified when the United States Attorney General determines that illegal manufacturing processes have changed.", "title": "Regulation of precursors" }, { "paragraph_id": 66, "text": "In addition to the CSA, due to pseudoephedrine (PSE) and ephedrine being widely used in the manufacture of methamphetamine, the U.S. Congress passed the Methamphetamine Precursor Control Act which places restrictions on the sale of any medicine containing pseudoephedrine. That bill was then superseded by the Combat Methamphetamine Epidemic Act of 2005, which was passed as an amendment to the Patriot Act renewal and included wider and more comprehensive restrictions on the sale of PSE-containing products. This law requires customer signature of a \"log-book\" and presentation of valid photo ID in order to purchase PSE-containing products from all retailers.", "title": "Regulation of precursors" }, { "paragraph_id": 67, "text": "Additionally, the law restricts an individual to the retail purchase of no more than three packages or 3.6 grams of such product per day per purchase – and no more than 9 grams in a single month. A violation of this statute constitutes a misdemeanor. Retailers now commonly require PSE-containing products to be sold behind the pharmacy or service counter. 
This affects many preparations which were previously available over-the-counter without restriction, such as Actifed and its generic equivalents.", "title": "Regulation of precursors" }, { "paragraph_id": 68, "text": "A common misunderstanding amongst researchers is that most national laws (including the Controlled Substance Act) allows the supply/use of small amounts of a controlled substance for non-clinical / non-in vivo research without licences. A typical use case might be having a few milligrams or microlitres of a controlled substance within larger chemical collections (often 10K’s of chemicals) for in vitro screening or sale. Researchers often believe that there is some form of \"research exemption\" for such small amounts. This incorrect view may be further re-enforced by R&D chemical suppliers often stating and asking scientists to confirm that anything bought is for research use only.", "title": "Research exemptions" }, { "paragraph_id": 69, "text": "A further misconception is that the Controlled Substances Act simply lists a few hundred substances (e.g. MDMA, Fentanyl, Amphetamine, etc.) and compliance can be achieved via checking a CAS number, chemical name or similar identifier. However, the reality is that in most cases all ethers, esters, salts and stereo isomers are also controlled and it is impossible to simply list all of these. The act contains several \"generic statements\" or \"chemical space\" laws, which aim to control all chemicals similar to the \"named\" substance, these provide detailed descriptions similar to Markushes, these include ones for Fentanyl and also synthetic cannabinoids.", "title": "Research exemptions" }, { "paragraph_id": 70, "text": "Due to this complexity in legislation the identification of controlled chemicals in research or chemical supply is often carried out computationally on the chemical structure, either by in house systems maintained a company or by the use of commercial software solutions. Automated systems are often required as many research operations can have chemical collections running into 10Ks of molecules at the 1–5 mg scale, which are likely to include controlled substances, especially within medicinal chemistry research, even if the core research of the company is not narcotic or psychotropic drugs. These may not have been controlled when created, but they have subsequently been declared controlled, or fall within chemical space close to known controlled substances, or are used as tool compounds, precursors or sythetic intermediates.", "title": "Research exemptions" }, { "paragraph_id": 71, "text": "Historically, in an attempt to prevent psychoactive chemicals which are chemically similar to controlled substance, but not specifically controlled by it, the CSA also controls \"analogues\" of many listed controlled substances. The definition of what 'analogue' means is kept deliberately vague, presumably to make it harder to circumvent this rule, as it's not clear what is / is not controlled, thus placing an element of risk and deterrent in those performing the supply. It is up to the courts to then decide whether a specific chemical is an analogue, often via a 'battle of experts' for the defense and prosecution which can lead to extended and more uncertain prosecutions. 
The use of the 'analogue' definition also make it more difficult for companies involved in the legitimate supply of chemicals for research and industrial purposes to know whether a chemical is regulated under the CSA", "title": "Analogues vs Markush descriptions" }, { "paragraph_id": 72, "text": "Starting in 2012, with the Synthetic drug abuse prevention act, and later an amendment to the CSA in 2018 defining fentanyl chemical space, the CSA started to use Markush descriptions to clearly define what analogues or chemical space is controlled. These chemical space, chemical family, generic statements or markush statements (depending on the legislation terminology) have been used for many years by other countries, notably the UK in the Misuse of Drugs Act.", "title": "Analogues vs Markush descriptions" }, { "paragraph_id": 73, "text": "These have the advantage of clearly defining what is controlled, making prosecutions easier and compliance by legitimate companies simpler. However the downside is that these tend to be harder to understand for non-chemists and also give those wishing to supply for illegitimate reasons something to 'aim' for in terms of non-controlled chemical space. For both Markush and analogue type approaches, typically computational systems are used to flag likely regulated chemicals.", "title": "Analogues vs Markush descriptions" }, { "paragraph_id": 74, "text": "The CSA does not include a definition of \"drug abuse\". In addition, research shows certain substances on Schedule I, for drugs which have no accepted medical uses and high potential for abuse, actually have accepted medical uses, have low potential for abuse, or both. One of those substances is cannabis, which is either decriminalized or legalized in 33 states of the United States.", "title": "Criticism" }, { "paragraph_id": 75, "text": "Similar legislation outside of the United States:", "title": "See also" } ]
Claude Piron
Claude Piron, also known by the pseudonym Johán Valano, was a Swiss psychologist, Esperantist, translator, and writer. He worked as a translator for the United Nations from 1956 to 1961 and then for the World Health Organization. He was a prolific author of Esperanto works. He spoke Esperanto from childhood and used it in Japan, China, Uzbekistan, Kazakhstan, Africa, and Latin America, and in nearly all the countries of Europe.

Piron was a psychotherapist and taught from 1973 to 1994 in the psychology department at the University of Geneva in Switzerland. His French-language book Le défi des langues — Du gâchis au bon sens (The Language Challenge: From Chaos to Common Sense, 1994) is a kind of psychoanalysis of international communication. A Portuguese version, O desafio das linguas, was published in 2002 (Campinas, São Paulo, Pontes).

In a lecture on the current system of international communication, Piron argued that "Esperanto relies entirely on innate reflexes" and "differs from all other languages in that you can always trust your natural tendency to generalize patterns... The same neuropsychological law...—called by Jean Piaget generalizing assimilation—applies to word formation as well as to grammar."

His diverse Esperanto writings include instructional books, books for beginners, novels, short stories, poems, articles, and non-fiction books. His most famous works are Gerda malaperis! and La Bona Lingvo (The Good Language). Gerda malaperis! is a novella which uses basic grammar and vocabulary in the first chapter and builds up to expert Esperanto by the end, including word lists so that beginners may easily follow along.

In La Bona Lingvo, Piron captures the basic linguistic and social aspects of Esperanto. He argues strongly for imaginative use of the basic Esperanto morpheme inventory and word-formation techniques, and against unnecessary importation of neologisms from European languages. He also presents the idea that, once one has learned enough vocabulary to express oneself, it is easier to think clearly in Esperanto than in many other languages.

Piron was also the author of a book in French, Le bonheur clés en main (The Keys to Happiness), which distinguishes among pleasure, happiness, and joy. He showed how one may avoid contributing to one's own "anti-happiness" (l'anti-bonheur) and how one may expand the areas of happiness in one's life. Piron's view was that, while one may desire happiness, desire alone is not enough. He said that just as people must do certain things in order to become physically stronger, they must do certain things in order to become happier.
[ { "paragraph_id": 0, "text": "Claude Piron, also known by the pseudonym Johán Valano, was a Swiss psychologist, Esperantist, translator, and writer. He worked as a translator for the United Nations from 1956 to 1961 and then for the World Health Organization.", "title": "" }, { "paragraph_id": 1, "text": "He was a prolific author of Esperanto works. He spoke Esperanto from childhood and used it in Japan, China, Uzbekistan, Kazakhstan, in Africa and Latin America, and in nearly all the countries of Europe.", "title": "" }, { "paragraph_id": 2, "text": "Piron was a psychotherapist and taught from 1973 to 1994 in the psychology department at the University of Geneva in Switzerland. His French-language book Le défi des langues — Du gâchis au bon sens (The Language Challenge: From Chaos to Common Sense, 1994) is a kind of psychoanalysis of international communication. A Portuguese version, O desafio das linguas, was published in 2002 (Campinas, São Paulo, Pontes).", "title": "Life" }, { "paragraph_id": 3, "text": "In a lecture on the current system of international communication Piron argued that \"Esperanto relies entirely on innate reflexes\" and \"differs from all other languages in that you can always trust your natural tendency to generalize patterns... The same neuropsychological law...—called by Jean Piaget generalizing assimilation—applies to word formation as well as to grammar.\"", "title": "Life" }, { "paragraph_id": 4, "text": "His diverse Esperanto writings include instructional books, books for beginners, novels, short stories, poems, articles and non-fiction books. His most famous works are Gerda malaperis! and La Bona Lingvo (The Good Language).", "title": "Life" }, { "paragraph_id": 5, "text": "Gerda malaperis! is a novella which uses basic grammar and vocabulary in the first chapter and builds up to expert Esperanto by the end, including word lists so that beginners may easily follow along.", "title": "Life" }, { "paragraph_id": 6, "text": "In La Bona Lingvo, Piron captures the basic linguistic and social aspects of Esperanto. He argues strongly for imaginative use of the basic Esperanto morpheme inventory and word-formation techniques, and against unnecessary importation of neologisms from European languages. He also presents the idea that, once one has learned enough vocabulary to express himself, it is easier to think clearly in Esperanto than in many other languages.", "title": "Life" }, { "paragraph_id": 7, "text": "Piron is the author of a book in French, Le bonheur clés en main (The Keys to Happiness), which distinguishes among pleasure, happiness and joy. He showed how one may avoid contributing to his own \"anti-happiness\" (l'anti-bonheur) and how one may expand the areas of happiness in his life. Piron's view was that, while one may desire happiness, desire is not enough. He said that just as people must do certain things in order to become physically stronger, they must do certain things in order to become happier.", "title": "Life" }, { "paragraph_id": 8, "text": "Media related to Claude Piron at Wikimedia Commons", "title": "External links" } ]
Captain America
Captain America is a superhero created by Joe Simon and Jack Kirby who appears in American comic books published by Marvel Comics. The character first appeared in Captain America Comics #1, published on December 20, 1940, by Timely Comics, a corporate predecessor to Marvel. Captain America's civilian identity is Steve Rogers, a frail man enhanced to the peak of human physical perfection by an experimental "super-soldier serum" after joining the United States Army to aid the country's efforts in World War II. Equipped with an American flag-inspired costume and a virtually indestructible shield, Captain America and his sidekick Bucky Barnes clashed frequently with the villainous Red Skull and other members of the Axis powers. In the final days of the war, an accident left Captain America frozen in a state of suspended animation until he was revived in modern times. He resumed his exploits as a costumed hero and became leader of the superhero team the Avengers, but frequently struggled as a "man out of time" to adjust to the new era.

The character quickly emerged as Timely's most popular and commercially successful wartime creation upon his original publication, though the popularity of superheroes declined in the post-war period and Captain America Comics was discontinued in 1950. The character saw a short-lived revival in 1953 before returning to comics in 1964, and has since remained in continuous publication.

Captain America's creation as an explicitly anti-Nazi figure was a deliberately political undertaking: Simon and Kirby were stridently opposed to the actions of Nazi Germany and supporters of U.S. intervention in World War II, with Simon conceiving of the character specifically in response to the American non-interventionism movement. Political messages have since remained a defining feature of Captain America stories, with writers regularly using the character to comment on the state of American society and government.

Having appeared in more than ten thousand stories in more than five thousand media formats, Captain America is one of the most popular and recognized Marvel Comics characters, and has been described as an icon of American popular culture. Though Captain America was not the first United States-themed superhero, he would become the most popular and enduring of the many patriotic American superheroes created during World War II. Captain America was the first Marvel character to appear in a medium outside of comic books, in the 1944 serial film Captain America; the character has subsequently appeared in a variety of films and other media, including the Marvel Cinematic Universe, where he was portrayed by actor Chris Evans from the character's first appearance in Captain America: The First Avenger (2011) to his final appearance in Avengers: Endgame (2019).

"It was a time of deep passion. Hitler was grabbing all of Europe, we had Nazis in America, Nazis holding mass meetings in Madison Square Garden. [...] Captain America was created in that atmosphere, he was a natural outgrowth of the passionate mood of the country." – Jack Kirby

In 1940, Timely Comics publisher Martin Goodman responded to the growing popularity of superhero comics – particularly Superman at rival publisher National Comics Publications, the corporate predecessor to DC Comics – by hiring freelancer Joe Simon to create a new superhero for the company.
Simon began to develop the character by determining who the hero's nemesis could be, noting that the most successful superheroes were defined by their relationship with a compelling villain, and eventually settled on Adolf Hitler. He rationalized that Hitler was the "best villain of them all" as he was "hated by everyone in the free world", and that it would be a unique approach for a superhero to face a real-life adversary rather than a fictional one.

This approach was also intentionally political. Simon was stridently opposed to the actions of Nazi Germany and supported U.S. intervention in World War II, and intended the hero to be a response to the American non-interventionism movement. Simon initially considered "Super American" for the hero's name, but felt there were already multiple comic book characters with "super" in their names. He worked out the details of the character, who was eventually named "Captain America", after he completed sketches in consultation with Goodman. The hero's civilian name "Steve Rogers" was derived from the telegraphy term "roger", meaning "message received".

Goodman elected to launch Captain America with his own self-titled comic book, making him the first Timely character to debut with his own ongoing series without having first appeared in an anthology. Simon sought to have Jack Kirby be the primary artist on the series: the two had developed a working relationship and friendship in the late 1930s after working together at Fox Feature Syndicate, and had previously developed characters for Timely together. Kirby also shared Simon's pro-intervention views, and was particularly drawn to the character in this regard. Goodman, conversely, wanted a team of artists on the series. It was ultimately determined that Kirby would serve as penciller, with Al Avison and Al Gabriele assisting as inkers; Simon additionally negotiated for himself and Kirby to receive 25 percent of the profits from the comic. Simon regarded Kirby as a co-creator of Captain America, stating that "if Kirby hadn't drawn it, it might not have been much of anything."

Captain America Comics #1 was published on December 20, 1940, with a cover date of March 1941. While the front cover of the issue featured Captain America punching Hitler, the comic itself established the Red Skull as Captain America's primary adversary, and also introduced Bucky Barnes as Captain America's teenaged sidekick. Simon stated that he personally regarded Captain America's origin story, in which the frail Steve Rogers becomes a supersoldier after receiving an experimental serum, as "the weakest part of the character", and that he and Kirby "didn't put too much thought into the origin. We just wanted to get to the action." Kirby designed the series' action scenes with an emphasis on a sense of continuity across panels, saying that he "choreographed" the sequences as one would a ballet, with a focus on exaggerated character movement. Kirby's layouts in Captain America Comics are characterized by their distorted perspectives, irregularly shaped panels, and the heavy use of speed lines.

The first issue of Captain America Comics sold out in a matter of days, and the second issue's print run was set at over one million copies. Captain America quickly became Timely's most popular character, with the publisher creating an official Captain America fan club called the "Sentinels of Liberty".
Circulation figures remained close to a million copies per month after the debut issue, which outstripped even the circulation of news magazines such as Time during the same period. Captain America Comics was additionally one of 189 periodicals that the US Department of War deemed appropriate to distribute to its soldiers without prior screening. The character would also make appearances in several of Timely's other comic titles, including All Winners Comics, Marvel Mystery Comics, U.S.A. Comics, and All Select Comics.

Though Captain America was not the first United States-themed superhero – a distinction that belongs to The Shield at MLJ Comics – he would become the most popular patriotic American superhero of those created during World War II. Captain America's popularity drew a complaint from MLJ that the character's triangular heater shield too closely resembled the chest symbol of The Shield. This prompted Goodman to direct Simon and Kirby to change the design beginning with Captain America Comics #2. The revised round shield went on to become an iconic element of the character; its use as a discus-like throwing weapon originated in a short prose story in Captain America Comics #3, written by Stan Lee in his professional debut as a writer. Timely's publication of Captain America Comics led the company to be targeted with threatening letters and phone calls from the German American Bund, an American Nazi organization. When members began loitering on the streets outside the company's office, police protection was posted and New York mayor Fiorello La Guardia personally contacted Simon and Kirby to guarantee the safety of the publisher's employees.

Simon wrote the first two issues of Captain America Comics before becoming the editor for the series; they were the only Captain America stories he would ever directly write. While Captain America generated acclaim and industry fame for Simon and Kirby, the pair believed that Goodman was withholding the promised percentage of profits for the series, prompting Simon to seek employment for himself and Kirby at National Comics Publications. When Goodman learned of Simon and Kirby's intentions, he effectively fired them from Timely Comics, telling them they were to leave the company after they completed work on Captain America Comics #10. The authorship of Captain America Comics was subsequently assumed by a variety of individuals, including Otto Binder, Bill Finger, and Manly Wade Wellman as writers, and Al Avison, Vince Alascia, and Syd Shores as pencilers.

Superhero comics began to decline in popularity in the post-war period. This prompted a variety of attempts to reposition Captain America, including having the character fight gangsters rather than wartime enemies in Captain America Comics #42 (October 1944), appearing as a high school teacher in Captain America Comics #59 (August 1946), and joining Timely's first superhero team, the All-Winners Squad, in All Winners Comics #19 (Fall 1946). The series nevertheless continued to face dwindling sales, and Captain America Comics ended with its 75th issue in February 1950. Horror comics were ascendant as a popular comic genre during this period; in keeping with the trend, the final two issues of Captain America Comics were published under the title Captain America's Weird Tales.
Timely's corporate successor Atlas Comics relaunched the character in 1953 in Young Men #24, where Captain America appears alongside the wartime heroes Human Torch and Toro; this was followed by a revival of Captain America Comics in 1954, written by Stan Lee and drawn by John Romita. In the spirit of the Cold War and McCarthyism, the character was billed as "Captain America, Commie Smasher" and faced enemies associated with the Soviet Union. The series was a commercial failure, and was cancelled after just three issues. Romita attributed the series' failure to the changing political climate, particularly the public opposition to the Korean War; the character subsequently fell out of active publication for nearly a decade, with Romita noting that "for a while, 'Captain America' was a dirty word".

Captain America made his ostensible return in the anthology Strange Tales #114 (November 1963), published by Atlas' corporate successor Marvel Comics. In an 18-page story written by Lee and illustrated by Kirby, Captain America reemerges following years of apparent retirement, though he is revealed as an impostor who is defeated by Johnny Storm of the Fantastic Four. A caption in the final panel indicates that the story was a "test" to gauge interest in a potential return for Captain America; the reader response to the story was enthusiastic, and the character was formally reintroduced in The Avengers #4 (March 1964).

The Avengers #4 retroactively established that Captain America had fallen into the Atlantic Ocean in the final days of World War II, where he spent decades frozen in ice in a state of suspended animation before being found and recovered. Captain America solo stories written by Lee with Kirby as the primary penciller were published in the anthology Tales of Suspense alongside solo stories focused on fellow Avengers member Iron Man beginning in November 1964; the character also appeared in Lee and Kirby's World War II-set Sgt. Fury and his Howling Commandos beginning in December of the same year. These runs introduced and retroactively established several new companions of Captain America, including Nick Fury, Peggy Carter, and Sharon Carter.

In 1966, Joe Simon sued Marvel Comics, asserting that he was legally entitled to renew the copyright on the character upon the expiration of the original 28-year term. The two parties settled out of court, with Simon agreeing to a statement that the character had been created under terms of employment by the publisher, and was therefore work for hire owned by the company. Captain America's self-titled ongoing series was relaunched in April 1968, with Lee as writer and Kirby as penciller; Kirby later departed the series, and was replaced by Gene Colan. In 1969, writer and artist Jim Steranko authored a three-issue run of Captain America. Despite the brevity of Steranko's time on the series, his contributions significantly influenced how Captain America was represented in post-war comics, reestablishing the character's secret identity and introducing a more experimental art style to the series.

"This was the '70s – prime anti-war years – and here was a guy with a flag on his chest who was supposed to represent what most people distrusted. No one knew what to do with him."
– Steve Englehart

In contrast to the character's enthusiastic participation in World War II, comics featuring Captain America rarely broached the topic of the Vietnam War, though the subject of Captain America's potential participation was frequently debated by readers in the letters to the editor section in Captain America. Marvel maintained a position of neutrality on Vietnam; in 1971, Stan Lee wrote in an editorial that a poll indicated that a majority of readers did not want Captain America to be involved in Vietnam, adding that he believed the character "simply doesn't lend himself to the John Wayne-type character he once was" and that he could not "see any of our characters taking on a role of super-patriotism in the world as it is today".

Captain America stories in the 1970s began to increasingly focus on domestic American political issues, such as poverty, racism, pollution, and political corruption. Captain America #117 (September 1969) introduced the Falcon, the first African-American superhero in mainstream comic books, who would become Captain America's partner; the series was cover-titled Captain America and the Falcon beginning in February 1971, a title it would maintain for the next seven and a half years. These political shifts were significantly shaped by comics created by writer Steve Englehart and artist Sal Buscema, who joined the series in 1972. In a 1974 storyline written by Englehart directly inspired by the Watergate scandal, Captain America is framed for murder by the fascistic Secret Empire, whose leader is ultimately revealed to be the president of the United States. The incident causes a disillusioned Steve Rogers to briefly drop the moniker of Captain America to become "Nomad, the man without a country", though he later vowed to "reclaim the ideals of America, which its leaders have trampled upon" and again assumed the role of Captain America. Englehart and Buscema's run was highly acclaimed, bringing Captain America from one of Marvel's lowest-selling titles to its top-selling comic, and the conflict between America as it idealizes itself to be and America in reality would recur frequently as a theme in Captain America comics in the subsequent decades.

In 1975, Roy Thomas created the comic book series The Invaders. Set during World War II, the comic focuses on a superhero team composed of Timely's wartime-era superheroes, with Captain America as its leader; Thomas, a fan of stories from the Golden Age of Comic Books, drew inspiration for the series from Timely's All-Winners Squad. Jack Kirby wrote and illustrated a run on Captain America and the Falcon from 1975 to 1977. This was followed by issues authored by a number of writers and artists, including Roy Thomas, Donald F. Glut, Roger McKenzie, and Sal Buscema; the series was also re-titled Captain America beginning with issue 223 in 1978.

Owing to the series' lack of a regular writer, Captain America editor Roger Stern and artist John Byrne authored the series from 1980 to 1981. Their run saw a storyline in which Captain America declines an offer to run for president of the United States. Following Stern and Byrne, Captain America was authored by writer J. M. DeMatteis and artist Mike Zeck from 1981 to 1984. Their run featured a year-long storyline in which Captain America faced a crisis of confidence in the face of what DeMatteis described as "Reagan Cold War rhetoric".
The story was originally planned to culminate in Captain America #300 with Captain America renouncing violence to become a pacifist; when that ending was rejected by Marvel editor-in-chief Jim Shooter, DeMatteis resigned from Captain America in protest.

Writer Mark Gruenwald, editor of Captain America from 1982 to 1985, served as writer on the series from 1985 to 1995. Various artists illustrated the series over the course of Gruenwald's decade-long run, including Paul Neary from 1985 to 1987, and Kieron Dwyer from 1988 to 1990. In contrast to DeMatteis, Gruenwald placed less emphasis on Steve Rogers' life as a civilian, wishing to show "that Steve Rogers is Captain America first [...] he has no greater needs than being Captain America." Among the most significant storylines appearing in Gruenwald's run was "The Choice" in 1987, in which Steve Rogers renounces the identity of Captain America to briefly become simply "The Captain" after the United States government orders him to continue his superheroic activities directly under their control.

After Gruenwald departed the series, writer Mark Waid and artist Ron Garney began to author Captain America in 1995. Despite early acclaim, including the reintroduction of Captain America's love interest Sharon Carter, their run was terminated after ten issues as a result of Marvel's "Heroes Reborn" rebranding in 1996. The rebrand saw artists Jim Lee and Rob Liefeld, who had left the company in the early 1990s to establish Image Comics, return to Marvel to re-imagine several of the company's characters. Marvel faced various financial difficulties in the 1990s, culminating in the company filing for Chapter 11 bankruptcy protection in 1996, and "Heroes Reborn" was introduced as part of an effort to increase sales. As part of the rebrand, Liefeld illustrated and co-wrote with Jeph Loeb a run on Captain America that was ultimately cancelled after six issues. Marvel stated that the series was cancelled due to low sales, though Liefeld has contended that he was fired after he refused to take a lower pay rate amid Marvel's bankruptcy proceedings. Waid would return to Captain America in 1998, initially with Garney as artist and later with Andy Kubert.

In 1999, Joe Simon filed to claim the copyright to Captain America under a provision of the Copyright Act of 1976 that allows the original creators of works that have been sold to corporations to reclaim them after the original 56-year copyright term has expired. Marvel challenged the claim, arguing that Simon's 1966 settlement made the character ineligible for copyright transfer. Simon and Marvel settled out of court in 2003, in a deal that paid Simon royalties for merchandising and licensing of the character.

Writer and artist Dan Jurgens took over Captain America from Waid in 2000, positioning the character in a world he described as "more cynical [...] in terms of how we view our government, our politicians and people's motives in general". In the wake of the September 11 attacks, a new Captain America series written by John Ney Rieber with artwork by John Cassaday was published under the Marvel Knights imprint from 2002 to 2003. The series received criticism for its depiction of Captain America fighting terrorists modelled after Al-Qaeda, though Cassaday contended that the aim of the series was to depict "the emotions this hero was going through" in the wake of 9/11, and the "guilt and anger a man in his position would feel".
In 2005, Marvel relaunched Captain America in a new volume written by Ed Brubaker and illustrated by Steve Epting. The run saw the publication of "The Winter Soldier", which reintroduced Captain America's previously deceased partner Bucky Barnes as a brainwashed cybernetic assassin. Contemporaneously, Captain America was a central character in the 2006 crossover storyline "Civil War", written by Mark Millar and penciled by Steve McNiven, which saw the character come into conflict with fellow Avengers member Iron Man over government efforts to regulate superheroes. The character was killed in the 2007 storyline "The Death of Captain America" written by Brubaker, which was accompanied by the miniseries Fallen Son: The Death of Captain America written by Jeph Loeb; the character was later revived in the 2009 limited series Captain America: Reborn. Brubaker's run on Captain America, which ran across various titles until 2012, was critically and commercially acclaimed; Captain America #25 (which contains the character's death) was the best-selling comic of 2007, and Brubaker won the Harvey Award for Best Writer for the series in 2006.

After Brubaker's run on Captain America ended in 2012, a new volume of the series written by Rick Remender was published as part of the Marvel Now! rebranding initiative, which saw Sam Wilson assume the mantle of Captain America in 2014. This was followed by a run written by Nick Spencer beginning in 2016, in which Captain America was replaced by a version of himself later known as "Hydra Supreme", loyal to the villainous organization Hydra, culminating in the 2017 crossover event Secret Empire. As part of Marvel's Fresh Start rebrand in 2018, a new Captain America series written by Ta-Nehisi Coates with art by Leinil Francis Yu was published from 2018 to 2021. A new volume of Captain America written by J. Michael Straczynski began publication in September 2023.

As of 2015, Captain America has appeared in more than ten thousand stories in more than five thousand media formats, including comic books, books, and trade publications. The character's origin story has been retold and revised multiple times throughout his editorial history, though its broad details have remained generally consistent. Steven "Steve" Rogers was born in the 1920s to an impoverished family on the Lower East Side of New York City. The frail and infirm Rogers attempts to join the U.S. Army in order to fight in the Second World War, but is rejected after being deemed unfit for military service. His resolve is nevertheless noticed by the military, and he is recruited as the first test subject for "Project Rebirth", a secret government program that seeks to create super soldiers through the development of the "Super-Soldier Serum". Though the serum successfully enhances Rogers to the peak of human physical perfection, a Nazi spy posing as a military observer destroys the remaining supply of the serum and assassinates its inventor, foiling plans to produce additional super soldiers. Rogers is given a patriotic uniform and shield by the American government and becomes the costumed superhero Captain America. He goes on to fight the villainous Red Skull and other members of the Axis powers both domestically and abroad, alongside his sidekick Bucky Barnes and as a member of the Invaders. In the final days of the war, Rogers and Barnes seemingly perish after falling from an experimental drone plane into the northern Atlantic Ocean.
Rogers is found decades later by the superhero team the Avengers, the Super-Soldier Serum having allowed him to survive frozen in a block of ice in a state of suspended animation. Reawakened in modern times, Rogers resumes activities as a costumed hero, joining and later becoming leader of the Avengers. Many of his exploits involve missions undertaken for the Avengers or for S.H.I.E.L.D., an espionage and international law enforcement agency operated by his former war comrade Nick Fury. Through Fury, Rogers befriends Sharon Carter, a S.H.I.E.L.D. agent with whom he eventually begins a partnership and an on-again off-again romance. He meets and trains Sam Wilson, who becomes the superhero Falcon, and they establish an enduring friendship and partnership.

After a conspiracy hatched by the Secret Empire to discredit Rogers is revealed to have been personally orchestrated by the President of the United States, a disillusioned Rogers abandons the mantle of Captain America and assumes the title of "Nomad", the "man without a country". He eventually re-assumes the title, and later declines an offer from the "New Populist Party" to run for president himself. He again abandons the mantle of Captain America to briefly assume the alias of "The Captain" when a government commission orders him to work directly for the U.S. government.

In the aftermath of the September 11 attacks, Rogers reveals his secret identity to the world. Following the disbandment of the Avengers, he discovers that Bucky is still alive, having been brainwashed by the Soviets to become the Winter Soldier. Later, in reaction to government efforts to regulate superheroes, Rogers becomes the leader of an underground anti-registration movement that clashes with a pro-registration faction led by fellow Avengers member Iron Man. After significant rancor, he voluntarily surrenders and submits to arrest. At his trial, he is shot and killed by Sharon Carter, whose actions are manipulated by the villainous Dr. Faustus; in his absence, a recovered Bucky assumes the title of Captain America. It is eventually revealed that Rogers did not die, but became displaced in space and time; he is ultimately able to return to the present. He resumes his exploits as a superhero, though his public identity is briefly supplanted by a sleeper agent from the terrorist organization Hydra.

"Rogers' transformation into Captain America is underwritten by the military. But, perhaps haunted by his own roots in powerlessness, he is a dissident just as likely to be feuding with his superiors in civilian and military governance as he is to be fighting with the supervillain Red Skull. [...] He is 'a man out of time,' a walking emblem of greatest-generation propaganda brought to life in this splintered postmodern time."

– Ta-Nehisi Coates

Steve Rogers' personality has shifted across his editorial history, a fact that media scholar J. Richard Stevens sees as a natural consequence of the character being written and re-interpreted by many writers over the span of multiple decades. However, Stevens identifies two aspects of the character's personality that have remained consistent across expressions: his "uncompromising purity" and "his ability to judge the character in others". Early Captain America stories typically paid little attention to Rogers' civilian identity; in his 1970 book The Steranko History of Comics, Jim Steranko notes that the character was often criticized for being two-dimensional as a result.
He argues that this was an intentional device, writing that these critics "failed to grasp the true implication of his being. Steve Rogers never existed, except perhaps as an abstract device for the convenience of storytelling. Captain America was not an embodiment of human characteristics but a pure idea."

Following the character's return to comics in the 1960s, many stories gave increased focus to Rogers' civilian identity, particularly his struggles as a "man out of time" attempting to adjust to the modern era. Often, stories depict a brooding or melancholic Rogers as he faces both a physical struggle as Captain America, and an ideological struggle as Steve Rogers to reconcile his social values with modern times. The character is frequently conflicted by his World War II-era "good war" morality being challenged and made anachronistic by the compromising demands of the post-war era.

Prior to Bucky Barnes' return to comics in the 2000s, many Captain America stories centered on Rogers' sense of guilt over Barnes' death. Culture scholar Robert G. Weiner argues that these stories mirror the post-traumatic stress disorder and survivor guilt held by many war veterans, and that this trauma distinguishes the character from other well-known superheroes such as Batman and Spider-Man: while those characters became heroes because of a traumatic incident, Rogers carries on as a hero in spite of a traumatic incident, with Weiner asserting that this reinforces the nobility of the character.

Though Marvel has historically trended away from making overt partisan statements in the post-war period, writers have nevertheless used Captain America to comment on the state of American society and government at particular moments in history. For example, the conspiracy storyline of "Secret Empire" reflected what writer Steve Englehart saw as broad disillusionment with American institutions in the wake of the Vietnam War and the Watergate scandal, the "Streets of Poison" storyline by Mark Gruenwald in the 1990s was intended to address anxieties around the drug trade and debates on the war on drugs, and "Civil War" by Mark Millar was widely interpreted as an allegory for the Patriot Act and post-9/11 debates on the balance between national security and civil liberties.

While the ideological orientation of Captain America stories has shifted in response to changing social and political attitudes, Stevens notes that a central component of Captain America's mythology is that the character himself does not change: when the character's attitudes have shifted, it is consistently framed as an evolution or a new understanding of his previously-held ideals. Stevens argues that the character's seemingly paradoxical steadfastness is reflective of "the language of comics, where continuity is continually updated to fit the needs of the serialized present."

Despite his status as a patriotic superhero, Captain America is rarely depicted as an overly jingoistic figure. Stevens writes that the character's "patriotism is more focused on the universal rights of man as expressed through the American Dream" rather than "a position championing the specific cultural or political goals of the United States." Weiner concurs that the character "embodies what America strives to be, not what it sometimes is".
Scholar Jason Dittmer agrees that while the character sees himself "as the living embodiment of the American Dream (rather than a tool of the state)", his status as a patriotic superhero nevertheless tethers him to American foreign policy and hegemony. He argues that Captain America tends to skew away from interventionist actions at moments where the United States is undertaking policies that its critics deem imperialist, specifically citing the character's non-participation in the Vietnam and Iraq wars, and argues that the character's inconsistent position on the use of deadly force across his editorial history "is perhaps a tacit acknowledgment of the violence, or the threat of violence, at the heart of American hegemony."

"Cap is one of the hardest hero characters to write, because the writer cannot use some exotic super-power to make his episodes seem colorful. [...] All he has to serve him are his extraordinary combat skills, his shield, and his unquenchable love for freedom and justice."

– Stan Lee

Captain America possesses no superpowers, though the Super-Soldier Serum has enhanced his body's strength, speed, agility, endurance, reflexes, reaction time, and natural self-healing ability to the peak of human physical perfection. He is additionally an expert tactician and field commander, and has achieved mastery in a variety of hand-to-hand combat styles, including boxing and judo. The precise parameters of Captain America's physical prowess vary across stories due to editorial dictates and artistic license taken by authors; Steve Englehart was given an editorial order to give the character superhuman strength in the 1970s, but the change did not last and was soon forgotten. Steve Rogers is also a skilled visual artist, having worked as a commercial illustrator prior to joining the military, and several storylines have depicted the character working as a freelance artist.

The basic design of Captain America's costume has remained largely consistent since its original incarnation in the 1940s. Designed by Joe Simon, the costume is based on the United States flag, and Simon likened the character's appearance to that of "a modern-day crusader", with chain mail armor and a helmet adorned with wings in reference to the Roman god Mercury. Steve Rogers has worn other costumes when he has adopted alternate superhero alter egos: as Nomad he wears a domino mask and a black and gold suit that is cut to expose his bare chest and stomach, and as The Captain he wears a modified version of the Captain America suit with a red, white, and black design.

Captain America's shield is the character's primary piece of equipment: a round shield featuring a white star on a blue circle surrounded by red and white rings. It first appeared in Captain America Comics #1 as a triangular heater shield; beginning with Captain America Comics #2, it was changed to its current circular design after MLJ Comics complained that the original design too closely resembled the chest symbol of their superhero The Shield. The shield is depicted as constructed from an alloy of vibranium and adamantium, two highly resilient fictional metals appearing in Marvel comic books. It is portrayed as both a virtually indestructible defensive object and a highly aerodynamic offensive weapon: when thrown, it is capable of ricocheting off multiple surfaces and returning to the original thrower.
Captain America's first sidekick was Bucky Barnes, introduced in Captain America Comics #1 as the teenaged "mascot" of Steve Rogers' regiment. He is made Captain America's partner in that same issue after accidentally discovering the character's secret identity. Joe Simon described Bucky's creation as being largely motivated by a need to give Captain America "someone to talk to" and avoid the overuse of dialogue delivered through internal monologue, noting that "Bucky was brought in as a way of eliminating too many thought balloons." Bucky was retroactively established as having been killed in the same accident that left Captain America frozen in suspended animation; the character remained deceased for many decades, contrasting the typically ephemeral nature of comic book deaths, until he returned in 2005 as the Winter Soldier. Initially reintroduced as a brainwashed assassin and antagonist to Captain America, Bucky later had his memories and personality restored, and was re-established as an ally to Steve Rogers.

Rick Jones briefly assumed the role of Captain America's sidekick and the public identity of Bucky following Captain America's return to comics in the 1960s. In 1969, Sam Wilson was introduced as the superhero Falcon and later became Captain America's sidekick, making the characters the first interracial superhero duo in American comic books. Possessing the power to communicate with birds, Wilson is initially depicted as a former social worker living in Harlem, though this identity is revealed to be the result of memories implanted by the Red Skull. He later receives a winged suit from the superhero Black Panther that enables him to fly. Other characters who have served as Rogers' sidekick include Golden Girl (Betsy Ross), Demolition Man (Dennis Dunphy), Jack Flag (Jack Harrison), and Free Spirit (Cathy Webster).

Over the course of several decades, writers and artists have established a rogues' gallery of supervillains to face Captain America. The character's primary archenemy is the Red Skull, an apprentice to Adolf Hitler who has been present since the character's earliest stories. Just as the Red Skull represents Nazism, many of Captain America's villains represent specific ideologies or political formations: for example, the Serpent Society represents labor unionism, and Flag-Smasher represents anti-nationalism. The political character of Captain America's enemies has shifted over time: the character fought enemies associated with communism during his brief revival in the 1950s before shifting back to Nazi antagonists in the mid-1960s, while comics since 9/11 have frequently depicted the character facing terrorist villains.

Steve Rogers' first love interest was Betsy Ross, introduced in his World War II-era comics as a member of the Women's Army Corps who later became the costumed superhero Golden Girl. Peggy Carter, an American member of the French Resistance, was retroactively established in comics published in the 1960s as another of Rogers' wartime lovers. When Rogers is revived in the post-war era, he begins a partnership and on-again off-again relationship with S.H.I.E.L.D. agent Sharon Carter; introduced as Peggy's younger sister, she was later retconned as Peggy's grandniece to reflect Marvel's floating timeline. In comics published in the 1980s, Rogers dated and became engaged to civilian Bernie Rosenthal, though they ended their relationship amicably after Bernie left New York to attend law school.
In the 1990s, Rogers had a romantic entanglement with the alternately villainous and antiheroic Diamondback, a member of the Serpent Society.

The title of "Captain America" has been used by other characters in the Marvel Universe in addition to Steve Rogers, including William Naslund, Jeffrey Mace, and William Burnside. John Walker, also known as U.S. Agent, was introduced as a villainous Captain America in 1988, and Isaiah Bradley was established in the 2003 limited series Truth: Red, White & Black as an African American man who acquired superpowers after being used as a test subject for the Super-Soldier Serum. Rogers' sidekicks Bucky Barnes and Sam Wilson have also alternately held the title of Captain America: Barnes in 2008 following Rogers' death in 2007, and Wilson following Marvel's 2012 rebranding campaign Marvel Now!. Within the multiverse of parallel universes that compose the Marvel Universe, there are many variations of Steve Rogers and Captain America; this includes Marvel's Ultimate Comics universe, which possesses its own version of Steve Rogers that is more overtly politically conservative.

"Over the years, Captain America's story has accurately reflected U.S. attitudes, as our country moved from the self-confidence of the early Cold War to the guilt-ridden angst of the 1970s to the revival of national pride that characterized the Reagan 1980s."

– Jacob Heilbrun, The Los Angeles Times

Captain America is one of the most popular and widely recognized Marvel Comics characters, and has been described as an icon of American popular culture. He is the best-known and most enduring of the United States-themed superheroes to emerge from the Second World War, and he inspired a proliferation of patriotic-themed superheroes in American comic books during the 1940s. These included the American Crusader, the Spirit of '76, Yank & Doodle, Captain Flag, and Captain Courageous, among numerous others. Though none would achieve Captain America's commercial success, the volume of Captain America imitators was such that three months after the character's debut, Timely published a statement indicating that "there is only one Captain America" and warning that they would take legal action against publishers that infringed on the character. After being dismissed from Timely, Joe Simon and Jack Kirby would themselves create a new patriotic superhero, the Fighting American, for Prize Comics in 1954; the Fighting American became the subject of a lawsuit from Marvel in the 1990s after Rob Liefeld attempted to revive the character following his own departure from Marvel.

When the character was killed in 2007, he was eulogized in numerous mainstream media outlets, including The New York Times and The Los Angeles Times, with the former describing him as a "national hero". In 2011, Captain America placed sixth on IGN's "Top 100 Comic Book Heroes of All Time", and second in its 2012 list of "The Top 50 Avengers". Gizmodo and Entertainment Weekly respectively ranked Captain America first and second in their 2015 rankings of Avengers characters. Empire ranked Captain America as the 21st greatest comic book character of all time.

Captain America has appeared in a variety of adapted, spin-off, and licensed media, including films, cartoons, video games, toys, clothing, and books. The first appearance of Captain America in a medium outside of comic books was in the 1944 serial film Captain America, which was also the first piece of non-comics media to feature a Marvel Comics character.
The character later appeared in two made-for-TV films in 1979, Captain America and Captain America II: Death Too Soon, and a self-titled feature-length film in 1990. A trilogy of Captain America films starring Chris Evans as the title character was produced as part of the Marvel Cinematic Universe (MCU) in the 2010s: Captain America: The First Avenger (2011), Captain America: The Winter Soldier (2014), and Captain America: Civil War (2016). The character also appeared in the ensemble films The Avengers (2012), Avengers: Age of Ultron (2015), Avengers: Infinity War (2018), and Avengers: Endgame (2019).

The first appearance of Captain America on television was in the 1966 Grantray-Lawrence Animation series The Marvel Super Heroes. The character would make minor appearances in several Marvel animated series in the subsequent decades, including Spider-Man and His Amazing Friends (1981–1983), X-Men: The Animated Series (1992–1997), and The Avengers: United They Stand (1999–2000). Buoyed by increased popularity from the character's appearances in the MCU, Captain America began to take more prominent roles in television series in the 2010s, such as The Avengers: Earth's Mightiest Heroes (2010–2012).

Captain America was the first Marvel character to be adapted into a novel with Captain America: The Great Gold Steal by Ted White, published in 1968.
[ { "paragraph_id": 0, "text": "Captain America is a superhero created by Joe Simon and Jack Kirby who appears in American comic books published by Marvel Comics. The character first appeared in Captain America Comics #1, published on December 20, 1940 by Timely Comics, a corporate predecessor to Marvel. Captain America's civilian identity is Steve Rogers, a frail man enhanced to the peak of human physical perfection by an experimental \"super-soldier serum\" after joining the United States Army to aid the country's efforts in World War II. Equipped with an American flag-inspired costume and a virtually indestructible shield, Captain America and his sidekick Bucky Barnes clashed frequently with the villainous Red Skull and other members of the Axis powers. In the final days of the war, an accident left Captain America frozen in a state of suspended animation until he was revived in modern times. He resumes his exploits as a costumed hero and becomes leader of the superhero team the Avengers, but frequently struggles as a \"man out of time\" to adjust to the new era.", "title": "" }, { "paragraph_id": 1, "text": "The character quickly emerged as Timely's most popular and commercially successful wartime creation upon his original publication, though the popularity of superheroes declined in the post-war period and Captain America Comics was discontinued in 1950. The character saw a short-lived revival in 1953 before returning to comics in 1964, and has since remained in continuous publication. Captain America's creation as an explicitly anti-Nazi figure was a deliberately political undertaking: Simon and Kirby were stridently opposed to the actions of Nazi Germany and supporters of U.S. intervention in World War II, with Simon conceiving of the character specifically in response to the American non-interventionism movement. Political messages have subsequently remained a defining feature of Captain America stories, with writers regularly using the character to comment on the state of American society and government.", "title": "" }, { "paragraph_id": 2, "text": "Having appeared in more than ten thousand stories in more than five thousand media formats, Captain America is one of the most popular and recognized Marvel Comics characters, and has been described as an icon of American popular culture. Though Captain America was not the first United States-themed superhero, he would become the most popular and enduring of the many patriotic American superheroes created during World War II. Captain America was the first Marvel character to appear in a medium outside of comic books, in the 1944 serial film Captain America; the character has subsequently appeared in a variety of films and other media, including the Marvel Cinematic Universe, where he was portrayed by actor Chris Evans from the character's first appearance in Captain America: The First Avenger (2011) to his final appearance in Avengers: Endgame (2019).", "title": "" }, { "paragraph_id": 3, "text": "\"It was a time of deep passion. Hitler was grabbing all of Europe, we had Nazis in America, Nazis holding mass meetings in Madison Square Garden. [...] 
Captain America was created in that atmosphere, he was a natural outgrowth of the passionate mood of the country.\"", "title": "Publication history" }, { "paragraph_id": 4, "text": "– Jack Kirby", "title": "Publication history" }, { "paragraph_id": 5, "text": "In 1940, Timely Comics publisher Martin Goodman responded to the growing popularity of superhero comics – particularly Superman at rival publisher National Comics Publications, the corporate predecessor to DC Comics – by hiring freelancer Joe Simon to create a new superhero for the company. Simon began to develop the character by determining who their nemesis could be, noting that the most successful superheroes were defined by their relationship with a compelling villain, and eventually settled on Adolf Hitler. He rationalized that Hitler was the \"best villain of them all\" as he was \"hated by everyone in the free world\", and that it would be a unique approach for a superhero to face a real-life adversary rather than a fictional one.", "title": "Publication history" }, { "paragraph_id": 6, "text": "This approach was also intentionally political. Simon was stridently opposed to the actions of Nazi Germany and supported U.S. intervention in World War II, and intended the hero to be a response to the American non-interventionism movement. Simon initially considered \"Super American\" for the hero's name, but felt there were already multiple comic book characters with \"super\" in their names. He worked out the details of the character, who was eventually named \"Captain America\", after he completed sketches in consultation with Goodman. The hero's civilian name \"Steve Rogers\" was derived from the telegraphy term \"roger\", meaning \"message received\".", "title": "Publication history" }, { "paragraph_id": 7, "text": "Goodman elected to launch Captain America with his own self-titled comic book, making him the first Timely character to debut with his own ongoing series without having first appeared in an anthology. Simon sought to have Jack Kirby be the primary artist on the series: the two developed a working relationship and friendship in the late 1930s after working together at Fox Feature Syndicate, and had previously developed characters for Timely together. Kirby also shared Simon's pro-intervention views, and was particularly drawn to the character in this regard. Goodman, conversely, wanted a team of artists on the series. It was ultimately determined that Kirby would serve as penciller, with Al Avison and Al Gabriele assisting as inkers; Simon additionally negotiated for himself and Kirby to receive 25 percent of the profits from the comic. Simon regards Kirby as a co-creator of Captain America, stating that \"if Kirby hadn't drawn it, it might not have been much of anything.\"", "title": "Publication history" }, { "paragraph_id": 8, "text": "Captain America Comics #1 was published on December 20, 1940, with a cover date of March 1941. While the front cover of the issue featured Captain America punching Hitler, the comic itself established the Red Skull as Captain America's primary adversary, and also introduced Bucky Barnes as Captain America's teenaged sidekick. Simon stated that he personally regarded Captain America's origin story, in which the frail Steve Rogers becomes a supersoldier after receiving an experimental serum, as \"the weakest part of the character\", and that he and Kirby \"didn't put too much thought into the origin. 
We just wanted to get to the action.\" Kirby designed the series' action scenes with an emphasis on a sense of continuity across panels, saying that he \"choreographed\" the sequences as one would a ballet, with a focus on exaggerated character movement. Kirby's layouts in Captain America Comics are characterized by their distorted perspectives, irregularly shaped panels, and the heavy use of speed lines.", "title": "Publication history" }, { "paragraph_id": 9, "text": "The first issue of Captain America Comics sold out in a matter of days, and the second issue's print run was set at over one million copies. Captain America quickly became Timely's most popular character, with the publisher creating an official Captain America fan club called the \"Sentinels of Liberty\". Circulation figures remained close to a million copies per month after the debut issue, which outstripped even the circulation of news magazines such as Time during the same period. Captain America Comics was additionally one of 189 periodicals that the US Department of War deemed appropriate to distribute to its soldiers without prior screening. The character would also make appearances in several of Timely's other comic titles, including All Winners Comics, Marvel Mystery Comics, U.S.A. Comics, and All Select Comics.", "title": "Publication history" }, { "paragraph_id": 10, "text": "Though Captain America was not the first United States-themed superhero – a distinction that belongs to The Shield at MLJ Comics – he would become the most popular patriotic American superhero of those created during World War II. Captain America's popularity drew a complaint from MLJ that the character's triangular heater shield too closely resembled the chest symbol of The Shield. This prompted Goodman to direct Simon and Kirby to change the design beginning with Captain America Comics #2. The revised round shield went on to become an iconic element of the character; its use as a discus-like throwing weapon originated in a short prose story in Captain America Comics #3, written by Stan Lee in his professional debut as a writer. Timely's publication of Captain America Comics led the company to be targeted with threatening letters and phone calls from the German American Bund, an American Nazi organization. When members began loitering on the streets outside the company's office, police protection was posted and New York mayor Fiorello La Guardia personally contacted Simon and Kirby to guarantee the safety of the publisher's employees.", "title": "Publication history" }, { "paragraph_id": 11, "text": "Simon wrote the first two issues of Captain America Comics before becoming the editor for the series; they were the only Captain America stories he would ever directly write. While Captain America generated acclaim and industry fame for Simon and Kirby, the pair believed that Goodman was withholding the promised percentage of profits for the series, prompting Simon to seek employment for himself and Kirby at National Comics Publications. When Goodman learned of Simon and Kirby's intentions, he effectively fired them from Timely Comics, telling them they were to leave the company after they completed work on Captain America Comics #10. 
The authorship of Captain America Comics was subsequently assumed by a variety of individuals, including Otto Binder, Bill Finger, and Manly Wade Wellman as writers, and Al Avison, Vince Alascia, and Syd Shores as pencilers.", "title": "Publication history" }, { "paragraph_id": 12, "text": "Superhero comics began to decline in popularity in the post-war period. This prompted a variety of attempts to reposition Captain America, including having the character fight gangsters rather than wartime enemies in Captain America Comics #42 (October 1944), appearing as a high school teacher in Captain America Comics #59 (August 1946), and joining Timely's first superhero team, the All-Winners Squad, in All Winners Comics #19 (Fall 1946). The series nevertheless continued to face dwindling sales, and Captain America Comics ended with its 75th issue in February 1950. Horror comics were ascendant as a popular comic genre during this period; in keeping with the trend, the final two issues of Captain America Comics were published under the title Captain America's Weird Tales.", "title": "Publication history" }, { "paragraph_id": 13, "text": "Timely's corporate successor Atlas Comics relaunched the character in 1953 in Young Men #24, where Captain America appears alongside the wartime heroes Human Torch and Toro, which was followed by a revival of Captain America Comics in 1954 written by Stan Lee and drawn by John Romita. In the spirit of the Cold War and McCarthyism, the character was billed as \"Captain America, Commie Smasher\" and faced enemies associated with the Soviet Union. The series was a commercial failure, and was cancelled after just three issues. Romita attributed the series' failure to the changing political climate, particularly the public opposition to the Korean War; the character subsequently fell out of active publication for nearly a decade, with Romita noting that \"for a while, 'Captain America' was a dirty word\".", "title": "Publication history" }, { "paragraph_id": 14, "text": "Captain America made his ostensible return in the anthology Strange Tales #114 (November 1963), published by Atlas' corporate successor Marvel Comics. In an 18-page story written by Lee and illustrated by Kirby, Captain America reemerges following years of apparent retirement, though he is revealed as an impostor who is defeated by Johnny Storm of the Fantastic Four. A caption in the final panel indicates that the story was a \"test\" to gauge interest in a potential return for Captain America; the reader response to the story was enthusiastic, and the character was formally reintroduced in The Avengers #4 (March 1964).", "title": "Publication history" }, { "paragraph_id": 15, "text": "The Avengers #4 retroactively established that Captain America had fallen into the Atlantic Ocean in the final days of World War II, where he spent decades frozen in ice in a state of suspended animation before being found and recovered. Captain America solo stories written by Lee with Kirby as the primary penciller were published in the anthology Tales of Suspense alongside solo stories focused on fellow Avengers member Iron Man beginning in November 1964; the character also appeared in Lee and Kirby's World War II-set Sgt. Fury and his Howling Commandos beginning in December same year. 
These runs introduced and retroactively established several new companions of Captain America, including Nick Fury, Peggy Carter, and Sharon Carter.", "title": "Publication history" }, { "paragraph_id": 16, "text": "In 1966, Joe Simon sued Marvel Comics, asserting that he was legally entitled to renew the copyright on the character upon the expiration of the original 28-year term. The two parties settled out of court, with Simon agreeing to a statement that the character had been created under terms of employment by the publisher, and was therefore work for hire owned by the company. Captain America's self-titled ongoing series was relaunched in April 1968, with Lee as writer and Kirby as penciller; Kirby later departed the series, and was replaced by Gene Colan. In 1969, writer and artist Jim Steranko authored a three-issue run of Captain America. Despite the brevity of Steranko's time on the series, his contributions significantly influenced how Captain America was represented in post-war comics, reestablishing the character's secret identity and introducing a more experimental art style to the series.", "title": "Publication history" }, { "paragraph_id": 17, "text": "\"This was the '70s – prime anti-war years – and here was a guy with a flag on his chest who was supposed to represent what most people distrusted. No one knew what to do with him.\"", "title": "Publication history" }, { "paragraph_id": 18, "text": "– Steve Englehart", "title": "Publication history" }, { "paragraph_id": 19, "text": "In contrast to the character's enthusiastic participation in World War II, comics featuring Captain America rarely broached the topic of the Vietnam War, though the subject of Captain America's potential participation was frequently debated by readers in the letters to the editor section in Captain America. Marvel maintained a position of neutrality on Vietnam; in 1971, Stan Lee wrote in an editorial that a poll indicated that a majority of readers did not want Captain America to be involved in Vietnam, adding that he believed the character \"simply doesn't lend himself to the John Wayne-type character he once was\" and that he could not \"see any of our characters taking on a role of super-patriotism in the world as it is today\".", "title": "Publication history" }, { "paragraph_id": 20, "text": "Captain America stories in the 1970s began to increasingly focus on domestic American political issues, such as poverty, racism, pollution, and political corruption. Captain America #117 (September 1969) introduced The Falcon as the first African-American superhero in mainstream comic books and who would become Captain America's partner; the series was cover titled as Captain America and the Falcon beginning February 1971, which it would maintain for the next seven and a half years. These political shifts were significantly shaped by comics created by writer Steve Englehart and artist Sal Buscema, who joined the series in 1972. In a 1974 storyline written by Englehart directly inspired by the Watergate scandal, Captain America is framed for murder by the fascistic Secret Empire, whose leader is ultimately revealed to be the president of the United States. The incident causes a disillusioned Steve Rogers to briefly drop the moniker of Captain America to become \"Nomad, the man without a country\", though he later vowed to \"reclaim the ideals of America, which its leaders have trampled upon\" and again assumed the role of Captain America. 
Englehart and Buscema's run was highly acclaimed, bringing Captain America from one of Marvel's lowest-selling titles to its top-selling comic, and the conflict between America as it idealizes itself to be and America in reality would recur frequently as a theme in Captain America comics in the subsequent decades.", "title": "Publication history" }, { "paragraph_id": 21, "text": "In 1975, Roy Thomas created the comic book series The Invaders. Set during World War II, the comic focuses on a superhero team composed of Timely's wartime-era superheroes, with Captain America as its leader; Thomas, a fan of stories from the Golden Age of Comic Books, drew inspiration for the series from Timely's All-Winners Squad. Jack Kirby wrote and illustrated run on Captain America and the Falcon from 1975 to 1977. This was followed by issues authored by a number of writers and artists, including Roy Thomas, Donald F. Glut, Roger McKenzie, and Sal Buscema; the series was also re-titled Captain America beginning with issue 223 in 1978.", "title": "Publication history" }, { "paragraph_id": 22, "text": "Owing to the series' lack of a regular writer, Captain America editor Roger Stern and artist John Byrne authored the series from 1980 to 1981. Their run that saw a storyline in which Captain America declines an offer to run for president of the United States. Following Stern and Byrne, Captain America was authored by writer J.M. Dematteis and artist Mike Zeck from 1981 to 1984. Their run featured a year-long storyline in which Captain America faced a crisis of confidence in the face of what Dematteis described as \"Reagan Cold War rhetoric\". The story was originally planned culminate in Captain America #300 with Captain America renunciating violence to become a pacifist; when that ending was rejected by Marvel editor-in-chief Jim Shooter, Dematteis resigned from Captain America in protest.", "title": "Publication history" }, { "paragraph_id": 23, "text": "Writer Mark Gruenwald, editor of Captain America from 1982 to 1985, served as writer on the series from 1985 to 1995. Various artists illustrated the series over the course of Gruenwald's decade-long run, including Paul Neary from 1985 to 1987, and Kieron Dwyer from 1988 to 1990. In contrast to DeMatteis, Gruenwald placed less emphasis on Steve Rogers' life as a civilian, wishing to show \"that Steve Rogers is Captain America first [...] he has no greater needs than being Captain America.\" Among the most significant storylines appearing in Gruenwald's run was \"The Choice\" in 1987, in which Steve Rogers renounces the identity of Captain America to briefly become simply \"The Captain\" after the United States government orders him to continue his superheroic activities directly under their control.", "title": "Publication history" }, { "paragraph_id": 24, "text": "After Gruenwald departed the series, writer Mark Waid and artist Ron Garney began to author Captain America in 1995. Despite early acclaim, including the reintroduction of Captain America's love interest Sharon Carter, their run was terminated after ten issues as a result of Marvel's \"Heroes Reborn\" rebranding in 1996. The rebrand saw artists Jim Lee and Rob Liefeld, who had left the company in the early 1990s to establish Image Comics, return to Marvel to re-imagine several of the company's characters. 
Marvel faced various financial difficulties in the 1990s, culminating in the company filing for Chapter 11 bankruptcy protection in 1996, and \"Heroes Reborn\" was introduced as part of an effort to increase sales. As part of the rebrand, Liefeld illustrated and co-wrote with Jeph Loeb a run on Captain America that was ultimately cancelled after six issues. Marvel stated that the series was cancelled due to low sales, though Liefeld has contended that he was fired after he refused to take a lower pay rate amid Marvel's bankruptcy proceedings. Waid would return to Captain America in 1998, initially with Garney as artist and later with Andy Kubert.", "title": "Publication history" }, { "paragraph_id": 25, "text": "In 1999, Joe Simon filed to claim the copyright to Captain America under a provision of the Copyright Act of 1976 that allows the original creators of works that have been sold to corporations to reclaim them after the original 56-year copyright term has expired. Marvel challenged the claim, arguing that Simon's 1966 settlement made the character ineligible for copyright transfer. Simon and Marvel settled out of court in 2003, in a deal that paid Simon royalties for merchandising and licensing of the character.", "title": "Publication history" }, { "paragraph_id": 26, "text": "Writer and artist Dan Jurgens took over Captain America from Waid in 2000, positioning the character in a world he described as \"more cynical [...] in terms of how we view our government, our politicians and people's motives in general\". In the wake of the September 11 attacks, a new Captain America series written by John Ney Rieber with artwork by John Cassaday was published under the Marvel Knights imprint from 2002 to 2003. The series received criticism for its depiction of Captain America fighting terrorists modeled after Al-Qaeda, though Cassaday contended that the aim of the series was to depict \"the emotions this hero was going through\" in the wake of 9/11, and the \"guilt and anger a man in his position would feel\".", "title": "Publication history" }, { "paragraph_id": 27, "text": "In 2005, Marvel relaunched Captain America in a new volume written by Ed Brubaker and illustrated by Steve Epting. The run saw the publication of \"The Winter Soldier\", which reintroduced Captain America's previously deceased partner Bucky Barnes as a brainwashed cybernetic assassin. Contemporaneously, Captain America was a central character in the 2006 crossover storyline \"Civil War\", written by Mark Millar and penciled by Steve McNiven, which saw the character come into conflict with fellow Avengers member Iron Man over government efforts to regulate superheroes. The character was killed in the 2007 storyline \"The Death of Captain America\" written by Brubaker, which was accompanied by the miniseries Fallen Son: The Death of Captain America written by Jeph Loeb; the character was later revived in the 2009 limited series Captain America: Reborn.
Brubaker's run on Captain America, which ran across various titles until 2012, was critically and commercially acclaimed; Captain America #25 (which contains the character's death) was the best-selling comic of 2007, and Brubaker won the Harvey Award for Best Writer for the series in 2006.", "title": "Publication history" }, { "paragraph_id": 28, "text": "After Brubaker's run on Captain America ended in 2012, a new volume of the series written by Rick Remender was published as part of the Marvel Now! rebranding initiative, which saw Sam Wilson assume the mantle of Captain America in 2014. This was followed by a run written by Nick Spencer beginning in 2016, in which Captain America was replaced by a version of himself later known as \"Hydra Supreme\", loyal to the villainous organization Hydra, culminating in the 2017 crossover event Secret Empire. As part of Marvel's Fresh Start rebrand in 2018, a new Captain America series written by Ta-Nehisi Coates with art by Leinil Francis Yu was published from 2018 to 2021. A new volume of Captain America written by J. Michael Straczynski began publication in September 2023.", "title": "Publication history" }, { "paragraph_id": 29, "text": "As of 2015, Captain America has appeared in more than ten thousand stories in more than five thousand media formats, including comic books, books, and trade publications. The character's origin story has been retold and revised multiple times throughout his editorial history, though its broad details have remained generally consistent. Steven \"Steve\" Rogers was born in the 1920s to an impoverished family on the Lower East Side of New York City. The frail and infirm Rogers attempts to join the U.S. Army in order to fight in the Second World War, but is rejected after being deemed unfit for military service. His resolve is nevertheless noticed by the military, and he is recruited as the first test subject for \"Project Rebirth\", a secret government program that seeks to create super soldiers through the development of the \"Super-Soldier Serum\". Though the serum successfully enhances Rogers to the peak of human physical perfection, a Nazi spy posing as a military observer destroys the remaining supply of the serum and assassinates its inventor, foiling plans to produce additional super soldiers. Rogers is given a patriotic uniform and shield by the American government and becomes the costumed superhero Captain America. He goes on to fight the villainous Red Skull and other members of the Axis powers both domestically and abroad, alongside his sidekick Bucky Barnes and as a member of the Invaders. In the final days of the war, Rogers and Barnes seemingly perish after falling from an experimental drone plane into the northern Atlantic Ocean.", "title": "Characterization" }, { "paragraph_id": 30, "text": "Rogers is found decades later by the superhero team the Avengers, the Super-Soldier Serum having allowed him to survive frozen in a block of ice in a state of suspended animation. Reawakened in modern times, Rogers resumes activities as a costumed hero, joining and later becoming leader of the Avengers. Many of his exploits involve missions undertaken for the Avengers or for S.H.I.E.L.D., an espionage and international law enforcement agency operated by his former war comrade Nick Fury. Through Fury, Rogers befriends Sharon Carter, a S.H.I.E.L.D. agent with whom he eventually begins a partnership and an on-again off-again romance.
He meets and trains Sam Wilson, who becomes the superhero Falcon, and they establish an enduring friendship and partnership. After a conspiracy hatched by the Secret Empire to discredit Rogers is revealed to have been personally orchestrated by the President of the United States, a disillusioned Rogers abandons the mantle of Captain America and assumes the title of \"Nomad\", the \"man without a country\". He eventually re-assumes the title, and later declines an offer from the \"New Populist Party\" to run for president himself. He again abandons the mantle of Captain America to briefly assume the alias of \"The Captain\" when a government commission orders him to work directly for the U.S. government.", "title": "Characterization" }, { "paragraph_id": 31, "text": "In the aftermath of the September 11 attacks, Rogers reveals his secret identity to the world. Following the disbandment of the Avengers, he discovers that Bucky is still alive, having been brainwashed by the Soviets to become the Winter Soldier. Later, in reaction to government efforts to regulate superheroes, Rogers becomes the leader of an underground anti-registration movement that clashes with a pro-registration faction led by fellow Avengers member Iron Man. After significant rancor, he voluntarily surrenders and submits to arrest. At his trial, he is shot and killed by Sharon Carter, whose actions are manipulated by the villainous Dr. Faustus; in his absence, a recovered Bucky assumes the title of Captain America. It is eventually revealed that Rogers did not die, but became displaced in space and time; he is ultimately able to return to the present. He resumes his exploits as a superhero, though his public identity is briefly supplanted by a sleeper agent from the terrorist organization Hydra.", "title": "Characterization" }, { "paragraph_id": 32, "text": "\"Rogers' transformation into Captain America is underwritten by the military. But, perhaps haunted by his own roots in powerlessness, he is a dissident just as likely to be feuding with his superiors in civilian and military governance as he is to be fighting with the supervillain Red Skull. [...] He is 'a man out of time,' a walking emblem of greatest-generation propaganda brought to life in this splintered postmodern time.\"", "title": "Characterization" }, { "paragraph_id": 33, "text": "– Ta-Nehisi Coates", "title": "Characterization" }, { "paragraph_id": 34, "text": "Steve Rogers' personality has shifted across his editorial history, a fact that media scholar J. Richard Stevens sees as a natural consequence of the character being written and re-interpreted by many writers over the span of multiple decades. However, Stevens identifies two aspects of the character's personality that have remained consistent across expressions: his \"uncompromising purity\" and \"his ability to judge the character in others\". Early Captain America stories typically paid little attention to Rogers' civilian identity; in his 1970 book The Steranko History of Comics, Jim Steranko notes that the character was often criticized for being two-dimensional as a result. He argues that this was an intentional device, writing that these critics \"failed to grasp the true implication of his being. Steve Rogers never existed, except perhaps as an abstract device for the convenience of storytelling. 
Captain America was not an embodiment of human characteristics but a pure idea.\"", "title": "Characterization" }, { "paragraph_id": 35, "text": "Following the character's return to comics in the 1960s, many stories gave increased focus to Rogers' civilian identity, particularly his struggles as a \"man out of time\" attempting to adjust to the modern era. Often, stories depict a brooding or melancholic Rogers as he faces both a physical struggle as Captain America and an ideological struggle as Steve Rogers to reconcile his social values with modern times. The character is frequently conflicted by his World War II-era \"good war\" morality being challenged and made anachronistic by the compromising demands of the post-war era.", "title": "Characterization" }, { "paragraph_id": 36, "text": "Prior to Bucky Barnes' return to comics in the 2000s, many Captain America stories centered on Rogers' sense of guilt over Barnes' death. Culture scholar Robert G. Weiner argues that these stories mirror the post-traumatic stress disorder and survivor guilt held by many war veterans, and that this trauma distinguishes the character from other well-known superheroes such as Batman and Spider-Man: while those characters became heroes because of a traumatic incident, Rogers carries on as a hero in spite of a traumatic incident, with Weiner asserting that this reinforces the nobility of the character.", "title": "Characterization" }, { "paragraph_id": 37, "text": "Though Marvel has historically trended away from making overt partisan statements in the post-war period, writers have nevertheless used Captain America to comment on the state of American society and government at particular moments in history. For example, the conspiracy storyline of \"Secret Empire\" reflected what writer Steve Englehart saw as broad disillusionment with American institutions in the wake of the Vietnam War and the Watergate scandal, the \"Streets of Poison\" storyline by Mark Gruenwald in the 1990s was intended to address anxieties around the drug trade and debates on the war on drugs, and \"Civil War\" by Mark Millar was widely interpreted as an allegory for the Patriot Act and post-9/11 debates on the balance between national security and civil liberties. While the ideological orientation of Captain America stories has shifted in response to changing social and political attitudes, Stevens notes that a central component of Captain America's mythology is that the character himself does not change: when the character's attitudes have shifted, the change is consistently framed as an evolution or a new understanding of his previously-held ideals. Stevens argues that the character's seemingly paradoxical steadfastness is reflective of \"the language of comics, where continuity is continually updated to fit the needs of the serialized present.\"", "title": "Characterization" }, { "paragraph_id": 38, "text": "Despite his status as a patriotic superhero, Captain America is rarely depicted as an overly jingoistic figure. Stevens writes that the character's \"patriotism is more focused on the universal rights of man as expressed through the American Dream\" rather than \"a position championing the specific cultural or political goals of the United States.\" Weiner concurs that the character \"embodies what America strives to be, not what it sometimes is\".
Dittmer agrees that while the character sees himself \"as the living embodiment of the American Dream (rather than a tool of the state)\", his status as a patriotic superhero nevertheless tethers him to American foreign policy and hegemony. He argues that Captain America tends to skew away from interventionist actions at moments when the United States is undertaking policies that its critics deem imperialist, specifically citing the character's non-participation in the Vietnam and Iraq wars, and argues that the character's inconsistent position on the use of deadly force across his editorial history \"is perhaps a tacit acknowledgment of the violence, or the threat of violence, at the heart of American hegemony.\"", "title": "Characterization" }, { "paragraph_id": 39, "text": "\"Cap is one of the hardest hero characters to write, because the writer cannot use some exotic super-power to make his episodes seem colorful. [...] All he has to serve him are his extraordinary combat skills, his shield, and his unquenchable love for freedom and justice.\"", "title": "Powers, abilities, and equipment" }, { "paragraph_id": 40, "text": "– Stan Lee", "title": "Powers, abilities, and equipment" }, { "paragraph_id": 41, "text": "Captain America possesses no superpowers, though the Super-Soldier Serum has enhanced his body's strength, speed, agility, endurance, reflexes, reaction time, and natural self-healing ability to the peak of human physical perfection. He is additionally an expert tactician and field commander, and has achieved mastery in a variety of hand-to-hand combat styles, including boxing and judo. The precise parameters of Captain America's physical prowess vary across stories due to editorial dictates and artistic license taken by authors; Steve Englehart was given an editorial order to give the character superhuman strength in the 1970s, but the change did not remain permanent and was soon forgotten. Steve Rogers is also a skilled visual artist, having worked as a commercial illustrator prior to joining the military, and several storylines have depicted the character working as a freelance artist.", "title": "Powers, abilities, and equipment" }, { "paragraph_id": 42, "text": "The basic design of Captain America's costume has remained largely consistent from its original incarnation in the 1940s. Designed by Joe Simon, the costume is based on the United States flag, with Simon likening the character's appearance to that of \"a modern-day crusader\", featuring chain mail armor and a helmet adorned with wings in reference to the Roman god Mercury. Steve Rogers has worn other costumes when he has adopted alternate superhero alter egos: as Nomad he wears a domino mask and a black and gold suit that is cut to expose his bare chest and stomach, and as The Captain he wears a modified version of the Captain America suit with a red, white, and black design.", "title": "Powers, abilities, and equipment" }, { "paragraph_id": 43, "text": "Captain America's shield is the character's primary piece of equipment. It is a round shield with a design featuring a white star on a blue circle surrounded by red and white rings. First appearing in Captain America Comics #1 as a triangular heater shield, beginning in Captain America Comics #2 it was changed to its current circular design due to a complaint from MLJ Comics that the original design too closely resembled the chest symbol of their superhero The Shield.
The shield is depicted as constructed from an alloy of vibranium and adamantium, two highly resilient fictional metals appearing in Marvel comic books. It is portrayed as both a virtually indestructible defensive object and a highly aerodynamic offensive weapon: when thrown, it is capable of ricocheting off multiple surfaces and returning to the original thrower.", "title": "Powers, abilities, and equipment" }, { "paragraph_id": 44, "text": "Captain America's first sidekick was Bucky Barnes, introduced in Captain America Comics #1 as the teenaged \"mascot\" of Steve Rogers' regiment. He is made Captain America's partner in that same issue after accidentally discovering the character's secret identity. Joe Simon described Bucky's creation as being largely motivated by a need to give Captain America \"someone to talk to\" and avoid the overuse of dialogue delivered through internal monologue, noting that \"Bucky was brought in as a way of eliminating too many thought balloons.\" Bucky was retroactively established as having been killed in the same accident that left Captain America frozen in suspended animation; the character remained deceased for many decades, contrasting the typically ephemeral nature of comic book deaths, until he returned in 2005 as the Winter Soldier. Initially reintroduced as a brainwashed assassin and antagonist to Captain America, Bucky later had his memories and personality restored and was re-established as an ally to Steve Rogers. Rick Jones briefly assumed the role of Captain America's sidekick and the public identity of Bucky following Captain America's return to comics in the 1960s.", "title": "Supporting cast" }, { "paragraph_id": 45, "text": "In 1969, Sam Wilson was introduced as the superhero Falcon and later became Captain America's sidekick, making the characters the first interracial superhero duo in American comic books. Possessing the power to communicate with birds, Wilson is initially depicted as a former social worker living in Harlem, though this identity is revealed to be the result of memories implanted by the Red Skull. He later receives a winged suit from the superhero Black Panther that enables him to fly. Other characters who have served as Rogers' sidekick include Golden Girl (Betsy Ross), Demolition Man (Dennis Dunphy), Jack Flag (Jack Harrison), and Free Spirit (Cathy Webster).", "title": "Supporting cast" }, { "paragraph_id": 46, "text": "Over the course of several decades, writers and artists have established a rogues' gallery of supervillains to face Captain America. The character's primary archenemy is the Red Skull, introduced in the character's earliest stories as an apprentice to Adolf Hitler. Just as the Red Skull represents Nazism, many of Captain America's villains represent specific ideologies or political formations: for example, the Serpent Society represents labor unionism, and Flag-Smasher represents anti-nationalism. The political character of Captain America's enemies has shifted over time: the character fought enemies associated with communism during his brief revival in the 1950s before shifting back to Nazi antagonists in the mid-1960s, while comics since 9/11 have frequently depicted the character facing terrorist villains.", "title": "Supporting cast" }, { "paragraph_id": 47, "text": "Steve Rogers' first love interest was Betsy Ross, introduced in his World War II-era comics as a member of the Women's Army Corps who later became the costumed superhero Golden Girl.
Peggy Carter, an American member of the French Resistance, was retroactively established in comics published in the 1960s as another of Rogers' wartime lovers. When Rogers is revived in the post-war era, he begins a partnership and on-again off-again relationship with S.H.I.E.L.D. agent Sharon Carter; introduced as Peggy's younger sister, she was later retconned as Peggy's grandniece to reflect Marvel's floating timeline. In comics published in the 1980s, Rogers dated and became engaged to civilian Bernie Rosenthal, though they ended their relationship amicably after Bernie left New York to attend law school. In the 1990s, Rogers had a romantic entanglement with the alternately villainous and antiheroic Diamondback, a member of the Serpent Society.", "title": "Supporting cast" }, { "paragraph_id": 48, "text": "The title of \"Captain America\" has been used by other characters in the Marvel Universe in addition to Steve Rogers, including William Naslund, Jeffrey Mace, and William Burnside. John Walker, also known as U.S. Agent, was introduced as a villainous Captain America in 1988, and Isaiah Bradley was established in the 2003 limited series Truth: Red, White & Black as an African American man who acquired superpowers after being used as a test subject for the Super-Soldier Serum. Rogers' sidekicks Bucky Barnes and Sam Wilson have also alternately held the title of Captain America: Barnes in 2008 following Rogers' death in 2007, and Wilson following Marvel's 2012 rebranding campaign Marvel Now!. Within the multiverse of parallel universes that compose the Marvel Universe, there are many variations of Steve Rogers and Captain America; this includes Marvel's Ultimate Comics universe, which possesses its own version of Steve Rogers that is more overtly politically conservative.", "title": "Supporting cast" }, { "paragraph_id": 49, "text": "\"Over the years, Captain America's story has accurately reflected U.S. attitudes, as our country moved from the self-confidence of the early Cold War to the guilt-ridden angst of the 1970s to the revival of national pride that characterized the Reagan 1980s.\"", "title": "Cultural impact and legacy" }, { "paragraph_id": 50, "text": "– Jacob Heilbrun, The Los Angeles Times", "title": "Cultural impact and legacy" }, { "paragraph_id": 51, "text": "Captain America is one of the most popular and widely recognized Marvel Comics characters, and has been described as an icon of American popular culture. He is the most well-known and enduring of the United States-themed superheroes to emerge from the Second World War and inspired a proliferation of patriotic-themed superheroes in American comic books during the 1940s. This included the American Crusader, the Spirit of '76, Yank & Doodle, Captain Flag, and Captain Courageous, among numerous others. Though none would achieve Captain America's commercial success, the volume of Captain America imitators was such that three months after the character's debut, Timely published a statement indicating that \"there is only one Captain America\" and warning that they would take legal action against publishers that infringed on the character. 
After being dismissed from Timely, Joe Simon and Jack Kirby would themselves create a new patriotic superhero, the Fighting American, for Prize Comics in 1954; the Fighting American became the subject of a lawsuit from Marvel in the 1990s after Rob Liefeld attempted to revive the character following his own departure from Marvel.", "title": "Cultural impact and legacy" }, { "paragraph_id": 52, "text": "When the character was killed in 2007, he was eulogized in numerous mainstream media outlets, including The New York Times and The Los Angeles Times, with the former describing him as a \"national hero\". In 2011, Captain America placed sixth on IGN's \"Top 100 Comic Book Heroes of All Time\", and second in their 2012 list of \"The Top 50 Avengers\". Gizmodo and Entertainment Weekly respectively ranked Captain America first and second in their 2015 rankings of Avengers characters. Empire ranked Captain America as the 21st greatest comic book character of all time.", "title": "Cultural impact and legacy" }, { "paragraph_id": 53, "text": "Captain America has appeared in a variety of adapted, spin-off, and licensed media, including films, cartoons, video games, toys, clothing, and books. The first appearance of Captain America in a medium outside of comic books was in the 1944 serial film Captain America, which was also the first piece of non-comics media to feature a Marvel Comics character. The character later appeared in two made-for-TV films in 1979, Captain America and Captain America II: Death Too Soon, and a self-titled feature-length film in 1990. A trilogy of Captain America films starring Chris Evans as the title character was produced as part of the Marvel Cinematic Universe (MCU) in the 2010s: Captain America: The First Avenger (2011), Captain America: The Winter Soldier (2014), and Captain America: Civil War (2016). The character also appeared in the ensemble films The Avengers (2012), Avengers: Age of Ultron (2015), Avengers: Infinity War (2018), and Avengers: Endgame (2019).", "title": "In other media" }, { "paragraph_id": 54, "text": "The first appearance of Captain America on television was in the 1966 Grantray-Lawrence Animation series The Marvel Super Heroes. The character would make minor appearances in several Marvel animated series in the subsequent decades, including Spider-Man and His Amazing Friends (1981–1983), X-Men: The Animated Series (1992–1997), and The Avengers: United They Stand (1999–2000). Buoyed by increased popularity from the character's appearances in the MCU, Captain America began appearing in more prominent television roles in the 2010s, such as The Avengers: Earth's Mightiest Heroes (2010–2012). Captain America was the first Marvel character to be adapted into a novel with Captain America: The Great Gold Steal by Ted White, published in 1968.", "title": "In other media" } ]
Captain America is a superhero created by Joe Simon and Jack Kirby who appears in American comic books published by Marvel Comics. The character first appeared in Captain America Comics #1, published on December 20, 1940 by Timely Comics, a corporate predecessor to Marvel. Captain America's civilian identity is Steve Rogers, a frail man enhanced to the peak of human physical perfection by an experimental "super-soldier serum" after joining the United States Army to aid the country's efforts in World War II. Equipped with an American flag-inspired costume and a virtually indestructible shield, Captain America and his sidekick Bucky Barnes clashed frequently with the villainous Red Skull and other members of the Axis powers. In the final days of the war, an accident left Captain America frozen in a state of suspended animation until he was revived in modern times. He resumes his exploits as a costumed hero and becomes leader of the superhero team the Avengers, but frequently struggles as a "man out of time" to adjust to the new era. The character quickly emerged as Timely's most popular and commercially successful wartime creation upon his original publication, though the popularity of superheroes declined in the post-war period and Captain America Comics was discontinued in 1950. The character saw a short-lived revival in 1953 before returning to comics in 1964, and has since remained in continuous publication. Captain America's creation as an explicitly anti-Nazi figure was a deliberately political undertaking: Simon and Kirby were stridently opposed to the actions of Nazi Germany and supporters of U.S. intervention in World War II, with Simon conceiving of the character specifically in response to the American non-interventionism movement. Political messages have subsequently remained a defining feature of Captain America stories, with writers regularly using the character to comment on the state of American society and government. Having appeared in more than ten thousand stories in more than five thousand media formats, Captain America is one of the most popular and recognized Marvel Comics characters, and has been described as an icon of American popular culture. Though Captain America was not the first United States-themed superhero, he would become the most popular and enduring of the many patriotic American superheroes created during World War II. Captain America was the first Marvel character to appear in a medium outside of comic books, in the 1944 serial film Captain America; the character has subsequently appeared in a variety of films and other media, including the Marvel Cinematic Universe, where he was portrayed by actor Chris Evans from the character's first appearance in Captain America: The First Avenger (2011) to his final appearance in Avengers: Endgame (2019).
2002-02-25T15:51:15Z
2023-12-25T02:26:09Z
[ "Template:Navboxes", "Template:Further", "Template:Notelist", "Template:Clear", "Template:Cite web", "Template:Official website", "Template:Use mdy dates", "Template:Multiple image", "Template:Authority control", "Template:Sfn", "Template:Efn", "Template:Sister project links", "Template:Comicbookdb", "Template:Portal bar", "Template:Refend", "Template:Captain America", "Template:Short description", "Template:Good article", "Template:Infobox comics character", "Template:Cite news", "Template:Cite journal", "Template:Pp-semi-indef", "Template:Cite book", "Template:Redirect", "Template:See also", "Template:Captain America characters", "Template:Quote box", "Template:Main", "Template:As of", "Template:Reflist", "Template:Refbegin" ]
https://en.wikipedia.org/wiki/Captain_America
7,730
Cyclops (disambiguation)
A Cyclops is a one-eyed monster in Greek mythology. Cyclops or The Cyclops may also refer to:
[ { "paragraph_id": 0, "text": "A Cyclops is a one-eyed monster in Greek mythology.", "title": "" }, { "paragraph_id": 1, "text": "Cyclops or The Cyclops may also refer to:", "title": "" } ]
A Cyclops is a one-eyed monster in Greek mythology. Cyclops or The Cyclops may also refer to:
2002-01-12T20:10:01Z
2023-11-10T13:13:44Z
[ "Template:Wiktionary", "Template:TOC right", "Template:SS", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Cyclops_(disambiguation)
7,731
Christian countercult movement
The Christian countercult movement or the Christian anti-cult movement is a social movement among certain Protestant evangelical and fundamentalist and other Christian ministries ("discernment ministries") and individual activists who oppose religious sects that they consider cults. Christian countercult activism mainly stems from evangelicalism or fundamentalism. The countercult movement asserts that particular Christian sects are erroneous because their beliefs are not in accordance with the teachings of the Bible. It also states that a religious sect can be considered a cult if its beliefs involve a denial of any of the essential Christian teachings (such as salvation, the Trinity, Jesus himself as a person, the ministry and miracles of Jesus, his crucifixion, his resurrection, the Second Coming and the Rapture). Countercult ministries often concern themselves with religious sects that consider themselves Christian but hold beliefs that are thought to contradict the teachings of the Bible. Such sects may include: The Church of Jesus Christ of Latter-day Saints, the Unification Church, Christian Science, Jehovah's Witnesses, and the New Apostolic Reformation. Some Protestants classify the Catholic Church as a cult. Some also denounce non-Christian religions such as Islam, Wicca, Paganism, New Age groups, Buddhism, Hinduism and other religions like UFO religions. Countercult literature usually expresses specific doctrinal or theological concerns and it also has a missionary or apologetic purpose. It presents a rebuttal by emphasizing the teachings of the Bible against the beliefs of non-fundamental Christian sects. Christian countercult activist writers also emphasize the need for Christians to evangelize to followers of cults. Some Christians also share concerns similar to those of the secular anti-cult movement. The movement publishes its views through a variety of media, including books, magazines, and newsletters, radio broadcasting, audio and video cassette production, direct-mail appeals, proactive evangelistic encounters, professional and avocational websites, as well as lecture series, training workshops and counter-cult conferences. Christians have applied theological criteria to assess the teachings of non-orthodox movements throughout church history. The Apostles themselves were involved in challenging the doctrines and claims of various teachers. The Apostle Paul wrote an entire epistle, Galatians, antagonistic to the teachings of a Jewish sect that claimed adherence to the teachings of both Jesus and Moses (cf. Acts 15 and Gal. 1:6–10). The First Epistle of John is devoted to countering early proto-Gnostic cults that had arisen in the first century CE, all claiming to be Christian (1 John 2:19). The early Church in the post-apostolic period was much more involved in "defending its frontiers against alternative soteriologies—either by defining its own position with greater and greater exactness, or by attacking other religions, and particularly the Hellenistic mysteries." In fact, a good deal of the early Christian literature is devoted to the exposure and refutation of unorthodox theology, mystery religions and Gnostic groups. Irenaeus, Tertullian and Hippolytus of Rome were some of the early Christian apologists who engaged in critical analyses of unorthodox theology, Greco-Roman pagan religions, and Gnostic groups. 
In the Protestant tradition, some of the earliest writings opposing unorthodox groups (such as the Swedenborgians) can be traced back to John Wesley, Alexander Campbell and Princeton Theological Seminary theologians like Charles Hodge and B. B. Warfield. The first known usage of the term cult by a Protestant apologist to denote a group as heretical or unorthodox is in Anti-Christian Cults by A. H. Barrington, published in 1898. Quite a few of the pioneering apologists were Baptist pastors, like I. M. Haldeman, or participants in the Plymouth Brethren, like William C. Irvine and Sydney Watson. Watson wrote a series of didactic novels like Escaped from the Snare: Christian Science, Bewitched by Spiritualism, and The Gilded Lie (Millennial Dawnism), as warnings of the dangers posed by cultic groups. Watson's use of fiction to counter the cults has been repeated by later novelists like Frank E. Peretti. The early twentieth-century apologists generally applied the words heresy and sects to groups like the Christadelphians, Mormons, Jehovah's Witnesses, Spiritualists, and Theosophists. This was reflected in several chapters contributed to the multi-volume work released in 1915 The Fundamentals, where apologists criticized the teachings of Charles Taze Russell, Mary Baker Eddy, the Mormons and Spiritualists. Since the 1940s, the approach of traditional Christians has been to apply the meaning of cult such that it includes those religious groups who use other scriptures beside the Bible or have teachings and practices deviating from traditional Christian teachings and practices. Some examples of sources (with published dates where known) that documented this approach are: One of the first prominent countercult apologists was Jan Karel van Baalen (1890–1968), an ordained minister in the Christian Reformed Church in North America. His book The Chaos of Cults, which was first published in 1938, became a classic in the field as it was repeatedly revised and updated until 1962. Historically, one of the most important protagonists of the movement was Walter Martin (1928–1989), whose numerous books include the 1955 The Rise of the Cults: An Introductory Guide to the Non-Christian Cults and the 1965 The Kingdom of the Cults: An Analysis of Major Cult Systems in the Present Christian Era, which continues to be influential. He became well-known in conservative Christian circles through a radio program, "The Bible Answer Man", currently hosted by Hank Hanegraaff. In The Rise of the Cults, Martin gave the following definition of a cult: By cultism we mean the adherence to doctrines which are pointedly contradictory to orthodox Christianity and which yet claim the distinction of either tracing their origin to orthodox sources or of being in essential harmony with those sources. Cultism, in short, is any major deviation from orthodox Christianity relative to the cardinal doctrines of the Christian faith. As Martin's definition suggests, the countercult ministries concentrate on non-traditional groups that claim to be Christian, so chief targets have been Jehovah's Witnesses, Armstrongism, Christian Science and the Unification Church, but also smaller groups like the Swedenborgian Church. Various other conservative Christian leaders—among them John Ankerberg and Norman Geisler—have emphasized themes similar to Martin's.
Perhaps more importantly, numerous other well-known conservative Christian leaders as well as many conservative pastors have accepted Martin's definition of a cult as well as his understanding of the groups to which he gave that label. Dave Breese summed up this kind of definition in these words: A cult is a religious perversion. It is a belief and practice in the world of religion which calls for devotion to a religious view or leader centered in false doctrine. It is an organized heresy. A cult may take many forms but it is basically a religious movement which distorts or warps orthodox faith to the point where truth becomes perverted into a lie. A cult is impossible to define except against the absolute standard of the teaching of Holy Scripture. Kenne "Ken" Silva is said by other discernment bloggers to have pioneered online discernment ministry. Silva was a Baptist pastor who ran the discernment blog "Apprising". He wrote many blog articles about the Emerging Church, the Word of Faith Movement, the Jehovah's Witnesses, the Gay Christian Movement, and many other groups. He started his blog in 2005 and wrote there until his death in 2014. Silva's work paved the way for other internet discernment ministries such as Pirate Christian Radio, a group of blogs and podcasts founded by Lutheran pastor Chris Rosebrough in 2008, and Pulpit & Pen, a discernment blog founded by Baptist pastor and polemicist J.D. Hall. Since the 1980s, the terms new religions and new religious movements have slowly entered into evangelical usage alongside the word cult. Some book titles use both terms. The acceptance of these alternatives to the word cult in evangelicalism reflects, in part, the wider usage of such language in the sociology of religion. The term countercult apologetics first appeared in Protestant evangelical literature as a self-designation in the late 1970s and early 1980s in articles by Ronald Enroth and David Fetcho, and by Walter Martin in Martin Speaks Out on the Cults. A mid-1980s debate about apologetic methodology between Ronald Enroth and J. Gordon Melton led the latter to place more emphasis in his publications on differentiating the Christian countercult from the secular anti-cult. Eric Pement urged Melton to adopt the label "Christian countercult", and since the early 1990s the term has entered into popular usage and is recognized by sociologists such as Douglas Cowan. The only existing umbrella organization within the countercult movement in the United States is the EMNR (Evangelical Ministries to New Religions), founded in 1982 by Martin, Enroth, Gordon Lewis, and James Bjornstad. While the greatest number of countercult ministries are found in the United States, ministries exist in Australia, Brazil, Canada, Denmark, England, Ethiopia, Germany, Hungary, Italy, Mexico, New Zealand, Philippines, Romania, Russia, Sweden, and Ukraine. A comparison between the methods employed in the United States and other nations discloses some similarities in emphasis, but also some differences. The similarities are that globally these ministries share a common concern about the evangelization of people in cults and new religions. There is also often a common thread of comparing orthodox doctrines and biblical passages with the teachings of the groups under examination. In some of the European and southern hemisphere contexts, however, confrontational methods of engagement are not always relied on, and dialogical approaches are sometimes advocated.
A group of organizations that originated within the context of established religion is working in more general fields of "cult awareness," especially in Europe. Their leaders are theologians, and they are often social ministries affiliated with large churches. The phenomenon of cults has also entered into the discourses of Christian missions and theology of religions. An initial step in this direction occurred in 1980 when the Lausanne Committee for World Evangelization convened a mini-consultation in Thailand. From that consultation a position paper was produced. The issue was revisited at the Lausanne Forum in 2004 with another paper. The latter paper adopts a different methodology from that advocated in 1980. In the 1990s, discussions in academic missions and theological journals indicated that another trajectory was emerging, one reflecting the influence of contextual missions theory. Advocates of this approach maintain that apologetics as a tool needs to be retained, but do not favor a confrontational style of engagement. Countercult apologetics has several variations and methods employed in analyzing and responding to cults. The different nuances in countercult apologetics have been discussed by John A. Saliba and Philip Johnson. The dominant method is the emphasis on detecting unorthodox or heretical doctrines and contrasting those with orthodox interpretations of the Bible and early creedal documents. Some apologists, such as Francis J. Beckwith, have emphasized a philosophical approach, pointing out logical, epistemological and metaphysical problems within the teachings of a particular group. Another approach involves former members of cultic groups recounting their spiritual autobiographies, which highlight experiences of disenchantment with the group, unanswered questions and doubts about commitment to the group, culminating in the person's conversion to evangelical Christianity. Apologists like Dave Hunt in Peace, Prosperity and the Coming Holocaust and Hal Lindsey in The Terminal Generation have tended to interpret the phenomena of cults as part of the burgeoning evidence of signs that Christ's Second Advent is close at hand. Both Hunt and Constance Cumbey have applied a conspiracy model to interpreting the emergence of New Age spirituality and linking that to speculations about fulfilled prophecies heralding Christ's reappearance.
[ { "paragraph_id": 0, "text": "The Christian countercult movement or the Christian anti-cult movement is a social movement among certain Protestant evangelical and fundamentalist and other Christian ministries (\"discernment ministries\") and individual activists who oppose religious sects that they consider cults.", "title": "" }, { "paragraph_id": 1, "text": "Christian countercult activism mainly stems from evangelicalism or fundamentalism. The countercult movement asserts that particular Christian sects are erroneous because their beliefs are not in accordance with the teachings of the Bible. It also states that a religious sect can be considered a cult if its beliefs involve a denial of any of the essential Christian teachings (such as salvation, the Trinity, Jesus himself as a person, the ministry and miracles of Jesus, his crucifixion, his resurrection, the Second Coming and the Rapture).", "title": "Overview" }, { "paragraph_id": 2, "text": "Countercult ministries often concern themselves with religious sects that consider themselves Christian but hold beliefs that are thought to contradict the teachings of the Bible. Such sects may include: The Church of Jesus Christ of Latter-day Saints, the Unification Church, Christian Science, Jehovah's Witnesses, and the New Apostolic Reformation. Some Protestants classify the Catholic Church as a cult. Some also denounce non-Christian religions such as Islam, Wicca, Paganism, New Age groups, Buddhism, Hinduism and other religions like UFO religions.", "title": "Overview" }, { "paragraph_id": 3, "text": "Countercult literature usually expresses specific doctrinal or theological concerns and it also has a missionary or apologetic purpose. It presents a rebuttal by emphasizing the teachings of the Bible against the beliefs of non-fundamental Christian sects. Christian countercult activist writers also emphasize the need for Christians to evangelize to followers of cults. Some Christians also share concerns similar to those of the secular anti-cult movement.", "title": "Overview" }, { "paragraph_id": 4, "text": "The movement publishes its views through a variety of media, including books, magazines, and newsletters, radio broadcasting, audio and video cassette production, direct-mail appeals, proactive evangelistic encounters, professional and avocational websites, as well as lecture series, training workshops and counter-cult conferences.", "title": "Overview" }, { "paragraph_id": 5, "text": "Christians have applied theological criteria to assess the teachings of non-orthodox movements throughout church history. The Apostles themselves were involved in challenging the doctrines and claims of various teachers. The Apostle Paul wrote an entire epistle, Galatians, antagonistic to the teachings of a Jewish sect that claimed adherence to the teachings of both Jesus and Moses (cf. Acts 15 and Gal. 1:6–10). 
The First Epistle of John is devoted to countering early proto-Gnostic cults that had arisen in the first century CE, all claiming to be Christian (1 John 2:19).", "title": "History" }, { "paragraph_id": 6, "text": "The early Church in the post-apostolic period was much more involved in \"defending its frontiers against alternative soteriologies—either by defining its own position with greater and greater exactness, or by attacking other religions, and particularly the Hellenistic mysteries.\" In fact, a good deal of the early Christian literature is devoted to the exposure and refutation of unorthodox theology, mystery religions and Gnostic groups. Irenaeus, Tertullian and Hippolytus of Rome were some of the early Christian apologists who engaged in critical analyses of unorthodox theology, Greco-Roman pagan religions, and Gnostic groups.", "title": "History" }, { "paragraph_id": 7, "text": "In the Protestant tradition, some of the earliest writings opposing unorthodox groups (such as the Swedenborgians) can be traced back to John Wesley, Alexander Campbell and Princeton Theological Seminary theologians like Charles Hodge and B. B. Warfield. The first known usage of the term cult by a Protestant apologist to denote a group as heretical or unorthodox is in Anti-Christian Cults by A. H. Barrington, published in 1898.", "title": "History" }, { "paragraph_id": 8, "text": "Quite a few of the pioneering apologists were Baptist pastors, like I. M. Haldeman, or participants in the Plymouth Brethren, like William C. Irvine and Sydney Watson. Watson wrote a series of didactic novels like Escaped from the Snare: Christian Science, Bewitched by Spiritualism, and The Gilded Lie (Millennial Dawnism), as warnings of the dangers posed by cultic groups. Watson's use of fiction to counter the cults has been repeated by later novelists like Frank E. Peretti.", "title": "History" }, { "paragraph_id": 9, "text": "The early twentieth-century apologists generally applied the words heresy and sects to groups like the Christadelphians, Mormons, Jehovah's Witnesses, Spiritualists, and Theosophists. This was reflected in several chapters contributed to the multi-volume work released in 1915 The Fundamentals, where apologists criticized the teachings of Charles Taze Russell, Mary Baker Eddy, the Mormons and Spiritualists.", "title": "History" }, { "paragraph_id": 10, "text": "Since the 1940s, the approach of traditional Christians has been to apply the meaning of cult such that it includes those religious groups who use other scriptures beside the Bible or have teachings and practices deviating from traditional Christian teachings and practices. Some examples of sources (with published dates where known) that documented this approach are:", "title": "History" }, { "paragraph_id": 11, "text": "One of the first prominent countercult apologists was Jan Karel van Baalen (1890–1968), an ordained minister in the Christian Reformed Church in North America. His book The Chaos of Cults, which was first published in 1938, became a classic in the field as it was repeatedly revised and updated until 1962.", "title": "History" }, { "paragraph_id": 12, "text": "Historically, one of the most important protagonists of the movement was Walter Martin (1928–1989), whose numerous books include the 1955 The Rise of the Cults: An Introductory Guide to the Non-Christian Cults and the 1965 The Kingdom of the Cults: An Analysis of Major Cult Systems in the Present Christian Era, which continues to be influential.
He became well-known in conservative Christian circles through a radio program, \"The Bible Answer Man\", currently hosted by Hank Hanegraaff.", "title": "History" }, { "paragraph_id": 13, "text": "In The Rise of the Cults, Martin gave the following definition of a cult:", "title": "History" }, { "paragraph_id": 14, "text": "By cultism we mean the adherence to doctrines which are pointedly contradictory to orthodox Christianity and which yet claim the distinction of either tracing their origin to orthodox sources or of being in essential harmony with those sources. Cultism, in short, is any major deviation from orthodox Christianity relative to the cardinal doctrines of the Christian faith.", "title": "History" }, { "paragraph_id": 15, "text": "As Martin's definition suggests, the countercult ministries concentrate on non-traditional groups that claim to be Christian, so chief targets have been Jehovah's Witnesses, Armstrongism, Christian Science and the Unification Church, but also smaller groups like the Swedenborgian Church.", "title": "History" }, { "paragraph_id": 16, "text": "Various other conservative Christian leaders—among them John Ankerberg and Norman Geisler—have emphasized themes similar to Martin's. Perhaps more importantly, numerous other well-known conservative Christian leaders as well as many conservative pastors have accepted Martin's definition of a cult as well as his understanding of the groups to which he gave that label. Dave Breese summed up this kind of definition in these words:", "title": "History" }, { "paragraph_id": 17, "text": "A cult is a religious perversion. It is a belief and practice in the world of religion which calls for devotion to a religious view or leader centered in false doctrine. It is an organized heresy. A cult may take many forms but it is basically a religious movement which distorts or warps orthodox faith to the point where truth becomes perverted into a lie. A cult is impossible to define except against the absolute standard of the teaching of Holy Scripture.", "title": "History" }, { "paragraph_id": 18, "text": "Kenne \"Ken\" Silva is said by other discernment bloggers to have pioneered online discernment ministry. Silva was a Baptist pastor who ran the discernment blog \"Apprising\". He wrote many blog articles about the Emerging Church, the Word of Faith Movement, the Jehovah's Witnesses, the Gay Christian Movement, and many other groups. He started his blog in 2005 and wrote there until his death in 2014.", "title": "History" }, { "paragraph_id": 19, "text": "Silva's work paved the way for other internet discernment ministries such as Pirate Christian Radio, a group of blogs and podcasts founded by Lutheran pastor Chris Rosebrough in 2008, and Pulpit & Pen, a discernment blog founded by Baptist pastor and polemicist J.D. Hall.", "title": "History" }, { "paragraph_id": 20, "text": "Since the 1980s, the terms new religions and new religious movements have slowly entered into evangelical usage alongside the word cult.
Some book titles use both terms.", "title": "Other technical terminology" }, { "paragraph_id": 21, "text": "The acceptance of these alternatives to the word cult in evangelicalism reflects, in part, the wider usage of such language in the sociology of religion.", "title": "Other technical terminology" }, { "paragraph_id": 22, "text": "The term countercult apologetics first appeared in Protestant evangelical literature as a self-designation in the late 1970s and early 1980s in articles by Ronald Enroth and David Fetcho, and by Walter Martin in Martin Speaks Out on the Cults. A mid-1980s debate about apologetic methodology between Ronald Enroth and J. Gordon Melton led the latter to place more emphasis in his publications on differentiating the Christian countercult from the secular anti-cult. Eric Pement urged Melton to adopt the label \"Christian countercult\", and since the early 1990s the term has entered into popular usage and is recognized by sociologists such as Douglas Cowan.", "title": "Apologetics" }, { "paragraph_id": 23, "text": "The only existing umbrella organization within the countercult movement in the United States is the EMNR (Evangelical Ministries to New Religions), founded in 1982 by Martin, Enroth, Gordon Lewis, and James Bjornstad.", "title": "Apologetics" }, { "paragraph_id": 24, "text": "While the greatest number of countercult ministries are found in the United States, ministries exist in Australia, Brazil, Canada, Denmark, England, Ethiopia, Germany, Hungary, Italy, Mexico, New Zealand, Philippines, Romania, Russia, Sweden, and Ukraine. A comparison between the methods employed in the United States and other nations discloses some similarities in emphasis, but also some differences. The similarities are that globally these ministries share a common concern about the evangelization of people in cults and new religions. There is also often a common thread of comparing orthodox doctrines and biblical passages with the teachings of the groups under examination. In some of the European and southern hemisphere contexts, however, confrontational methods of engagement are not always relied on, and dialogical approaches are sometimes advocated.", "title": "Worldwide organizations" }, { "paragraph_id": 25, "text": "A group of organizations that originated within the context of established religion is working in more general fields of \"cult awareness,\" especially in Europe. Their leaders are theologians, and they are often social ministries affiliated with large churches.", "title": "Worldwide organizations" }, { "paragraph_id": 26, "text": "The phenomenon of cults has also entered into the discourses of Christian missions and theology of religions. An initial step in this direction occurred in 1980 when the Lausanne Committee for World Evangelization convened a mini-consultation in Thailand. From that consultation a position paper was produced. The issue was revisited at the Lausanne Forum in 2004 with another paper. The latter paper adopts a different methodology from that advocated in 1980.", "title": "Contextual missiology" }, { "paragraph_id": 27, "text": "In the 1990s, discussions in academic missions and theological journals indicated that another trajectory was emerging, one reflecting the influence of contextual missions theory.
Advocates of this approach maintain that apologetics as a tool needs to be retained, but do not favor a confrontational style of engagement.", "title": "Contextual missiology" }, { "paragraph_id": 28, "text": "Countercult apologetics has several variations and methods employed in analyzing and responding to cults. The different nuances in countercult apologetics have been discussed by John A. Saliba and Philip Johnson.", "title": "Variations and models" }, { "paragraph_id": 29, "text": "The dominant method is the emphasis on detecting unorthodox or heretical doctrines and contrasting those with orthodox interpretations of the Bible and early creedal documents. Some apologists, such as Francis J. Beckwith, have emphasized a philosophical approach, pointing out logical, epistemological and metaphysical problems within the teachings of a particular group. Another approach involves former members of cultic groups recounting their spiritual autobiographies, which highlight experiences of disenchantment with the group, unanswered questions and doubts about commitment to the group, culminating in the person's conversion to evangelical Christianity.", "title": "Variations and models" }, { "paragraph_id": 30, "text": "Apologists like Dave Hunt in Peace, Prosperity and the Coming Holocaust and Hal Lindsey in The Terminal Generation have tended to interpret the phenomena of cults as part of the burgeoning evidence of signs that Christ's Second Advent is close at hand. Both Hunt and Constance Cumbey have applied a conspiracy model to interpreting the emergence of New Age spirituality and linking that to speculations about fulfilled prophecies heralding Christ's reappearance.", "title": "Variations and models" } ]
Professor X
Professor X (Prof. Charles Francis Xavier) is a character appearing in American comic books published by Marvel Comics. Created by writer Stan Lee and artist/co-writer Jack Kirby, the character first appeared in The X-Men #1 (September 1963). The character is depicted as the founder and occasional leader of the X-Men. Xavier is a member of a subspecies of humans known as mutants, who are born with superhuman abilities. He is an exceptionally powerful telepath who can read and control the minds of others. To both shelter and train mutants from around the world, he runs a private school in the X-Mansion in Salem Center, located in Westchester County, New York. Xavier also strives to serve a greater good by promoting peaceful coexistence and equality between humans and mutants in a world where zealous anti-mutant bigotry is widespread, though he later abandons his dream in favor of establishing a mutant nation on Krakoa. Throughout much of the character's history, Xavier has been depicted with paraplegia and uses a wheelchair.

One of the world's most powerful mutant telepaths, Xavier is a scientific genius and a leading authority in genetics. He has devised Cerebro and other equipment to enhance psionic powers and to detect and track people with the mutant gene. Xavier's pacifist and assimilationist ideology and actions have often been contrasted with those of Magneto, a mutant leader (initially characterized as a supervillain and later as a complex antihero) with whom Xavier has a complicated relationship. Writer Chris Claremont, who originated Magneto's backstory as well as the relationship between the two men, modeled his characterization of Xavier on David Ben-Gurion, and that of Magneto on Menachem Begin.

Patrick Stewart portrayed the character in the first three films in the 20th Century Fox X-Men film series and in various video games, while James McAvoy portrayed a younger version of the character in the 2011 prequel X-Men: First Class. Both actors reprised the role in X-Men: Days of Future Past. Stewart reprised the role again in Logan (2017), while McAvoy returned as the younger Xavier in X-Men: Apocalypse (2016), Deadpool 2 (2018), and Dark Phoenix (2019). Harry Lloyd portrayed the character in the third season of the television series Legion. Stewart returned to the role once more, portraying an alternate version of the character in the 2022 Marvel Cinematic Universe film Doctor Strange in the Multiverse of Madness.

Created by writer Stan Lee and artist/co-writer Jack Kirby, Professor X first appeared in X-Men #1 (September 1963). Stan Lee has stated that Academy Award-winning actor Yul Brynner was the physical inspiration for Professor Xavier. Writer Scott Lobdell established Xavier's middle name to be "Francis" in Uncanny X-Men #309 (February 1994).

Xavier's goals are to promote the peaceful affirmation of mutant rights, to mediate the co-existence of mutants and humans, to protect mutants from violent humans, and to protect society from antagonistic mutants, including his old friend, Magneto. To achieve these aims, he founded Xavier's School for Gifted Youngsters (later named the Xavier Institute) to teach mutants to explore and control their powers. Its first group of students was the original X-Men (Cyclops, Iceman, Marvel Girl, Angel, and Beast). Xavier's students consider him a visionary and often refer to their mission as "Xavier's dream".
He is highly regarded by others in the Marvel Universe, respected by various governments, and trusted by several other superhero teams, including the Avengers and the Fantastic Four. However, he also has a manipulative streak that has resulted in several significant fallings-out with allies and students. He often acts as a public advocate for mutant rights and is the authority most of the Marvel superhero community turns to for advice on mutants. Despite this, his status as a mutant himself and originator of the X-Men only became public during the 2001 story "E Is for Extinction". He also appears in almost all of the X-Men animated series and in many video games, although usually as a non-playable character. Patrick Stewart plays him in the 2000s X-Men film series, as well as providing his voice in some of the X-Men video games (including some not connected to the film series). According to BusinessWeek, Charles Xavier is listed as one of the top ten most intelligent fictional characters in American comics.

In a number of comics, Xavier is shown to have a dark side, a part of himself that he struggles to suppress. Perhaps the most notable appearance of this character element is in the Onslaught storyline, in which the crossover event's antagonist is a physical manifestation of that dark side. Onslaught is born of the most violent act Xavier claims ever to have committed: erasing the mind of Magneto. In X-Men #106 (August 1977), the new X-Men fight images of the original team, which have been created by what Xavier says is his "evil self ... who would use his powers for personal gain and conquest", and which he says he is normally able to keep in check. In the 1984 four-part series The X-Men and the Micronauts, Xavier's dark desires manifest themselves as the Entity and threaten to destroy the Micronauts' universe.

In other instances, Xavier is shown to be secretive and manipulative. During the Onslaught storyline, the X-Men find Xavier's files, the "Xavier Protocols", which detail how to kill many of the characters, including Xavier himself, should the need ever arise, such as if they went rogue. Astonishing X-Men vol. 3, #12 (August 2005) reveals that when Xavier realizes that the Danger Room has become sentient, he keeps it trapped and experiments on it for years, an act that Cyclops calls "the oppression of a new life" and equates to humanity's treatment of mutants. (However, X-Men Legacy #220–224 reveals that Xavier did not intend for the Danger Room to become sentient: it was an accident, and Xavier sought a way to free Danger, but was unable to find one that would not delete her sentience as well.)

Charles Francis Xavier was born in New York City to the wealthy Dr. Brian Xavier, a well-respected nuclear scientist, and Sharon Xavier. The family lives in a very grand mansion estate in Westchester County because of the riches his father's nuclear research has brought them. He later attends Pembroke College at the University of Oxford, where he earns a professorship in genetics and other scientific fields, and goes on to live first in Oxford and then in London for a number of years. Crucially, as he enters late adolescence, Xavier inherits the mansion he was raised in, enabling him not only to continue living in it but also to turn it into Xavier's School for Gifted Youngsters, which he begins together with the first of the X-Men.
Brian, his father, dies in an accident when Xavier is still very young, and Brian's science partner Kurt Marko comforts and then marries the grieving Sharon. When Xavier's telepathic mutant powers emerge, he discovers that Marko cares only about his mother's money. After the wedding, Kurt moves in with the Xaviers, bringing with him his son Cain. Kurt quickly grows neglectful of Sharon, driving her to alcoholism, and abuses both Charles and Cain. Cain takes out his frustrations and insecurities on his stepbrother. Charles uses his telepathic powers to read Cain's mind and explore the extent of his psychological damage, which only leads to Cain becoming more aggressive toward him and to the young Xavier feeling Cain's pain firsthand. Sharon dies soon after, and a fight erupts between Cain and Charles that causes some of Kurt's lab equipment to explode. Mortally wounded, Kurt drags the two children out before dying, and admits he was partly responsible for Brian's death.

With help from his superhuman powers and natural genius, Xavier becomes an excellent student and athlete, though he gives up the latter, believing his powers give him an unfair advantage. Due to his powers, by the time he graduates from high school, Charles loses all of his hair. He enters Bard College at age 16 and graduates with his bachelor's degree in biology in only two years. In graduate studies, he receives Ph.D.s in Genetics, Biophysics, Psychology, and Anthropology, with a two-year residence at Pembroke College, University of Oxford. He also receives an M.D. in Psychiatry while spending several years in London. He is later appointed adjunct professor at Columbia University. Origins of Marvel Comics: X-Men #1 (2010) presents a different version of events, suggesting that a scholarship to the University of Oxford rescued him from his abusive home, after which he "never looked back", and that he thus began his academic career as a very young man at Oxford. His stepbrother is resentful of him.

At graduate school, he meets a Scottish girl named Moira Kinross, a fellow genetics student with whom he falls in love. The two agree to marry, but soon Xavier is drafted into the Korean War. He carves himself a niche as a soldier in search-and-rescue missions alongside Shadowcat's father, Carmen Pryde, and witnesses Cain's transformation into the Juggernaut when Cain touches a ruby with an inscription on it in an underground temple. During the war, he receives a letter from Moira telling him that she is breaking up with him. He later discovers that Moira has married her old boyfriend Joseph MacTaggert, who abuses her. Deeply depressed that Moira broke off their engagement without explanation, Xavier begins traveling around the world as an adventurer after leaving the army.

In Cairo, he meets a young pickpocket named Ororo Munroe (later known as Storm), and the Shadow King, a powerful mutant who is posing as Egyptian crime lord Amahl Farouk. Xavier defeats the Shadow King, barely escaping with his life. This encounter leads to Xavier's decision to devote his life to protecting humanity from evil mutants and safeguarding innocent mutants from human oppression.

Xavier visits his friend Daniel Shomron, who runs a clinic for traumatized Holocaust victims in Haifa, Israel. There, he meets a man going by the name of Magnus (who would later become Magneto), a Holocaust survivor who works as a volunteer in the clinic, and Gabrielle Haller, a woman driven into a catatonic coma by the trauma she experienced.
Xavier uses his mental powers to break her out of her catatonia, and the two fall in love. Xavier and Magneto become good friends, although neither immediately reveals to the other that he is a mutant. The two hold lengthy debates hypothesizing what will happen if humanity is faced with a new super-powered race of humans. While Xavier is optimistic, Magneto's experiences in the Holocaust lead him to believe that humanity will ultimately oppress the new race of humans as it has done with other minorities. The two friends reveal their powers to each other when they fight Nazi Baron Wolfgang von Strucker and his Hydra agents, who kidnap Gabrielle because she knows the location of their secret cache of gold. Magneto attempts to kill Strucker, but Xavier stops him. Realizing that his and Xavier's views on mutant-human relations are incompatible, Magneto leaves with the gold. Charles stays in Israel for some time, but he and Gabrielle separate on good terms, neither knowing that she is pregnant with his son, who grows up to become the mutant Legion.

In a strange town near the Himalayas, Xavier encounters an alien calling himself Lucifer, the advance scout for an invasion by his race, and foils his plans. In retaliation, Lucifer drops a huge stone block on Xavier, crippling his legs. After Lucifer leaves, a young woman named Sage hears Xavier's telepathic cries for help and rescues him, bringing him to safety and beginning a long alliance between the two.

In a hospital in India, he is brought to an American nurse, Amelia Voght, who looks after him, and, as she sees to his recovery, they fall in love. When he is released from the hospital, the two move into an apartment in Bombay together. Amelia is troubled to find Charles studying mutation, as she is a mutant and unsettled by it, though she calms when he reveals himself to be a mutant as well. They eventually move to the United States, living on Xavier's family estate. But the night Scott Summers moves into Xavier's mansion, Amelia leaves him: she believes that mutants should lie low, yet Charles is recruiting them for what she sees as a lost cause. Charles tries to force her to stay with his mental powers but, immediately ashamed of doing so, lets her go. She later becomes a disciple of Magneto.

Over the years, Charles makes a name for himself as a geneticist and psychologist, apparently renowned enough that the Greys are referred to him when no other expert can help their catatonic daughter, Jean. Xavier trains her in the use of her telekinesis while inhibiting her telepathic abilities until she matures. Around this time, he also starts working with fellow mutation expert Karl Lykos, as well as with Moira MacTaggert again, who has built a mutant research station on Muir Island. Charles has apparently gotten over Moira during his travels to the Greek island of Kirinos. Xavier discusses with Moira his candidates for recruitment to his personal strike force, the X-Men, including those he passes over: Kurt Wagner, Piotr Rasputin, Pietro and Wanda Maximoff, and Ororo Munroe. Xavier also trains Tessa to spy on Sebastian Shaw.

Xavier founds Xavier's School for Gifted Youngsters, which provides a safe haven for mutants and teaches them to master their abilities. In addition, he seeks to foster mutant-human relations by providing his superhero team, the X-Men, as an example of mutants acting in good faith, as he tells FBI agent Fred Duncan.
With his inherited fortune, he uses his ancestral mansion at 1407 Graymalkin Lane in Salem Center, Westchester County, New York as a base of operations with technologically advanced facilities, including the Danger Room (Fantomex later mentions that Xavier is a billionaire with a net worth of $3.5 billion). Presenting the image of a stern teacher, Xavier makes his students endure a rigorous training regime.

Xavier's first five students are Cyclops, Iceman, Angel, Beast, and Marvel Girl, who become the original X-Men. After he completes recruiting the original team, he sends them into battle with Magneto. Throughout most of his time with the team, Xavier uses his telepathic powers to keep in constant contact with his students and provides instructions and advice when needed. In addition, he uses a special machine called Cerebro, which enhances his ability to detect mutants and allows the team to find new students in need of the school.

Among the obstacles Xavier faces is his old friend Magneto, who has grown into an advocate of mutant superiority since their last encounter and who believes the only solution to mutant persecution is domination over humanity. When anthropologist Bolivar Trask brings the "mutant problem" back into the public eye, Xavier counters him in a televised debate; however, he comes across as arrogant, and Trask sends his mutant-hunting robot Sentinels to terrorize mutants. The X-Men dispatch them, but Trask sees the error of his ways too late, as he is killed by his own creations.

At one point, Xavier seemingly dies during the X-Men's battle with the sub-human Grotesk, but it is later revealed that Xavier arranged for a reformed former villain named Changeling to impersonate him while he went into hiding to plan a defense against an invasion by the extraterrestrial Z'Nox, imparting a portion of his telepathic abilities to the Changeling to complete the disguise.

When the X-Men are captured by the sentient island Krakoa, Xavier assembles a new team to rescue them, including Cyclops' and Havok's long-lost brother, Vulcan, along with Darwin, Petra, and Sway. This new team, composed of students of Dr. Moira MacTaggert, is sent to rescue the original X-Men from Krakoa. However, after rescuing Cyclops, MacTaggert's former students are seemingly killed. Upon Cyclops' return, Xavier removes Cyclops' memories of the deaths of Vulcan and his teammates and begins assembling yet another team of X-Men.

Xavier's subsequent rescue team consists of Banshee, Colossus, Sunfire, Nightcrawler, Storm, Wolverine, and Thunderbird. After the mission, the older team of X-Men, except for Cyclops, leave the school, believing they no longer belong there, and Xavier mentors the new X-Men.

Xavier forms a psychic bond across galaxies with Princess Lilandra of the Shi'ar Empire. When they finally meet, it is love at first sight. She implores the professor to stop her mad brother, Shi'ar Emperor D'Ken, and he instantly aids her by deploying his X-Men. When Jean Grey returns from the Savage Land to tell him that all the X-Men are dead, he shuts down the school and travels with Lilandra to her kingdom, where she is crowned Empress and he is treated like a child or a trophy husband. Xavier senses the changes taking place in Jean Grey and returns to Earth to help and to resume leadership of the X-Men. Shortly thereafter, he battles his pupil after she becomes Dark Phoenix and destroys a populated planet in the Shi'ar Empire.
It hurts Xavier to be on the opposite side from Lilandra, but he has no choice but to challenge the Shi'ar Imperial Guard to a duel over the fate of the Phoenix. Xavier would have lost against the greater power of the Dark Phoenix, but thanks to the help Jean Grey gives him (fighting her Phoenix persona), Xavier emerges victorious; she later commits suicide to prevent herself from endangering more innocent lives.

When the X-Men fight members of the extraterrestrial race known as the Brood, Xavier is captured by them and implanted with a Brood egg, which places him under the Brood's control. During this time, Xavier assembles a team of younger mutants called the New Mutants, secretly intended as prime hosts for reproduction of the aliens. The X-Men discover this and return to free Xavier, but they are too late to prevent his body from being destroyed, a Brood Queen emerging in its place; however, his soul remains intact. The X-Men and Starjammers subdue this monstrous creature containing Xavier's essence, but the only way to restore him is to clone a new body using tissue samples he donated to the Starjammers and transfer his consciousness into the clone. This new body possesses functional legs, though the psychosomatic pain Xavier experienced after living so long as a paraplegic takes some time to subside. Subsequently, he even joins the X-Men in the field, but later decides not to continue this practice after realizing that his place is at the school, as the teacher of the New Mutants.

After taking a teaching position at Columbia University in Uncanny X-Men #192, Xavier is severely injured and left for dead as the victim of a hate crime. Callisto and her Morlocks, a group of underground-dwelling mutants, get him to safety. One of the Morlocks partially restores Xavier's health, but Callisto warns Xavier that he is not fully healed, and that he must spend more time recuperating and refrain from exerting his full strength or powers, or his health might fail again. Xavier hides his injuries from the others and resumes his life.

Charles meets with former lover Gabrielle Haller on Muir Isle and discovers that they had a child. The boy, David, has autism and dissociative identity disorder. Furthermore, he has vast psionic powers like his father. After helping him and his team escape from David's mind, Xavier promises he will always be there for him.

A reformed Magneto is arrested and put on trial. Xavier attends the trial to defend his friend. Andrea and Andreas Strucker, the children of the presumed-dead Baron von Strucker, crash the courtroom to attack Magneto and Xavier. Xavier is seriously injured. Dying, he asks a shocked Magneto to look after the X-Men for him. Lilandra, who has a psychic bond with Xavier, feels that he is in great danger and heads to Earth. There, she and Corsair take Xavier with them so that advanced Shi'ar technology can heal him. Xavier leaves Magneto in charge of the school, but some of the X-Men are unwilling to forgive their former enemy. Cyclops loses a duel for the leadership of the X-Men against Storm, then leaves them and joins the other four original X-Men to form a new team called X-Factor.

In the meantime, Charles becomes stranded in space with the Starjammers, but he is reunited with his lover Lilandra and relishes his carefree lifestyle. He serves as a member of the Starjammers aboard the starship Starjammer, mobile in the Shi'ar Galaxy.
He becomes consort to the Princess-Majestrix Lilandra while in exile, and when she later resumes her throne, he takes up residence with her in the Imperial palace on the Shi'ar homeworld. Xavier joins Lilandra in her cause to overthrow her sister Deathbird, temporarily taking on the powers of the Phoenix (Corsair dubs him "Bald Phoenix"), but sees that he must return to help the X-Men. Xavier eventually becomes imprisoned by the Skrulls during their attempted invasion of the Shi'ar Empire. He breaks free from imprisonment by Warskrull Prime and is reunited with the X-Men.

A healthy Xavier returns from the Shi'ar Empire, is reunited with both the current and original X-Men teams, and resumes his leadership responsibilities over the united teams. During a battle with his old foe the Shadow King in the "Muir Island Saga", Xavier's spine is shattered, returning him to his former paraplegic state, while his son David is seemingly killed. In the following months, Xavier rebuilds the mansion, which had previously been rebuilt with Shi'ar technology, and restructures the X-Men into two teams.

While giving a speech on mutant rights, Xavier is nearly assassinated by Stryfe in the guise of Cable and is infected with a fatal techno-organic virus. For reasons of his own, the villain Apocalypse saves him. As a temporary side effect, Xavier regains full use of his legs and devotes his precious time to the youngest recruit on his team, Jubilee. With all his students now highly trained adults, Professor Xavier renames his school the Xavier Institute for Higher Learning. He also assumes control of a private institution, the Massachusetts Academy, making it a new School for Gifted Youngsters. Another group of young mutants, Generation X, is trained there, with Banshee and Emma Frost as headmaster and headmistress, respectively.

Professor X is for a time the unknowing host of the evil psionic entity Onslaught, the result of a previous battle with Magneto. In that battle, Magneto uses his powers to rip out the adamantium bonded to Wolverine's skeleton, and a furious Xavier wipes Magneto's mind, leaving him in a coma. From the psychic trauma of Xavier using his powers so violently, and the mixing of Magneto's repressed anger with Xavier's own, Onslaught is born. Onslaught wreaks havoc, destroying much of Manhattan, until many of Marvel's superheroes, including the Avengers, the Fantastic Four, and the Hulk, destroy him. Xavier is left without his telepathy and, overcome with guilt, leaves the X-Men and is incarcerated for his actions. He later returns to the X-Men after Operation: Zero Tolerance, during which he suffers the cruelty of being turned over to the mutant-hating Bastion, following a clash with the sentient Cerebro and a team of impostor X-Men.

Xavier questions his dream again, and shortly thereafter Magneto is confronted by the X-Men. After the battle, the UN concedes Genosha to Magneto, and Wolverine is angered when Xavier stops him from getting his revenge on Magneto. Charles and Logan are later trapped in a dimension with different laws of physics, wherein they have to coordinate their moves together and, in the process, gain a better understanding of each other's views.

Apocalypse kidnaps the fabled "Twelve", special mutants (Xavier included) whose combined energies would grant him omnipotence. After Apocalypse's defeat with the help of Skrull mutants, Xavier goes with the young Skrulls known as Cadre K to train them and free them from their oppressors, and eventually returns to aid in Legacy Virus research.
Mystique and her Brotherhood start a deadly assault on Muir Isle by releasing an altered form of the Legacy Virus, all in retaliation against the election campaign of Robert Kelly, a seeming mutant-hater. Mystique blows up Moira MacTaggert's laboratory complex, fatally wounding her. Charles goes to the astral plane to meet with her and retrieve information on the cure to the Legacy Virus, but after gathering the information, he does not want to leave her alone. If not for Jean and Cable talking him down and pulling him back, the professor would have died with his first love, who states she has no regrets.

As Beast cures the Legacy Virus, many infected Genoshan mutants recover overnight, providing Magneto, the current ruler of Genosha, with an army with which to start a third world war. He demands that Earth's governments accept him as their leader, and abducts and crucifies Xavier in Magda Square for all to see. A loyal member of Magneto's Acolytes, Amelia Voght, cannot stand to see her former lover punished in such a manner and sets him free. Jean Grey and a group of largely untrained newcomers, most of the team being elsewhere, distract Magneto, and Wolverine guts him; Xavier is too late to intervene.

Xavier's evil twin Cassandra Nova, whom Xavier attempted to kill while they were both in their mother's womb, orders a group of rogue Sentinels to destroy the independent mutant nation of Genosha. Magneto, who is Genosha's leader, appears to die along with the vast majority of the nation's inhabitants. Nova then takes over Xavier's body. Posing as Xavier, she reveals his mutation to the world, something he had felt the need to do himself but had avoided for fear of sullying his reputation, before going into space and crippling the Shi'ar Empire. The X-Men restore Xavier, but Lilandra, believing that too much disaster has come from the Shi'ar's involvement with the X-Men, annuls her marriage to Xavier. Lilandra had previously gone insane and tried to assassinate Charles on a trip to Mumbai. During this period, a mutant named Xorn joins the X-Men. Xorn uses his healing power to restore Xavier's use of his legs.

When the X-Men receive a distress call from a Scottish island, they are surprised to find the Juggernaut with nowhere to go, the island having been destroyed by his further-mutated partner in crime, Black Tom Cassidy, who died in the process. Xavier reaches out to his stepbrother and offers him a place in his mansion, which Cain reluctantly accepts. The Juggernaut redeems himself over the next few weeks and joins the X-Men. Xavier finds out that Cain's father preferred Charles to his own flesh and blood, and that both boys thought they deserved the abuse they suffered at Kurt's hands: Cain because his father loved someone else's child more than him, and Charles out of guilt at getting in the way. That is why neither of them stopped Kurt Marko with their powers.

Now outed as a mutant, Xavier makes speeches to the public about mutant tolerance. He also founds the X-Corporation, or X-Corp (not to be confused with the X-Corps), with offices all over the world. The purpose of the X-Corp is to watch over mutant rights and help mutants in need. Since Xavier is out, the school no longer hides the fact that it is a school for mutants, and it opens its doors to more mutant (and even human) students. A student named Quentin Quire and members of his gang start a riot at the Xavier Institute during an open house at the school. As a result, Quire and two other students are killed.
Uncertain about his dream's validity, Xavier announces that he will step down as headmaster and be succeeded by Jean Grey. Afterwards, Xorn reveals himself to be Magneto, having apparently not died in the Sentinel raid on Genosha. Magneto undoes the restoration of Xavier's ability to walk, kidnaps him, and destroys the X-Mansion (killing several of the students). Xorn/Magneto then assaults New York, where Cyclops, Fantomex, and a few students confront him. After the rest of the X-Men arrive, Xorn/Magneto kills Jean Grey with an electromagnetically induced stroke, and Wolverine decapitates him. With Jean dead, Xavier leaves the school to Cyclops and Emma Frost and departs to bury Xorn/Magneto in Genosha. In a retcon of Grant Morrison's storyline, Xavier there meets the "real" Magneto, who mysteriously survived Cassandra Nova's assault. The two resolve their differences and attempt to restore their friendship, leading a team of mutants, the Genoshan Excalibur, to rebuild and restore order to the destroyed island nation.

At the mansion, the Danger Room (the X-Men's simulated-reality training chamber) gains sentience, christens itself "Danger", assumes a humanoid form, and attacks the X-Men before leaving to kill Xavier. With Magneto's help, Xavier holds off Danger until the X-Men arrive. Danger flees, but not before revealing to Colossus that Xavier has known it to be sentient ever since he upgraded it. Colossus is especially offended by this because he had been held captive and experimented upon by Danger's ally, Ord of the Breakworld. Ashamed, Xavier tries to explain to them that by the time he realized what was happening, he could see no other course. The disgusted X-Men leave.

In a prelude to House of M, Magneto's daughter Scarlet Witch has a mental breakdown and causes the deaths of several Avengers. Magneto brings her to Xavier and asks him to use his mental powers to help her. Although aided by Doctor Strange and the appearance of Cassandra Nova, Xavier is unsuccessful. Xavier orders a meeting of the X-Men and Avengers to decide Wanda's fate. Her brother Quicksilver, believing the heroes plan to kill her, speeds off to Genosha and convinces Wanda that she can right the wrongs she has inflicted by using her powers to alter reality. Quicksilver somehow forces a tearful Wanda to reveal the heart's desires of Magneto, the assembled New Avengers, and the X-Men, and then to use her powers to make them all real. Thanks to Magneto, though, this re-imagined world is a place where a much more numerous mutantkind is the dominant species, humans are a disenfranchised and oppressed "silent majority", and Magneto himself rules supreme. In this reality, the only proof that Charles Xavier ever existed is a secret monument in Magneto's palace garden with the engraved message "He died so Genosha could live".

After the mutant Layla Miller restores the memories of some of the X-Men and Avengers, they head to Genosha, where they discover that Magneto has erected a memorial garden commemorating Xavier's death. Emma is horrified until Cloak fades into the grave and discovers there is no body inside. After a battle, the Scarlet Witch again uses her powers to restore reality and, as a slight against her father, causes a large majority of mutants to lose their powers, leaving the mutant race on the brink of extinction and causing the lost powers to become an energy mass, the Collective. With reality restored, Xavier is still missing, and the X-Men are unable to detect him with Cerebro.
Xavier returns when Cyclops' and Havok's long-lost brother, Vulcan, is revived by the Collective energy released as a result of the "House of M" incident. Vulcan then attacks the X-Men. Xavier, now depowered but able to walk in the wake of "House of M", reveals that he had gathered and trained another team of X-Men (this one composed of students of Dr. Moira MacTaggert) sometime between the original team and the new X-Men team introduced in Giant-Size X-Men #1. This team included Vulcan as a member. Like the "Giant-Size" X-Men team, MacTaggert's former students were sent to rescue the original X-Men from Krakoa, the living island. However, after rescuing Cyclops, MacTaggert's former students were seemingly killed. Upon Cyclops' return, Xavier removed Cyclops' memories of the deaths of Vulcan and his teammates and began assembling the "Giant-Size" X-Men.

Vulcan skirmishes with the X-Men and eventually flees into space. In spite of Cyclops' feelings, Xavier forms a new team including Havok and Darwin, the only other survivors of Moira's students. Xavier seeks to confront Vulcan before he can enact his vengeance against the Shi'ar Empire, which killed Vulcan's mother. While en route to the Shi'ar homeworld, Xavier is abducted and later thrown into the M'Kraan Crystal by Vulcan. Darwin follows Xavier into the crystal and pulls him out. This somehow restores Xavier's lost telepathy. With help from his longtime lover, Lilandra, Xavier escapes back to Earth with several of his X-Men.

Upon Xavier's return to Earth, as seen in the World War Hulk storyline, he begins to search for lost mutants such as Magneto. Charles' search is interrupted by the Hulk, who was sent into extraterrestrial exile by the Illuminati, a group of powerful superbeings to which Xavier belongs. Xavier had no part in (and did not know of) the decision to exile the Hulk, but he admits to the Hulk that he would have concurred with a temporary exile so that Bruce Banner could be cured of transforming into the Hulk. However, he also tells the Hulk he would not have agreed to permanent exile. Xavier attempts to surrender to the Hulk, but after viewing the X-Mansion's large graveyard dedicated to post-M-Day mutant deaths, the Hulk concludes that the mutants have suffered enough and leaves the mansion grounds of his own accord. While the X-Men tend to the wounded, Cyclops finally forgives Professor X.

While using Cerebra and talking to Beast during the Messiah Complex storyline, Charles detects a new mutant so powerful that it fries Cerebra's system. He asks Cyclops to send out a team to find out about the mutant. Once the team comes back empty-handed, he argues with Scott for not telling him about the team he deployed to find the former Acolytes. Scott tells him outright that he does not need him to run the X-Men anymore. This upsets Charles, and it annoys him further when he overhears Cyclops briefing X-Factor on the situation. He also approaches the New X-Men in an attempt to help them figure out a non-violent way to act against the Purifiers, but he is quickly rebuked by Surge, who asks where he was when they were being attacked the first time, and says that they do not need to learn from him. Charles questions Cyclops' decision to send X-Force to hunt down his own son, Cable, in front of the students. Cyclops then tells Xavier that he is a distraction that will keep getting in the way and that he must leave the mansion.
Xavier is contacted by Cable, who has lost the mutant newborn to the traitorous actions of Bishop (who in turn lost the child to the Marauders), and who tells Xavier that he is the only one who can help Cable save the future. In the final fight, Xavier is accidentally shot in the head by Bishop. Immediately afterward, Xavier's body disappears, and Cyclops declares that there are no more X-Men.

Professor Xavier survives Bishop's gunshot but falls into a coma. Xavier is kidnapped by Exodus, Tempo, and Karima Shapandar. Exodus tries to heal Xavier, but Xavier mentally fights him. Exodus finally approaches Magneto, who is apparently still depowered, for help. Magneto and Karima Shapandar are able to stir Xavier's memories and coax him out of his coma, though Xavier remains slightly confused and partly amnesiac. Later, Exodus confronts Magneto about Joanna Cargill's injury (Magneto was forced to shoot a laser through her eyeball to prevent her attempted assassination of Xavier). Exodus nearly kills Magneto, and Xavier drags Exodus onto the Astral Plane, putting his own newly restored mind at stake. Xavier defeats Exodus after a harrowing psionic battle, and Exodus reveals the reason he abducted Xavier and restored his mind: he wants Xavier to lead the Acolytes and find the mutant messiah child (now under the guardianship of Cable) to indoctrinate the child into their cause. Xavier refuses. Emma Frost's telepathy picks up on the psychic fight, and Emma informs Cyclops that Xavier is alive.

Xavier parts company with Magneto and Karima to try to regain his lost memories by visiting people from his past. The first person Charles visits is Carter Ryking, who had gone insane after losing his powers. Charles reads Carter's memories and discovers that when the two were children, they were used as test subjects by Nathan Milbury of the Black Womb Project, with the approval of Charles' father, Doctor Brian Xavier. Xavier makes the connection between Milbury and the X-Men villain Mister Sinister, who has apparently long been manipulating Charles' life, in addition to the lives of other X-Men. Afterwards, he discovers he has been targeted by assassins. Charles eventually discovers that Mister Sinister had set up Charles, Sebastian Shaw, Juggernaut, and Ryking (Hazard) as potential new hosts for Sinister's mind. Bleeding slowly to death, he apparently gives in to Sinister, becoming the new Mister Sinister; in reality, however, Xavier is still battling Sinister for control of his body. As Sebastian Shaw and Gambit destroy Sinister's Cronus Machine, the device he used to transfer his consciousness into new hosts, Xavier drives Sinister out of his body permanently. Xavier thanks Shaw and Gambit for their help and declares he must go and see Cyclops immediately.

Professor X returns to the X-Mansion to find it destroyed after recent events. Afterwards, Xavier leaves the ruins to secretly meet with Cyclops, psychically coercing his former student into the visit. Xavier tells Cyclops about the recent events with Mister Sinister and tries to explain how Sinister has been manipulating Scott's and Jean's lives since they were children. Xavier asks Scott for permission to scan his mind for traces of Sinister's influence, but instead Scott turns the tables on Xavier by revealing that he has secretly invited Emma Frost into the entire meeting, and also into Xavier's mind. While in his mind, Emma forces Xavier to relive each of his morally ambiguous decisions made under altruistic pretenses.
Charles comes to realize his own arrogance and accepts that, while some of his decisions were morally wrong, he must move forward with his life and deal with the consequences. Emma ends her incursion into Xavier's mind by reminding him of Moira MacTaggert's last words. As he reflects on Moira's words, Xavier gives Cyclops his blessing to lead the X-Men and leaves to find his own path.

Following his encounter with Wolverine in the "Original Sin" arc, Professor Xavier seeks out his stepbrother, the unstoppable Juggernaut, in an attempt to reform him. After a conversation about the meaning of the word "juggernaut" and a review of Juggernaut and Xavier's shared history, Xavier offers Cain an empty box as a gift. Confused by the gift, Cain attempts to kill the Professor, bringing an entire sports bar down over their heads in the process. Later, Cain battles the X-Men in his full Juggernaut armor and conquers the planet. Just as everything appears to be under the Juggernaut's control, Xavier reappears and informs him that everything that has just taken place, except for Juggernaut destroying the bar, took place in Cain's mind. A baffled Cain demands to know how Xavier managed to overcome his psychically shielded helmet, to which the Professor replies that he decided to visit Cain in his sleep. Professor Xavier then informs him that he now understands Cain as a person and will not attempt to get in his way or reform him again, but he also warns Cain that if he gets in the way of the Professor's path to redemption, Xavier will stop him permanently. Following his encounter with Cain, Xavier begins searching for Rogue.

After his bruising encounter with Cyclops and Emma Frost, Professor X is forced to revisit the biggest challenge and the biggest failure of his career, Wolverine, when the feral mutant asks for Charles' help in freeing his son from the clutches of the Hellfire Club. As the two search for Daken, Wolverine reveals that when he first joined the X-Men, he attempted to assassinate Xavier due to some unknown programming. In response, the Professor broke Logan's mind and rebuilt it so that any and all programming he had received was forgotten. Logan also reveals that the real reason Xavier asked him to join the X-Men was that Charles "needed a weapon".

Eventually, Professor Xavier and Wolverine locate Sebastian Shaw's mansion and attack his minions; just as they are about to enter, a bomb explodes from within, catching them both off guard. From the wreckage emerges an angry Sebastian, who immobilizes Wolverine. Meanwhile, Miss Sinister knocks Daken unconscious and has him taken to the med lab in the mansion's basement. As Shaw prepares to deliver a killing blow to Xavier, Wolverine recovers and stops him, telling Xavier to rescue his son. Professor Xavier locates the med lab and, after a quick psychic battle with Miss Sinister, enters Daken's fractured mind. While in Daken's mind, Xavier discovers Romulus's psychic tampering and comments that Daken's mind is even more broken than Wolverine's was. Before Xavier can heal Daken, a psychic bomb explodes, leaving Xavier comatose and waking Daken. Miss Sinister arrives and attempts to manipulate Daken, who reveals that the psychic bomb in his head restored his memories, and stabs Miss Sinister in the chest. Meanwhile, Wolverine defeats Shaw and enters the mansion to find Daken standing over an unconscious Xavier, preparing to kill him.
Wolverine tells Daken that he will not let him hurt Xavier, and the two fight. Overcome with guilt over what happened to Daken and Itsu, Wolverine allows himself to be beaten. Just as Daken appears to have won, Xavier pulls both of them onto the astral plane, revealing that the psychic bomb had little effect on him because his psyche was already shattered. Xavier then explains to Wolverine and Daken that Romulus is solely responsible for Itsu's death and that he lied to Daken about everything because he wanted Wolverine to become his weapon. As the three converse, Daken returns to the physical plane and prevents Shaw from killing Xavier. With the truth revealed, Wolverine and Daken decide to kill Romulus. As the two depart, Wolverine tells Xavier that he forgives him for all of the dark moments in their history and acknowledges that Professor Xavier allowed him to become a hero. Wolverine then tells the Professor that he hopes Xavier will one day be able to forgive him for choosing to kill Romulus.

Professor Xavier recruits Gambit to go with him to Australia to find and help Rogue, who is staying at the X-Men's old base in the Outback, unaware that Danger is using Rogue as a conduit for her revenge against him.

In a prelude to the "Secret Invasion" storyline, Professor X was present at the meeting of the Illuminati that discussed the Skrulls' plan to invade Earth by taking out its heroes and posing as them. He claims he was unable to detect that Black Bolt had been replaced by a Skrull, and his powers were quickly tested by the Black Bolt Skrull. Professor X leaves after learning that even he can no longer trust the others, yet he appears to have severely restricted the number of people he informs of the forthcoming alien invasion, as the X-Men were not prepared for the Skrulls, at least at first. Xavier is not seen again during the events of Secret Invasion, though his X-Men in San Francisco successfully repel the invaders there through the use of the modified Legacy Virus.

During the Dark Reign storyline, Professor X convinces Exodus to disband the Acolytes. A H.A.M.M.E.R. helicopter arrives, and from inside appears Norman Osborn, who wants to talk to him. During the Dark Avengers' arrival in San Francisco to enforce martial law and squelch the anti-mutant riots occurring in the city, Xavier appears (back in his wheelchair) in the company of Norman Osborn, publicly denounces Cyclops' actions, and urges him to turn himself in. However, this Xavier is revealed to be Mystique, whom Osborn found to impersonate Xavier in public. The real Xavier is shown imprisoned on Alcatraz, slowly being stripped of his telepathic powers while in psionic contact with Beast, who was arrested earlier for his part in the anti-mutant riots. Emma Frost reveals that she and Professor X are both Omega-class telepaths when she manages to detect the real Professor X.

Professor X helps Emma Frost enter the Sentry's mind. However, as Emma frees him of the Void's influence, a minute sliver of the entity itself remains in her mind. Xavier quickly tells her to remain in her diamond armor state to prevent the Void from gaining access to her psi-powers. Professor X is later seen with Emma Frost where Beast is recuperating.
After the events of the "Utopia" storyline, Xavier comes to live on the risen Asteroid M, rechristened Utopia, along with the rest of the X-Men, the X-Club, and mutant refugees, and he is allowed to join Utopia's ruling council (Cyclops, Storm, Namor, Iceman, Beast, Wolverine, and Emma Frost). While he no longer openly questions every move that Cyclops makes, he remains concerned about some of his leadership decisions. Xavier wants to return to the mainland to clear his name, but in the aftermath of Osborn declaring Utopia a mutant detention area, Cyclops refuses to let him leave, stating that it would be a tactical advantage to have him as an ace in the hole should the need arise. To that end, Cyclops keeps Xavier out of the field, relying instead on the psionic talents of Emma Frost, Psylocke, and the Stepford Cuckoos.

During the funeral of Yuriko Takiguchi, Magneto arrives at Utopia, apparently with peaceful intentions. Xavier does not believe it and attacks Magneto telepathically, causing Cyclops to force him to stand down. He later apologizes to Magneto for acting out of the old passions of their complicated relationship, which Magneto accepts.

During the Second Coming storyline, Professor Xavier delivers a eulogy at Nightcrawler's funeral on Utopia. Like the other X-Men, he is deeply saddened by Kurt's death and anxious about the arrival of Cable and Hope. Xavier uses his powers to help his son Legion control his many personalities and battle the Nimrods. At the conclusion of Second Coming, Professor X surveys the aftermath of the battle from a helicopter. As Hope descends to the ground and cradles Cable's lifeless arm, Xavier reflects on everything that has transpired and states that, while he feels that Hope has indeed come to save mutantkind and revive his dream, she is still only a young woman and will have a long and difficult journey before she can truly achieve her potential.

During the "Avengers vs. X-Men" storyline, the Phoenix Force is split into five pieces and bonded with Cyclops, Emma Frost, Namor, Colossus, and Magik, who become known as the Phoenix Five. Eventually, Cyclops and Frost come to possess the full Phoenix Force, and Professor X is instrumental in confronting them both, dying in the ensuing battle with Cyclops. The Phoenix Force is subsequently forced to abandon Cyclops as a host by the efforts of Hope Summers and the Scarlet Witch.

Xavier's body is later stolen by the Red Skull's S-Men while the group also captures Rogue and the Scarlet Witch. Xavier's brain is removed and fused to the brain of the Red Skull. After Rogue and the Scarlet Witch break free of the fight they were caught in, they find the lobotomized body of Professor X. The Red Skull uses the new powers conferred upon him by Professor X's brain to provoke anti-mutant riots. His plans are foiled by the Avengers and the X-Men, and the Skull escapes. Professor X's spirit is later seen in the Heaven dimension, along with Nightcrawler's spirit, at the time when Azazel invades Heaven.

During the AXIS storyline, a fragment of Professor X's psyche (which had escaped the scrubbing of his memories) still existed in the Red Skull's mind, preventing him from unleashing the full potential of Professor X's powers. During a fight with the Stark Sentinels, Doctor Strange and the Scarlet Witch attempt to cast a spell to invert the axis of the Red Skull's brain and bring out the fragment of Professor X in order to defeat Onslaught.
Doctor Strange was targeted and captured by the Sentinels before the spell could be cast. When Magneto arrived with his supervillain allies, Doctor Doom and the Scarlet Witch attempted to cast the inversion spell again; Red Onslaught was knocked unconscious and reverted to his Red Skull form. Although they did not know whether Professor X was now in control, the Avengers decided to be cautious and took the Red Skull to Stark Tower. It was later revealed that the spell had actually caused all the heroes and villains present to undergo a "moral inversion" rather than simply bringing out Professor X in the Skull: the Skull and other villains became heroic, while the Avengers and X-Men present became villainous. Eventually, the inversion was undone. After the Skull mounts a telepathic assault that nearly allows him to take control of the Avengers, he is defeated when Deadpool places Magneto's old helmet on Rogue, allowing her to knock out the Skull and take him to Beast. Beast is subsequently able to perform brain surgery on the Skull, extracting the part of Xavier's brain that was grafted onto the villain's own brain without causing any apparent damage to the Skull. Rogers attempts to claim the fragment for himself, but Rogue flies up and incinerates it with the aid of the Human Torch, the two expressing hope that Xavier will rest in peace.

The astral form of Professor Xavier is later revealed to be imprisoned in the Astral Plane, the Shadow King having somehow acquired it upon Professor X's death. After what appears to be years in the Astral Plane, Professor X tricks the Shadow King into playing him in a "game" that lures Rogue, Mystique, and Fantomex onto the Astral Plane, while turning others into carriers of the Shadow King's "contagious" psychic essence. Certain of his victory, the Shadow King fails to realize that Xavier's apparent surrender to his game is merely a ploy: Xavier bides his time until the Shadow King is distracted long enough to drop his already-subtly-weakened guard, letting Xavier break his bonds and lure in the three aforementioned X-Men, whose identities are already fundamentally malleable. With the Shadow King defeated, Xavier is apparently returned to the real world in the body of Fantomex; Fantomex reasons that nobody really knows who he is as an individual beyond his status as one of the X-Men, whereas this act of sacrifice will ensure that he is remembered for a great deed.

Proteus, having spent years trapped in a psionic hellscape of the Astral Plane where the Shadow King reigned supreme, escapes, in part because of the escape of Charles Xavier, who now chooses to go by "X", since he is in a younger body. X leads the X-Men directly into an ambush: Proteus has warped an entire village with his powers, and the resulting mind-to-mind battle leaves X on the receiving end of a psychic beatdown. Proteus has started his garden, and his seeds are planted all over the world. Psylocke takes command with a plan that mainly consists of Archangel using metal and Mystique morphing into Proteus's mother. Once they drain him, Rogue and Bishop convert his energy and release it back to the universe. While all this goes on, Psylocke and X combine forces to burn out the seeds across the planet; as they work, they discover that they are not enough to accomplish the task.
X mentions the network of psychics that the Shadow King had been using and suggests that Betsy, who is in control, tap into it. She agrees and does so, yet, unbeknownst to her, X has been possessed by the Shadow King, who violently erupts from X's head. Following X's apparent death, the psychic villain tears the X-Men apart until X literally pulls himself back together (a feat he later refuses to explain), and he and Psylocke team up to harness the power of all of Earth's psychics to destroy the Shadow King. Once Psylocke confirms that she feels no psychic trace of him anywhere, X implants comforting post-hypnotic psychic suggestions in his allies and then erases their memories (including allowing Warren Worthington to switch between his identities at will). Only Psylocke's memory is left intact, with X telling her she will be the one to "keep him honest" while he embarks on a new mission.

X has since made his presence known to his former students and revealed his new plan for all mutantkind. Now clad in a Cerebro-like helmet, Xavier has apparently abandoned his dream of peaceful coexistence and has turned Krakoa into a sovereign nation-state for mutants, also using it to apparently heal the X-Men of their ordeals during the showdown against the forces of O.N.E. He then leads the X-Men in planting seeds in strategic locations around the world and on Mars, which overnight grow into massive plantlike "Habitats". As it turns out, these "Habitats", and the plants that grew them, are extensions of Krakoa. Through the advancement of mutant technology combined with Krakoa's unique abilities as a living mutant island, Professor X and the X-Men establish embassies around the world. Through this same combination of technology and mutant power, Xavier has developed three drugs that could change human life: a pill that extends human life by five years, an adaptable universal antibiotic, and a pill that cures "diseases of the mind" in humans. In exchange for recognition of the sovereignty of Krakoa, Professor X will give these drugs to mankind, with mutants living in peace on the island.

Xavier and Magneto later meet with Raven Darkholme inside his sanctum, both mutant leaders greatly pleased with the success of her mission as she presents what they had petitioned her to steal: a mysterious USB drive containing sensitive information taken from Damage Control. Mystique inquires about her payment, as she has met their demands; however, Xavier mentions that he still has more demands to be met as they build their protected future for Homo sapiens superior, seeming to psychokinetically beckon the contents of her theft into his hands while Mystique questions how much more must be done for his ultimate pet project.

Xavier and Magneto reveal the contents of the USB drive to Cyclops: information on Orchis, an organization dedicated to responding to a large-scale mutant threat, and the plans for a Mother Mold. They believe that the creation of the Mother Mold will herald a new generation of Sentinels and, along with it, Nimrod. They task Cyclops with assembling a team to destroy the Mother Mold station. Although the team (composed of Cyclops, Marvel Girl, Wolverine, Nightcrawler, Husk, Mystique, Archangel, and Monet) is successful, its members are all killed in the process. X mourns them, vowing "No more." Xavier is revealed to have upgraded Cerebro with the help of Forge; it is now able to copy and store the minds of mutants in a database.
After the Five (Hope Summers, Goldballs, Elixir, Proteus, and Tempus) are able to grow the bodies of deceased mutants, Xavier is able to copy the stored minds back into these empty shells. Thus, he is able to resurrect Cyclops's team, thanking them for what they did. At the U.N., Xavier, Beast, and Emma celebrate with other ambassadors the recognition of Krakoa as a sovereign nation. Xavier telepathically converses with Emma, revealing that he knows she manipulated the Russian ambassador into abstaining from the vote, before thanking her for her service.

Two days after the U.N. vote, Xavier, Magneto, and Wolverine wait beside several portals on Krakoa. While Wolverine expresses his misgivings about the upcoming event, Xavier and Magneto assure him all will be alright. Soon after, several villainous mutants, including Mister Sinister, Sebastian Shaw, Exodus, Selene, and Apocalypse, arrive through the portals. Apocalypse in particular expresses satisfaction at arriving, and Krakoa responds in kind. Magneto and Xavier reveal that they have invited all mutants, even those who have fought against them in the past, to Krakoa to form a society. The assembled villainous mutants agree to their terms, and Xavier shakes Apocalypse's hand, welcoming him and the others to their home.

While peace reigns on Krakoa, a mysterious team of assassins HALO-drops onto the island and assassinates Xavier, destroying his Cerebro helmet in the process. The Quiet Council hides Xavier's death from the rest of the world, and through the activation of a Cerebro backup and the efforts of the Five, Xavier is reborn once more. Soon after, he takes part in a global conference alongside Magneto and Apocalypse, professing that he still loves humanity while subtly warning the assembled nations with regard to his previous assassination and his knowledge of an ongoing assassination attempt at the forum itself, foiled by Cyclops and Gorgon.

Professor X is a mutant who possesses vast telepathic powers, and he is among the strongest and most powerful telepaths in the Marvel Universe. He is able to perceive the thoughts of others or project his own thoughts within a radius of approximately 250 miles (400 km). Xavier's telepathy once covered the entire world, although Magneto subsequently altered the Earth's electromagnetic field to restrict Xavier's telepathic range. While not on Earth, Xavier's natural telepathic abilities have reached across space to make universal mental contact with multiple alien races. With extreme effort, he can also greatly extend the range of his telepathy. He can learn foreign languages by reading the language centers of the brain of someone adept, and alternately "teach" languages to others in the same manner. As a side effect of his telepathy, Xavier possesses an eidetic memory, and his brain can assimilate and process enormous amounts of raw data in an astonishingly short time.

Xavier's vast psionic powers enable him to manipulate the minds of others, warp perceptions to make himself seem invisible, project mental illusions, cause loss of particular memories or total amnesia, and induce pain or temporary mental and/or physical paralysis in others. Xavier once trained a new group of mutants mentally, subjectively making them experience months of training together while only hours passed in the real world. Within close range, he can manipulate almost any number of minds for such simple feats.
However, he can only take full possession of one other mind at a time, and must be in that person's physical presence to do so. He is one of the few telepaths skilled enough to communicate with animals and even share their perceptions. He can also telepathically take away or control people's natural bodily functions and senses, such as sight, hearing, smell, taste, or even mutant powers. He has displayed telepathic prowess sufficient to confront Ego the Living Planet (while aided by Cadre K) and to narrowly defeat Exodus. However, he cannot permanently "reprogram" human minds to believe what he might want them to believe even if he wanted to do so, explaining that the mind is an organism that would always recall the steps necessary for it to reach the present and thus "rewrite" itself to its original setting if he tried to change it. Even so, his initial reprogramming of Wolverine lasted several years, though Wolverine overcame the reprogramming much faster than an ordinary human would have because of his healing factor. Xavier is able to project "bolts" of psychic energy from his mind, enabling him to stun the mind of another person into unconsciousness, inflict mental trauma, or even cause death. These "bolts" inflict damage only upon other minds, having little if any effect on beings without them. The manner in which Xavier's powers function indicates that his telepathy is physical in some way, as it can be enhanced by physical means (for example, Cerebro) but can also be disrupted by physical means (for example, Magneto's alteration of the Earth's magnetic field). Xavier can perceive the distinct mental presence and brain waves of other superhuman mutants within a small radius of himself. To detect mutants over a wider area, he must amplify his powers through Cerebro, and subsequently Cerebra, computer devices of his own design which are sensitive to the psychic/physical energies produced by the mind.

Professor X can project his astral form into a psychic dimension known as the astral plane. There, he can use his powers to create objects, control his surroundings, and even control and destroy the astral forms of others. He cannot project this form over long distances. Uncanny X-Men writer Ed Brubaker has claimed that, after being de-powered by the Scarlet Witch and then re-powered by the M'Kraan Crystal, Charles' telepathy is more powerful than was previously known. However, the extent of this enhancement is unknown. In early publications, Charles Xavier also displayed an undefined level of telekinesis, potent enough to cause catastrophic system disruption in computerized appliances; this attribute has since faded. His evil counterpart Cassandra Nova Xavier possesses this ability, indicating that he retained the potential for it. This potential was proven true after his death and resurrection within the younger, stronger body of Charlie Cluster 7: the Professor, using the moniker X, fashioned a Cerebro-like helmet which acts as a focusing device for his psionic powers and used it to galvanize dormant aspects of his X-Gene, seemingly using telekinesis to will a flash drive on Mystique's person into his hand.

Charles Xavier is a genius with multiple doctorates. He is a world-renowned geneticist, a leading expert in mutation, possesses considerable knowledge of various life sciences, and is the inventor of Cerebro.
He possesses Ph.D.s in Genetics, Biophysics, Psychology, and Anthropology, and an M.D. in Psychiatry. He is highly talented in devising equipment for utilizing and enhancing psionic powers. He is also a great tactician and strategist, effectively evaluating situations and devising swift responses. During his travels in Asia, Xavier learned martial arts, acquiring "refined combat skills" according to Magneto. When these skills are coordinated in tandem with his telepathic abilities, Xavier is a dangerous unarmed combatant, capable of sensing the intentions of others and countering them with superhuman efficiency. He also has extensive knowledge of pressure points.

Charles Xavier was also given possession of the Mind Gem, which allows the user to boost mental power and access the thoughts and dreams of other beings; backed by the Power Gem, it is possible to access all minds in existence simultaneously. Like all other former Illuminati members, Xavier has sworn never to use the gem and to keep its location hidden.

The Xavier Protocols are a set of doomsday plans created by Professor X. The protocols detail the best way to kill many powerful mutant characters, including the X-Men and Xavier himself, should they become too great a danger. They are first mentioned during the Onslaught crossover and first seen in Excalibur #100 in Moira MacTaggert's lab. Charles Xavier compiled a list of Earth's most powerful mutants along with plans for defeating each of them if they become a threat to the world. The protocols are first used after Onslaught grows too powerful, though only parts of them are ever shown. In the Operation: Zero Tolerance crossover, Bastion obtains an encrypted copy of the protocols, intending to use them against the X-Men. However, Cable infiltrates the X-Mansion and secures all encrypted files before Bastion has a chance to decrypt them. Due to the tampering of Bastion and his Sentinels, the X-Mansion computer system Cerebro gains autonomy and seeks to destroy the X-Men by employing its knowledge of the Xavier Protocols. In a virtual environment created by Professor X, Cerebro executes the Xavier Protocols against the X-Men; each protocol is activated by the presence of a different combination of X-Men and was written by Xavier himself. Among the X-Men who have faced their Xavier Protocols are Colossus, Rogue, Shadowcat, Nightcrawler, Storm, and Gambit.

Professor X is Carlos Javier in the miniseries Marvel 1602, set at the end of the Elizabethan Era in the alternative reality known as Earth-311. In this reality, Carlos Javier set up a school for the Witchbreed to train them and prepare them to survive in a world that distrusted and hated them. He hid them away and would only send them out on mercy missions to retrieve other Witchbreed who were in danger. When the young man named Werner – born with angel's wings – was to be burnt at the stake by the Inquisition, Javier sent his team leader, Scotius Summerisle, and Roberto Trefusis to rescue the boy. They did, and brought him back to Javier's school. Nicholas Fury, the Queen of England's spymaster, came to visit Javier at his school and warn him of the danger posed by Elizabeth's death and the eventual rise to power of King James of Scotland, who had no love for Witchbreed. Javier acknowledged the threat but did nothing about it, though he showed Fury his team of super-powered youths. Fury also asked a favor, requesting that Javier use his powers to read the thoughts of a captured assassin.
All Javier could tell him was that the assassin was one of three; another was to kill a girl from the colonies, and the third, the queen. Fury later sent his protégé, Peter Parquagh, to Javier's school to warn him that Fury would soon be coming for him in the name of King James, and that Javier should go quietly rather than risk a war that would have serious consequences. Javier agreed, and when Fury arrived with an army of men, he and his students went without a fight. While captive, Javier joined a discussion with Fury and Doctor Strange – the physician and magician of Queen Elizabeth. Strange told them that the world was coming to an end, and that the only way to save it would be to launch an attack on the castle of Otto von Doom and steal away the treasure of the Templars and the survivors of the Four of the Fantastick. Fury disbelieved him, thinking his friend Sir Richard Reed dead, but Javier read Strange's mind and revealed that Strange believed he was telling the truth; and so it was decided. They traveled upon a ship that Javier's student Jean Grey lifted into the air with her mind, while Javier bolstered her powers with his own, and they flew to Latveria. Javier and Jean remained in meditation the whole way, keeping the ship aloft, for if they set down they would not get airborne again. As the battle commenced, Javier led his men. He sent Angel and Scotius down to silence the cannons, and ordered Roberto to deflect cannonballs, which he himself would try to steer off course through the cannoneers' minds. He asked his beast-like student Henry to protect the ship from the flying minions of Doom that soon boarded it from the air. When the Captain of the Fantastick raged against his stone prison beneath Castle Doomstadt, his raging freed the members of his crew, along with Donal (Thor) and Matthew Murdoch. Donal quickly used the staff that was his greatest treasure and turned himself into the Thunder God, Thor. When Thor created a massive storm to use against Doom, Roberto used the sudden moisture in the air to freeze the cannons and save their ship. Doom also used Thor's storm to electrify the golden globe he held – a distraction given to him by Donal – but it exploded in his face, scarring him and bringing him to the brink of death. Victorious, Thor and the members of the Fantastick joined Javier's crew, and with Thor's help they got the boat to sea, as Jean Grey had collapsed. The band of heroes set sail for the New World to fix the tear in time that had created the weather anomalies circling the globe and was endangering the universe itself. On the way, Jean Grey's body finally gave out under the strain of using her powers; as per her final wish, she was flown into the air and vaporized by Scotius' eye blasts, falling to the sea as ash – but not before Angel saw an image of an immense, flaming bird in the air. Almost to the Roanoke Colony, Javier sensed a trio of ships making their way to the New World: the first carrying Virginia Dare, returning to the colonies with her time-traveling friend; the second containing James' men, set to kill Fury; and the third carrying the Witchbreed Enrique and his two children. Enrique was an old friend of Javier's, later set against him. Javier's group intercepted Enrique's boat first, and Roberto encased it in ice to imprison them while Javier demanded to know what they were doing. Enrique explained that the winds had taken them to the New World, but Javier did not trust him.
Javier soon participated in another group discussion, this one led by the severed head of Doctor Strange, brought from England by his wife, Clea. Strange told them through his head that the faux-Indian Rojhaz was actually a visitor from the future, Captain America, whose arrival had jeopardized the universe itself. To fix it, the heroes would have to return him to the rift. They soon found the rift, and Javier had no choice but to make a deal with his old friend Enrique, who was the only one with the power to open the rift to put the man back. Enrique agreed without hearing the proposal, but demanded that his own terms be met once his job was done. As Javier had no other choice, he agreed. Together with Enrique, Thor, and Fury, they opened the rift enough for Fury to drag Captain America through, and it closed, healing the universe permanently. However, instead of reverting things back to the way they should have been, the closing separated the universe from the original, creating a pocket universe in which the out-of-time heroes continued to exist. Before parting, Enrique explained his terms: he would head north, and no one would follow or investigate him; and Javier would teach his children, Wanda and Petros, without revealing that Enrique was their father, though he would return one day to fetch them. Javier agreed, and parted with his old friend.

In the Age of Apocalypse, Charles Xavier was killed when he sacrificed himself to save Erik Lensherr from Xavier's own future son, David Haller (Legion), who had gone back in time to eliminate Magneto in the belief that his father would then be there for him and would succeed in his dream without Magneto to "hinder" his efforts. As a result, Magneto founded the X-Men and sought human/mutant co-existence in Xavier's name, even naming his new son with Rogue "Charles" after his friend. However, Haller's rampage also prompted Apocalypse to awaken decades before the world was ready for him, resulting in Apocalypse conquering North America and most of the world, and eventually forcing Magneto's X-Men to attempt a daring mission to gain the power necessary to go back in time and save Xavier from Haller, as they recognized how vital Xavier was to the future.

In the Amalgam Comics continuity, Charles Xavier was combined with DC's Doctor Fate and Marvel's Doctor Strange to create Dr. Strangefate. He was the only character aware of the nature of the Amalgam Comics universe. He was also combined with Martian Manhunter to create Mr. X, leader of the JLX (a mash-up of the X-Men and Justice League).

In the second issue of Prelude to Deadpool Corps, Deadpool visits a universe where Prof. X runs an orphanage for troubled kids that includes Kidpool (a kid version of Deadpool), Cyclops, Wolverine, Angel, and Colossus, with Storm as headmistress and Beast as a teacher. In this universe, the professor has a fondness for Emma Frost, who runs an orphanage for girls that includes Jean Grey and Rogue. He tries to get her attention by wearing wigs, throwing a dance for both orphanages, and trying to alter her memory.

When the Scarlet Witch alters reality so that Magneto rules over the Earth and mutants are the dominant species, Professor X is initially depicted as missing; Wolverine attempts to locate him, but his search turns up fruitless. Later, on Genosha, Magneto is seen staring at a grave for the Professor with the epitaph "He died so Genosha could live". However, when the grave is searched by Cloak, he finds there is no body.
The question of Xavier's status in this world was left open-ended until House of M: Civil War, which detailed the history of Magneto in this reality. Xavier, while living, sought out Magneto when the latter was attempting to halt the oppression of mutantkind by declaring war on humans. He saved Magneto's life from a sniper attack and joined him, hoping to influence Magneto's actions toward benevolence. He was disabled during the mutant takeover of Genosha and slowly grew more distant from Magneto as the latter's actions grew more bloodthirsty. Ultimately, when the United States sent a team onto Genosha to assassinate Magneto, Xavier found himself trying to appeal to a furious Bucky Barnes, who stabbed Xavier through the chest. What became of his body afterwards is unknown.

In the Marvel Zombies one-shot Marvel Zombies: Dead Days, a zombified Alpha Flight attacks the X-Mansion. Storm informs the X-Men during the battle that Alpha Flight has ripped Xavier to pieces. Cyclops, trying not to deal with the fact that Xavier is dead, continues to fight. In the Marvel Zombies/Army of Darkness crossover, a zombified Beast informs Doctor Doom of Xavier's death, and that it was the zombie Reed Richards who reprogrammed Cerebro to seek out humans. In Marvel Zombies Return, however, another alternative Xavier is zombified and turned into a human-detection system, his brain permanently connected to Cerebro so that he can find any remaining human beings.

In the alternative reality known as Mutant X, Professor X, believing in harmony between man and mutant, formed the X-Men along with his friend Magnus and led the team toward that peaceful goal. However, the day they fought the Shadow King, everything changed. The good in Xavier was corrupted, and he left the team to explore his powers further. When he returned, it was during an attack by the Juggernaut. Xavier fired a blast at Juggernaut, but it missed and killed Magneto's lover Moira MacTaggert instead. Xavier left the X-Men for good then, and traveled the world seeking out telepaths, whom he captured and incarcerated around the globe. He joined forces with Sinister in a bid to transfer all the mental energy of the world's telepaths into himself. To that end, they created the X-Man, and Xavier took control of S.H.I.E.L.D., captured Gambit's adopted daughter Raven, and had Fury attempt to kill the X-Men with a nuclear strike. Xavier met up with the Six in New York, "fleeing" from Apocalypse and the Four Horsemen. However, when Xavier made several attempts to abduct Scotty, Havok was alerted to the truth by Jean Grey and Magneto, and realized who the true villain was. After a pitched battle, Xavier donned his psychic armor, and he and Sinister released a giant replica of Galactus to induce fear in the citizens of Earth, on which Xavier could feed his power. In the end, the replica was destroyed and the Six beat the fear phantoms that had composed it. Xavier turned on Sinister and destroyed him, and X-Man ran off, leaving Scotty and Raven – who with X-Man were to be Xavier's psychic batteries – to help Havok blast away at Xavier. Xavier was knocked out of his armor and fled the scene, but not before unleashing a blast at Havok that hit Brute when he jumped in front of it to save Alex. Fortunately, the blast temporarily restored Hank to his former level of intelligence, and he was able to devise cures for his friends before the effect faded away. Xavier was later summoned by Dr. Strange to help fight the Beyonder (Goblin Queen) by adding his psychic power to that of others so Havok could reach a higher plane of reality. While hooked up to the psychic amplification machine, Xavier was about to be killed by Dracula when he was saved by Bloodstorm, who staked her former master.

Warren Ellis' Ruins was set in an alternative version of the Marvel Universe where "everything went wrong". In this world, "President X" leads a corrupt regime over the United States. He moved the White House from Washington to Westchester, New York, letting the capital fall into waste and corruption. He never formed the X-Men, with only Warren Worthington working for him as a secret serviceman. Some of his would-be X-Men are locked in a Texan prison by his orders and are sometimes forcibly deformed in an effort to keep their powers under control. He was known to frequently visit and verbally abuse them, "leaving them all sobbing and throwing up". The Avengers were depicted in this world as a Californian pro-secessionist revolutionary cell opposing Xavier's regime, all of whom were killed when their Quinjet was shot down. President X also started the "Genoshan Police Action", also known as the "Genoshan War".

In the first arc of New Excalibur, the team is brought together partly in response to a clash between Dazzler and a group of homicidal mutants bearing a resemblance to the original X-Men. It turns out that these are the X-Men of an alternative universe in which Charles Xavier is possessed by the Shadow King and has used his mind-controlled and thoroughly corrupted X-Men to wipe out all the other superhumans. This version of Xavier can walk, and insists that his followers refer to him as "Master". He, along with the Shadow King, is killed by Lionheart.

In the Ultimate Marvel continuity, Professor Charles Xavier is the world's most powerful telepath, the founder and patron of the X-Men, and a world-famous lecturer on pacifism and mutant emancipation. In contrast to his mainstream version, he is publicly open about his mutant status from the beginning and also has limited telekinetic abilities. He leaves his wife Moira MacTaggert, with whom he collaborated to create new therapies and surgical techniques for their mutant patients, and their sick son David to pursue Magneto's dream of a mutant society, but Magneto turns on him, crippling him with a shard of metal through his spine. Xavier also repeatedly tampers with other people's minds to reach his goals, but he recognizes his flaws. In one instance, Xavier finds that Iceman has told a girl several secrets about the X-Men and is forced to erase the conversation from their minds. He generally believes that reading minds without permission is unacceptable, or so he leads his students to believe. In Ultimate X-Men #40, when Angel flies away, the Professor sends Storm after him because he telepathically knows that Angel is attracted to her. Similarly, Beast questions whether Xavier has made Storm love him. In this timeline, his former love interests include Mystique and Emma Frost. In Ultimate X-Men #77, he tells Cyclops that he is in love with Jean. He also has a pet cat which he has named "Mystique". In Ultimate X-Men #78, Xavier is apparently killed by Cable, who was trying to prevent horrible events in the future. In Ultimate X-Men #80, it is revealed that he is in fact alive and a captive of Cable in the future. It is also revealed that Cable has repaired his spine and is training Xavier to fight against Apocalypse.
However, once the battle comes, Jean Grey manifests as the Phoenix and destroys Apocalypse. Jean returns everything to normal, giving Xavier a "fresh start"; as she does so, however, she undoes the repair to his spine that Cable had performed, leaving him once again disabled. Xavier re-forms the X-Men upon his return as headmaster of the Xavier Institute. Soon after, he temporarily leaves the school to aid Moira in research on Muir Island. While he is away, the school is attacked by Alpha Flight, whose mutant powers are enhanced by a drug called Banshee. Furthermore, it is revealed that Colossus has been taking Banshee during his entire time at Xavier's school in order to use his power without pain. Due to the sudden and apparently rampant use of the drug, Xavier and Jean begin screening all the students for traces of Banshee. It is later revealed that the Banshee drug was created by Xavier himself during his time in the Savage Land, and that it was derived from Wolverine's blood. When Xavier tested Banshee, it gave him powers that mimicked Wolverine's, including claws, enhanced senses, and a healing factor; Xavier and Magneto, however, deemed the drug too dangerous and stopped its production. When Wolverine discovers that he is the source of the drug and that Xavier was responsible for its initial creation, he attacks Muir Island. Xavier admits to creating the drug but denies responsibility for its continued production and use. It is then revealed that Moira got hold of Xavier's research and began creating and selling the drug to finance Muir Island. Moira, who has used the drug to give herself a sonic scream, battles Wolverine, and Xavier evacuates the children moments before the research facility explodes.

In the Ultimatum story arc, Charles informs all mutants that Magneto is behind the recent attacks. Magneto confronts Charles, explaining that he believes he shall act as God did to cleanse the world and usher in an era of mutant supremacy. When Charles states that Magneto is not God and that he will stop him as he always has in the past, Magneto snaps Charles's neck, killing him. He later returns, revealed as Rogue's benefactor, secretly sending her on an undercover mission and stating that he does not want his former students to know about his plan. It is unconfirmed whether this is truly Xavier, as William Stryker, Alex Summers, and Quicksilver have all been seen talking to their supposedly dead loved ones, hinting at a foe mentally manipulating several characters. It is eventually revealed to be the work of Mr. Sinister, Apocalypse's disciple.

In X-Men Noir, Charles Xavier is a psychiatrist who ran the "Xavier School for Exceptionally Wayward Youth" in Westchester, where he took in juvenile delinquents; instead of reforming them, he further trained them in criminal talents, owing to his belief that sociopathy was in fact the next stage in human behavioral evolution. The paper in which he stated this led to his expulsion from the American Psychological Association. He is currently in Rikers Island, awaiting charges after the truth about his reform school was made public. Xavier had been framed by Chief of Detectives Eric Magnus for the murder of one of his own students, Warren. Magnus had murdered Warren after Xavier refused to make his X-Men join Magnus' Brotherhood.
An alternative version of Earth-616's Professor X is also shown; there was seemingly little to distinguish this Charles Xavier until the day he was kidnapped by the forces of the Savior (unbeknownst to him, an alternative version of himself), who removed his head from his body, placed it in a life-giving "jar", and set it among the heads of all the other alternative Xaviers put through the same procedure, made to scan the multiverse for the next mutants to be kidnapped. When the Savior was defeated, the collective of Xavier heads put themselves to work finding a new home for the people of the world they had been kidnapped to. However, in the process, all of the heads exploded except one. This Xavier head would later aid a cross-dimension X-Men team in defeating ten evil Xaviers scattered throughout the multiverse who threaten existence itself. During the X-Termination crossover, AoA Nightcrawler's trip home results in the release of three evil beings that destroy anyone they touch. Several casualties result, including the AoA's Sabretooth, Horror Show, and Fiend, as well as the X-Treme X-Men's Xavier and Hercules.

Professor X has appeared in a number of animated television shows, including the X-Men animated series (voiced by Cedric Smith), X-Men: Evolution (voiced by David Kaye), and Wolverine and the X-Men (voiced by Jim Ward). He has appeared in twelve live-action feature films to date: he is played by Patrick Stewart in X-Men, X2, X-Men: The Last Stand, X-Men Origins: Wolverine, The Wolverine, Logan, and Doctor Strange in the Multiverse of Madness, and by James McAvoy in X-Men: First Class, X-Men: Apocalypse, Deadpool 2, and Dark Phoenix. Both actors play him at different ages in X-Men: Days of Future Past. Harry Lloyd portrays a young Charles Xavier in the television series Legion. He has also appeared in a number of books and video games. Professor X appears as a collectible card in Marvel SNAP.
[ { "paragraph_id": 0, "text": "Professor X (Prof. Charles Francis Xavier) is a character appearing in American comic books published by Marvel Comics. Created by writer Stan Lee and artist/co-writer Jack Kirby, the character first appeared in The X-Men #1 (September 1963). The character is depicted as the founder and occasional leader of the X-Men.", "title": "" }, { "paragraph_id": 1, "text": "Xavier is a member of a subspecies of humans known as mutants, who are born with superhuman abilities. He is an exceptionally powerful telepath, who can read and control the minds of others. To both shelter and train mutants from around the world, he runs a private school in the X-Mansion in Salem Center, located in Westchester County, New York. Xavier also strives to serve a greater good by promoting peaceful coexistence and equality between humans and mutants in a world where zealous anti-mutant bigotry is widespread, though he later abandons his dream in favor of establishing a mutant nation on Krakoa.", "title": "" }, { "paragraph_id": 2, "text": "Throughout much of the character's history, Xavier has been depicted with paraplegia and uses a wheelchair. One of the world's most powerful mutant telepaths, Xavier is a scientific genius and a leading authority in genetics. He has devised Cerebro and other equipment to enhance psionic powers and detect and track people with the mutant gene.", "title": "" }, { "paragraph_id": 3, "text": "Xavier's pacifist and assimilationist ideology and actions have often been contrasted with that of Magneto, a mutant leader (initially characterized as a supervillain and later as a complex antihero) with whom Xavier has a complicated relationship. Writer Chris Claremont, who originated Magneto's backstory as well as the relationship between the two men, modeled his characterization of Xavier on David Ben Gurion, and that of Magneto on Menachem Begin.", "title": "" }, { "paragraph_id": 4, "text": "Patrick Stewart portrayed the character in the first three films in the 20th Century Fox X-Men film series and in various video games, while James McAvoy portrayed a younger version of the character in the 2011 prequel X-Men: First Class. Both actors reprised the role in the film X-Men: Days of Future Past. Stewart would reprise the role in the film Logan (2017), while McAvoy would further appear as his younger iteration of the character in X-Men: Apocalypse (2016), Deadpool 2 (2018) and Dark Phoenix (2019). Harry Lloyd portrayed the character in the third season of the television series Legion. 
Stewart again returned to the role, portraying an alternate version of the character in the 2022 Marvel Cinematic Universe film Doctor Strange in the Multiverse of Madness.", "title": "" }, { "paragraph_id": 5, "text": "Created by writer Stan Lee and artist/co-writer Jack Kirby, Professor X first appeared in X-Men #1 (September 1963).", "title": "Publication history" }, { "paragraph_id": 6, "text": "Stan Lee has stated that the physical inspiration of Professor Xavier was from Academy Award-winning actor Yul Brynner.", "title": "Publication history" }, { "paragraph_id": 7, "text": "Writer Scott Lobdell established Xavier's middle name to be \"Francis\" in Uncanny X-Men #309 (February 1994).", "title": "Publication history" }, { "paragraph_id": 8, "text": "Xavier's goals are to promote the peaceful affirmation of mutant rights, to mediate the co-existence of mutants and humans, to protect mutants from violent humans, and to protect society from antagonistic mutants, including his old friend, Magneto. To achieve these aims, he founded Xavier's School for Gifted Youngsters (later named the Xavier Institute) to teach mutants to explore and control their powers. Its first group of students was the original X-Men (Cyclops, Iceman, Marvel Girl, Angel, and Beast). Xavier's students consider him a visionary and often refer to their mission as \"Xavier's dream\". He is highly regarded by others in the Marvel Universe, respected by various governments, and trusted by several other superhero teams, including the Avengers and the Fantastic Four. However, he also has a manipulative streak which has resulted in several significant fallings-out with allies and students.", "title": "Publication history" }, { "paragraph_id": 9, "text": "He often acts as a public advocate for mutant rights and is the authority most of the Marvel superhero community turns to for advice on mutants. Despite this, his status as a mutant himself and originator of the X-Men only became public during the 2001 story \"E Is for Extinction\". He also appears in almost all of the X-Men animated series and in many video games, although usually as a non-playable character. Patrick Stewart plays him in the 2000s X-Men film series, as well as providing his voice in some of the X-Men video games (including some not connected to the film series).", "title": "Publication history" }, { "paragraph_id": 10, "text": "According to BusinessWeek, Charles Xavier is listed as one of the top ten most intelligent fictional characters in American comics.", "title": "Publication history" }, { "paragraph_id": 11, "text": "In a number of comics, Xavier is shown to have a dark side, a part of himself that he struggles to suppress. Perhaps the most notable appearance of this character element is in the Onslaught storyline, in which the crossover event's antagonist is a physical manifestation of that dark side. Also, Onslaught is created in the most violent act Xavier claims to have done: erasing the mind of Magneto. In X-Men #106 (August 1977), the new X-Men fight images of the original team, which have been created by what Xavier says is his \"evil self ... who would use his powers for personal gain and conquest\", which he says he is normally able to keep in check. 
In the 1984 four-part series titled The X-Men and the Micronauts, Xavier's dark desires manifest themselves as the Entity and threaten to destroy the Micronauts' universe.", "title": "Publication history" }, { "paragraph_id": 12, "text": "In other instances, Xavier is shown to be secretive and manipulative. During the Onslaught storyline, the X-Men find Xavier's files, the \"Xavier Protocols\", which detail how to kill many of the characters, including Xavier himself, should the need ever arise, such as if they went rogue. Astonishing X-Men vol. 3, #12 (August 2005) reveals that when Xavier realizes that the Danger Room has become sentient, he keeps it trapped and experiments on it for years, an act that Cyclops calls \"the oppression of a new life\" and equates to humanity's treatment of mutants (however, X-Men Legacy #220 - 224 reveals that Xavier did not intend for the Danger Room to become sentient: it was an accident, and Xavier sought a way to free Danger, but was unable to find a way to accomplish this without deleting her sentience as well).", "title": "Publication history" }, { "paragraph_id": 13, "text": "Charles Francis Xavier was born in New York City to the wealthy Dr. Brian Xavier, a well-respected nuclear scientist, and Sharon Xavier. The family lives in a very grand mansion estate in Westchester County because of the riches his father's nuclear research has brought them. He later grows up to attend Pembroke College at the University of Oxford, where he earns a Professorship in Genetics and other science fields, and goes on to live first in Oxford and then London for a number of years. Crucially, as he enters late adolescence, Xavier inherits the mansion-house he was raised in, enabling him not only to continue to live in it, but also to turn it in to Xavier's School for Gifted Youngsters, which he begins together with the first of the X-Men.", "title": "Fictional character biography" }, { "paragraph_id": 14, "text": "Brian, his father, dies in an accident when Xavier is still very young, and Brian's science partner Kurt Marko comforts and then marries the grieving Sharon. When Xavier's telepathic mutant powers emerge, he discovers Marko cares only about his mother's money.", "title": "Fictional character biography" }, { "paragraph_id": 15, "text": "After the wedding, Kurt moves in with the Xaviers, bringing with him his son Cain. Kurt quickly grows neglectful of Sharon, driving her to alcoholism, and abuses both Charles and Cain. Cain takes out his frustrations and insecurities on his stepbrother. Charles uses his telepathic powers to read Cain's mind and explore the extent of his psychological damage, which only leads to Cain becoming more aggressive toward him and the young Xavier feeling Cain's pain firsthand.", "title": "Fictional character biography" }, { "paragraph_id": 16, "text": "Sharon dies soon after, and a fight erupts between Cain and Charles that causes some of Kurt's lab equipment to explode. Mortally wounded, Kurt drags the two children out before dying, and admits he was partly responsible for Brian's death.", "title": "Fictional character biography" }, { "paragraph_id": 17, "text": "With help from his superhuman powers and natural genius, Xavier becomes an excellent student and athlete, though he gives up the latter, believing his powers give him an unfair advantage. Due to his powers, by the time he graduates from high school, Charles loses all of his hair. 
He enters Bard College at age 16 and graduates with his bachelor's degree in biology in only two years. In graduate studies, he receives Ph.D.s in Genetics, Biophysics, Psychology, and Anthropology with a two-year residence at Pembroke College, University of Oxford. He also receives an M.D. in Psychiatry while spending several years in London. He is later appointed adjunct professor at Columbia University. Origins of Marvel Comics: X-Men #1 (2010) presents a different version of events, suggesting a scholarship to the University of Oxford rescued him from his abusive home, after which he \"never looked back\", suggesting he began his academic career as a very young man at Oxford. His stepbrother is resentful of him.", "title": "Fictional character biography" }, { "paragraph_id": 18, "text": "At graduate school, he meets a Scottish girl named Moira Kinross, a fellow genetics student with whom he falls in love. The two agree to get married, but soon, Xavier is drafted into the Korean War. He carves himself a niche as a soldier in search and rescue missions alongside Shadowcat's father, Carmen Pryde, and witnesses Cain's transformation into Juggernaut when he touches a ruby with an inscription on it in an underground temple. During the war, he receives a letter from Moira telling him that she is breaking up with him. He later discovers that Moira married her old boyfriend Joseph MacTaggert, who abuses her.", "title": "Fictional character biography" }, { "paragraph_id": 19, "text": "Deeply depressed when Moira broke off their engagement without explanation, Xavier began traveling around the world as an adventurer after leaving the army. In Cairo, he meets a young girl named Ororo Munroe (later known as Storm), who is a pickpocket, and the Shadow King, a powerful mutant who is posing as Egyptian crime lord Amahl Farouk. Xavier defeats the Shadow King, barely escaping with his life. This encounter leads to Xavier's decision to devote his life to protecting humanity from evil mutants and safeguarding innocent mutants from human oppression.", "title": "Fictional character biography" }, { "paragraph_id": 20, "text": "Xavier visits his friend Daniel Shomron, who runs a clinic for traumatized Holocaust victims in Haifa, Israel. There, he meets a man going by the name of Magnus (who would later become Magneto), a Holocaust survivor who works as a volunteer in the clinic, and Gabrielle Haller, a woman driven into a catatonic coma by the trauma she experienced. Xavier uses his mental powers to break her out of her catatonia and the two fall in love. Xavier and Magneto become good friends, although neither immediately reveals to the other that he is a mutant. The two hold lengthy debates hypothesizing what will happen if humanity is faced with a new super-powered race of humans. While Xavier is optimistic, Magneto's experiences in the Holocaust lead him to believe that humanity will ultimately oppress the new race of humans as they have done with other minorities. The two friends reveal their powers to each other when they fight Nazi Baron Wolfgang von Strucker and his Hydra agents, who kidnap Gabrielle because she knows the location of their secret cache of gold. Magneto attempts to kill Strucker but Xavier stops him. Realizing that his and Xavier's views on mutant-human relations are incompatible, Magneto leaves with the gold. 
Charles stays in Israel for some time, but he and Gabrielle separate on good terms, neither knowing that she is pregnant with his son, who grows up to become the mutant Legion.", "title": "Fictional character biography" }, { "paragraph_id": 21, "text": "In a strange town near the Himalayas, Xavier encounters an alien calling himself Lucifer, the advance scout for an invasion by his race, and foils his plans. In retaliation, Lucifer drops a huge stone block on Xavier, crippling his legs. After Lucifer leaves, a young woman named Sage hears Xavier's telepathic cries for help and rescues him, bringing him to safety, beginning a long alliance between the two.", "title": "Fictional character biography" }, { "paragraph_id": 22, "text": "In a hospital in India, he is brought to an American nurse, Amelia Voght, who looks after him and, as she sees to his recovery, they fall in love. When he is released from the hospital, the two moved into an apartment in Bombay together. Amelia is troubled to find Charles studying mutation, as she is a mutant and unsettled by it, though she calms when he reveals himself to be a mutant as well. They eventually move to the United States, living on Xavier's family estate. But the night Scott Summers moves into Xavier's mansion, Amelia leaves him, believing Charles would have changed his view and that mutants should lie low. Yet he is recruiting them to what she believes is a lost cause. Charles tries to force her to stay with his mental powers, but immediately ashamed by this, lets her go. She later becomes a disciple of Magneto.", "title": "Fictional character biography" }, { "paragraph_id": 23, "text": "Over the years, Charles makes a name for himself as geneticist and psychologist, apparently renowned enough that the Greys were referred to him when no other expert could help their catatonic daughter, Jean. Xavier trains her in the use of her telekinesis, while inhibiting her telepathic abilities until she matures. Around this time, he also starts working with fellow mutation expert, Karl Lykos, as well as Moira MacTaggert again, who built a mutant research station on Muir Island. Apparently, Charles had gotten over Moira in his travels to the Greek island of Kirinos. Xavier discusses his candidates for recruitment to his personal strike force, the X-Men, with Moira, including those he passes over, which are Kurt Wagner, Piotr Rasputin, Pietro and Wanda Maximoff, and Ororo Munroe. Xavier also trains Tessa to spy on Sebastian Shaw.", "title": "Fictional character biography" }, { "paragraph_id": 24, "text": "Xavier founded Xavier's School for Gifted Youngsters, which provides a safe haven for mutants and teaches them to master their abilities. In addition, he seeks to foster mutant-human relations by providing his superhero team, the X-Men, as an example of mutants acting in good faith, as he told FBI agent Fred Duncan. With his inherited fortune, he uses his ancestral mansion at 1407 Graymalkin Lane in Salem Center, Westchester County, New York as a base of operations with technologically advanced facilities, including the Danger Room - later, Fantomex mentions that Xavier is a billionaire with a net worth of $3.5 billion. Presenting the image of a stern teacher, Xavier makes his students endure a rigorous training regime.", "title": "Fictional character biography" }, { "paragraph_id": 25, "text": "Xavier's first five students are Cyclops, Iceman, Angel, Beast, and Marvel Girl who become the original X-Men. 
After he completes recruiting the original team of X-Men, he sends them into battle with Magneto.", "title": "Fictional character biography" }, { "paragraph_id": 26, "text": "Throughout most of his time with the team, Xavier uses his telepathic powers to keep in constant contact with his students and provides instructions and advice when needed. In addition, he uses a special machine called Cerebro, which enhances his ability to detect mutants and to allow the team to find new students in need of the school.", "title": "Fictional character biography" }, { "paragraph_id": 27, "text": "Among the obstacles Xavier faces is his old friend, Magneto, who has grown into an advocate of mutant superiority since their last encounter and who believes the only solution to mutant persecution is domination over humanity.", "title": "Fictional character biography" }, { "paragraph_id": 28, "text": "When anthropologist Bolivar Trask resurfaces the \"mutant problem\", Xavier counters him in a televised debate, however, he appears arrogant and Trask sends his mutant-hunting robot Sentinels to terrorize mutants. The X-Men dispatch them, but Trask sees the error in his ways too late as he is killed by his creations.", "title": "Fictional character biography" }, { "paragraph_id": 29, "text": "At one point, Xavier seemingly dies during the X-Men's battle with the sub-human Grotesk, but it is later revealed that Xavier arranged for a reformed former villain named Changeling to impersonate him while he went into hiding to plan a defense against an invasion by the extraterrestrial Z'Nox, imparting a portion of his telepathic abilities to the Changeling to complete the disguise.", "title": "Fictional character biography" }, { "paragraph_id": 30, "text": "When the X-Men are captured by the sentient island Krakoa, Xavier assembles a new team to rescue them, including Cyclops' and Havok's long-lost brother, Vulcan, along with Darwin, Petra, and Sway. This new team, composed of students of Dr. Moira MacTaggert, was sent to rescue the original X-Men from Krakoa. However, after rescuing Cyclops, McTaggert's former students were seemingly killed. Upon Cyclops' return, Xavier removed Cyclops' memories of the death of Vulcan and his teammates; and began assembling yet another team of X-Men.", "title": "Fictional character biography" }, { "paragraph_id": 31, "text": "Xavier's subsequent rescue team consists of Banshee, Colossus, Sunfire, Nightcrawler, Storm, Wolverine, and Thunderbird. After the mission, the older team of X-Men, except for Cyclops, leave the school, believing they no longer belong there, and Xavier mentors the new X-Men.", "title": "Fictional character biography" }, { "paragraph_id": 32, "text": "Xavier forms a psychic bond across galaxies with Princess Lilandra from the Shi'ar Empire. When they finally meet, it is love at first sight. She implores the professor to stop her mad brother, Shi'ar Emperor D'Ken, and he instantly aids her by deploying his X-Men. When Jean Grey returns from the Savage Land to tell him that all the X-Men are dead, he shuts down the school and travels with Lilandra to her kingdom, where she is crowned Empress and he is treated like a child or a trophy husband.", "title": "Fictional character biography" }, { "paragraph_id": 33, "text": "Xavier senses the changes taking place in Jean Grey, and returns to Earth to help and resume leadership of the X-Men. Shortly thereafter he battles his pupil after she becomes Dark Phoenix and destroys a populated planet in the Shi'ar Empire. 
It hurts Xavier to be on the opposite side of Lilandra, but he has no other choice but to challenge the Shi'ar Imperial Guard to a duel over the fate of the Phoenix. Xavier would have lost against the greater power of the Dark Phoenix, but thanks to the help Jean Grey gives him (fighting her Phoenix persona), Xavier emerges victorious; she later commits suicide to prevent herself from endangering more innocent lives.", "title": "Fictional character biography" }, { "paragraph_id": 34, "text": "When the X-Men fight members of the extraterrestrial race known as the Brood, Xavier is captured by them, and implanted with a Brood egg, which places Xavier under the Brood's control. During this time, Xavier assembles a team of younger mutants called the New Mutants, secretly intended to be prime hosts for reproduction of the aliens. The X-Men discover this and return to free Xavier, but they are too late to prevent his body from being destroyed with a Brood Queen in its place; however, his soul remains intact. The X-Men and Starjammers subdue this monstrous creature containing Xavier's essence, but the only way to restore him is to clone a new body using tissue samples he donated to the Starjammers and transfer his consciousness into the clone body. This new body possesses functional legs, though the psychosomatic pain Xavier experienced after living so long as a paraplegic takes some time to subside. Subsequently, he even joins the X-Men in the field, but later decides not to continue this practice after realizing that his place is at the school, as the teacher of the New Mutants.", "title": "Fictional character biography" }, { "paragraph_id": 35, "text": "After taking a teaching position at Columbia University in Uncanny X-Men #192, Xavier is severely injured and left for dead as the victim of a hate crime. Callisto and her Morlocks, a group of underground-dwelling mutants, get him to safety. One of the Morlocks partially restores Xavier's health, but Callisto warns Xavier that he is not fully healed and that he must spend more time recuperating and restrain himself from exerting his full strength or powers, or his health might fail again. Xavier hides his injuries from the others and resumes his life.", "title": "Fictional character biography" }, { "paragraph_id": 36, "text": "Charles meets with former lover Gabrielle Haller on Muir Isle and discovers that they had a child. The boy, David, has autism and dissociative identity disorder. Furthermore, he has vast psionic powers like his father. After helping him and his team to escape from David's mind, Xavier promises he will always be there for him.", "title": "Fictional character biography" }, { "paragraph_id": 37, "text": "A reformed Magneto is arrested and put on trial. Xavier attends the trial to defend his friend. Andrea and Andreas Strucker, the children of presumed dead Baron von Strucker, crash the courtroom to attack Magneto and Xavier. Xavier is seriously injured. Dying, he asks a shocked Magneto to look after the X-Men for him. Lilandra, who has a psychic bond with Xavier, feels that he is in great danger and heads to Earth. There, she and Corsair take Xavier with them so Shi'ar advanced technology can heal him.", "title": "Fictional character biography" }, { "paragraph_id": 38, "text": "Xavier leaves Magneto in charge of the school, but some of the X-Men are unwilling to forgive their former enemy. 
Cyclops loses a duel for the leadership of the X-Men against Storm, then leaves them and joins the other four original X-Men to form a new team called X-Factor.", "title": "Fictional character biography" }, { "paragraph_id": 39, "text": "In the meantime, Charles becomes stranded in space with the Starjammers, but he is reunited with his lover Lilandra and relishes his carefree lifestyle. He serves as a member of the Starjammers aboard the starship Starjammer, mobile in the Shi'ar Galaxy. He becomes consort to the Princess-Majestrix Lilandra while in exile, and when she later resumes her throne he takes up residence with her in the Imperial palace on the Shi'ar homeworld. Xavier joins Lilandra in her cause to overthrow her sister Deathbird, taking on the powers of Phoenix temporarily wherein he is named Bald Phoenix by Corsair, but sees that he must return to help the X-Men.", "title": "Fictional character biography" }, { "paragraph_id": 40, "text": "Xavier eventually becomes imprisoned by the Skrulls during their attempted invasion of the Shi'ar Empire. Xavier breaks free from imprisonment by Warskrull Prime, and is reunited with the X-Men. A healthy Xavier returns from the Shi'ar Empire and is reunited with both the current and original X-Men teams, and resumes his leadership responsibilities of the united teams. In a battle with his old foe, the Shadow King, in the \"Muir Island Saga\", Xavier's spine is shattered, returning him to his former paraplegic state, while his son David is seemingly killed. In the following months, Xavier rebuilds the mansion, which previously was rebuilt with Shi'ar technology, and restructures the X-Men into two teams.", "title": "Fictional character biography" }, { "paragraph_id": 41, "text": "While holding a mutant rights speech, Xavier is nearly assassinated by Stryfe in the guise of Cable, being infected with a fatal techno-organic virus. For reasons of his own, the villain Apocalypse saves him. As a temporary side-effect, he gains full use of his legs and devotes his precious time to the youngest recruit on his team, Jubilee.", "title": "Fictional character biography" }, { "paragraph_id": 42, "text": "With all his students now highly trained adults, Professor Xavier renames his school the Xavier Institute For Higher Learning. Also, he assumes control of a private institution, the Massachusetts Academy, making it a new School for Gifted Youngsters. Another group of young mutants is trained here, Generation X, with Banshee and Emma Frost as headmaster and headmistress, respectively.", "title": "Fictional character biography" }, { "paragraph_id": 43, "text": "Professor X is for a time the unknowing host of the evil psionic entity Onslaught, the result of a previous battle with Magneto. In that battle, Magneto uses his powers to rip out the adamantium bonded to Wolverine's skeleton, and a furious Xavier wipes Magneto's mind, leaving him in a coma. From the psychic trauma of Xavier using his powers so violently and the mixing of Magneto's and Xavier's repressed anger, Onslaught is born. Onslaught wreaks havoc, destroying much of Manhattan, until many of Marvel's superheroes—including the Avengers, the Fantastic Four and the Hulk—destroy him. Xavier is left without his telepathy and, overcome with guilt, leaves the X-Men and is incarcerated for his actions. 
He later returns to the X-Men after Operation: Zero Tolerance, in which he is shocked by the cruel act of being turned over to the mutant-hating Bastion, following a clash with the sentient Cerebro and a team of impostor X-Men.", "title": "Fictional character biography" }, { "paragraph_id": 44, "text": "Xavier questions his dream again and Magneto shortly thereafter is confronted by the X-Men. After the battle, the UN concedes Genosha to Magnus, and Wolverine is angered by Xavier stopping him from getting his revenge on Magneto. Charles and Logan are later trapped in a dimension with different laws of physics, wherein they have to coordinate their moves together and, in the process, gain a better understanding of the other's views.", "title": "Fictional character biography" }, { "paragraph_id": 45, "text": "Apocalypse kidnaps the fabled \"Twelve\" special mutants (Xavier included) whose combined energies would grant him omnipotence. After Apocalypse's defeat with the help of Skrull mutants, Xavier goes with the young Skrulls known as Cadre K to train them and free them from their oppressors, and eventually returns to aid in Legacy Virus research.", "title": "Fictional character biography" }, { "paragraph_id": 46, "text": "Mystique and her Brotherhood start a deadly assault on Muir Isle by releasing an altered form of the Legacy Virus, all in retaliation against the election campaign of Robert Kelly, a seeming mutant-hater. Mystique blows up Moira MacTaggert's laboratory complex, fatally wounding her. Charles goes to the astral plane to meet with her and retrieve information on the cure to the Legacy Virus, but after gathering the information does not want to leave her alone. If not for Jean and Cable talking him down and pulling him back, the professor would have died with his first love, who states she has no regrets.", "title": "Fictional character biography" }, { "paragraph_id": 47, "text": "As Beast cures the Legacy Virus, many infected Genoshan mutants recover overnight, providing Magneto, the current ruler of Genosha, with an army to start the third World War. He demands Earth's governments accept him as their leader, and abducts and crucifies Xavier in Magda Square for all to see. A loyal member of Magneto's Acolytes, Amelia Voght, cannot stand to see her former lover punished in such a manner and sets him free. Jean Grey and rather untrained newcomers, as most of the team are elsewhere, distract Magneto and Wolverine guts him. Xavier is too late to intervene.", "title": "Fictional character biography" }, { "paragraph_id": 48, "text": "Xavier's evil twin Cassandra Nova, whom Xavier attempted to kill while they were both in their mother's womb, orders a group of rogue Sentinels to destroy the independent mutant nation of Genosha. Magneto, who is Genosha's leader, appears to die along with the vast majority of the nation's inhabitants. Nova then takes over Xavier's body. Posing as Xavier, she reveals his mutation to the world, something he needed to do but did not want to sully his reputation over, before going into space and crippling the Shi'ar Empire. The X-Men restore Xavier, but Lilandra, believing that too much disaster has come from the Shi'ar's involvement with the X-Men, annuls her marriage to Xavier. Lilandra previously had gone insane and tried to assassinate Charles on a trip to Mumbai. During this period, a mutant named Xorn joins the X-Men. 
Xorn uses his healing power to restore Xavier's use of his legs.", "title": "Fictional character biography" }, { "paragraph_id": 49, "text": "When the X-Men receive a distress call from a Scottish island, they are surprised to find Juggernaut with nowhere to go, as the island was destroyed by his further-mutated partner in crime, Black Tom Cassidy, who died. Xavier reaches out to his stepbrother and offers him a place in his mansion, with Cain reluctantly accepting. The Juggernaut redeems himself over the next few weeks and joins the X-Men. Xavier finds out that Cain's father preferred him to his own flesh and blood and that they both thought they deserved the abuse they incurred by Kurt; Cain believed this because his father loved someone else's child more than him, and Charles felt guilty about getting in the way. That it is why neither of them stopped Kurt Marko with their powers.", "title": "Fictional character biography" }, { "paragraph_id": 50, "text": "Now outed as a mutant, Xavier makes speeches to the public about mutant tolerance. He also founds the X-Corporation, or X-Corp (not to be confused with the X-Corps), with offices all over the world. The purpose of the X-Corp is to watch over mutant rights and help mutants in need. As a result of being out, the school no longer hides the fact that it is a school for mutants and it opens its doors for more mutant (and even human) students to come in. A student named Quentin Quire and members of his gang start a riot at the Xavier Institute during an open house at the school. As a result, Quire and two other students are killed. Uncertain about his dream's validity, Xavier announces that he will step down as headmaster and be succeeded by Jean Grey. Afterwards, Xorn reveals himself to be Magneto, having apparently not died in the Sentinel raid on Genosha. Magneto undoes the restoration of Xavier's ability to walk, kidnaps him, and destroys the X-Mansion (killing several of the students). Then Xorn/Magneto assaults New York, where Cyclops, Fantomex and a few students confront him. After the rest of the X-Men arrive, Xorn/Magneto kills Jean Grey with an electromagnetically induced stroke, and Wolverine decapitates him. With Jean dead, Xavier leaves the school to Cyclops and Emma Frost, to bury Xorn/Magneto in Genosha. In a retcon of Grant Morrison's storyline, there Xavier meets the \"real\" Magneto, who mysteriously survived Cassandra Nova's assault. The two resolve their differences and attempt to restore their friendship, leading a team of mutants, the Genoshan Excalibur, to rebuild and restore order to the destroyed island nation.", "title": "Fictional character biography" }, { "paragraph_id": 51, "text": "At the mansion, the Danger Room (the X-Men's simulated reality training chamber) gains sentience, christens itself \"Danger\", assumes a humanoid form, and attacks the X-Men before leaving to kill Xavier. With Magneto's help, Xavier holds off Danger until the X-Men arrive. Danger flees, but not before revealing to Colossus that Xavier has known it to be sentient ever since he upgraded it. Colossus is especially offended by this because he had been held captive and experimented upon by Danger's ally, Ord of the Breakworld. Ashamed, Xavier tries to explain to them that by the time he realized what was happening, he could see no other course. 
The disgusted X-Men leave.

In a prelude to House of M, Magneto's daughter Scarlet Witch has a mental breakdown and causes the deaths of several Avengers. Magneto brings her to Xavier and asks him to use his mental powers to help her. Although aided by Doctor Strange and the appearance of Cassandra Nova, Xavier is unsuccessful. Xavier orders a meeting of the X-Men and Avengers to decide Wanda's fate. Her brother Quicksilver, believing the heroes plan to kill her, speeds off to Genosha and convinces Wanda that she could right the wrongs she inflicted by using her powers to alter reality.

Quicksilver somehow forces a tearful Wanda to reveal to him the heart's desires of Magneto, the assembled New Avengers, and the X-Men, and then uses her powers to make them all real. Thanks to Magneto, though, this re-imagined world is a place where a much more numerous mutantkind is the dominant species, humans are a disenfranchised and oppressed "silent majority", and Magneto himself rules supreme. In this reality, the only proof that Charles Xavier ever existed is a secret monument in Magneto's palace garden, with the engraved message "He died so Genosha could live".

After the mutant Layla Miller restores the memories of some of the X-Men and Avengers, they head to Genosha, where they discover that Magneto has erected a memorial garden commemorating Xavier's death. Emma is horrified until Cloak fades into the grave and discovers there is no body inside. After a battle, the Scarlet Witch again uses her powers to restore reality and, as a slight against her father, causes the large majority of mutants to lose their powers, leaving the mutant race on the brink of extinction; the lost powers become an energy mass, the Collective. With reality restored, Xavier is still missing, and the X-Men are unable to detect him with Cerebro.

Xavier returns when Cyclops' and Havok's long-lost brother, Vulcan, is revived by the Collective energy released as a result of the "House of M" incident. Vulcan then attacks the X-Men. Xavier, now depowered but able to walk in the wake of "House of M", reveals that he had gathered and trained another team of X-Men (composed of students of Dr. Moira MacTaggert) sometime between the original team and the new X-Men team introduced in Giant-Size X-Men #1. This team included Vulcan as a member. Like the "Giant-Size" X-Men team, MacTaggert's former students were sent to rescue the original X-Men from Krakoa, the living island. However, after rescuing Cyclops, MacTaggert's former students were seemingly killed. Upon Cyclops' return, Xavier removed Cyclops' memories of the deaths of Vulcan and his teammates and began assembling the "Giant-Size" X-Men. Vulcan skirmishes with the X-Men and eventually flees into space.

In spite of Cyclops' feelings, Xavier forms a new team including Havok and Darwin, the only other survivors of Moira's students. Xavier seeks to confront Vulcan before he can enact his vengeance against the Shi'ar Empire, which killed Vulcan's mother. While en route to the Shi'ar homeworld, Xavier is abducted and later thrown into the M'Kraan Crystal by Vulcan.
Darwin follows Xavier into the crystal and pulls him out, somehow restoring Xavier's lost telepathy. With help from his longtime lover, Lilandra, Xavier escapes back to Earth with several of his X-Men.

Upon Xavier's return to Earth, as seen in the World War Hulk storyline, he begins to search for lost mutants such as Magneto. Charles' search is interrupted by the Hulk, who was sent into extraterrestrial exile by the Illuminati, a group of powerful superbeings to which Xavier belongs. Xavier had no part in (and did not know of) the decision to exile the Hulk, but he admits that he would have agreed to a temporary exile so that Bruce Banner could be cured of transforming into the Hulk; he also tells the Hulk, however, that he would not have agreed to permanent exile. Xavier attempts to surrender to the Hulk, but after viewing the X-Mansion's large graveyard dedicated to post-M-Day mutant deaths, the Hulk concludes the mutants have suffered enough and leaves the mansion grounds of his own accord. While the X-Men tend to the wounded, Cyclops finally forgives Professor X.

While using Cerebra and talking to Beast during the Messiah Complex storyline, Charles detects a new mutant so powerful that it fries Cerebra's system. He asks Cyclops to send out a team to investigate. Once the team has come back empty-handed, he argues with Scott for not telling him about the team he deployed to find the former Acolytes. Scott tells him outright that he does not need him to run the X-Men anymore. This upsets Charles, and it annoys him further when he overhears Cyclops briefing X-Factor on the situation. He also approaches the New X-Men in an attempt to help them figure out a non-violent way to act against the Purifiers, but he is quickly rebuked by Surge, who questions where he was when they were first attacked and tells him they do not need to learn from him. Charles questions Cyclops' decision to send X-Force to hunt down his own son, Cable, in front of the students. Cyclops then tells Xavier that he is a distraction who will keep getting in the way and that he must leave the mansion. Xavier is contacted by Cable, who lost the mutant newborn to the traitorous actions of Bishop, who in turn lost the child to the Marauders; Cable tells Xavier that he is the only one who can help him save the future. In the final fight, Xavier is accidentally shot in the head by Bishop. Immediately afterward, Xavier's body disappears, and Cyclops declares that there are no more X-Men.

Professor Xavier survives Bishop's gunshot but falls into a coma. Xavier is kidnapped by Exodus, Tempo, and Karima Shapandar. Exodus tries to heal Xavier, but Xavier mentally fights him. Exodus finally approaches Magneto, who is apparently still depowered, for help. Magneto and Karima Shapandar are able to stir Xavier's memories and coax him out of his coma, though Xavier remains slightly confused and partly amnesiac. Later, Exodus confronts Magneto about Joanna Cargill's injury (Magneto was forced to shoot a laser through her eyeball to prevent her attempted assassination of Xavier). Exodus nearly kills Magneto, and Xavier drags Exodus onto the Astral Plane, putting his own newly restored mind at stake.
Xavier defeats Exodus after a harrowing psionic battle, and Exodus reveals why he abducted Xavier and restored his mind: he wants Xavier to lead the Acolytes and find the mutant messiah child (now under the guardianship of Cable) in order to indoctrinate the child into their cause. Xavier refuses. Emma Frost's telepathy picks up on the psychic fight, and Emma informs Cyclops that Xavier is alive. Xavier parts company with Magneto and Karima to try to regain his lost memories by visiting people from his past.

The first person Charles visits is Carter Ryking, who had gone insane after losing his powers. Charles reads Carter's memories and discovers that when the two were children, they were used as test subjects by Nathan Milbury of the Black Womb Project, with the approval of Charles' father, Doctor Brian Xavier. Xavier makes the connection between Milbury and the X-Men's enemy Mister Sinister, who has apparently long been manipulating Charles' life in addition to the lives of other X-Men. Afterwards, he discovers he has been targeted by assassins.

Charles eventually discovers that Mister Sinister had set up Charles, Sebastian Shaw, Juggernaut, and Ryking (Hazard) as potential new hosts for Sinister's mind. Bleeding slowly to death, he apparently gives in to Sinister, becoming the new Mister Sinister. In reality, however, Xavier is still battling Sinister for control of his body. As Sebastian Shaw and Gambit destroy Sinister's Cronus Machine, the device he used to transfer his consciousness into new hosts, Xavier drives Sinister out of his body permanently. Xavier thanks Shaw and Gambit for their help and declares he must go and see Cyclops immediately. Professor X returns to the X-Mansion to find it destroyed after recent events. Afterwards, Xavier leaves the ruins of the X-Mansion to secretly meet with Cyclops, psychically coercing his former student into the visit. Xavier tells Cyclops about the recent events with Mister Sinister and tries to explain how Sinister has been manipulating Scott's and Jean's lives since they were children. Xavier asks Scott for permission to scan his mind for traces of Sinister's influence, but instead Scott turns the tables on Xavier by revealing that he has secretly invited Emma Frost into their entire meeting, and also into Xavier's mind.

While in his mind, Emma forces Xavier to relive each of the morally ambiguous decisions he made under altruistic pretenses. Charles comes to recognize his arrogance and realizes that, while some of his decisions were morally wrong, he must move forward with his life and deal with the consequences. Emma ends her incursion into Xavier's mind by reminding him of Moira MacTaggert's last words. As he reflects on Moira's words, Xavier gives Cyclops his blessing to lead the X-Men and leaves to find his own path. Following his encounter with Wolverine in the "Original Sin" story arc, Professor Xavier seeks out his stepbrother, the unstoppable Juggernaut, in an attempt to reform him. After a conversation about the meaning of the word "juggernaut" and a review of their shared history, Xavier offers Cain an empty box as a gift. Confused by the gift, Cain attempts to kill the Professor, bringing an entire sports bar down over their heads in the process.
Later, Cain battles the X-Men in his full Juggernaut armor and conquers the planet. Just as everything appears to be under the Juggernaut's control, Xavier reappears and informs him that everything that has just taken place, except for the destruction of the bar, happened in Cain's mind. A baffled Cain demands to know how Xavier managed to overcome his psychically shielded helmet, to which the Professor replies that he decided to visit Cain in his sleep. Professor Xavier then informs him that he now understands Cain as a person and will not attempt to get in his way or reform him again, but he also warns Cain that if he gets in the way of the Professor's path to redemption, Xavier will stop him permanently. Following his encounter with Cain, Xavier is revealed to be searching for Rogue.

After his bruising encounter with Cyclops and Emma Frost, Professor X is forced to revisit the biggest challenge and the biggest failure of his career, Wolverine, when the feral mutant asks for Charles' help in freeing his son from the clutches of the Hellfire Club. As the two search for Daken, Wolverine reveals that when he first joined the X-Men, he attempted to assassinate Xavier due to some unknown programming. In response, the Professor broke Logan's mind and rebuilt it so that any and all programming he received was forgotten. Logan also reveals that the real reason Xavier asked him to join the X-Men was that Charles "needed a weapon". Eventually, Professor Xavier and Wolverine locate Sebastian Shaw's mansion and attack his minions; just as they are about to enter, a bomb explodes from within, catching them both off guard. From the wreckage emerges an angry Sebastian, who immobilizes Wolverine. Meanwhile, Miss Sinister knocks Daken unconscious and has him taken to the med lab in the mansion's basement. As Shaw prepares to deliver a killing blow to Xavier, Wolverine recovers and stops him, telling Xavier to rescue his son. Professor Xavier locates the med lab and, after a quick psychic battle with Miss Sinister, enters Daken's fractured mind. There he discovers Romulus's psychic tampering and comments that Daken's mind is even more broken than Wolverine's was. Before Xavier can heal Daken, a psychic bomb explodes, leaving Xavier comatose and waking Daken. Miss Sinister arrives and attempts to manipulate Daken, who reveals that the psychic bomb restored his memories and stabs Miss Sinister in the chest. Meanwhile, Wolverine defeats Shaw and enters the mansion to find Daken standing over an unconscious Xavier, preparing to kill him. Wolverine tells Daken that he will not let him hurt Xavier, and the two fight. Overcome with guilt over what happened to Daken and Itsu, Wolverine allows himself to be beaten. Just as Daken appears to have won, Xavier pulls both of them onto the astral plane, revealing that the psychic bomb had little effect on him because his psyche was already shattered. Xavier then explains to Wolverine and Daken that Romulus is solely responsible for Itsu's death and that he lied to Daken about everything because he wanted Wolverine to become his weapon. As the three converse, Daken returns to the physical plane and prevents Shaw from killing Xavier. With the truth revealed, Wolverine and Daken decide to kill Romulus. As the two depart, Wolverine tells Xavier that he forgives him for all of the dark moments in their history.
Wolverine acknowledges that Professor Xavier allowed him to become a hero, and tells the Professor that he hopes Xavier will one day be able to forgive him for choosing to kill Romulus.

Professor Xavier recruits Gambit to go with him to Australia to find and help Rogue, who is staying at the X-Men's old base in the Outback, unaware that Danger is using Rogue as a conduit for her revenge against him.

In a prelude to the "Secret Invasion" storyline, Professor X is at the meeting of the Illuminati when the discussion turns to the Skrulls' plan to invade Earth by taking out its heroes and posing as them. He claims he was unable to detect that Black Bolt had been replaced by a Skrull, and his powers are quickly tested by the Black Bolt Skrull. Professor X leaves after learning that even he can no longer trust the others, and he appears to have severely restricted the number of people he informs of the forthcoming alien invasion, as the X-Men are not prepared for the Skrulls, at least at first. Xavier is not seen again during the events of Secret Invasion, though his X-Men in San Francisco successfully repel the invaders there through the use of the modified Legacy Virus.

During the Dark Reign storyline, Professor X convinces Exodus to disband the Acolytes. A H.A.M.M.E.R. helicopter arrives, carrying Norman Osborn, who wants to talk to him. During the Dark Avengers' arrival in San Francisco to enforce martial law and quell the anti-mutant riots occurring in the city, Xavier appears (back in his wheelchair) in the company of Norman Osborn, publicly denounces Cyclops' actions, and urges him to turn himself in. However, this Xavier is revealed to be Mystique, whom Osborn recruited to impersonate Xavier in public. The real Xavier is shown imprisoned on Alcatraz, slowly being stripped of his telepathic powers while in psionic contact with Beast, who was arrested earlier for his part in the anti-mutant riots. When Emma Frost manages to detect the real Professor X, it is revealed that she and he are both Omega-class telepaths. Professor X helps Emma Frost enter the Sentry's mind. However, as Emma frees him of the Void's influence, a minute sliver of the entity itself remains in her mind. Xavier quickly tells her to remain in her diamond armor state to prevent the Void from gaining access to her psi-powers. Professor X is later seen with Emma Frost as Beast recuperates.

After the events at Utopia, Xavier comes to live on the risen Asteroid M, rechristened Utopia, along with the rest of the X-Men, the X-Club, and mutant refugees, and is allowed to join Utopia's lead council (Cyclops, Storm, Namor, Iceman, Beast, Wolverine and Emma Frost). While he no longer openly questions every move Cyclops makes, he remains concerned about some of his leadership decisions. Xavier wants to return to the mainland to clear his name, but in the aftermath of Osborn declaring Utopia a mutant detention area, Cyclops refuses to let him leave, stating that it would be a tactical advantage to keep him as an ace in the hole in case the need arises.
To that end, Cyclops has kept Xavier out of the field, relying instead on Emma Frost, Psylocke and the Stepford Cuckoos for psionic talent. While Xavier attends the funeral of Yuriko Takiguchi, Magneto arrives at Utopia, apparently with peaceful motives. Xavier does not believe it and attacks Magneto telepathically, prompting Cyclops to force him to stand down. He later apologizes to Magneto for acting out of the old passions of their complicated relationship, which Magneto accepts.

During the Second Coming storyline, Professor Xavier is seen on Utopia delivering a eulogy at Nightcrawler's funeral. Like the other X-Men, he is deeply saddened by Kurt's death and anxious about the arrival of Cable and Hope. Xavier uses his powers to help his son Legion control his many personalities and battle the Nimrods. At the conclusion of Second Coming, Professor X surveys the aftermath of the battle from a helicopter. As Hope descends to the ground and cradles Cable's lifeless arm, Xavier reflects on everything that has transpired and states that, while he feels Hope has indeed come to save mutantkind and revive his dream, she is still only a young woman and will have a long and difficult journey before she can truly achieve her potential.

During the "Avengers vs. X-Men" storyline, the Phoenix Force is split into five pieces and bonded with Cyclops, Emma Frost, Namor, Colossus and Magik (who become known as the Phoenix Five). Eventually, Cyclops and Frost come to possess the full Phoenix Force. Professor X is instrumental in confronting them both, and he dies in the ensuing battle with Cyclops. The Phoenix Force is subsequently forced to abandon Cyclops as a host by the efforts of Hope Summers and the Scarlet Witch.

Xavier's body is later stolen by the Red Skull's S-Men while the group also captures Rogue and the Scarlet Witch. Xavier's brain is removed and fused to the brain of the Red Skull. After Rogue and the Scarlet Witch snap out of the fight they were in, they find the lobotomized body of Professor X. The Red Skull uses the new powers conferred upon him by Professor X's brain to provoke anti-mutant riots. His plans are foiled by the Avengers and the X-Men, and the Skull escapes.

Professor X's spirit is later seen in the Heaven dimension, along with Nightcrawler's spirit, at the time when Azazel invades Heaven.

During the AXIS storyline, a fragment of Professor X's psyche (which had escaped the scrubbing of his memories) still exists in the Red Skull's mind, preventing him from unleashing the full potential of Professor X's powers. During a fight with the Stark Sentinels, Doctor Strange and the Scarlet Witch attempt to cast a spell to invert the axis of the Red Skull's brain and bring out the fragment of Professor X to defeat Onslaught, but Doctor Strange is targeted and captured by the Sentinels before they can cast it. When Magneto arrives with his supervillain allies, Doctor Doom and the Scarlet Witch attempt the inversion spell again, and Red Onslaught is knocked unconscious, reverting to his Red Skull form.
Although they do not know whether Professor X is now in control, the Avengers decide to be cautious and take the Red Skull to Stark Tower. It is later revealed that the spell actually caused all the heroes and villains present to undergo a "moral inversion" rather than simply bringing out Professor X in the Skull: the Skull and the other villains became heroic, while the Avengers and X-Men present became villainous. Eventually, the inversion is undone.

After the Skull mounts a telepathic assault that nearly allows him to take control of the Avengers, he is defeated when Deadpool places Magneto's old helmet on Rogue, allowing her to knock out the Skull and take him to Beast. Beast is subsequently able to perform brain surgery on the Skull, extracting the part of Xavier's brain that was grafted onto the villain's own brain without causing any apparent damage to the Skull. Rogers attempts to claim the fragment for himself, but Rogue flies up and incinerates it with the aid of the Human Torch, the two expressing hope that Xavier will rest in peace.

The astral form of Professor Xavier is later revealed to be imprisoned in the Astral Plane, the Shadow King having somehow acquired it upon Professor X's death. After what appear to be years in the Astral Plane, Professor X tricks the Shadow King into playing him in a "game" that lures Rogue, Mystique and Fantomex onto the Astral Plane, while turning others into carriers for the Shadow King's "contagious" psychic essence. Certain of his victory, the Shadow King fails to realize that Xavier's apparent surrender to his game is really just Xavier biding his time: when the Shadow King is distracted long enough to drop his already subtly weakened guard, Xavier breaks his bonds, having lured in the three aforementioned X-Men because their identities were already fundamentally malleable. With the Shadow King defeated, Xavier is apparently returned to the real world in the body of Fantomex, Fantomex reasoning that nobody really knows who he is as an individual beyond his status as one of the X-Men, whereas this act of sacrifice will ensure that he is remembered for a great deed.

Proteus, who has spent years trapped in a psionic hellscape of the Astral Plane where the Shadow King reigned supreme, finally escapes. His escape is made possible in part by that of Charles Xavier, who now chooses to go by X, as he inhabits a younger body. X leads the X-Men directly into an ambush, as Proteus has warped an entire village with his powers, and the resulting mind-to-mind battle leaves X on the receiving end of a psychic beatdown.

Proteus has started his garden, and his seeds are planted all over the world. Psylocke takes command with a plan that mainly consists of Archangel using metal and Mystique morphing into Proteus's mother; once they drain him, Rogue and Bishop convert his energy and release him back to the universe. While all this goes on, Psylocke and X combine forces to burn out the seeds across the planet, but as they work they discover that the two of them are not enough to accomplish the task.
X mentions the network of psychics the Shadow King had been using and suggests that Betsy, who is in control, tap into it. She agrees and does so, yet unbeknownst to her, X is possessed by the Shadow King, who violently erupts from X's head.

Following X's apparent death after the Shadow King exploded from his skull, the psychic villain tears the X-Men apart until X literally pulls himself back together (a feat he later refuses to explain), and he and Psylocke team up to harness the power of all of Earth's psychics to destroy the Shadow King. After Psylocke says she feels no psychic trace of him anywhere, X implants comforting post-hypnotic psychic suggestions in his allies and then erases their memories (among other things, allowing Warren Worthington to switch between his identities at will). Only Psylocke's memory is left intact, with X telling her she will be the one to "keep him honest" while he embarks on a new mission.

X has since made his presence known to his former students and revealed his new plan for all mutantkind. Now clad in a Cerebro-like helmet, Xavier has apparently abandoned his dream of peaceful coexistence and has turned Krakoa into a sovereign nation-state for mutants, using it to heal the X-Men from their ordeals during the showdown against the forces of O.N.E. He then leads the X-Men in planting seeds in strategic locations around the world and on Mars, which overnight grow into massive plantlike "Habitats". As it turns out, these Habitats, and the plants that grew them, are extensions of Krakoa. Through the advancement of mutant technology combined with Krakoa's unique abilities as a living mutant island, Professor X and the X-Men have embassies around the world. Through this same combination of technology and mutant power, Xavier has developed three drugs that could change human life: a pill that extends human life by five years, an adaptable universal antibiotic, and a pill that cures "diseases of the mind, in humans". In exchange for recognition of the sovereignty of Krakoa, Professor X will give these drugs to mankind, with mutants living in peace on the island.

Xavier and Magneto later meet with Raven Darkholme inside his sanctum, both leaders greatly pleased with the success of her mission as she presents what they had petitioned her to steal: a mysterious USB drive containing sensitive information taken from Damage Control. Mystique inquires about her payment, having met their demands, but Xavier replies that he still has more demands to be met as they build their protected future for Homo sapiens superior, seeming to psychokinetically beckon the stolen drive into his hands while Mystique questions how much more must be done for his ultimate pet project.

Xavier and Magneto reveal the contents of the USB drive to Cyclops: information on Orchis, an organization dedicated to responding to a large-scale mutant threat, and the plans for a Mother Mold. They believe that the creation of the Mother Mold will herald a new generation of Sentinels and, along with it, Nimrod. They task Cyclops with assembling a team to destroy the Mother Mold station.
Although the team (composed of Cyclops, Marvel Girl, Wolverine, Nightcrawler, Husk, Mystique, Archangel, and Monet) is successful, its members are all killed in the process. X mourns them, vowing "No more."

Xavier is revealed to have upgraded Cerebro with the help of Forge; it is now able to copy and store the minds of mutants in a database. After the Five (Hope Summers, Goldballs, Elixir, Proteus, and Tempus) grow the bodies of deceased mutants, Xavier copies the stored minds back into these empty shells. He is thus able to resurrect Cyclops's team, thanking them for what they did. At the U.N., Xavier, Beast, and Emma celebrate with other ambassadors the recognition of Krakoa as a sovereign nation. Xavier telepathically converses with Emma, revealing that he knows she manipulated the Russian ambassador to abstain from the vote, before thanking her for her service. Two days after the U.N. vote, Xavier, Magneto, and Wolverine wait on Krakoa beside several portals. While Wolverine expresses his misgivings about the upcoming event, Xavier and Magneto assure him all will be alright. Soon after, several villainous mutants, including Mister Sinister, Sebastian Shaw, Exodus, Selene and Apocalypse, arrive through the portals. Apocalypse in particular expresses satisfaction at arriving, and Krakoa responds in kind. Magneto and Xavier reveal that they have invited all mutants, even those who have fought against them in the past, to Krakoa, to form a society. The assembled villainous mutants agree to their terms, and Xavier shakes Apocalypse's hand, welcoming him and the others to their home.

While peace reigns on Krakoa, a mysterious team of assassins HALO-drops onto the island and assassinates Xavier, destroying his Cerebro helmet in the process. The Quiet Council hides Xavier's death from the rest of the world, and through the activation of a Cerebro backup and the efforts of the Five, Xavier is reborn once more. Soon after, he takes part in a global conference alongside Magneto and Apocalypse, professing that he still loves humanity while subtly warning the attendees with regard to his previous assassination, and revealing his knowledge of an ongoing assassination attempt at the forum itself, foiled by Cyclops and Gorgon.

Powers and abilities

Professor X is a mutant who possesses vast telepathic powers and is among the strongest and most powerful telepaths in the Marvel Universe. He is able to perceive the thoughts of others or project his own thoughts within a radius of approximately 250 miles (400 km). Xavier's telepathy once covered the entire world, until Magneto altered the Earth's electromagnetic field to restrict his telepathic range. While not on Earth, Xavier's natural telepathic abilities have reached across space to make universal mental contact with multiple alien races. With extreme effort, he can also greatly extend the range of his telepathy. He can learn foreign languages by reading the language centers of the brain of someone adept, and can alternately "teach" languages to others in the same manner.
As a side effect of his telepathy, Xavier possesses an eidetic memory, and his brain can assimilate and process enormous amounts of raw data in an astonishingly short time.

Xavier's vast psionic powers enable him to manipulate the minds of others, warp perceptions to make himself seem invisible, project mental illusions, cause loss of particular memories or total amnesia, and induce pain or temporary mental and/or physical paralysis in others. Xavier once trained a new group of mutants mentally, subjectively making them experience months of training together while only hours passed in the real world. Within close range, he can manipulate almost any number of minds for such simple feats. However, he can take full possession of only one other mind at a time, and must strictly be within that person's physical presence. He is one of the few telepaths skilled enough to communicate with animals and even share their perceptions. He can also telepathically take away or control people's natural bodily functions and senses, such as sight, hearing, smell, taste, or even mutant powers. He has displayed telepathic prowess sufficient to confront Ego the Living Planet (while aided by Cadre K) as well as narrowly defeat Exodus. However, he cannot permanently "reprogram" human minds to believe what he might want them to believe even if he wanted to do so, explaining that the mind is an organism that would always recall the steps necessary for it to reach the present and would thus "rewrite" itself to its original setting if he tried to change it. His initial reprogramming of Wolverine nevertheless lasted several years, even though Wolverine's healing factor allowed him to overcome the reprogramming much faster than an ordinary human could have.

He is able to project from his mind "bolts" composed of psychic energy, enabling him to stun the mind of another person into unconsciousness, inflict mental trauma, or even cause death. These "bolts" inflict damage only upon other minds, having a negligible effect, if any, on non-mental beings. The manner in which Xavier's powers function indicates that his telepathy is physical in some way, as it can be enhanced by physical means (for example, Cerebro) but can also be disrupted by physical means (for example, Magneto's alteration of the Earth's magnetic field).

Xavier can perceive the distinct mental presence and brain waves of other superhuman mutants within a small radius of himself. To detect mutants over a wider area, he must amplify his powers through Cerebro, and subsequently Cerebra, computer devices of his own design which are sensitive to the psychic/physical energies produced by the mind.

Professor X can project his astral form into a psychic dimension known as the astral plane. There, he can use his powers to create objects, control his surroundings, and even control and destroy the astral forms of others. He cannot project this form over long distances.

Uncanny X-Men writer Ed Brubaker has claimed that, after being de-powered by the Scarlet Witch and then re-powered by the M'Kraan Crystal, Charles' telepathy is more powerful than was previously known. However, the extent of this enhancement is unknown.
In early publications, Charles Xavier displayed an undefined level of telekinesis, potent enough to cause catastrophic system disruption in computerized appliances. This attribute has since faded, though his evil counterpart Cassandra Nova Xavier possesses the ability, indicating that he still holds the potential for it. This potential was borne out after his death and resurgence within the younger, stronger body of Charlie Cluster 7: the Professor, using the moniker X, fashioned a Cerebro-like helmet that acts as a focusing device for his psionic powers and used it to galvanize latent aspects of his X-gene, seemingly using telekinesis to will a flash drive on Mystique's person into his hand.

Charles Xavier is a genius with multiple doctorates. He is a world-renowned geneticist, a leading expert in mutation, possesses considerable knowledge of various life sciences, and is the inventor of Cerebro. He holds Ph.D.s in Genetics, Biophysics, Psychology, and Anthropology, and an M.D. in Psychiatry. He is highly talented in devising equipment for utilizing and enhancing psionic powers. He is also a great tactician and strategist, effectively evaluating situations and devising swift responses.

During his travels in Asia, Xavier learned martial arts, acquiring "refined combat skills" according to Magneto. When these skills are coordinated in tandem with his telepathic abilities, Xavier is a dangerous unarmed combatant, capable of sensing the intentions of others and countering them with superhuman efficiency. He also has extensive knowledge of pressure points.

Charles Xavier was also given possession of the Mind Gem. It allows the user to boost mental power and access the thoughts and dreams of other beings; backed by the Power Gem, it is possible to access all minds in existence simultaneously. Like all other former Illuminati members, Xavier has sworn never to use the gem and to keep its location hidden.

Xavier Protocols

The Xavier Protocols are a set of doomsday plans created by Professor X. The protocols detail the best way to kill many powerful mutant characters, including the X-Men and Xavier himself, should they become too great a danger. The Xavier Protocols are first mentioned during the Onslaught crossover and first seen in Excalibur #100 in Moira MacTaggert's lab. Charles Xavier compiled a list of the Earth's most powerful mutants along with plans on how to defeat them if they become a threat to the world. The protocols are first used after Onslaught grows too powerful; only parts of the actual protocols are ever shown. In the Operation: Zero Tolerance crossover, Bastion obtains an encrypted copy of the protocols, intending to use them against the X-Men. However, Cable infiltrates the X-Mansion and secures all encrypted files before Bastion has a chance to decrypt them. Due to the tampering of Bastion and his Sentinels, the X-Mansion computer system Cerebro gains autonomy and seeks to destroy the X-Men by employing its knowledge of the Xavier Protocols.
In a virtual environment created by Professor X, Cerebro executes the Xavier Protocols against the X-Men.

Each protocol is activated by the presence of a different combination of X-Men, and each was written by Xavier himself.

Other X-Men who have faced their Xavier Protocols are Colossus, Rogue, Shadowcat, Nightcrawler, Storm, and Gambit.

Other versions

Professor X appears as Carlos Javier in the miniseries Marvel 1602, set at the end of the Elizabethan era in the alternative reality known as Earth-311. In this reality, Carlos Javier set up a school for the Witchbreed to train them and prepare them to survive in a world that distrusted and hated them. He hid them away and would only send them out on mercy missions to retrieve other Witchbreed who were in danger. When the young man named Werner, born with angel's wings, was to be burnt at the stake by the Inquisition, Javier sent his team leader, Scotius Summerisle, and Roberto Trefusis to rescue the boy. They did, and brought him back to Javier's school.

Nicholas Fury, the Queen of England's spymaster, came to visit Javier at his school and warn him of the danger posed by Elizabeth's death and the eventual rise to power of King James of Scotland, who had no love for the Witchbreed. Javier acknowledged the threat but did nothing about it, though he showed Fury his team of super-powered youths. Fury also asked a favor, requesting that Javier use his powers to read the thoughts of a captured assassin. All Javier could tell him was that the assassin was one of three; another was to kill a girl from the colonies, and the third, the queen. Fury later sent his protégé, Peter Parquagh, to Javier's school to warn him that Fury would soon be coming for him in the name of King James, and that Javier should go quietly rather than risk a war that would have serious consequences. Javier agreed, and when Fury arrived with an army of men, he and his students went without a fight.

While captive, Javier joined a discussion with Fury and Doctor Strange, the physician and magician of Queen Elizabeth. Strange told them that the world was coming to an end and that the only way to save it would be to launch an attack on the castle of Otto von Doom and steal away the treasure of the Templars and the survivors of the Four of the Fantastick. Fury disbelieved him, thinking his friend Sir Richard Reed dead, but Javier read Strange's mind, revealing that Strange believed he was telling the truth; and so it was decided. They traveled upon a ship that Javier's student Jean Grey lifted into the air with her mind, while Javier bolstered her powers with his own, and they flew to Latveria. Javier and Jean remained in meditation the whole way, keeping the ship aloft, for if they set down they would not get airborne again.

As the battle commenced, Javier led his men. He sent Angel and Scotius down to silence the cannons, and ordered Roberto to deflect cannonballs while he himself tried to steer them off course through the cannoneers' minds. He asked his beast-like student Henry to protect the ship from the flying minions of Doom, who soon boarded it from the air.
When the Captain of the Fantastick raged against his stone prison beneath Castle Doomstadt, he freed the members of his crew, along with Donal (Thor) and Matthew Murdoch. Donal quickly used the staff that was his greatest treasure and turned himself into the thunder god, Thor. When Thor created a massive storm to use against Doom, Roberto used the sudden moisture in the air to freeze the cannons and save their ship. Doom also used Thor's storm to electrify the golden globe he held, a distraction given to him by Donal, but it exploded in his face, scarring him and bringing him to the brink of death. Victorious, Thor and the members of the Fantastick joined Javier's crew, and with Thor's help they got the boat to sea, as Jean Grey had collapsed.

The band of heroes set sail for the New World to fix the tear in time that had created the weather anomalies circling the globe and was endangering the universe itself. On the way, Jean Grey's body finally gave out under the strain of the use of her powers, and per her final wish, she was flown into the air and vaporized by Scotius' eye blasts, falling to the sea as ash; but not before Angel saw an image of an immense, flaming bird in the air. Nearly at the Roanoke Colony, Javier sensed a trio of ships making their way to the New World: the first carrying Virginia Dare, returning to the colonies with her time-traveling friend; the second containing James' men, sent to kill Fury; and the third carrying the Witchbreed Enrique and his two children. Enrique was an old friend of Javier's, later set against him. Javier's group intercepted Enrique's boat first, and Roberto encased it in ice to imprison them while Javier demanded to know what they were doing. Enrique explained that the winds had taken them to the New World, but Javier did not trust him.

Javier soon participated in another group discussion, this one led by the severed head of Doctor Strange, brought from England by his wife, Clea. Strange told them through his head that the faux-Indian Rojhaz was actually a visitor from the future, Captain America, whose arrival had jeopardized the universe itself. To fix it, the heroes would have to return him to the rift. They soon found the rift, and Javier had no choice but to make a deal with his old friend Enrique, who was the only one with the power to open it enough to put the man back. Enrique agreed without hearing the proposal, but demanded that his own terms be met once his job was done. As Javier had no other choice, he agreed. Together with Enrique, Thor, and Fury, Javier opened the rift enough for Fury to drag Captain America through, and it closed, healing the universe permanently. However, instead of reverting things back to the way they should have been, this separated the universe from the original, creating a pocket universe where the out-of-time heroes continued to exist.
Javier agreed, and parted with his old friend.

In the Age of Apocalypse, Charles Xavier was killed when he sacrificed himself to save Erik Lensherr from his own future son, David Haller (Legion), who had gone back in time to eliminate Magneto in the belief that his father would thus be there for him and would succeed in his dream without Magneto to "hinder" his efforts. As a result, Magneto founded the X-Men and sought human/mutant coexistence in Xavier's name, even naming his new son with Rogue "Charles" after his friend. However, Haller's rampage also prompted Apocalypse to awaken decades before the world was ready for him, resulting in Apocalypse conquering North America and most of the world, and eventually forcing Magneto's X-Men to attempt a daring mission to gain the power necessary to go back in time and save Xavier from Haller, as they recognized how vital Xavier was to the future.

In the Amalgam Comics continuity, Charles Xavier was combined with DC's Doctor Fate and Marvel's Doctor Strange to create Dr. Strangefate. He was the only character aware of the nature of the Amalgam Comics universe.

He was also combined with Martian Manhunter to create Mr. X, leader of the JLX (a mash-up of the X-Men and the Justice League).

In the second issue of Prelude to Deadpool Corps, Deadpool visits a universe where Prof. X runs an orphanage for troubled kids that includes Kidpool (a kid version of Deadpool), Cyclops, Wolverine, Angel, and Colossus, with Storm as headmistress and Beast as a teacher. In this universe, the professor has a fondness for Emma Frost, who runs an orphanage for girls that includes Jean Grey and Rogue. He tries to get her attention by wearing wigs, throwing a dance for both orphanages, and trying to alter her memory.

When the Scarlet Witch altered reality so that Magneto ruled over the Earth and mutants were the dominant species, Professor X was initially depicted as missing; Wolverine's attempts to locate him proved fruitless. Later, on Genosha, Magneto is seen staring at a grave for the Professor with the epitaph "He died so Genosha could live". However, when the grave is searched by Cloak, he finds there is no body inside. The question of Xavier's status in this world was left open-ended until House of M: Civil War, which detailed the history of Magneto in this world. Xavier, while living, sought out Magneto when the latter was attempting to halt the oppression of mutantkind by declaring war on humans. He saved Magneto's life from a sniper attack and joined him, hoping to steer Magneto's actions toward benevolence. He was disabled during the mutant takeover of Genosha and slowly grew more distant from Magneto as the latter's actions grew more bloodthirsty. Ultimately, when the United States sent a team onto Genosha to assassinate Magneto, Xavier found himself trying to appeal to a furious Bucky Barnes, who stabbed Xavier through the chest. What became of his body afterwards is unknown.

In the Marvel Zombies one-shot Marvel Zombies: Dead Days, a zombified Alpha Flight attacks the X-Mansion. Storm informs the X-Men during the battle that Alpha Flight has ripped Xavier to pieces.
Cyclops, trying not to confront the fact that Xavier is dead, continues to fight. In the Marvel Zombies/Army of Darkness crossover, a zombified Beast informs Doctor Doom of Xavier's death, and that it was the zombie Reed Richards who reprogrammed Cerebro to seek out humans.

In Marvel Zombies Return, however, another alternative Xavier is zombified and turned into a human-detection system, his brain permanently connected to Cerebro so that he can find any remaining human beings.

In the alternative reality known as Mutant X, Professor X, believing in harmony between man and mutant, formed the X-Men along with his friend Magnus and led the team toward that peaceful goal. However, the day they fought the Shadow King, everything changed: the good in Xavier was corrupted, and he left the team to explore his powers further. When he returned, it was during an attack by the Juggernaut. Xavier fired a blast at the Juggernaut, but it missed and instead killed Magneto's lover, Moira MacTaggert. Xavier then left the X-Men for good and traveled the world seeking out telepaths, whom he captured and incarcerated around the globe. He joined forces with Sinister in a bid to transfer the mental energy of all the world's telepaths into himself. To that end, they created the X-Man, and Xavier took control of S.H.I.E.L.D., captured Gambit's adopted daughter Raven, and had Fury attempt to kill the X-Men with a nuclear strike. Xavier met up with the Six in New York, supposedly "fleeing" from Apocalypse and the Four Horsemen. However, when Xavier made several attempts to abduct Scotty, Havok was alerted to the truth by Jean Grey and Magneto, and realized who the true villain was. After a pitched battle, Xavier donned his psychic armor, and he and Sinister released a giant replica of Galactus to induce fear in the citizens of Earth, fear on which Xavier could feed to fuel his power. In the end, the replica was destroyed, and the Six beat the fear phantoms that had comprised it. Xavier turned on Sinister and destroyed him, and the X-Man ran off, leaving Scotty and Raven, who with X-Man were to have been Xavier's psychic batteries, to help Havok blast away at Xavier. Xavier was knocked out of his armor and fled the scene, but not before unleashing a blast at Havok that hit Brute, who had jumped in front of it to save Alex. Fortunately, the blast temporarily restored Hank to his former level of intelligence, and he was able to devise cures for his friends before the effect faded. Xavier was later summoned by Dr. Strange to help fight the Beyonder (Goblin Queen) by adding his psychic power to others' in order to help Havok reach a higher plane of reality. While hooked up to the psychic amplification machine, Xavier was about to be killed by Dracula when he was saved by Bloodstorm, who staked her former master.

Warren Ellis' Ruins was set in an alternative version of the Marvel Universe where "everything went wrong". In this world, "President X" leads a corrupt regime over the United States. He moved the White House from Washington to Westchester, New York, letting the capital fall to waste and corruption. He never formed the X-Men; only Warren Worthington works for him, as a secret serviceman. Some of his would-be X-Men are locked in a Texan prison by his orders and are sometimes forcibly deformed in an effort to keep their powers under control.
He was known to frequently visit and verbally abuse them, "leaving them all sobbing and throwing up". The Avengers were depicted in this world as a Californian pro-secessionist revolutionary cell opposed to Xavier's regime, whose members were all killed when their Quinjet was shot down. President X also started the "Genoshan Police Action", also known as the Genoshan War.

In the first arc of New Excalibur, the team is brought together partly in response to a clash between Dazzler and a group of homicidal mutants bearing a resemblance to the original X-Men. These turn out to be the X-Men of an alternative universe in which Charles Xavier is possessed by the Shadow King and has used his mind-controlled, thoroughly corrupted X-Men to wipe out all the other superhumans. This version of Xavier can walk, and insists that his followers refer to him as "Master".

He, along with the Shadow King, is killed by Lionheart.

In the Ultimate Marvel continuity, Professor Charles Xavier is the world's most powerful telepath, the founder and patron of the X-Men, and a world-famous lecturer on pacifism and mutant emancipation. In contrast to his mainstream version, he is publicly open about his mutant status from the beginning and also has limited telekinetic abilities. He leaves his wife Moira MacTaggert, with whom he collaborated to create new therapies and surgical techniques for their mutant patients, and their sick son David to pursue Magneto's dream of a mutant society; but Magneto turns on him, crippling him with a shard of metal through his spine.

Xavier also repeatedly tampers with other people's minds to reach his goals, but he recognizes his flaws. In one instance, Xavier finds that Iceman has told a girl several secrets about the X-Men and is forced to erase the conversation from their minds. He generally believes that reading minds without permission is unacceptable, or so he leads his students to believe. In Ultimate X-Men #40, when Angel flies away, the Professor sends Storm after him because he telepathically knows that Angel is attracted to her. Similarly, Beast questions whether Xavier has made Storm love him.

In this timeline, his former love interests include Mystique and Emma Frost. In Ultimate X-Men #77, he tells Cyclops that he is in love with Jean. He also has a pet cat which he has named "Mystique".

In Ultimate X-Men #78, Xavier is apparently killed by Cable, who was trying to prevent horrible events in the future. In Ultimate X-Men #80 it is revealed that he is in fact alive, held captive by Cable in the future. It is also revealed that Cable has repaired his spine and is training Xavier to fight against Apocalypse. However, once the battle comes, Jean Grey manifests as the Phoenix and destroys Apocalypse. Jean returns everything to normal, giving Xavier a "fresh start"; as she does so, however, she undoes the repair to his spine that Cable had performed, leaving him once again disabled.
Xavier reformed the X-Men upon his return, as headmaster of the Xavier Institute.

Soon after, Xavier leaves the school temporarily to aid Moira in some research on Muir Island. While he is away, the school is attacked by Alpha Flight, whose mutant powers are enhanced by a drug called Banshee. Furthermore, it is revealed that Colossus has been using Banshee during his entire time at Xavier's school in order to use his power without pain. Due to the sudden and apparently rampant use of the drug, Xavier and Jean begin screening all the students for traces of Banshee. However, it is later revealed that the Banshee drug was created by Xavier himself, during his time in the Savage Land, from Wolverine's blood. When Xavier tested Banshee, it gave him powers that mimicked Wolverine's, including claws, enhanced senses, and a healing factor. Xavier and Magneto, however, deemed the drug too dangerous and stopped its production. When Wolverine discovers that he is the source of the drug and that Xavier was responsible for its initial creation, he attacks Muir Island. Xavier admits to creating the drug but denies responsibility for its continued creation and use. It is then revealed that Moira got hold of Xavier's research and began creating and selling the drug to finance Muir Island. Moira, who had used the drug to give herself a sonic scream, battles Wolverine, and Xavier evacuates the children moments before the research facility explodes.

In the Ultimatum story arc, Charles informs all mutants that Magneto is behind the recent catastrophes. Magneto confronts Charles, explaining his belief that he shall act as God did to cleanse the world and usher in an era of mutant supremacy. When Charles states that Magneto is not God and that he will stop him as he always has in the past, Magneto snaps Charles' neck, killing him.

He later returns, revealed as Rogue's benefactor, secretly sending her on an undercover mission and stating that he does not want his former students to know about his plan. It is unconfirmed whether this is truly Xavier, as William Stryker, Alex Summers and Quicksilver have all been seen talking to their supposedly dead loved ones, hinting at a foe mentally manipulating several characters. It is ultimately revealed to be the work of Mr. Sinister, Apocalypse's disciple.

In X-Men Noir, Charles Xavier is a psychiatrist who ran the "Xavier School for Exceptionally Wayward Youth" in Westchester, where he took in juvenile delinquents; instead of reforming them, however, he further trained them in criminal talents, owing to his belief that sociopathy was in fact the next stage in human behavioral evolution. The paper in which he stated this led to his expulsion from the American Psychological Association. He is currently held on Rikers Island, awaiting charges after the truth about his reform school was made public. Xavier had been framed by Chief of Detectives Eric Magnus for the murder of one of his own students, Warren.
Magnus had murdered Warren after Xavier refused to make his X-Men join Magnus' Brotherhood.", "title": "Other versions" }, { "paragraph_id": 121, "text": "An alternative version of Earth-616's Professor X is shown; there was seemingly little to distinguish this Charles Xavier until the day he was kidnapped by the forces of the Savior (unbeknownst to him, an alternative version of himself), who removed his head from his body, placed it in a life-giving \"jar\", and set it with the heads of all the other alternative Xaviers who had been put through the same procedure and made to scan the multiverse for the next mutants to be kidnapped. When the Savior was defeated, the collective of Xavier heads put themselves to work finding a new home for the people of the world they had been kidnapped to. However, in the process, all of the heads exploded, except one. This Xavier head would later aid a cross-dimension X-Men team in defeating ten evil Xaviers who are scattered throughout the multiverse and threaten existence itself. During the X-Termination crossover, AoA Nightcrawler's trip home resulted in the release of three evil beings that destroy anyone they touch. Several casualties resulted, including the AoA's Sabretooth, Horror Show, and Fiend, as well as the X-Treme X-Men's Xavier and Hercules.", "title": "Other versions" }, { "paragraph_id": 122, "text": "Professor X has appeared on a number of animated television shows, including the X-Men animated series (voiced by Cedric Smith), X-Men: Evolution (voiced by David Kaye), and Wolverine and the X-Men (voiced by Jim Ward).", "title": "In other media" }, { "paragraph_id": 123, "text": "He has appeared in twelve live-action feature films to date, most of them in the 20th Century Fox X-Men series. He is played by Patrick Stewart in X-Men, X2, X-Men: The Last Stand, X-Men Origins: Wolverine, The Wolverine, Logan, and Doctor Strange in the Multiverse of Madness, and by James McAvoy in X-Men: First Class, X-Men: Apocalypse, Deadpool 2 and Dark Phoenix. Both actors play him at different ages in X-Men: Days of Future Past.", "title": "In other media" }, { "paragraph_id": 124, "text": "Harry Lloyd portrays a young Charles Xavier in the television series Legion.", "title": "In other media" }, { "paragraph_id": 125, "text": "He has also appeared in a number of books and video games.", "title": "In other media" }, { "paragraph_id": 126, "text": "Professor X appears as a collectable card in Marvel SNAP.", "title": "In other media" } ]
Professor X is a character appearing in American comic books published by Marvel Comics. Created by writer Stan Lee and artist/co-writer Jack Kirby, the character first appeared in The X-Men #1. The character is depicted as the founder and occasional leader of the X-Men. Xavier is a member of a subspecies of humans known as mutants, who are born with superhuman abilities. He is an exceptionally powerful telepath who can read and control the minds of others. To both shelter and train mutants from around the world, he runs a private school in the X-Mansion in Salem Center, located in Westchester County, New York. Xavier also strives to serve a greater good by promoting peaceful coexistence and equality between humans and mutants in a world where zealous anti-mutant bigotry is widespread, though he later abandons his dream in favor of establishing a mutant nation on Krakoa. Throughout much of the character's history, Xavier has been depicted with paraplegia and uses a wheelchair. One of the world's most powerful mutant telepaths, Xavier is a scientific genius and a leading authority in genetics. He has devised Cerebro and other equipment to enhance psionic powers and detect and track people with the mutant gene. Xavier's pacifist and assimilationist ideology and actions have often been contrasted with those of Magneto, a mutant leader with whom Xavier has a complicated relationship. Writer Chris Claremont, who originated Magneto's backstory as well as the relationship between the two men, modeled his characterization of Xavier on David Ben-Gurion, and that of Magneto on Menachem Begin. Patrick Stewart portrayed the character in the first three films in the 20th Century Fox X-Men film series and in various video games, while James McAvoy portrayed a younger version of the character in the 2011 prequel X-Men: First Class. Both actors reprised the role in the film X-Men: Days of Future Past. Stewart reprised the role again in Logan (2017), while McAvoy appeared again as the younger version of the character in X-Men: Apocalypse (2016), Deadpool 2 (2018) and Dark Phoenix (2019). Harry Lloyd portrayed the character in the third season of the television series Legion. Stewart again returned to the role, portraying an alternate version of the character in the 2022 Marvel Cinematic Universe film Doctor Strange in the Multiverse of Madness.
2002-02-25T15:51:15Z
2023-12-28T01:36:05Z
[ "Template:Redirect", "Template:Cite news", "Template:ISBN", "Template:Jack Kirby", "Template:Wolverine", "Template:Convert", "Template:Citation needed", "Template:Main", "Template:Cite web", "Template:Cite comic", "Template:Other uses", "Template:Infobox comics character", "Template:Very long", "Template:Volume needed", "Template:X-Men characters", "Template:Stan Lee", "Template:Short description", "Template:Reflist", "Template:Cite book", "Template:New Mutants", "Template:Magneto" ]
https://en.wikipedia.org/wiki/Professor_X
7,734
Central Pacific Railroad
The Central Pacific Railroad (CPRR) was a rail company chartered by the U.S. Congress in 1862 to build a railroad eastwards from Sacramento, California, to complete the western part of the "First transcontinental railroad" in North America. Incorporated in 1861, CPRR ceased operation in 1959 when its assets were formally merged into the Southern Pacific Railroad. Following the completion of the Pacific Railroad Surveys in 1855, several national proposals to build a transcontinental railroad failed because of political disputes over slavery. With the secession of the South in 1861, the modernizers in the Republican Party controlled the US Congress. They passed legislation in 1862 authorizing the central rail route with financing in the form of land grants and government railroad bonds, which were all eventually repaid with interest. The government and the railroads both shared in the increased value of the land grants, which the railroads developed. The construction of the railroad also secured for the government the economical "safe and speedy transportation of the mails, troops, munitions of war, and public stores". In the fall of 1860, Charles Marsh, a surveyor, civil engineer and water company owner, met with Theodore Judah, a civil engineer who had recently built the Sacramento Valley Railroad from Sacramento to Folsom, California. Marsh, who had already surveyed a potential railroad route between Sacramento and Nevada City, California, a decade earlier, went with Judah into the Sierra Nevada Mountains. There they examined the Henness Pass Turnpike Company's route (Marsh was a founding director of that company). They measured elevations and distances, and discussed the possibility of a transcontinental railroad. Both were convinced that it could be done. In December 1860 or early January 1861, Marsh met with Judah and Daniel Strong in Strong's drug store in Dutch Flat, California, to discuss the project, which they called the Central Pacific Railroad of California. James Bailey, a friend of Judah, told Leland Stanford that Judah had a feasible route for a railroad across the Sierras, and urged Stanford to meet with Judah. In early 1861, Marsh, Judah and Strong met with Collis P. Huntington, Leland Stanford, Mark Hopkins Jr. and Charles Crocker to obtain financial backing. Papers were filed to incorporate the new company, and on April 30, 1861, the eight of them, along with Lucius Anson Booth, became the first board of directors of the Central Pacific Railroad. Planned by Judah, the Central Pacific Railroad was promoted by Congress through the Pacific Railway Act of 1862, which authorized the issuance of government bonds and land grants for each mile that was constructed. Stanford served as president (at the same time he was elected governor of California), Huntington served as vice-president in charge of fundraising and purchasing, Hopkins was treasurer and Crocker was in charge of construction. They called themselves "The Associates," but became known as "The Big Four." Construction began in 1863 when the first rails were laid in Sacramento. Construction proceeded in earnest in 1865 when James Harvey Strobridge, the head of the construction work force, hired the first Cantonese emigrant workers at Crocker's suggestion. The construction crew grew to include 12,000 Chinese laborers by 1868, when they breached Donner Summit and constituted eighty percent of the entire work force. 
The "Golden spike", connecting the western railroad to the Union Pacific Railroad at Promontory, Utah, was hammered on May 10, 1869. Coast-to-coast train travel in eight days became possible, replacing months-long sea voyages and lengthy, hazardous travel by wagon trains. In 1885 the Central Pacific Railroad was acquired by the Southern Pacific Company as a leased line. Technically the CPRR remained a corporate entity until 1959, when it was formally merged into Southern Pacific. (It was reorganized in 1899 as the Central Pacific "Railway".) The original right-of-way is now controlled by the Union Pacific, which bought Southern Pacific in 1996. The Union Pacific-Central Pacific (Southern Pacific) main line followed the historic Overland Route from Omaha, Nebraska, to San Francisco Bay. Chinese labor was the most vital source for constructing the railroad. Most of the railroad workers in the west were Chinese, as white workers were not willing to do the dangerous work. Fifty Cantonese emigrant workers were hired by the Central Pacific Railroad in February 1865 on a trial basis, and soon more and more Cantonese emigrants were hired. Working conditions were harsh, and Chinese were compensated less than their white counterparts. Chinese laborers were paid thirty-one dollars each month, and while white workers were paid the same, they were also given room and board. In time, CPRR came to see the advantage of good workers employed at low wages: "Chinese labor proved to be Central Pacific's salvation." The difficulties faced by the Central Pacific in the Sierra Nevada - particularly the extensive tunneling required - were far more formidable than those encountered by the Union Pacific Railroad in the Rocky Mountains. The story that Chinese workers were suspended in wicker baskets over vertical granite cliffs at Cape Horn, California, to drill and blast a ledge for the Central Pacific has been repeated and exaggerated by uncritical historians. The slope there was steep, but definitely not vertical, the rock was not granite, and no one used any baskets. There is reliable, primary-source evidence stating that surveyors used safety ropes while staking out the route, but nothing about construction workers using ropes. Digging the cut was done downward from the top, and from each horizontal end of the cut. It is conceivable that a safety rope would have been useful when digging an initial footpath, that could then be enlarged into a shelf, but there was no reason to be suspended by ropes to dig or drill into the face of the cut. It wasn't done that way. And, most of the Chinese labor was not hired until later. So, the gangs that did the digging at Cape Horn were probably Irish. Central Pacific Director Charles Marsh had extensive civil engineering experience in projects of this nature, both from planning an earlier proposed railroad into the Sierras, and from building ditches and flumes through those mountains for his water company. Construction of the road was financed primarily by 30-year, 6% U.S. government bonds authorized by Sec. 5 of the Pacific Railroad Act of 1862. They were issued at the rate of $16,000 ($265,000 in 2017 dollars) per mile of tracked grade completed east of the designated base of the Sierra Nevada range near Roseville, CA where California state geologist Josiah Whitney had determined were the geologic start of the Sierras' foothills. Sec. 
11 of the Act also provided that the issuance of bonds "shall be treble the number per mile" (to $48,000) for tracked grade completed over and within the two mountain ranges (but limited to a total of 300 miles (480 km) at this rate), and "doubled" (to $32,000) per mile of completed grade laid between the two mountain ranges. The U.S. Government Bonds, which constituted a lien upon the railroads and all their fixtures, were repaid in full (and with interest) by the company as and when they became due. Sec. 10 of the 1864 amending Pacific Railroad Act (13 Statutes at Large, 356) additionally authorized the company to issue its own "First Mortgage Bonds" in total amounts up to (but not exceeding) that of the bonds issued by the United States. Such company-issued securities had priority over the original Government Bonds. (Local and state governments also aided the financing, although the City and County of San Francisco did not do so willingly. This materially slowed early construction efforts.) Sec. 3 of the 1862 Act granted the railroads 10 square miles (26 km²) of public land for every mile laid, except where railroads ran through cities and crossed rivers. This grant was apportioned in 5 sections on alternating sides of the railroad, with each section measuring 0.2 miles (320 m) by 10 miles (16 km). These grants were later doubled to 20 square miles (52 km²) per mile of grade by the 1864 Act. Although the Pacific Railroad eventually benefited the Bay Area, the City and County of San Francisco obstructed financing it during the early years of 1863–1865. When Stanford was Governor of California, the Legislature passed on April 22, 1863, "An Act to Authorize the Board of Supervisors of the City and County of San Francisco to take and subscribe One Million Dollars to the Capital Stock of the Western Pacific Rail Road Company and the Central Pacific Rail Road Company of California and to provide for the payment of the same and other matters relating thereto" (which was later amended by Section Five of the "Compromise Act" of April 4, 1864). On May 19, 1863, the electors of the City and County of San Francisco passed this bond by a vote of 6,329 to 3,116, in a highly controversial Special Election. The City and County's financing of the investment through the issuance and delivery of Bonds was delayed for two years because Mayor Henry P. Coon and the County Clerk, Wilhelm Loewy, each refused to countersign the Bonds. It took legal action to force them to do so: in 1864 the Supreme Court of the State of California ordered them under Writs of Mandamus (The People of the State of California ex rel the Central Pacific Railroad Company vs. Henry P. Coon, Mayor; Henry M. Hale, Auditor; and Joseph S. Paxson, Treasurer, of the City and County of San Francisco. 25 Cal. 635) and in 1865, a legal judgment against Loewy (The People ex rel The Central Pacific Railroad Company of California vs. The Board of Supervisors of the City and County of San Francisco, and Wilhelm Lowey, Clerk 27 Cal. 655) directing that the Bonds be countersigned and delivered. In 1863 the State legislature's forcing of City and County action became known as the "Dutch Flat Swindle". Critics claimed the CPRR's Big Four intended to build a railroad only as far as Dutch Flat, California, to connect to the Dutch Flat-Donner Pass Wagon Road to monopolize the lucrative mining traffic, and not push the track east of Dutch Flat into the more challenging and expensive High Sierra effort. 
CPRR's chief engineer, Theodore Judah, also argued against such a road and hence against the Big Four, fearing that its construction would siphon money from CPRR's paramount trans-Sierra railroad effort. Despite Judah's strong objection, the Big Four incorporated the Dutch Flat-Donner Lake Wagon Road Company in August 1863. Frustrated, Judah headed off for New York via Panama to raise funds to buy out the Big Four from CPRR and build his trans-Sierra railroad. Unfortunately, Judah contracted yellow fever in Panama and died in New York in November 1863. A replica of the Sacramento, California, Central Pacific Railroad passenger station is part of the California State Railroad Museum, located in the Old Sacramento State Historic Park. Nearly all the company's early correspondence is preserved at Syracuse University, as part of the Collis Huntington Papers collection. It has been released on microfilm (133 reels). The following libraries have the microfilm: the University of Arizona at Tucson and Virginia Commonwealth University at Richmond. Additional collections of manuscript letters are held at Stanford University and the Mariners' Museum at Newport News, Virginia. Alfred A. Hart was the official photographer of the CPRR construction. The Central Pacific's first three locomotives were of the then common 4-4-0 type, although with the American Civil War raging in the east, they had difficulty acquiring engines from eastern builders, who at times only had smaller 4-2-4 or 4-2-2 types available. Until the completion of the Transcontinental rail link and the railroad's opening of its own shops, all locomotives had to be purchased from builders in the northeastern U.S. The engines had to be dismantled and loaded on a ship, which would embark on a four-month journey around South America's Cape Horn before arriving in Sacramento, where the locomotives would be unloaded, re-assembled, and placed in service. Locomotives at the time came from many manufacturers, such as Cooke, Schenectady, Mason, Rogers, Danforth, Norris, Booth, and McKay & Aldus, among others. The railroad had been on rather unfriendly terms with the Baldwin Locomotive Works, one of the better-known firms. The cause of this dispute is not clear, though some attribute it to the builder insisting on cash payment (though this has yet to be verified). Consequently, the railroad refused to buy engines from Baldwin, and three former Western Pacific Railroad (which the CP had absorbed in 1870) engines were the only Baldwin engines owned by the Central Pacific. The Central Pacific's dispute with Baldwin remained unresolved until well after the road had been acquired by the Southern Pacific. In the 1870s, the road opened its own locomotive construction facilities in Sacramento. Central Pacific's 173 was rebuilt by these shops and served as the basis for CP's engine construction. The locomotives built before the 1870s were given names as well as numbers. By the 1870s, it was decided to eliminate the names; as each engine was sent to the shops for service, its name was removed. However, one engine that was built in the 1880s did receive a name: the El Gobernador. Construction of the rails was often dangerous work. Towards the end of construction, almost all workers were Chinese immigrants. The ethnicity of workers depended largely on the "gang" they worked in and the specific area of the rails where they were working. 
The following CP engines have been preserved. Timeline: 1861, 1862, 1863, 1864, 1865, 1866, 1867, 1868, 1869, 1870, 1876, 1877, 1883, 1885, 1888, 1899, 1959.
[ { "paragraph_id": 0, "text": "The Central Pacific Railroad (CPRR) was a rail company chartered by the U.S. Congress in 1862 to build a railroad eastwards from Sacramento, California, to complete the western part of the \"First transcontinental railroad\" in North America. Incorporated in 1861, CPRR ceased operation in 1959 when its assets were formally merged into the Southern Pacific Railroad.", "title": "" }, { "paragraph_id": 1, "text": "Following the completion of the Pacific Railroad Surveys in 1855, several national proposals to build a transcontinental railroad failed because of political disputes over slavery. With the secession of the South in 1861, the modernizers in the Republican Party controlled the US Congress. They passed legislation in 1862 authorizing the central rail route with financing in the form of land grants and government railroad bonds, which were all eventually repaid with interest. The government and the railroads both shared in the increased value of the land grants, which the railroads developed. The construction of the railroad also secured for the government the economical \"safe and speedy transportation of the mails, troops, munitions of war, and public stores\".", "title": "" }, { "paragraph_id": 2, "text": "In the fall of 1860, Charles Marsh, a surveyor, civil engineer and water company owner, met with Theodore Judah, a civil engineer who had recently built the Sacramento Valley Railroad from Sacramento to Folsom, California. Marsh, who had already surveyed a potential railroad route between Sacramento and Nevada City, California, a decade earlier, went with Judah into the Sierra Nevada Mountains. There they examined the Henness Pass Turnpike Company's route (Marsh was a founding director of that company). They measured elevations and distances, and discussed the possibility of a transcontinental railroad. Both were convinced that it could be done.", "title": "History" }, { "paragraph_id": 3, "text": "In December 1860 or early January 1861, Marsh met with Judah and Daniel Strong in Strong's drug store in Dutch Flat, California, to discuss the project, which they called the Central Pacific Railroad of California. James Bailey, a friend of Judah, told Leland Stanford that Judah had a feasible route for a railroad across the Sierras, and urged Stanford to meet with Judah. In early 1861, Marsh, Judah and Strong met with Collis P. Huntington, Leland Stanford, Mark Hopkins Jr. and Charles Crocker to obtain financial backing. Papers were filed to incorporate the new company, and on April 30, 1861, the eight of them, along with Lucius Anson Booth, became the first board of directors of the Central Pacific Railroad.", "title": "History" }, { "paragraph_id": 4, "text": "Planned by Judah, the Central Pacific Railroad was promoted by Congress through the Pacific Railway Act of 1862, which authorized the issuance of government bonds and land grants for each mile that was constructed. Stanford served as president (at the same time he was elected governor of California), Huntington served as vice-president in charge of fundraising and purchasing, Hopkins was treasurer and Crocker was in charge of construction. 
They called themselves \"The Associates,\" but became known as \"The Big Four.\" Construction began in 1863 when the first rails were laid in Sacramento.", "title": "History" }, { "paragraph_id": 5, "text": "Construction proceeded in earnest in 1865 when James Harvey Strobridge, the head of the construction work force, hired the first Cantonese emigrant workers at Crocker's suggestion. The construction crew grew to include 12,000 Chinese laborers by 1868, when they breached Donner Summit and constituted eighty percent of the entire work force. The \"Golden spike\", connecting the western railroad to the Union Pacific Railroad at Promontory, Utah, was driven on May 10, 1869. Coast-to-coast train travel in eight days became possible, replacing months-long sea voyages and lengthy, hazardous travel by wagon trains.", "title": "History" }, { "paragraph_id": 6, "text": "In 1885 the Central Pacific Railroad was acquired by the Southern Pacific Company as a leased line. Technically the CPRR remained a corporate entity until 1959, when it was formally merged into Southern Pacific. (It was reorganized in 1899 as the Central Pacific \"Railway\".) The original right-of-way is now controlled by the Union Pacific, which bought Southern Pacific in 1996.", "title": "History" }, { "paragraph_id": 7, "text": "The Union Pacific-Central Pacific (Southern Pacific) main line followed the historic Overland Route from Omaha, Nebraska, to San Francisco Bay.", "title": "History" }, { "paragraph_id": 8, "text": "Chinese labor was the most vital source for constructing the railroad. Most of the railroad workers in the west were Chinese, as white workers were not willing to do the dangerous work. Fifty Cantonese emigrant workers were hired by the Central Pacific Railroad in February 1865 on a trial basis, and soon more and more Cantonese emigrants were hired. Working conditions were harsh, and Chinese workers were compensated less than their white counterparts: Chinese laborers were paid thirty-one dollars each month, and while white workers were paid the same, they were also given room and board. In time, CPRR came to see the advantage of good workers employed at low wages: \"Chinese labor proved to be Central Pacific's salvation.\"", "title": "History" }, { "paragraph_id": 9, "text": "The difficulties faced by the Central Pacific in the Sierra Nevada, particularly the extensive tunneling required, were far more formidable than those encountered by the Union Pacific Railroad in the Rocky Mountains. The story that Chinese workers were suspended in wicker baskets over vertical granite cliffs at Cape Horn, California, to drill and blast a ledge for the Central Pacific has been repeated and exaggerated by uncritical historians.", "title": "History" }, { "paragraph_id": 10, "text": "The slope there was steep, but definitely not vertical, the rock was not granite, and no one used any baskets. There is reliable, primary-source evidence that surveyors used safety ropes while staking out the route, but nothing about construction workers using ropes. Digging the cut was done downward from the top, and from each horizontal end of the cut. It is conceivable that a safety rope would have been useful when digging an initial footpath that could then be enlarged into a shelf, but there was no reason to be suspended by ropes to dig or drill into the face of the cut. It was not done that way. Moreover, most of the Chinese laborers were not hired until later, 
so the gangs that did the digging at Cape Horn were probably Irish.", "title": "History" }, { "paragraph_id": 11, "text": "Central Pacific Director Charles Marsh had extensive civil engineering experience in projects of this nature, both from planning an earlier proposed railroad into the Sierras, and from building ditches and flumes through those mountains for his water company.", "title": "History" }, { "paragraph_id": 12, "text": "Construction of the road was financed primarily by 30-year, 6% U.S. government bonds authorized by Sec. 5 of the Pacific Railroad Act of 1862. They were issued at the rate of $16,000 ($265,000 in 2017 dollars) per mile of tracked grade completed east of the designated base of the Sierra Nevada range near Roseville, CA, which California state geologist Josiah Whitney had determined to be the geologic start of the Sierras' foothills. Sec. 11 of the Act also provided that the issuance of bonds \"shall be treble the number per mile\" (to $48,000) for tracked grade completed over and within the two mountain ranges (but limited to a total of 300 miles (480 km) at this rate), and \"doubled\" (to $32,000) per mile of completed grade laid between the two mountain ranges. The U.S. Government Bonds, which constituted a lien upon the railroads and all their fixtures, were repaid in full (and with interest) by the company as and when they became due.", "title": "History" }, { "paragraph_id": 13, "text": "Sec. 10 of the 1864 amending Pacific Railroad Act (13 Statutes at Large, 356) additionally authorized the company to issue its own \"First Mortgage Bonds\" in total amounts up to (but not exceeding) that of the bonds issued by the United States. Such company-issued securities had priority over the original Government Bonds. (Local and state governments also aided the financing, although the City and County of San Francisco did not do so willingly. This materially slowed early construction efforts.) Sec. 3 of the 1862 Act granted the railroads 10 square miles (26 km²) of public land for every mile laid, except where railroads ran through cities and crossed rivers. This grant was apportioned in 5 sections on alternating sides of the railroad, with each section measuring 0.2 miles (320 m) by 10 miles (16 km). These grants were later doubled to 20 square miles (52 km²) per mile of grade by the 1864 Act.", "title": "History" }, { "paragraph_id": 14, "text": "Although the Pacific Railroad eventually benefited the Bay Area, the City and County of San Francisco obstructed financing it during the early years of 1863–1865. When Stanford was Governor of California, the Legislature passed on April 22, 1863, \"An Act to Authorize the Board of Supervisors of the City and County of San Francisco to take and subscribe One Million Dollars to the Capital Stock of the Western Pacific Rail Road Company and the Central Pacific Rail Road Company of California and to provide for the payment of the same and other matters relating thereto\" (which was later amended by Section Five of the \"Compromise Act\" of April 4, 1864). On May 19, 1863, the electors of the City and County of San Francisco passed this bond by a vote of 6,329 to 3,116, in a highly controversial Special Election.", "title": "History" }, { "paragraph_id": 15, "text": "The City and County's financing of the investment through the issuance and delivery of Bonds was delayed for two years because Mayor Henry P. Coon and the County Clerk, Wilhelm Loewy, each refused to countersign the Bonds. 
It took legal action to force them to do so: in 1864 the Supreme Court of the State of California ordered them under Writs of Mandamus (The People of the State of California ex rel the Central Pacific Railroad Company vs. Henry P. Coon, Mayor; Henry M. Hale, Auditor; and Joseph S. Paxson, Treasurer, of the City and County of San Francisco. 25 Cal. 635) and in 1865, a legal judgment against Loewy (The People ex rel The Central Pacific Railroad Company of California vs. The Board of Supervisors of the City and County of San Francisco, and Wilhelm Lowey, Clerk 27 Cal. 655) directing that the Bonds be countersigned and delivered.", "title": "History" }, { "paragraph_id": 16, "text": "In 1863 the State legislature's forcing of City and County action became known as the \"Dutch Flat Swindle\". Critics claimed the CPRR's Big Four intended to build a railroad only as far as Dutch Flat, California, to connect to the Dutch Flat-Donner Pass Wagon Road to monopolize the lucrative mining traffic, and not push the track east of Dutch Flat into the more challenging and expensive High Sierra effort. CPRR's chief engineer, Theodore Judah, also argued against such a road and hence against the Big Four, fearing that its construction would siphon money from CPRR's paramount trans-Sierra railroad effort. Despite Judah's strong objection, the Big Four incorporated the Dutch Flat-Donner Lake Wagon Road Company in August 1863. Frustrated, Judah headed off for New York via Panama to raise funds to buy out the Big Four from CPRR and build his trans-Sierra railroad. Unfortunately, Judah contracted yellow fever in Panama and died in New York in November 1863.", "title": "History" }, { "paragraph_id": 17, "text": "A replica of the Sacramento, California, Central Pacific Railroad passenger station is part of the California State Railroad Museum, located in the Old Sacramento State Historic Park.", "title": "Museums and archives" }, { "paragraph_id": 18, "text": "Nearly all the company's early correspondence is preserved at Syracuse University, as part of the Collis Huntington Papers collection. It has been released on microfilm (133 reels). The following libraries have the microfilm: the University of Arizona at Tucson and Virginia Commonwealth University at Richmond. Additional collections of manuscript letters are held at Stanford University and the Mariners' Museum at Newport News, Virginia. Alfred A. Hart was the official photographer of the CPRR construction.", "title": "Museums and archives" }, { "paragraph_id": 19, "text": "The Central Pacific's first three locomotives were of the then common 4-4-0 type, although with the American Civil War raging in the east, they had difficulty acquiring engines from eastern builders, who at times only had smaller 4-2-4 or 4-2-2 types available. Until the completion of the Transcontinental rail link and the railroad's opening of its own shops, all locomotives had to be purchased from builders in the northeastern U.S. The engines had to be dismantled and loaded on a ship, which would embark on a four-month journey around South America's Cape Horn before arriving in Sacramento, where the locomotives would be unloaded, re-assembled, and placed in service.", "title": "Locomotives" }, { "paragraph_id": 20, "text": "Locomotives at the time came from many manufacturers, such as Cooke, Schenectady, Mason, Rogers, Danforth, Norris, Booth, and McKay & Aldus, among others. 
The railroad had been on rather unfriendly terms with the Baldwin Locomotive Works, one of the better-known firms. The cause of this dispute is not clear, though some attribute it to the builder insisting on cash payment (though this has yet to be verified). Consequently, the railroad refused to buy engines from Baldwin, and three former Western Pacific Railroad (which the CP had absorbed in 1870) engines were the only Baldwin engines owned by the Central Pacific. The Central Pacific's dispute with Baldwin remained unresolved until well after the road had been acquired by the Southern Pacific.", "title": "Locomotives" }, { "paragraph_id": 21, "text": "In the 1870s, the road opened its own locomotive construction facilities in Sacramento. Central Pacific's 173 was rebuilt by these shops and served as the basis for CP's engine construction. The locomotives built before the 1870s were given names as well as numbers. By the 1870s, it was decided to eliminate the names; as each engine was sent to the shops for service, its name was removed. However, one engine that was built in the 1880s did receive a name: the El Gobernador.", "title": "Locomotives" }, { "paragraph_id": 22, "text": "Construction of the rails was often dangerous work. Towards the end of construction, almost all workers were Chinese immigrants. The ethnicity of workers depended largely on the \"gang\" they worked in and the specific area of the rails where they were working.", "title": "Locomotives" }, { "paragraph_id": 23, "text": "The following CP engines have been preserved:", "title": "Preserved locomotives" }, { "paragraph_id": 24, "text": "1861", "title": "Timeline" }, { "paragraph_id": 25, "text": "1862", "title": "Timeline" }, { "paragraph_id": 26, "text": "1863", "title": "Timeline" }, { "paragraph_id": 27, "text": "1864", "title": "Timeline" }, { "paragraph_id": 28, "text": "1865", "title": "Timeline" }, { "paragraph_id": 29, "text": "1866", "title": "Timeline" }, { "paragraph_id": 30, "text": "1867", "title": "Timeline" }, { "paragraph_id": 31, "text": "1868", "title": "Timeline" }, { "paragraph_id": 32, "text": "1869", "title": "Timeline" }, { "paragraph_id": 33, "text": "1870", "title": "Timeline" }, { "paragraph_id": 34, "text": "1876", "title": "Timeline" }, { "paragraph_id": 35, "text": "1877", "title": "Timeline" }, { "paragraph_id": 36, "text": "1883", "title": "Timeline" }, { "paragraph_id": 37, "text": "1885", "title": "Timeline" }, { "paragraph_id": 38, "text": "1888", "title": "Timeline" }, { "paragraph_id": 39, "text": "1899", "title": "Timeline" }, { "paragraph_id": 40, "text": "1959", "title": "Timeline" } ]
The Central Pacific Railroad (CPRR) was a rail company chartered by the U.S. Congress in 1862 to build a railroad eastwards from Sacramento, California, to complete the western part of the "First transcontinental railroad" in North America. Incorporated in 1861, CPRR ceased operation in 1959 when its assets were formally merged into the Southern Pacific Railroad. Following the completion of the Pacific Railroad Surveys in 1855, several national proposals to build a transcontinental railroad failed because of political disputes over slavery. With the secession of the South in 1861, the modernizers in the Republican Party controlled the US Congress. They passed legislation in 1862 authorizing the central rail route with financing in the form of land grants and government railroad bonds, which were all eventually repaid with interest. The government and the railroads both shared in the increased value of the land grants, which the railroads developed. The construction of the railroad also secured for the government the economical "safe and speedy transportation of the mails, troops, munitions of war, and public stores".
2002-02-25T15:51:15Z
2023-12-23T07:39:51Z
[ "Template:Short description", "Template:Multiple image", "Template:Convert", "Template:Cite book", "Template:Cite web", "Template:Webarchive", "Template:Cite journal", "Template:The Big Four", "Template:See also", "Template:More citations needed section", "Template:Expand section", "Template:HAER", "Template:Authority control", "Template:Use mdy dates", "Template:Infobox rail", "Template:Circa", "Template:Portal", "Template:Cite court", "Template:Subject bar", "Template:Redirect", "Template:Rp", "Template:Reflist", "Template:Cite episode" ]
https://en.wikipedia.org/wiki/Central_Pacific_Railroad
7,737
Clairvoyance
Clairvoyance (/klɛərˈvɔɪ.əns/; from French clair 'clear' and voyance 'vision') is the claimed psychic ability to gain information about an object, person, location, or physical event through extrasensory perception. Any person who is claimed to have such ability is said to be a clairvoyant (/klɛərˈvɔɪ.ənt/) ('one who sees clearly'). Claims for the existence of paranormal and psychic abilities such as clairvoyance have not been supported by scientific evidence. Parapsychology explores this possibility, but the existence of the paranormal is not accepted by the scientific community. The scientific community widely considers parapsychology, including the study of clairvoyance, a pseudoscience. In the sense of clear-sightedness, clairvoyance refers to the paranormal ability to see persons and events that are distant in time or space. It can be divided into roughly three classes: precognition, the ability to perceive or predict future events; retrocognition, the ability to see past events; and remote viewing, the perception of contemporary events happening outside the range of normal perception. Throughout history, there have been numerous places and times in which people have claimed themselves or others to be clairvoyant. In several religions, stories of certain individuals being able to see things far removed from their immediate sensory perception are commonplace, especially within pagan religions, where oracles were used. Prophecy often involved some degree of clairvoyance, especially when future events were predicted. This ability has sometimes been attributed to a higher power rather than to the person performing it. A number of Christian saints were said to be able to see or know things that were far removed from their immediate sensory perception as a kind of gift from God, including Charbel Makhlouf, Padre Pio and Anne Catherine Emmerich in Catholicism and Gabriel Urgebadze, Paisios Eznepidis and John Maximovitch in Orthodoxy. Jesus Christ in the Gospels is also recorded as being able to know things that were far removed from his immediate human perception. Some Christians today make similar claims. In Jainism, clairvoyance is regarded as one of the five kinds of knowledge. The beings of hell and heaven (devas) are said to possess clairvoyance by birth. According to Jain text Sarvārthasiddhi, "this kind of knowledge has been called avadhi as it ascertains matter in downward range or knows objects within limits". Rudolf Steiner, famous as a clairvoyant himself, claimed that it is easy for a clairvoyant to confuse his own emotional and spiritual being with the objective spiritual world. The earliest record of somnambulist clairvoyance is credited to the Marquis de Puységur, a follower of Franz Mesmer, who in 1784 was treating a local dull-witted peasant named Victor Race. During treatment, Race reportedly would go into trance and undergo a personality change, becoming fluent and articulate, and giving diagnosis and prescription for his own disease as well as those of others. Clairvoyance was a reported ability of some mediums during the spiritualist period of the late 19th and early 20th centuries, and psychics of many descriptions have claimed clairvoyant ability up to the present day. Early researchers of clairvoyance included William Gregory, Gustav Pagenstecher, and Rudolf Tischner. Clairvoyance experiments were reported in 1884 by Charles Richet. Playing cards were enclosed in envelopes and a subject put under hypnosis attempted to identify them. 
The subject was reported to have been successful in a series of 133 trials, but the results dropped to chance level when performed before a group of scientists in Cambridge. J. M. Peirce and E. C. Pickering reported a similar experiment in which they tested 36 subjects over 23,384 trials which did not obtain above chance scores. Ivor Lloyd Tuckett (1911) and Joseph McCabe (1920) analyzed early cases of clairvoyance and came to the conclusion they were best explained by coincidence or fraud. In 1919, the magician P. T. Selbit staged a séance at his own flat in Bloomsbury. The spiritualist Arthur Conan Doyle attended the séance and declared the clairvoyance manifestations to be genuine. A significant development in clairvoyance research came when J. B. Rhine, a parapsychologist at Duke University, introduced a standard methodology, with a standard statistical approach to analyzing data, as part of his research into extrasensory perception. A number of psychological departments attempted to repeat Rhine's experiments, without success. W. S. Cox (1936) of Princeton University, testing 132 subjects, produced 25,064 trials in a playing card ESP experiment. Cox concluded, "There is no evidence of extrasensory perception either in the 'average man' or of the group investigated or in any particular individual of that group. The discrepancy between these results and those obtained by Rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects." Four other psychological departments failed to replicate Rhine's results. It was revealed that Rhine's experiments contained methodological flaws and procedural errors. Eileen Garrett was tested by Rhine at Duke University in 1933 with Zener cards. Certain symbols were placed on the cards and sealed in an envelope, and she was asked to guess their contents. She performed poorly and later criticized the tests by claiming the cards lacked a psychic energy called "energy stimulus" and that she could not perform clairvoyance to order. The parapsychologist Samuel Soal and his colleagues tested Garrett in May 1937. Most of the experiments were carried out in the Psychological Laboratory at the University College London. A total of over 12,000 guesses were recorded but Garrett failed to score above chance level. In his report Soal wrote, "In the case of Mrs. Eileen Garrett we fail to find the slightest confirmation of Dr. J. B. Rhine's remarkable claims relating to her alleged powers of extra-sensory perception. Not only did she fail when I took charge of the experiments, but she failed equally when four other carefully trained experimenters took my place." Remote viewing, also known as remote sensing, remote perception, telesthesia and travelling clairvoyance, is the alleged paranormal ability to perceive a remote or hidden target without support of the senses. A well-known study of remote viewing in recent times has been the US government-funded project at the Stanford Research Institute during the 1970s through the mid-1990s. In 1972, Harold Puthoff and Russell Targ initiated a series of human subject studies to determine whether participants (the viewers or percipients) could reliably identify and accurately describe salient features of remote locations or targets. In the early studies, a human sender was typically present at the remote location, as part of the experiment protocol. A three-step process was used, the first step being to randomly select the target conditions to be experienced by the senders. 
Secondly, in the viewing step, participants were asked to verbally express or sketch their impressions of the remote scene. Thirdly, in the judging step, these descriptions were matched by separate judges, as closely as possible, with the intended targets. The term remote viewing was coined to describe this overall process. The first paper by Puthoff and Targ on remote viewing was published in Nature in March 1974; in it, the team reported some degree of remote viewing success. After the publication of these findings, other attempts to replicate the experiments were carried out with remotely linked groups using computer conferencing. The psychologists David Marks and Richard Kammann attempted to replicate Targ and Puthoff's remote viewing experiments that were carried out in the 1970s at the Stanford Research Institute. In a series of 35 studies, they were unable to replicate the results, so they investigated the procedure of the original experiments. Marks and Kammann discovered that the notes given to the judges in Targ and Puthoff's experiments contained clues as to the order in which they were carried out, such as references to yesterday's two targets, or the date of the session written at the top of the page. They concluded that these clues were the reason for the experiment's high hit rates. Marks was able to achieve 100 per cent accuracy without visiting any of the sites himself but by using cues. James Randi has written that controlled tests by several other researchers, eliminating several sources of cuing and extraneous evidence present in the original tests, produced negative results. Students were also able to identify Puthoff and Targ's locations from the clues that had inadvertently been included in the transcripts. In 1980, Charles Tart claimed that a rejudging of the transcripts from one of Targ and Puthoff's experiments revealed an above-chance result. Targ and Puthoff again refused to provide copies of the transcripts and it was not until July 1985 that they were made available for study, when it was discovered they still contained sensory cues. Marks and Christopher Scott (1986) wrote "considering the importance for the remote viewing hypothesis of adequate cue removal, Tart's failure to perform this basic task seems beyond comprehension. As previously concluded, remote viewing has not been demonstrated in the experiments conducted by Puthoff and Targ, only the repeated failure of the investigators to remove sensory cues." In 1982 Robert Jahn, then Dean of the School of Engineering at Princeton University, wrote a comprehensive review of psychic phenomena from an engineering perspective. His paper included numerous references to remote viewing studies at the time. Statistical flaws in his work have been proposed by others in the parapsychological community and within the general scientific community. According to scientific research, clairvoyance is generally explained as the result of confirmation bias, expectancy bias, fraud, hallucination, self-delusion, sensory leakage, subjective validation, wishful thinking or failures to appreciate the base rate of chance occurrences and not as a paranormal power. Parapsychology is generally regarded by the scientific community as a pseudoscience. In 1988, the US National Research Council concluded "The committee finds no scientific justification from research conducted over a period of 130 years, for the existence of parapsychological phenomena." Skeptics say that if clairvoyance were a reality, it would have become abundantly clear. 
They also contend that those who believe in paranormal phenomena do so for merely psychological reasons. According to David G. Myers (Psychology, 8th ed.): The search for a valid and reliable test of clairvoyance has resulted in thousands of experiments. One controlled procedure has invited 'senders' to telepathically transmit one of four visual images to 'receivers' deprived of sensation in a nearby chamber (Bem & Honorton, 1994). The result? A reported 32 percent accurate response rate, surpassing the chance rate of 25 percent. But follow-up studies have (depending on who was summarizing the results) failed to replicate the phenomenon or produced mixed results (Bem & others, 2001; Milton & Wiseman, 2002; Storm, 2000, 2003). One skeptic, magician James Randi, had a longstanding offer of U.S. $1 million—"to anyone who proves a genuine psychic power under proper observing conditions" (Randi, 1999). French, Australian, and Indian groups have parallel offers of up to 200,000 euros to anyone with demonstrable paranormal abilities (CFI, 2003). Large as these sums are, the scientific seal of approval would be worth far more to anyone whose claims could be authenticated. To refute those who say there is no ESP, one need only produce a single person who can demonstrate a single, reproducible ESP phenomenon. So far, no such person has emerged. Randi's offer has been publicized for three decades and dozens of people have been tested, sometimes under the scrutiny of an independent panel of judges. Still, nothing. "People's desire to believe in the paranormal is stronger than all the evidence that it does not exist." Susan Blackmore, "Blackmore's first law", 2004. Clairvoyance is considered a hallucination by mainstream psychiatry.
[ { "paragraph_id": 0, "text": "Clairvoyance (/klɛərˈvɔɪ.əns/; from French clair 'clear' and voyance 'vision') is the claimed psychic ability to gain information about an object, person, location, or physical event through extrasensory perception. Any person who is claimed to have such ability is said to be a clairvoyant (/klɛərˈvɔɪ.ənt/) ('one who sees clearly').", "title": "" }, { "paragraph_id": 1, "text": "Claims for the existence of paranormal and psychic abilities such as clairvoyance have not been supported by scientific evidence. Parapsychology explores this possibility, but the existence of the paranormal is not accepted by the scientific community. The scientific community widely considers parapsychology, including the study of clairvoyance, a pseudoscience.", "title": "" }, { "paragraph_id": 2, "text": "In the sense of clear-sightedness, clairvoyance refers to the paranormal ability to see persons and events that are distant in time or space. It can be divided into roughly three classes: precognition, the ability to perceive or predict future events; retrocognition, the ability to see past events; and remote viewing, the perception of contemporary events happening outside the range of normal perception.", "title": "Usage" }, { "paragraph_id": 3, "text": "Throughout history, there have been numerous places and times in which people have claimed themselves or others to be clairvoyant.", "title": "In history and religion" }, { "paragraph_id": 4, "text": "In several religions, stories of certain individuals being able to see things far removed from their immediate sensory perception are commonplace, especially within pagan religions, where oracles were used. Prophecy often involved some degree of clairvoyance, especially when future events were predicted. This ability has sometimes been attributed to a higher power rather than to the person performing it.", "title": "In history and religion" }, { "paragraph_id": 5, "text": "A number of Christian saints were said to be able to see or know things that were far removed from their immediate sensory perception as a kind of gift from God, including Charbel Makhlouf, Padre Pio and Anne Catherine Emmerich in Catholicism and Gabriel Urgebadze, Paisios Eznepidis and John Maximovitch in Orthodoxy. Jesus Christ in the Gospels is also recorded as being able to know things that were far removed from his immediate human perception. Some Christians today make similar claims.", "title": "In history and religion" }, { "paragraph_id": 6, "text": "In Jainism, clairvoyance is regarded as one of the five kinds of knowledge. The beings of hell and heaven (devas) are said to possess clairvoyance by birth. According to Jain text Sarvārthasiddhi, \"this kind of knowledge has been called avadhi as it ascertains matter in downward range or knows objects within limits\".", "title": "In history and religion" }, { "paragraph_id": 7, "text": "Rudolf Steiner, famous as a clairvoyant himself, claimed that it is easy for a clairvoyant to confuse his own emotional and spiritual being with the objective spiritual world.", "title": "In history and religion" }, { "paragraph_id": 8, "text": "The earliest record of somnambulist clairvoyance is credited to the Marquis de Puységur, a follower of Franz Mesmer, who in 1784 was treating a local dull-witted peasant named Victor Race. 
During treatment, Race reportedly would go into trance and undergo a personality change, becoming fluent and articulate, and giving diagnosis and prescription for his own disease as well as those of others. Clairvoyance was a reported ability of some mediums during the spiritualist period of the late 19th and early 20th centuries, and psychics of many descriptions have claimed clairvoyant ability up to the present day.", "title": "Parapsychology" }, { "paragraph_id": 9, "text": "Early researchers of clairvoyance included William Gregory, Gustav Pagenstecher, and Rudolf Tischner. Clairvoyance experiments were reported in 1884 by Charles Richet. Playing cards were enclosed in envelopes and a subject put under hypnosis attempted to identify them. The subject was reported to have been successful in a series of 133 trials, but the results dropped to chance level when performed before a group of scientists in Cambridge. J. M. Peirce and E. C. Pickering reported a similar experiment in which they tested 36 subjects over 23,384 trials which did not obtain above chance scores.", "title": "Parapsychology" }, { "paragraph_id": 10, "text": "Ivor Lloyd Tuckett (1911) and Joseph McCabe (1920) analyzed early cases of clairvoyance and came to the conclusion they were best explained by coincidence or fraud. In 1919, the magician P. T. Selbit staged a séance at his own flat in Bloomsbury. The spiritualist Arthur Conan Doyle attended the séance and declared the clairvoyance manifestations to be genuine.", "title": "Parapsychology" }, { "paragraph_id": 11, "text": "A significant development in clairvoyance research came when J. B. Rhine, a parapsychologist at Duke University, introduced a standard methodology, with a standard statistical approach to analyzing data, as part of his research into extrasensory perception. A number of psychological departments attempted to repeat Rhine's experiments, without success. W. S. Cox (1936) of Princeton University, testing 132 subjects, produced 25,064 trials in a playing card ESP experiment. Cox concluded, \"There is no evidence of extrasensory perception either in the 'average man' or of the group investigated or in any particular individual of that group. The discrepancy between these results and those obtained by Rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects.\" Four other psychological departments failed to replicate Rhine's results. It was revealed that Rhine's experiments contained methodological flaws and procedural errors.", "title": "Parapsychology" }, { "paragraph_id": 12, "text": "Eileen Garrett was tested by Rhine at Duke University in 1933 with Zener cards. Certain symbols were placed on the cards and sealed in an envelope, and she was asked to guess their contents. She performed poorly and later criticized the tests by claiming the cards lacked a psychic energy called \"energy stimulus\" and that she could not perform clairvoyance to order. The parapsychologist Samuel Soal and his colleagues tested Garrett in May 1937. Most of the experiments were carried out in the Psychological Laboratory at the University College London. A total of over 12,000 guesses were recorded but Garrett failed to score above chance level. In his report Soal wrote, \"In the case of Mrs. Eileen Garrett we fail to find the slightest confirmation of Dr. J. B. Rhine's remarkable claims relating to her alleged powers of extra-sensory perception. 
Not only did she fail when I took charge of the experiments, but she failed equally when four other carefully trained experimenters took my place.\"", "title": "Parapsychology" }, { "paragraph_id": 13, "text": "Remote viewing, also known as remote sensing, remote perception, telesthesia and travelling clairvoyance, is the alleged paranormal ability to perceive a remote or hidden target without support of the senses.", "title": "Parapsychology" }, { "paragraph_id": 14, "text": "A well-known study of remote viewing in recent times has been the US government-funded project at the Stanford Research Institute during the 1970s through the mid-1990s. In 1972, Harold Puthoff and Russell Targ initiated a series of human subject studies to determine whether participants (the viewers or percipients) could reliably identify and accurately describe salient features of remote locations or targets. In the early studies, a human sender was typically present at the remote location, as part of the experiment protocol. A three-step process was used, the first step being to randomly select the target conditions to be experienced by the senders. Secondly, in the viewing step, participants were asked to verbally express or sketch their impressions of the remote scene. Thirdly, in the judging step, these descriptions were matched by separate judges, as closely as possible, with the intended targets. The term remote viewing was coined to describe this overall process. The first paper by Puthoff and Targ on remote viewing was published in Nature in March 1974; in it, the team reported some degree of remote viewing success. After the publication of these findings, other attempts to replicate the experiments were carried out with remotely linked groups using computer conferencing.", "title": "Parapsychology" }, { "paragraph_id": 15, "text": "The psychologists David Marks and Richard Kammann attempted to replicate Targ and Puthoff's remote viewing experiments that were carried out in the 1970s at the Stanford Research Institute. In a series of 35 studies, they were unable to replicate the results, so they investigated the procedure of the original experiments. Marks and Kammann discovered that the notes given to the judges in Targ and Puthoff's experiments contained clues as to the order in which they were carried out, such as references to yesterday's two targets, or the date of the session written at the top of the page. They concluded that these clues were the reason for the experiment's high hit rates. Marks was able to achieve 100 per cent accuracy without visiting any of the sites himself but by using cues. James Randi has written that controlled tests by several other researchers, eliminating several sources of cuing and extraneous evidence present in the original tests, produced negative results. Students were also able to identify Puthoff and Targ's locations from the clues that had inadvertently been included in the transcripts.", "title": "Parapsychology" }, { "paragraph_id": 16, "text": "In 1980, Charles Tart claimed that a rejudging of the transcripts from one of Targ and Puthoff's experiments revealed an above-chance result. Targ and Puthoff again refused to provide copies of the transcripts and it was not until July 1985 that they were made available for study, when it was discovered they still contained sensory cues. Marks and Christopher Scott (1986) wrote \"considering the importance for the remote viewing hypothesis of adequate cue removal, Tart's failure to perform this basic task seems beyond comprehension. 
As previously concluded, remote viewing has not been demonstrated in the experiments conducted by Puthoff and Targ, only the repeated failure of the investigators to remove sensory cues.\"", "title": "Parapsychology" }, { "paragraph_id": 17, "text": "In 1982, Robert Jahn, then dean of the School of Engineering at Princeton University, wrote a comprehensive review of psychic phenomena from an engineering perspective. His paper included numerous references to remote viewing studies at the time. Statistical flaws in his work have been alleged by others in the parapsychological community and within the general scientific community.", "title": "Parapsychology" }, { "paragraph_id": 18, "text": "According to scientific research, clairvoyance is generally explained as the result of confirmation bias, expectancy bias, fraud, hallucination, self-delusion, sensory leakage, subjective validation, wishful thinking or failures to appreciate the base rate of chance occurrences and not as a paranormal power. Parapsychology is generally regarded by the scientific community as a pseudoscience. In 1988, the US National Research Council concluded \"The committee finds no scientific justification from research conducted over a period of 130 years, for the existence of parapsychological phenomena.\"", "title": "Scientific reception" }, { "paragraph_id": 19, "text": "Skeptics say that if clairvoyance were a reality, it would have become abundantly clear. They also contend that those who believe in paranormal phenomena do so for merely psychological reasons. According to David G. Myers (Psychology, 8th ed.):", "title": "Scientific reception" }, { "paragraph_id": 20, "text": "The search for a valid and reliable test of clairvoyance has resulted in thousands of experiments. One controlled procedure has invited 'senders' to telepathically transmit one of four visual images to 'receivers' deprived of sensation in a nearby chamber (Bem & Honorton, 1994). The result? A reported 32 percent accurate response rate, surpassing the chance rate of 25 percent. But follow-up studies have (depending on who was summarizing the results) failed to replicate the phenomenon or produced mixed results (Bem & others, 2001; Milton & Wiseman, 2002; Storm, 2000, 2003). One skeptic, magician James Randi, had a longstanding offer of U.S. $1 million—\"to anyone who proves a genuine psychic power under proper observing conditions\" (Randi, 1999). French, Australian, and Indian groups have parallel offers of up to 200,000 euros to anyone with demonstrable paranormal abilities (CFI, 2003). Large as these sums are, the scientific seal of approval would be worth far more to anyone whose claims could be authenticated. To refute those who say there is no ESP, one need only produce a single person who can demonstrate a single, reproducible ESP phenomenon. So far, no such person has emerged. Randi's offer has been publicized for three decades, and dozens of people have been tested, sometimes under the scrutiny of an independent panel of judges. Still, nothing. \"People's desire to believe in the paranormal is stronger than all the evidence that it does not exist.\" Susan Blackmore, \"Blackmore's first law\", 2004.", "title": "Scientific reception" }, { "paragraph_id": 21, "text": "Clairvoyance is considered a hallucination by mainstream psychiatry.", "title": "Scientific reception" } ]
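The chance-level comparisons that recur in these reports (1 in 4 in the four-choice ganzfeld design, 1 in 5 with Zener cards) can be made concrete with a simple significance check. The sketch below is illustrative only and is not drawn from any of the cited studies: the trial and hit counts are hypothetical, and a binomial model of independent guesses is assumed.

```python
# Minimal sketch (hypothetical counts): does a reported hit rate beat
# chance under a binomial model of independent guesses?
from scipy.stats import binomtest

n_trials = 1000   # hypothetical number of guesses
chance = 0.25     # four-choice ganzfeld design: 1 in 4 by chance
hits = 320        # a 32% hit rate at this hypothetical trial count

result = binomtest(hits, n_trials, p=chance, alternative="greater")
print(f"hit rate = {hits / n_trials:.1%}, one-sided p = {result.pvalue:.3g}")

# Zener cards have 5 symbols, so chance is 1 in 5; counts again hypothetical:
zener = binomtest(2450, 12000, p=0.2, alternative="greater")
print(f"Zener example: one-sided p = {zener.pvalue:.3g}")
```

Under this model, the same percentage that is decisive over a thousand trials is uninformative over a few dozen, which is why the large replication series quoted above carry the evidential weight.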
Clairvoyance is the claimed psychic ability to gain information about an object, person, location, or physical event through extrasensory perception. Any person who is claimed to have such ability is said to be a clairvoyant. Claims for the existence of paranormal and psychic abilities such as clairvoyance have not been supported by scientific evidence. Parapsychology explores this possibility, but the existence of the paranormal is not accepted by the scientific community. The scientific community widely considers parapsychology, including the study of clairvoyance, a pseudoscience.
2002-01-13T14:00:26Z
2023-12-29T02:13:23Z
[ "Template:Short description", "Template:Redirect", "Template:Sfn", "Template:Cite book", "Template:Cite web", "Template:Use mdy dates", "Template:Div col end", "Template:Wiktionary", "Template:Wikisource1911Enc", "Template:IPAc-en", "Template:Etymology", "Template:Refbegin", "Template:Refend", "Template:Parapsychology", "Template:New Age Movement", "Template:Main", "Template:Cite encyclopedia", "Template:Cite journal", "Template:Cite Encyclopedia of Claims", "Template:Gloss", "Template:Div col", "Template:Citation", "Template:Wikiquote", "Template:Paranormal", "Template:ISBN", "Template:About", "Template:Cite Merriam-Webster", "Template:-\"", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Clairvoyance
7,738
Chiropractic
Chiropractic is a form of alternative medicine concerned with the diagnosis, treatment and prevention of mechanical disorders of the musculoskeletal system, especially of the spine. It has esoteric origins and is based on several pseudoscientific ideas. Many chiropractors, especially those in the field's early history, have proposed that mechanical disorders of the joints, especially of the spine, affect general health, and that regular manipulation of the spine (spinal adjustment) improves general health. The main chiropractic treatment technique involves manual therapy, especially manipulation of the spine, other joints, and soft tissues, but may also include exercises and health and lifestyle counseling. A chiropractor may have a Doctor of Chiropractic (D.C.) degree and be referred to as "doctor" but is not a Doctor of Medicine (M.D.) or a Doctor of Osteopathic Medicine (D.O.). While many chiropractors view themselves as primary care providers, chiropractic clinical training does not meet the requirements for that designation. Systematic reviews of controlled clinical studies of treatments used by chiropractors have found no evidence that chiropractic manipulation is effective, with the possible exception of treatment for back pain. A 2011 critical evaluation of 45 systematic reviews concluded that the data included in the study "fail[ed] to demonstrate convincingly that spinal manipulation is an effective intervention for any condition." Spinal manipulation may be cost-effective for sub-acute or chronic low back pain, but the results for acute low back pain were insufficient. No compelling evidence exists to indicate that maintenance chiropractic care adequately prevents symptoms or diseases. There is not sufficient data to establish the safety of chiropractic manipulations. It is frequently associated with mild to moderate adverse effects, with serious or fatal complications in rare cases. There is controversy regarding the degree of risk of vertebral artery dissection, which can lead to stroke and death, from cervical manipulation. Several deaths have been associated with this technique and it has been suggested that the relationship is causative, a claim which is disputed by many chiropractors. Chiropractic is well established in the United States, Canada, and Australia. It overlaps with other manual-therapy professions such as osteopathy and physical therapy. Most who seek chiropractic care do so for low back pain. Back and neck pain are considered the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. Chiropractic has two main groups: "straights", now the minority, emphasize vitalism, "Innate Intelligence", and consider vertebral subluxations to be the cause of all disease; and "mixers", the majority, are more open to mainstream views and conventional medical techniques, such as exercise, massage, and ice therapy. D. D. Palmer founded chiropractic in the 1890s, after saying he received it from "the other world"; Palmer maintained that the tenets of chiropractic were passed along to him by a doctor who had died 50 years previously. His son B. J. Palmer helped to expand chiropractic in the early 20th century. Throughout its history, chiropractic has been controversial. Its foundation is at odds with evidence-based medicine, and has been sustained by pseudoscientific ideas such as vertebral subluxation and Innate Intelligence. 
Despite the overwhelming evidence that vaccination is an effective public health intervention, among chiropractors there are significant disagreements over the subject, which has led to negative impacts on both public vaccination and mainstream acceptance of chiropractic. The American Medical Association called chiropractic an "unscientific cult" in 1966 and boycotted it until losing an antitrust case in 1987. Chiropractic has had a strong political base and sustained demand for services. In the last decades of the twentieth century, it gained more legitimacy and greater acceptance among conventional physicians and health plans in the United States. During the COVID-19 pandemic, chiropractic professional associations advised chiropractors to adhere to CDC, WHO, and local health department guidance. Despite these recommendations, a small but vocal and influential number of chiropractors spread vaccine misinformation. Chiropractic is generally categorized as complementary and alternative medicine (CAM), which focuses on manipulation of the musculoskeletal system, especially the spine. Its founder, D. D. Palmer, called it "a science of healing without drugs". Chiropractic's origins lie in the folk medicine of bonesetting, and as it evolved it incorporated vitalism, spiritual inspiration and rationalism. Its early philosophy was based on deduction from irrefutable doctrine, which helped distinguish chiropractic from medicine, provided it with legal and political defenses against claims of practicing medicine without a license, and allowed chiropractors to establish themselves as an autonomous profession. This "straight" philosophy, taught to generations of chiropractors, rejects the inferential reasoning of the scientific method, and relies on deductions from vitalistic first principles rather than on the materialism of science. However, most practitioners tend to incorporate scientific research into chiropractic, and most practitioners are "mixers" who attempt to combine the materialistic reductionism of science with the metaphysics of their predecessors and with the holistic paradigm of wellness. A 2008 commentary proposed that chiropractic actively divorce itself from the straight philosophy as part of a campaign to eliminate untestable dogma and engage in critical thinking and evidence-based research. Although a wide diversity of ideas exist among chiropractors, they share the belief that the spine and health are related in a fundamental way, and that this relationship is mediated through the nervous system. Some chiropractors claim spinal manipulation can have an effect on a variety of ailments such as irritable bowel syndrome and asthma. Chiropractic philosophy includes the following perspectives: Holism assumes that health is affected by everything in an individual's environment; some sources also include a spiritual or existential dimension. In contrast, reductionism in chiropractic reduces causes and cures of health problems to a single factor, vertebral subluxation. Homeostasis emphasizes the body's inherent self-healing abilities. Chiropractic's early notion of innate intelligence can be thought of as a metaphor for homeostasis. A large number of chiropractors fear that if they do not separate themselves from the traditional vitalistic concept of innate intelligence, chiropractic will continue to be seen as a fringe profession. A variant of chiropractic called naprapathy originated in Chicago in the early twentieth century. 
It holds that manual manipulation of soft tissue can reduce "interference" in the body and thus improve health. Straight chiropractors adhere to the philosophical principles set forth by D. D. and B. J. Palmer, and retain metaphysical definitions and vitalistic qualities. Straight chiropractors believe that vertebral subluxation leads to interference with an "innate intelligence" exerted via the human nervous system and is a primary underlying risk factor for many diseases. Straights view the medical diagnosis of patient complaints, which they consider to be the "secondary effects" of subluxations, to be unnecessary for chiropractic treatment. Thus, straight chiropractors are concerned primarily with the detection and correction of vertebral subluxation via adjustment and do not "mix" other types of therapies into their practice style. Their philosophy and explanations are metaphysical in nature and they prefer to use traditional chiropractic lexicon terminology such as "perform spinal analysis", "detect subluxation", "correct with adjustment". They prefer to remain separate and distinct from mainstream health care. Although considered the minority group, "they have been able to transform their status as purists and heirs of the lineage into influence dramatically out of proportion to their numbers." Mixer chiropractors "mix" diagnostic and treatment approaches from chiropractic, medical or osteopathic viewpoints and make up the majority of chiropractors. Unlike straight chiropractors, mixers believe subluxation is one of many causes of disease, and hence they tend to be open to mainstream medicine. Many of them incorporate mainstream medical diagnostics and employ conventional treatments including techniques of physical therapy such as exercise, stretching, massage, ice packs, electrical muscle stimulation, therapeutic ultrasound, and moist heat. Some mixers also use techniques from alternative medicine, including nutritional supplements, acupuncture, homeopathy, herbal remedies, and biofeedback. Although mixers are the majority group, many of them retain belief in vertebral subluxation as shown in a 2003 survey of 1,100 North American chiropractors, which found that 88 percent wanted to retain the term "vertebral subluxation complex", and that when asked to estimate the percent of disorders of internal organs that subluxation significantly contributes to, the mean response was 62 percent. A 2008 survey of 6,000 American chiropractors demonstrated that most chiropractors seem to believe that a subluxation-based clinical approach may be of limited utility for addressing visceral disorders, and greatly favored non-subluxation-based clinical approaches for such conditions. The same survey showed that most chiropractors generally believed that the majority of their clinical approach for addressing musculoskeletal/biomechanical disorders such as back pain was based on subluxation. Chiropractors often offer conventional therapies such as physical therapy and lifestyle counseling, and it may be difficult for the lay person to distinguish the unscientific from the scientific. In science-based medicine, the term "subluxation" refers to an incomplete or partial dislocation of a joint, from the Latin luxare for 'dislocate'. While medical doctors use the term exclusively to refer to physical dislocations, chiropractic founder D. D. Palmer imbued the word subluxation with a metaphysical and philosophical meaning drawn from pseudoscientific traditions such as vitalism. 
Palmer claimed that vertebral subluxations interfered with the body's function and its inborn ability to heal itself. D. D. Palmer repudiated his earlier theory that vertebral subluxations caused pinched nerves in the intervertebral spaces in favor of subluxations causing altered nerve vibration, either too tense or too slack, affecting the tone (health) of the end organ. He qualified this by noting that knowledge of innate intelligence was not essential to the competent practice of chiropractic. This concept was later expanded upon by his son, B. J. Palmer, and was instrumental in providing the legal basis of differentiating chiropractic from conventional medicine. In 1910, D. D. Palmer theorized that the nervous system controlled health: Physiologists divide nerve-fibers, which form the nerves, into two classes, afferent and efferent. Impressions are made on the peripheral afferent fiber-endings; these create sensations that are transmitted to the center of the nervous system. Efferent nerve-fibers carry impulses out from the center to their endings. Most of these go to muscles and are therefore called motor impulses; some are secretory and enter glands; a portion are inhibitory, their function being to restrain secretion. Thus, nerves carry impulses outward and sensations inward. The activity of these nerves, or rather their fibers, may become excited or allayed by impingement, the result being a modification of functionality – too much or not enough action – which is disease. Vertebral subluxation, a core concept of traditional chiropractic, remains unsubstantiated and largely untested, and a debate about whether to keep it in the chiropractic paradigm has been ongoing for decades. In general, critics of traditional subluxation-based chiropractic (including chiropractors) are skeptical of its clinical value, dogmatic beliefs and metaphysical approach. While straight chiropractic still retains the traditional vitalistic construct espoused by the founders, evidence-based chiropractic suggests that a mechanistic view will allow chiropractic care to become integrated into the wider health care community. This is still a continuing source of debate within the chiropractic profession as well, with some schools of chiropractic still teaching the traditional/straight subluxation-based chiropractic, while others have moved towards an evidence-based chiropractic that rejects metaphysical foundations and limits itself to primarily neuromusculoskeletal conditions. In 2005, the chiropractic subluxation was defined by the World Health Organization as "a lesion or dysfunction in a joint or motion segment in which alignment, movement integrity and/or physiological function are altered, although contact between joint surfaces remains intact. It is essentially a functional entity, which may influence biomechanical and neural integrity." This differs from the medical definition of subluxation as a significant structural displacement, which can be seen with static imaging techniques such as X-rays. The use of X-ray imaging in the case of vertebral subluxation exposes patients to harmful ionizing radiation for no evidentially supported reason. The 2008 book Trick or Treatment states "X-rays can reveal neither the subluxations nor the innate intelligence associated with chiropractic philosophy, because they do not exist." 
Attorney David Chapman-Smith, Secretary-General of the World Federation of Chiropractic, has stated that "Medical critics have asked how there can be a subluxation if it cannot be seen on X-ray. The answer is that the chiropractic subluxation is essentially a functional entity, not structural, and is therefore no more visible on static X-ray than a limp or headache or any other functional problem." The General Chiropractic Council, the statutory regulatory body for chiropractors in the United Kingdom, states that the chiropractic vertebral subluxation complex "is not supported by any clinical research evidence that would allow claims to be made that it is the cause of disease." As of 2014, the US National Board of Chiropractic Examiners states "The specific focus of chiropractic practice is known as the chiropractic subluxation or joint dysfunction. A subluxation is a health concern that manifests in the skeletal joints, and, through complex anatomical and physiological relationships, affects the nervous system and may lead to reduced function, disability or illness." While some chiropractors limit their practice to short-term treatment of musculoskeletal conditions, many falsely claim to be able to treat a myriad of other conditions. Some dissuade patients from seeking medical care, while others have pretended to be qualified to act as family doctors. Quackwatch, an alternative medicine watchdog, cautions against seeing chiropractors who: Writing for the Skeptical Inquirer, one physician cautioned against seeing even chiropractors who solely claim to treat musculoskeletal conditions: I think Spinal Manipulation Therapy (SMT) is a reasonable option for patients to try ... But I could not in good conscience refer a patient to a chiropractor... When chiropractic is effective, what is effective is not 'chiropractic': it is SMT. SMT is also offered by physical therapists, DOs, and others. These are science-based providers ... If I thought a patient might benefit from manipulation, I would rather refer him or her to a science-based provider. Chiropractors emphasize the conservative management of the neuromusculoskeletal system without the use of medicines or surgery, with special emphasis on the spine. Back and neck pain are the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. There is a range of opinions among chiropractors: some believe that treatment should be confined to the spine, or to back and neck pain; others disagree. For example, while one 2009 survey of American chiropractors had found that 73% classified themselves as "back pain/musculoskeletal specialists", the label "back and neck pain specialists" was regarded by 47% of them as a least desirable description in a 2005 international survey. Chiropractic combines aspects from mainstream and alternative medicine, and there is no agreement about how to define the profession: although chiropractors have many attributes of primary care providers, chiropractic has more attributes of a medical specialty like dentistry or podiatry. It has been proposed that chiropractors specialize in nonsurgical spine care, instead of attempting to also treat other problems, but the more expansive view of chiropractic is still widespread. 
Mainstream health care and governmental organizations such as the World Health Organization consider chiropractic to be complementary and alternative medicine (CAM); and a 2008 study reported that 31% of surveyed chiropractors categorized chiropractic as CAM, 27% as integrated medicine, and 12% as mainstream medicine. Many chiropractors, including those in the US and UK, believe they are primary care providers, but the length, breadth, and depth of chiropractic clinical training do not support the requirements to be considered primary care providers, so their role in primary care is limited and disputed. Chiropractic overlaps with several other forms of manual therapy, including massage therapy, osteopathy, physical therapy, and sports medicine. Chiropractic is autonomous from and competitive with mainstream medicine, and osteopathy outside the US remains primarily a manual medical system; physical therapists work alongside and cooperate with mainstream medicine, and osteopathic medicine in the U.S. has merged with the medical profession. Practitioners may distinguish these competing approaches through claims that, compared to other therapists, chiropractors heavily emphasize spinal manipulation, tend to use firmer manipulative techniques, and promote maintenance care; that osteopaths use a wider variety of treatment procedures; and that physical therapists emphasize machinery and exercise. Chiropractic diagnosis may involve a range of methods including skeletal imaging, observational and tactile assessments, and orthopedic and neurological evaluation. A chiropractor may also refer a patient to an appropriate specialist, or co-manage with another health care provider. Common patient management involves spinal manipulation (SM) and other manual therapies to the joints and soft tissues, rehabilitative exercises, health promotion, electrical modalities, complementary procedures, and lifestyle advice. Chiropractors are not normally licensed to write medical prescriptions or perform major surgery in the United States (although New Mexico has become the first US state to allow "advanced practice" trained chiropractors to prescribe certain medications). In the US, their scope of practice varies by state, based on inconsistent views of chiropractic care: some states, such as Iowa, broadly allow treatment of "human ailments"; some, such as Delaware, use vague concepts such as "transition of nerve energy" to define scope of practice; others, such as New Jersey, specify a severely narrowed scope. US states also differ over whether chiropractors may conduct laboratory tests or diagnostic procedures, dispense dietary supplements, or use other therapies such as homeopathy and acupuncture; in Oregon they can become certified to perform minor surgery and to deliver children via natural childbirth. A 2003 survey of North American chiropractors found that a slight majority favored allowing them to write prescriptions for over-the-counter drugs. A 2010 survey found that 72% of Swiss chiropractors considered their ability to prescribe nonprescription medication as an advantage for chiropractic treatment. A related field, veterinary chiropractic, applies manual therapies to animals and is recognized in many US states, but is not recognized by the American Chiropractic Association as being chiropractic. It remains controversial within certain segments of the veterinary and chiropractic professions. 
No single profession "owns" spinal manipulation and there is little consensus as to which profession should administer SM, raising concerns among chiropractors that other medical physicians could "steal" SM procedures from chiropractors. A focus on evidence-based SM research has also raised concerns that the resulting practice guidelines could limit the scope of chiropractic practice to treating backs and necks. Two US states (Washington and Arkansas) prohibit physical therapists from performing SM, some states allow them to do it only if they have completed advanced training in SM, and some states allow only chiropractors to perform SM, or only chiropractors and physicians. Bills to further prohibit non-chiropractors from performing SM are regularly introduced into state legislatures and are opposed by physical therapist organizations. Spinal manipulation, which chiropractors call "spinal adjustment" or "chiropractic adjustment", is the most common treatment used in chiropractic care. Spinal manipulation is a passive manual maneuver during which a three-joint complex is taken past the normal range of movement, but not so far as to dislocate or damage the joint. Its defining factor is a dynamic thrust, which is a sudden force that causes an audible release and attempts to increase a joint's range of motion. High-velocity, low-amplitude spinal manipulation (HVLA-SM) thrusts have physiological effects that signal neural discharge from paraspinal muscle tissues; the duration and amplitude of the thrust are factors in the degree of paraspinal muscle spindle activation. Clinical skill in employing HVLA-SM thrusts depends on the ability of the practitioner to handle the duration and magnitude of the load. More generally, spinal manipulative therapy (SMT) describes techniques where the hands are used to manipulate, massage, mobilize, adjust, stimulate, apply traction to, or otherwise influence the spine and related tissues. There are several schools of chiropractic adjustive techniques, although most chiropractors mix techniques from several schools. The following adjustive procedures were received by more than 10% of patients of licensed US chiropractors in a 2003 survey: Diversified technique (full-spine manipulation, employing various techniques), extremity adjusting, Activator technique (which uses a spring-loaded tool to deliver precise adjustments to the spine), Thompson Technique (which relies on a drop table and detailed procedural protocols), Gonstead (which emphasizes evaluating the spine along with specific adjustment that avoids rotational vectors), Cox/flexion-distraction (a gentle, low-force adjusting procedure which mixes chiropractic with osteopathic principles and utilizes specialized adjusting tables with movable parts), adjustive instrument, Sacro-Occipital Technique (which models the spine as a torsion bar), Nimmo Receptor-Tonus Technique, applied kinesiology (which emphasizes "muscle testing" as a diagnostic tool), and cranial. Chiropractic biophysics technique uses inverse functions of rotations during spinal manipulation. Practitioners of the Koren Specific Technique (KST) may use their hands, or an electric device known as an "ArthroStim", for assessment and spinal manipulations. Insurers in the US and UK that cover other chiropractic techniques exclude KST from coverage because they consider it to be "experimental and investigational". 
Medicine-assisted manipulation, such as manipulation under anesthesia, involves sedation or local anesthetic and is done by a team that includes an anesthesiologist; a 2008 systematic review did not find enough evidence to make recommendations about its use for chronic low back pain. Many other procedures are used by chiropractors for treating the spine, other joints and tissues, and general health issues. The following procedures were received by more than one-third of patients of licensed US chiropractors in a 2003 survey: Diversified technique (full-spine manipulation; mentioned in previous paragraph), physical fitness/exercise promotion, corrective or therapeutic exercise, ergonomic/postural advice, self-care strategies, activities of daily living, changing risky/unhealthy behaviors, nutritional/dietary recommendations, relaxation/stress reduction recommendations, ice pack/cryotherapy, extremity adjusting (also mentioned in previous paragraph), trigger point therapy, and disease prevention/early screening advice. A 2010 study describing Belgian chiropractors and their patients found chiropractors in Belgium mostly focus on neuromusculoskeletal complaints in adult patients, with emphasis on the spine. The diversified technique is the most often applied technique at 93%, followed by the Activator mechanical-assisted technique at 41%. A 2009 study assessing chiropractic students giving or receiving spinal manipulations while attending a United States chiropractic college found that Diversified, Gonstead, and upper cervical manipulations are frequently used methods. Reviews of research studies within the chiropractic community have been used to generate practice guidelines outlining standards that specify which chiropractic treatments are legitimate (i.e. supported by evidence) and conceivably reimbursable under managed care health payment systems. Evidence-based guidelines are supported by one end of an ideological continuum among chiropractors; the other end employs antiscientific reasoning and makes unsubstantiated claims. Chiropractic remains at a crossroads; in order to progress, it would need to embrace science, and the promotion by some of chiropractic as a cure-all has been described as both "misguided and irrational". A 2007 survey of Alberta chiropractors found that they do not consistently apply research in practice, which may have resulted from a lack of research education and skills. Specific guidelines concerning the treatment of nonspecific (i.e., unknown cause) low back pain are inconsistent between countries. Numerous controlled clinical studies of treatments used by chiropractors have been conducted, with varied results. There is no conclusive evidence that chiropractic manipulative treatment is effective for the treatment of any medical condition, except perhaps for certain kinds of back pain. Generally, the research carried out into the effectiveness of chiropractic has been of poor quality. Research published by chiropractors is distinctly biased: reviews of SM for back pain tended to find positive conclusions when authored by chiropractors, while reviews by mainstream authors did not. There is a wide range of ways to measure treatment outcomes. Chiropractic care benefits from the placebo response, but it is difficult to construct a trustworthy placebo for clinical trials of spinal manipulative therapy (SMT). The efficacy of maintenance care in chiropractic is unknown. 
Available evidence covers a number of specific conditions. The World Health Organization found chiropractic care in general is safe when employed skillfully and appropriately. There is not sufficient data to establish the safety of chiropractic manipulations. Manipulation is regarded as relatively safe but complications can arise, and it has known adverse effects, risks and contraindications. Absolute contraindications to spinal manipulative therapy are conditions that should not be manipulated; these contraindications include rheumatoid arthritis and conditions known to result in unstable joints. Relative contraindications are conditions where increased risk is acceptable in some situations and where low-force and soft-tissue techniques are treatments of choice; these contraindications include osteoporosis. Although most contraindications apply only to manipulation of the affected region, some neurological signs indicate referral to emergency medical services; these include sudden and severe headache or neck pain unlike that previously experienced. Indirect risks of chiropractic involve delayed or missed diagnoses through consulting a chiropractor. Spinal manipulation is associated with frequent, mild and temporary adverse effects, including new or worsening pain or stiffness in the affected region. They have been estimated to occur in 33% to 61% of patients, and frequently occur within an hour of treatment and disappear within 24 to 48 hours; adverse reactions appear to be more common following manipulation than mobilization. The most frequently stated adverse effects are mild headache, soreness, and briefly elevated pain and fatigue. Chiropractic is correlated with a very high incidence of minor adverse effects. Rarely, spinal manipulation, particularly on the upper spine, can also result in complications that can lead to permanent disability or death; these can occur in adults and children. Estimates vary widely for the incidence of these complications, and the actual incidence is unknown, due to high levels of underreporting and to the difficulty of linking manipulation to adverse effects such as stroke, which is a particular concern. Adverse effects are poorly reported in recent studies investigating chiropractic manipulations. A 2016 systematic review concludes that the level of reporting is unsuitable and unacceptable. Serious adverse events resulting from spinal manipulation therapy of the lumbopelvic region have been reported. Estimates for serious adverse events vary from 5 strokes per 100,000 manipulations to 1.46 serious adverse events per 10 million manipulations and 2.68 deaths per 10 million manipulations, though it was determined that there was inadequate data to be conclusive. Several case reports show temporal associations between interventions and potentially serious complications. The published medical literature contains reports of 26 deaths since 1934 following chiropractic manipulations, and many more seem to remain unpublished. Vertebrobasilar artery stroke (VAS) is statistically associated with chiropractic services in persons under 45 years of age, but it is similarly associated with general practitioner services, suggesting that these associations are likely explained by preexisting conditions. Weak to moderately strong evidence supports causation (as opposed to statistical association) between cervical manipulative therapy (CMT) and VAS. There is insufficient evidence to support a strong association or no association between cervical manipulation and stroke. 
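Before turning to the mechanism debate, it is worth noting how wide the adverse-event estimates quoted above are once they are put on a common denominator. The following is a quick illustrative calculation using only the figures given in the text, with the per-manipulation rates taken at face value:

```python
# Normalize the reported serious-adverse-event estimates to a common
# denominator (events per 10 million manipulations), using only the
# figures quoted in the text above.
estimates = {
    "5 strokes per 100,000 manipulations": 5 / 100_000,
    "1.46 serious adverse events per 10 million": 1.46 / 10_000_000,
    "2.68 deaths per 10 million": 2.68 / 10_000_000,
}

for label, per_manipulation in estimates.items():
    per_10m = per_manipulation * 10_000_000
    print(f"{label}: {per_10m:,.2f} per 10 million manipulations")

# The highest and lowest figures differ by a factor of roughly 340,
# which underlines the text's point that the data are inadequate.
```

The two orders of magnitude separating these published figures are themselves a measure of how poorly the underlying incidence is known.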
While the biomechanical evidence is not sufficient to support the statement that CMT causes cervical artery dissection (CD), clinical reports suggest that mechanical forces have a part in a substantial number of CDs and the majority of population controlled studies found an association between CMT and VAS in young people. It is strongly recommended that practitioners consider the plausibility of CD as a symptom, and that patients be informed of the association between CD and CMT, before administering manipulation of the cervical spine. There is controversy regarding the degree of risk of stroke from cervical manipulation. Many chiropractors state that the association between chiropractic therapy and vertebral arterial dissection is not proven. However, it has been suggested that the causality between chiropractic cervical manipulation beyond the normal range of motion and vascular accidents is probable or definite. There is very low evidence supporting a small association between internal carotid artery dissection and chiropractic neck manipulation. The incidence of internal carotid artery dissection following cervical spine manipulation is unknown. The literature infrequently reports helpful data to better understand the association between cervical manipulative therapy, cervical artery dissection and stroke. The limited evidence is inconclusive as to whether chiropractic spinal manipulation therapy is a cause of intracranial hypotension. Cervical intradural disc herniation is very rare following spinal manipulation therapy. Chiropractors sometimes employ diagnostic imaging techniques such as X-rays and CT scans that rely on ionizing radiation. Although there is no clear evidence to justify the practice, some chiropractors still X-ray a patient several times a year. Practice guidelines aim to reduce unnecessary radiation exposure, which increases cancer risk in proportion to the amount of radiation received. Research suggests that radiology instruction given at chiropractic schools worldwide seems to be evidence-based. However, there seems to be a disparity between some schools' teaching and the available evidence regarding radiography for patients with acute low back pain without an indication of a serious disease, which may contribute to chiropractic overuse of radiography for low back pain. A 2012 systematic review concluded that no accurate assessment of risk-benefit exists for cervical manipulation. A 2010 systematic review stated that there is no good evidence to assume that neck manipulation is an effective treatment for any medical condition and suggested a precautionary principle in healthcare for chiropractic intervention even if a causality with vertebral artery dissection after neck manipulation were merely a remote possibility. The same review concluded that the risk of death from manipulations to the neck outweighs the benefits. Chiropractors have criticized this conclusion, claiming that the author did not evaluate the potential benefits of spinal manipulation. Edzard Ernst stated "This detail was not the subject of my review. I do, however, refer to such evaluations and should add that a report recently commissioned by the General Chiropractic Council did not support many of the outlandish claims made by many chiropractors across the world." 
A 1999 review of 177 previously reported cases published between 1925 and 1997 in which injuries were attributed to manipulation of the cervical spine (MCS) concluded that "The literature does not demonstrate that the benefits of MCS outweigh the risks." The professions associated with each injury were assessed. Physical therapists (PT) were involved in less than 2% of all cases, with no deaths caused by PTs. Chiropractors were involved in a little more than 60% of all cases, including 32 deaths. A 2009 review evaluating maintenance chiropractic care found that spinal manipulation is associated with considerable harm and no compelling evidence exists to indicate that it adequately prevents symptoms or diseases, thus the risk-benefit is not evidently favorable. A 2012 systematic review suggested that the use of spinal manipulation in clinical practice is a cost-effective treatment when used alone or in combination with other treatment approaches. A 2011 systematic review found evidence supporting the cost-effectiveness of using spinal manipulation for the treatment of sub-acute or chronic low back pain; the results for acute low back pain were insufficient. A 2006 systematic cost-effectiveness review found that the reported cost-effectiveness of spinal manipulation in the United Kingdom compared favorably with other treatments for back pain, but that reports were based on data from clinical trials without placebo controls and that the specific cost-effectiveness of the treatment (as opposed to non-specific effects) remains uncertain. A 2005 American systematic review of economic evaluations of conservative treatments for low back pain found that significant quality problems in available studies meant that definite conclusions could not be drawn about the most cost-effective intervention. The cost-effectiveness of maintenance chiropractic care is unknown. An analysis of clinical and cost utilization data from the years 2003 to 2005 by an integrative medicine independent physician association (IPA), based on 70,274 member-months over a 7-year period, found that the use of chiropractic services decreased patient costs by 60% for in-hospital admissions, 59% for hospital days, 62% for outpatient surgeries and procedures, and 85% for pharmaceutical costs when compared with conventional medicine (visits to a medical doctor as primary care provider) IPA performance for the same health maintenance organization product in the same geography and time frame. Requirements vary between countries. In the U.S., chiropractors obtain a non-medical accredited diploma in the field of chiropractic. Chiropractic education in the U.S. has been criticized for failing to meet generally accepted standards of evidence-based medicine. The curriculum content of North American chiropractic and medical colleges with regard to basic and clinical sciences has little similarity, both in the kinds of subjects offered and in the time assigned to each subject. Accredited chiropractic programs in the U.S. require that applicants have 90 semester hours of undergraduate education with a grade point average of at least 3.0 on a 4.0 scale. Many programs require at least three years of undergraduate education, and more are requiring a bachelor's degree. 
Canada requires a minimum three years of undergraduate education for applicants, and at least 4200 instructional hours (or the equivalent) of full‐time chiropractic education for matriculation through an accredited chiropractic program. Graduates of the Canadian Memorial Chiropractic College (CMCC) are formally recognized to have at least 7–8 years of university level education. The World Health Organization (WHO) guidelines suggest three major full-time educational paths culminating in either a DC, DCM, BSc, or MSc degree. Besides the full-time paths, they also suggest a conversion program for people with other health care education and limited training programs for regions where no legislation governs chiropractic. Upon graduation, there may be a requirement to pass national, state, or provincial board examinations before being licensed to practice in a particular jurisdiction. Depending on the location, continuing education may be required to renew these licenses. Specialty training is available through part-time postgraduate education programs such as chiropractic orthopedics and sports chiropractic, and through full-time residency programs such as radiology or orthopedics. In the U.S., chiropractic schools are accredited through the Council on Chiropractic Education (CCE) while the General Chiropractic Council (GCC) is the statutory governmental body responsible for the regulation of chiropractic in the UK. The U.S. CCE requires a mixing curriculum, which means a straight-educated chiropractor may not be eligible for licensing in states requiring CCE accreditation. CCEs in the U.S., Canada, Australia and Europe have joined to form CCE-International (CCE-I) as a model of accreditation standards with the goal of having credentials portable internationally. Today, there are 18 accredited Doctor of Chiropractic programs in the U.S., 2 in Canada, 6 in Australasia, and 5 in Europe. All but one of the chiropractic colleges in the U.S. are privately funded, but in several other countries they are in government-sponsored universities and colleges. Of the two chiropractic colleges in Canada, one is publicly funded (UQTR) and one is privately funded (CMCC). In 2005, CMCC was granted the privilege of offering a professional health care degree under the Post-secondary Education Choice and Excellence Act, which sets the program within the hierarchy of education in Canada as comparable to that of other primary contact health care professions such as medicine, dentistry and optometry. Regulatory colleges and chiropractic boards in the U.S., Canada, Mexico, and Australia are responsible for protecting the public, standards of practice, disciplinary issues, quality assurance and maintenance of competency. There are an estimated 49,000 chiropractors in the U.S. (2008), 6,500 in Canada (2010), 2,500 in Australia (2000), and 1,500 in the UK (2000). Chiropractors often argue that this education is as good as or better than medical physicians', but most chiropractic training is confined to classrooms with much time spent learning theory, adjustment, and marketing. The fourth year of chiropractic education persistently showed the highest stress levels. Every student, irrespective of year, experienced different ranges of stress when studying. The chiropractic leaders and colleges have had internal struggles. Rather than cooperation, there has been infighting between different factions. 
A number of these actions amounted to posturing, a result of the secretive manner in which the chiropractic colleges competed to enroll students. The chiropractic oath is a modern variation of the classical Hippocratic Oath historically taken by physicians and other healthcare professionals swearing to practice their professions ethically. The American Chiropractic Association (ACA) has an ethical code "based upon the acknowledgement that the social contract dictates the profession's responsibilities to the patient, the public, and the profession; and upholds the fundamental principle that the paramount purpose of the chiropractic doctor's professional services shall be to benefit the patient." The International Chiropractors Association (ICA) also has a set of professional canons. A 2008 commentary proposed that the chiropractic profession actively regulate itself to combat abuse, fraud, and quackery, which are more prevalent in chiropractic than in other health care professions, violating the social contract between patients and physicians. According to a 2015 Gallup poll of U.S. adults, the perception of chiropractors is generally favorable; two-thirds of American adults agree that chiropractors have their patients' best interests in mind and more than half also agree that most chiropractors are trustworthy. Less than 10% of US adults disagreed with the statement that chiropractors were trustworthy. Chiropractors, especially in America, have a reputation for unnecessarily treating patients. In many circumstances the focus seems to be on economics instead of health care. Sustained chiropractic care is promoted as a preventive tool, but unnecessary manipulation could possibly present a risk to patients. Some chiropractors are concerned about the routine, unjustified claims other chiropractors have made. A 2010 analysis of chiropractic websites found the majority of chiropractors and their associations made claims of effectiveness not supported by scientific evidence, while 28% of chiropractor websites advocated lower back pain care, which has some sound evidence. The US Office of the Inspector General (OIG) estimated that for calendar year 2013, 82% of payments to chiropractors under Medicare Part B, a total of $359 million, did not comply with Medicare requirements. There have been at least 15 OIG reports about chiropractic billing irregularities since 1986. In 2009, a backlash to the libel suit filed by the British Chiropractic Association (BCA) against Simon Singh inspired the filing of formal complaints of false advertising against more than 500 individual chiropractors within one 24-hour period, prompting the McTimoney Chiropractic Association to write to its members advising them to remove leaflets that make claims about whiplash and colic from their practice, to be wary of new patients and telephone inquiries, and telling their members: "If you have a website, take it down NOW" and "Finally, we strongly suggest you do NOT discuss this with others, especially patients." An editorial in Nature suggested that the BCA may have been trying to suppress debate and that this use of English libel law was a burden on the right to freedom of expression, which is protected by the European Convention on Human Rights. The libel case ended with the BCA withdrawing its suit in 2010. Chiropractic is established in the U.S., Canada, and Australia, and is present to a lesser extent in many other countries. 
It is viewed as a marginal, clinically unproven form of complementary and alternative medicine, which has not been integrated into mainstream medicine. In Australia, there are approximately 2488 chiropractors, or one chiropractor for every 7980 people. Most private health insurance funds in Australia cover chiropractic care, and the federal government funds chiropractic care when the patient is referred by a medical practitioner. In 2014, the chiropractic profession had a registered workforce of 4,684 practitioners in Australia represented by two major organizations – the Chiropractors' Association of Australia (CAA) and the Chiropractic and Osteopathic College of Australasia (COCA). Annual expenditure on chiropractic care (alone or combined with osteopathy) in Australia is estimated at AUD$750–988 million, with musculoskeletal complaints such as back and neck pain making up the bulk of consultations; and proportional expenditure is similar to that found in other countries. While Medicare (the Australian publicly funded universal health care system) coverage of chiropractic services is limited to only those directed by a medical referral to assist chronic disease management, most private health insurers in Australia do provide partial reimbursement for a wider range of chiropractic services in addition to limited third party payments for workers compensation and motor vehicle accidents. Of the 2,005 chiropractors who participated in a 2015 survey, 62.4% were male and the average age was 42.1 (SD = 12.1) years. Nearly all chiropractors (97.1%) had a bachelor's degree or higher, with the majority of chiropractors' highest professional qualification being a bachelor's or double bachelor's degree (34.6%), followed by a master's degree (32.7%), Doctor of Chiropractic (28.9%) or PhD (0.9%). For only a small number of chiropractors was the highest professional qualification a diploma (2.1%) or advanced diploma (0.8%). In Germany, chiropractic may be offered by medical doctors and alternative practitioners. Chiropractors qualified abroad must obtain a German non-medical practitioner license. Authorities have routinely required a comprehensive knowledge test for this, but in the recent past, some administrative courts have ruled that training abroad should be recognized. In Switzerland, only trained medical professionals are allowed to offer chiropractic. There are 300 chiropractors in Switzerland. In the United Kingdom, there are over 2,000 chiropractors, representing one chiropractor per 29,206 people. Chiropractic is available on the National Health Service in some areas, such as Cornwall, where the treatment is only available for neck or back pain. A 2010 study by questionnaire presented to UK chiropractors indicated that only 45% of chiropractors disclosed to patients the serious risk associated with manipulation of the cervical spine and that 46% believed there was a possibility that patients would refuse treatment if the risks were correctly explained. However, 80% acknowledged the ethical/moral responsibility to disclose risk to patients. The percentage of the population that utilizes chiropractic care at any given time generally falls into a range from 6% to 12% in the U.S. and Canada, with a global high of 20% in Alberta in 2006. In 2008, chiropractors were reported to be the most common CAM providers for children and adolescents, these patients representing up to 14% of all visits to chiropractors. There were around 50,330 chiropractors practicing in North America in 2000. 
By 2008, this number had increased by almost 20% to around 60,000 chiropractors. In 2002–03, the majority of those who sought chiropractic did so for relief from back and neck pain and other neuromusculoskeletal complaints; most did so specifically for low back pain. The majority of U.S. chiropractors participate in some form of managed care. Although the majority of U.S. chiropractors view themselves as specialists in neuromusculoskeletal conditions, many also consider chiropractic as a type of primary care. In the majority of cases, the care that chiropractors and physicians provide divides the market; for some patients, however, their care is complementary. In the U.S., chiropractors perform over 90% of all manipulative treatments. Satisfaction rates are typically higher for chiropractic care compared to medical care, with a 1998 U.S. survey reporting 83% of respondents satisfied or very satisfied with their care; quality of communication seems to be a consistent predictor of patient satisfaction with chiropractors. Utilization of chiropractic care is sensitive to the co-payment costs incurred by the patient. The use of chiropractic declined from 9.9% of U.S. adults in 1997 to 7.4% in 2002; this was the largest relative decrease among CAM professions, which overall had a stable use rate. As of 2007, 7% of the U.S. population was being reached by chiropractic. Chiropractors were the third largest medical profession in the US in 2002, following physicians and dentists. Employment of U.S. chiropractors was expected to increase 14% between 2006 and 2016, faster than the average for all occupations. In the U.S., most states require insurers to cover chiropractic care, and most HMOs cover these services. Chiropractic's origins lie in the folk medicine practice of bonesetting, in which untrained practitioners engaged in joint manipulation or resetting fractured bones. Chiropractic was founded in 1895 by Daniel David (D. D.) Palmer in Davenport, Iowa. Palmer, a magnetic healer, hypothesized that manual manipulation of the spine could cure disease. The first chiropractic patient of D. D. Palmer was Harvey Lillard, a worker in the building where Palmer's office was located. Lillard claimed that he had severely reduced hearing for 17 years, which had started shortly after a "pop" in his spine. A few days following his adjustment, Lillard claimed his hearing was almost completely restored. Another of Palmer's patients, Samuel Weed, coined the term chiropractic, from Greek χειρο- chiro- 'hand' (itself from χείρ cheir 'hand') and πρακτικός praktikos 'practical'. Chiropractic is classified as a field of pseudomedicine on account of its esoteric origins. Chiropractic competed with its predecessor osteopathy, another medical system based on magnetic healing; both systems were founded by charismatic midwesterners in opposition to the conventional medicine of the day, and both postulated that manipulation improved health. Although initially keeping chiropractic a family secret, in 1898 Palmer began teaching it to a few students at his new Palmer School of Chiropractic. One student, his son Bartlett Joshua (B. J.) Palmer, became committed to promoting chiropractic, took over the Palmer School in 1906, and rapidly expanded its enrollment. Early chiropractors believed that all disease was caused by interruptions in the flow of innate intelligence, a vitalistic nervous energy or life force that represented God's presence in man; chiropractic leaders often invoked religious imagery and moral traditions. D. D. 
Palmer said he "received chiropractic from the other world". D. D. and B. J. both seriously considered declaring chiropractic a religion, which might have provided legal protection under the U.S. constitution, but decided against it partly to avoid confusion with Christian Science. Early chiropractors also tapped into the Populist movement, emphasizing craft, hard work, competition, and advertisement, aligning themselves with the common man against intellectuals and trusts, among which they included the American Medical Association (AMA). Chiropractic has seen considerable controversy and criticism. Although D. D. and B. J. were "straight" and disdained the use of instruments, some early chiropractors, whom B. J. scornfully called "mixers", advocated the use of instruments. In 1910, B. J. changed course and endorsed X-rays as necessary for diagnosis; this resulted in a significant exodus from the Palmer School of the more conservative faculty and students. The mixer camp grew until by 1924 B. J. estimated that only 3,000 of the United States' 25,000 chiropractors remained straight. That year, B. J.'s invention and promotion of the neurocalometer, a temperature-sensing device, was highly controversial among B. J.'s fellow straights. By the 1930s, chiropractic was the largest alternative healing profession in the U.S. Chiropractors faced heavy opposition from organized medicine. D. D. Palmer was jailed in 1907 for practicing medicine without a license. Thousands of chiropractors were prosecuted for practicing medicine without a license, and D. D. and many other chiropractors were jailed. To defend against medical statutes, B. J. argued that chiropractic was separate and distinct from medicine, asserting that chiropractors "analyzed" rather than "diagnosed", and "adjusted" subluxations rather than "treated" disease. B. J. cofounded the Universal Chiropractors' Association (UCA) to provide legal services to arrested chiropractors. Although the UCA won their first test case in Wisconsin in 1907, prosecutions instigated by state medical boards became increasingly common and in many cases were successful. In response, chiropractors conducted political campaigns to secure separate licensing statutes, eventually succeeding in all fifty states, from Kansas in 1913 through Louisiana in 1974. The longstanding feud between chiropractors and medical doctors continued for decades. The AMA labeled chiropractic an "unscientific cult" in 1966, and until 1980 advised its members that it was unethical for medical doctors to associate with "unscientific practitioners". This culminated in a landmark 1987 decision, Wilk v. AMA, in which the court found that the AMA had engaged in unreasonable restraint of trade and conspiracy, and which ended the AMA's de facto boycott of chiropractic. Serious research to test chiropractic theories did not begin until the 1970s, and is continuing to be hampered by antiscientific and pseudoscientific ideas that sustained the profession in its long battle with organized medicine. By the mid-1990s there was a growing scholarly interest in chiropractic, which helped efforts to improve service quality and establish clinical guidelines that recommended manual therapies for acute low back pain. In recent decades chiropractic gained legitimacy and greater acceptance by medical physicians and health plans, and enjoyed a strong political base and sustained demand for services. 
However, its future seemed uncertain: as the number of practitioners grew, evidence-based medicine insisted on treatments with demonstrated value, managed care restricted payment, and competition grew from massage therapists and other health professions. The profession responded by marketing natural products and devices more aggressively, and by reaching deeper into alternative medicine and primary care. Some chiropractors oppose vaccination and water fluoridation, which are common public health practices. Within the chiropractic community there are significant disagreements about vaccination, one of the most cost-effective public health interventions available. Most chiropractic writings on vaccination focus on its negative aspects, claiming that it is hazardous, ineffective, and unnecessary. Some chiropractors have embraced vaccination, but a significant portion of the profession rejects it, as original chiropractic philosophy traces diseases to causes in the spine and states that vaccines interfere with healing. The extent to which anti-vaccination views persist within the current chiropractic profession is uncertain. The American Chiropractic Association and the International Chiropractors Association support individual exemptions to compulsory vaccination laws, and a 1995 survey of U.S. chiropractors found that about a third believed there was no scientific proof that immunization prevents disease. The Canadian Chiropractic Association supports vaccination; a survey in Alberta in 2002 found that 25% of chiropractors advised patients for, and 27% against, vaccinating themselves or their children. Early opposition to water fluoridation included chiropractors, some of whom continue to oppose it as being incompatible with chiropractic philosophy and an infringement of personal freedom. Other chiropractors have actively promoted fluoridation, and several chiropractic organizations have endorsed scientific principles of public health. In addition to traditional chiropractic opposition to water fluoridation and vaccination, chiropractors' attempts to establish a positive reputation for their public health role are also compromised by their reputation for recommending repetitive lifelong chiropractic treatment. Throughout its history chiropractic has been the subject of internal and external controversy and criticism. According to Daniel D. Palmer, the founder of chiropractic, subluxation is the sole cause of disease and manipulation is the cure for all diseases of the human race. A 2003 profession-wide survey found "most chiropractors (whether 'straights' or 'mixers') still hold views of innate intelligence and of the cause and cure of disease (not just back pain) consistent with those of the Palmers." A critical evaluation stated "Chiropractic is rooted in mystical concepts. This led to an internal conflict within the chiropractic profession, which continues today." Chiropractors, including D. D. Palmer, were jailed for practicing medicine without a license. For most of its existence, chiropractic has battled with mainstream medicine, sustained by antiscientific and pseudoscientific ideas such as subluxation. Collectively, systematic reviews have not demonstrated that spinal manipulation, the main treatment method employed by chiropractors, is effective for any medical condition, with the possible exception of treatment for back pain. Chiropractic remains controversial, though to a lesser extent than in past years.
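The practitioner-count and utilization percentages quoted above are simple relative changes and can be verified directly. The short Python sketch below is an illustrative addition, not part of the source article; the figures are those cited in this text (about 50,330 North American chiropractors in 2000 growing to around 60,000 in 2008, and U.S. adult utilization falling from 9.9% in 1997 to 7.4% in 2002), and the helper name is arbitrary.

# Sanity check of the relative changes cited above; figures come from the
# article itself, and the function name is illustrative only.

def relative_change(old: float, new: float) -> float:
    """Return the relative change from old to new, as a percentage of old."""
    return (new - old) / old * 100.0

# Growth in North American practitioner numbers, 2000 -> 2008.
print(f"{relative_change(50_330, 60_000):+.1f}%")  # +19.2%, i.e. "almost 20%"

# Decline in U.S. adult utilization, 1997 -> 2002.
print(f"{relative_change(9.9, 7.4):+.1f}%")        # -25.3% relative decrease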
[ { "paragraph_id": 0, "text": "Chiropractic is a form of alternative medicine concerned with the diagnosis, treatment and prevention of mechanical disorders of the musculoskeletal system, especially of the spine. It has esoteric origins and is based on several pseudoscientific ideas.", "title": "" }, { "paragraph_id": 1, "text": "Many chiropractors, especially those in the field's early history, have proposed that mechanical disorders of the joints, especially of the spine, affect general health, and that regular manipulation of the spine (spinal adjustment) improves general health. The main chiropractic treatment technique involves manual therapy, especially manipulation of the spine, other joints, and soft tissues, but may also include exercises and health and lifestyle counseling. A chiropractor may have a Doctor of Chiropractic (D.C.) degree and be referred to as \"doctor\" but is not a Doctor of Medicine (M.D.) or a Doctor of Osteopathic Medicine (D.O.). While many chiropractors view themselves as primary care providers, chiropractic clinical training does not meet the requirements for that designation.", "title": "" }, { "paragraph_id": 2, "text": "Systematic reviews of controlled clinical studies of treatments used by chiropractors have found no evidence that chiropractic manipulation is effective, with the possible exception of treatment for back pain. A 2011 critical evaluation of 45 systematic reviews concluded that the data included in the study \"fail[ed] to demonstrate convincingly that spinal manipulation is an effective intervention for any condition.\" Spinal manipulation may be cost-effective for sub-acute or chronic low back pain, but the results for acute low back pain were insufficient. No compelling evidence exists to indicate that maintenance chiropractic care adequately prevents symptoms or diseases.", "title": "" }, { "paragraph_id": 3, "text": "There is not sufficient data to establish the safety of chiropractic manipulations. It is frequently associated with mild to moderate adverse effects, with serious or fatal complications in rare cases. There is controversy regarding the degree of risk of vertebral artery dissection, which can lead to stroke and death, from cervical manipulation. Several deaths have been associated with this technique and it has been suggested that the relationship is causative, a claim which is disputed by many chiropractors.", "title": "" }, { "paragraph_id": 4, "text": "Chiropractic is well established in the United States, Canada, and Australia. It overlaps with other manual-therapy professions such as osteopathy and physical therapy. Most who seek chiropractic care do so for low back pain. Back and neck pain are considered the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. Chiropractic has two main groups: \"straights\", now the minority, emphasize vitalism, \"Innate Intelligence\", and consider vertebral subluxations to be the cause of all disease; and \"mixers\", the majority, are more open to mainstream views and conventional medical techniques, such as exercise, massage, and ice therapy.", "title": "" }, { "paragraph_id": 5, "text": "D. D. Palmer founded chiropractic in the 1890s, after saying he received it from \"the other world\"; Palmer maintained that the tenets of chiropractic were passed along to him by a doctor who had died 50 years previously. His son B. J. Palmer helped to expand chiropractic in the early 20th century. 
Throughout its history, chiropractic has been controversial. Its foundation is at odds with evidence-based medicine, and has been sustained by pseudoscientific ideas such as vertebral subluxation and Innate Intelligence. Despite the overwhelming evidence that vaccination is an effective public health intervention, among chiropractors there are significant disagreements over the subject, which has led to negative impacts on both public vaccination and mainstream acceptance of chiropractic. The American Medical Association called chiropractic an \"unscientific cult\" in 1966 and boycotted it until losing an antitrust case in 1987. Chiropractic has had a strong political base and sustained demand for services. In the last decades of the twentieth century, it gained more legitimacy and greater acceptance among conventional physicians and health plans in the United States. During the COVID-19 pandemic, chiropractic professional associations advised chiropractors to adhere to CDC, WHO, and local health department guidance. Despite these recommendations, a small but vocal and influential number of chiropractors spread vaccine misinformation.", "title": "" }, { "paragraph_id": 6, "text": "Chiropractic is generally categorized as complementary and alternative medicine (CAM), which focuses on manipulation of the musculoskeletal system, especially the spine. Its founder, D. D. Palmer, called it \"a science of healing without drugs\".", "title": "Conceptual basis" }, { "paragraph_id": 7, "text": "Chiropractic's origins lie in the folk medicine of bonesetting, and as it evolved it incorporated vitalism, spiritual inspiration and rationalism. Its early philosophy was based on deduction from irrefutable doctrine, which helped distinguish chiropractic from medicine, provided it with legal and political defenses against claims of practicing medicine without a license, and allowed chiropractors to establish themselves as an autonomous profession. This \"straight\" philosophy, taught to generations of chiropractors, rejects the inferential reasoning of the scientific method, and relies on deductions from vitalistic first principles rather than on the materialism of science. However, most practitioners tend to incorporate scientific research into chiropractic, and most practitioners are \"mixers\" who attempt to combine the materialistic reductionism of science with the metaphysics of their predecessors and with the holistic paradigm of wellness. A 2008 commentary proposed that chiropractic actively divorce itself from the straight philosophy as part of a campaign to eliminate untestable dogma and engage in critical thinking and evidence-based research.", "title": "Conceptual basis" }, { "paragraph_id": 8, "text": "Although a wide diversity of ideas exist among chiropractors, they share the belief that the spine and health are related in a fundamental way, and that this relationship is mediated through the nervous system. Some chiropractors claim spinal manipulation can have an effect on a variety of ailments such as irritable bowel syndrome and asthma.", "title": "Conceptual basis" }, { "paragraph_id": 9, "text": "Chiropractic philosophy includes the following perspectives:", "title": "Conceptual basis" }, { "paragraph_id": 10, "text": "Holism assumes that health is affected by everything in an individual's environment; some sources also include a spiritual or existential dimension. 
In contrast, reductionism in chiropractic reduces causes and cures of health problems to a single factor, vertebral subluxation. Homeostasis emphasizes the body's inherent self-healing abilities. Chiropractic's early notion of innate intelligence can be thought of as a metaphor for homeostasis.", "title": "Conceptual basis" }, { "paragraph_id": 11, "text": "A large number of chiropractors fear that if they do not separate themselves from the traditional vitalistic concept of innate intelligence, chiropractic will continue to be seen as a fringe profession. A variant of chiropractic called naprapathy originated in Chicago in the early twentieth century. It holds that manual manipulation of soft tissue can reduce \"interference\" in the body and thus improve health.", "title": "Conceptual basis" }, { "paragraph_id": 12, "text": "Straight chiropractors adhere to the philosophical principles set forth by D. D. and B. J. Palmer, and retain metaphysical definitions and vitalistic qualities. Straight chiropractors believe that vertebral subluxation leads to interference with an \"innate intelligence\" exerted via the human nervous system and is a primary underlying risk factor for many diseases. Straights view the medical diagnosis of patient complaints, which they consider to be the \"secondary effects\" of subluxations, as unnecessary for chiropractic treatment. Thus, straight chiropractors are concerned primarily with the detection and correction of vertebral subluxation via adjustment and do not \"mix\" other types of therapies into their practice style. Their philosophy and explanations are metaphysical in nature and they prefer to use traditional chiropractic terminology such as \"perform spinal analysis\", \"detect subluxation\", \"correct with adjustment\". They prefer to remain separate and distinct from mainstream health care. Although considered the minority group, \"they have been able to transform their status as purists and heirs of the lineage into influence dramatically out of proportion to their numbers.\"", "title": "Conceptual basis" }, { "paragraph_id": 13, "text": "Mixer chiropractors \"mix\" diagnostic and treatment approaches from chiropractic, medical or osteopathic viewpoints and make up the majority of chiropractors. Unlike straight chiropractors, mixers believe subluxation is one of many causes of disease, and hence they tend to be open to mainstream medicine. Many of them incorporate mainstream medical diagnostics and employ conventional treatments including techniques of physical therapy such as exercise, stretching, massage, ice packs, electrical muscle stimulation, therapeutic ultrasound, and moist heat. Some mixers also use techniques from alternative medicine, including nutritional supplements, acupuncture, homeopathy, herbal remedies, and biofeedback.", "title": "Conceptual basis" }, { "paragraph_id": 14, "text": "Although mixers are the majority group, many of them retain belief in vertebral subluxation as shown in a 2003 survey of 1,100 North American chiropractors, which found that 88 percent wanted to retain the term \"vertebral subluxation complex\", and that when asked to estimate the percent of disorders of internal organs that subluxation significantly contributes to, the mean response was 62 percent.
A 2008 survey of 6,000 American chiropractors demonstrated that most chiropractors seem to believe that a subluxation-based clinical approach may be of limited utility for addressing visceral disorders, and greatly favor non-subluxation-based clinical approaches for such conditions. The same survey showed that most chiropractors generally believed that the majority of their clinical approach for addressing musculoskeletal/biomechanical disorders such as back pain was based on subluxation. Chiropractors often offer conventional therapies such as physical therapy and lifestyle counseling, and it may be difficult for the lay person to distinguish the unscientific from the scientific.", "title": "Conceptual basis" }, { "paragraph_id": 15, "text": "In science-based medicine, the term \"subluxation\" refers to an incomplete or partial dislocation of a joint, from the Latin luxare for 'dislocate'. While medical doctors use the term exclusively to refer to physical dislocations, chiropractic founder D. D. Palmer imbued the word subluxation with a metaphysical and philosophical meaning drawn from pseudoscientific traditions such as vitalism.", "title": "Conceptual basis" }, { "paragraph_id": 16, "text": "Palmer claimed that vertebral subluxations interfered with the body's function and its inborn ability to heal itself. D. D. Palmer repudiated his earlier theory that vertebral subluxations caused pinched nerves in the intervertebral spaces in favor of subluxations causing altered nerve vibration, either too tense or too slack, affecting the tone (health) of the end organ. He qualified this by noting that knowledge of innate intelligence was not essential to the competent practice of chiropractic. This concept was later expanded upon by his son, B. J. Palmer, and was instrumental in providing the legal basis for differentiating chiropractic from conventional medicine. In 1910, D. D. Palmer theorized that the nervous system controlled health:", "title": "Conceptual basis" }, { "paragraph_id": 17, "text": "Physiologists divide nerve-fibers, which form the nerves, into two classes, afferent and efferent. Impressions are made on the peripheral afferent fiber-endings; these create sensations that are transmitted to the center of the nervous system. Efferent nerve-fibers carry impulses out from the center to their endings. Most of these go to muscles and are therefore called motor impulses; some are secretory and enter glands; a portion are inhibitory, their function being to restrain secretion. Thus, nerves carry impulses outward and sensations inward. The activity of these nerves, or rather their fibers, may become excited or allayed by impingement, the result being a modification of functionality – too much or not enough action – which is disease.", "title": "Conceptual basis" }, { "paragraph_id": 18, "text": "Vertebral subluxation, a core concept of traditional chiropractic, remains unsubstantiated and largely untested, and a debate about whether to keep it in the chiropractic paradigm has been ongoing for decades. In general, critics of traditional subluxation-based chiropractic (including chiropractors) are skeptical of its clinical value, dogmatic beliefs and metaphysical approach. While straight chiropractic still retains the traditional vitalistic construct espoused by the founders, evidence-based chiropractic suggests that a mechanistic view will allow chiropractic care to become integrated into the wider health care community.
This remains a source of debate within the chiropractic profession as well, with some schools of chiropractic still teaching the traditional/straight subluxation-based chiropractic, while others have moved towards an evidence-based chiropractic that rejects metaphysical foundations and limits itself primarily to neuromusculoskeletal conditions.", "title": "Conceptual basis" }, { "paragraph_id": 19, "text": "In 2005, the chiropractic subluxation was defined by the World Health Organization as \"a lesion or dysfunction in a joint or motion segment in which alignment, movement integrity and/or physiological function are altered, although contact between joint surfaces remains intact. It is essentially a functional entity, which may influence biomechanical and neural integrity.\" This differs from the medical definition of subluxation as a significant structural displacement, which can be seen with static imaging techniques such as X-rays. The use of X-ray imaging in the case of vertebral subluxation exposes patients to harmful ionizing radiation for no evidentially supported reason. The 2008 book Trick or Treatment states \"X-rays can reveal neither the subluxations nor the innate intelligence associated with chiropractic philosophy, because they do not exist.\" Attorney David Chapman-Smith, Secretary-General of the World Federation of Chiropractic, has stated that \"Medical critics have asked how there can be a subluxation if it cannot be seen on X-ray. The answer is that the chiropractic subluxation is essentially a functional entity, not structural, and is therefore no more visible on static X-ray than a limp or headache or any other functional problem.\" The General Chiropractic Council, the statutory regulatory body for chiropractors in the United Kingdom, states that the chiropractic vertebral subluxation complex \"is not supported by any clinical research evidence that would allow claims to be made that it is the cause of disease.\"", "title": "Conceptual basis" }, { "paragraph_id": 20, "text": "As of 2014, the US National Board of Chiropractic Examiners states \"The specific focus of chiropractic practice is known as the chiropractic subluxation or joint dysfunction. A subluxation is a health concern that manifests in the skeletal joints, and, through complex anatomical and physiological relationships, affects the nervous system and may lead to reduced function, disability or illness.\"", "title": "Conceptual basis" }, { "paragraph_id": 21, "text": "While some chiropractors limit their practice to short-term treatment of musculoskeletal conditions, many falsely claim to be able to treat a myriad of other conditions. Some dissuade patients from seeking medical care; others have pretended to be qualified to act as family doctors.", "title": "Conceptual basis" }, { "paragraph_id": 22, "text": "Quackwatch, an alternative medicine watchdog, cautions against seeing chiropractors who:", "title": "Conceptual basis" }, { "paragraph_id": 23, "text": "Writing for the Skeptical Inquirer, one physician cautioned against seeing even chiropractors who solely claim to treat musculoskeletal conditions:", "title": "Conceptual basis" }, { "paragraph_id": 24, "text": "I think Spinal Manipulation Therapy (SMT) is a reasonable option for patients to try ... But I could not in good conscience refer a patient to a chiropractor... When chiropractic is effective, what is effective is not 'chiropractic': it is SMT. SMT is also offered by physical therapists, DOs, and others.
These are science-based providers ... If I thought a patient might benefit from manipulation, I would rather refer him or her to a science-based provider.", "title": "Conceptual basis" }, { "paragraph_id": 25, "text": "Chiropractors emphasize the conservative management of the neuromusculoskeletal system without the use of medicines or surgery, with special emphasis on the spine. Back and neck pain are the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. There is a range of opinions among chiropractors: some believe that treatment should be confined to the spine, or to back and neck pain; others disagree. For example, while one 2009 survey of American chiropractors found that 73% classified themselves as \"back pain/musculoskeletal specialists\", the label \"back and neck pain specialists\" was regarded by 47% of them as the least desirable description in a 2005 international survey. Chiropractic combines aspects from mainstream and alternative medicine, and there is no agreement about how to define the profession: although chiropractors have many attributes of primary care providers, chiropractic has more attributes of a medical specialty like dentistry or podiatry. It has been proposed that chiropractors specialize in nonsurgical spine care, instead of attempting to also treat other problems, but the more expansive view of chiropractic is still widespread.", "title": "Scope of practice" }, { "paragraph_id": 26, "text": "Mainstream health care and governmental organizations such as the World Health Organization consider chiropractic to be complementary and alternative medicine (CAM); and a 2008 study reported that 31% of surveyed chiropractors categorized chiropractic as CAM, 27% as integrated medicine, and 12% as mainstream medicine. Many chiropractors believe they are primary care providers, including US and UK chiropractors, but the length, breadth, and depth of chiropractic clinical training do not support the requirements to be considered primary care providers, so their role in primary care is limited and disputed.", "title": "Scope of practice" }, { "paragraph_id": 27, "text": "Chiropractic overlaps with several other forms of manual therapy, including massage therapy, osteopathy, physical therapy, and sports medicine. Chiropractic is autonomous from and competitive with mainstream medicine, and osteopathy outside the US remains primarily a manual medical system; physical therapists work alongside and cooperate with mainstream medicine, and osteopathic medicine in the U.S. has merged with the medical profession. Practitioners may distinguish these competing approaches through claims that, compared to other therapists, chiropractors heavily emphasize spinal manipulation, tend to use firmer manipulative techniques, and promote maintenance care; that osteopaths use a wider variety of treatment procedures; and that physical therapists emphasize machinery and exercise.", "title": "Scope of practice" }, { "paragraph_id": 28, "text": "Chiropractic diagnosis may involve a range of methods including skeletal imaging, observational and tactile assessments, and orthopedic and neurological evaluation. A chiropractor may also refer a patient to an appropriate specialist, or co-manage with another health care provider.
Common patient management involves spinal manipulation (SM) and other manual therapies to the joints and soft tissues, rehabilitative exercises, health promotion, electrical modalities, complementary procedures, and lifestyle advice.", "title": "Scope of practice" }, { "paragraph_id": 29, "text": "Chiropractors are not normally licensed to write medical prescriptions or perform major surgery in the United States (although New Mexico has become the first US state to allow \"advanced practice\" trained chiropractors to prescribe certain medications). In the US, their scope of practice varies by state, based on inconsistent views of chiropractic care: some states, such as Iowa, broadly allow treatment of \"human ailments\"; some, such as Delaware, use vague concepts such as \"transition of nerve energy\" to define scope of practice; others, such as New Jersey, specify a severely narrowed scope. US states also differ over whether chiropractors may conduct laboratory tests or diagnostic procedures, dispense dietary supplements, or use other therapies such as homeopathy and acupuncture; in Oregon they can become certified to perform minor surgery and to deliver children via natural childbirth. A 2003 survey of North American chiropractors found that a slight majority favored allowing them to write prescriptions for over-the-counter drugs. A 2010 survey found that 72% of Swiss chiropractors considered their ability to prescribe nonprescription medication an advantage for chiropractic treatment.", "title": "Scope of practice" }, { "paragraph_id": 30, "text": "A related field, veterinary chiropractic, applies manual therapies to animals and is recognized in many US states, but is not recognized by the American Chiropractic Association as being chiropractic. It remains controversial within certain segments of the veterinary and chiropractic professions.", "title": "Scope of practice" }, { "paragraph_id": 31, "text": "No single profession \"owns\" spinal manipulation and there is little consensus as to which profession should administer SM, raising concerns among chiropractors that medical physicians could \"steal\" SM procedures from chiropractors. A focus on evidence-based SM research has also raised concerns that the resulting practice guidelines could limit the scope of chiropractic practice to treating backs and necks. Two US states (Washington and Arkansas) prohibit physical therapists from performing SM, some states allow them to do it only if they have completed advanced training in SM, and some states allow only chiropractors to perform SM, or only chiropractors and physicians. Bills to further prohibit non-chiropractors from performing SM are regularly introduced into state legislatures and are opposed by physical therapist organizations.", "title": "Scope of practice" }, { "paragraph_id": 32, "text": "Spinal manipulation, which chiropractors call \"spinal adjustment\" or \"chiropractic adjustment\", is the most common treatment used in chiropractic care. Spinal manipulation is a passive manual maneuver during which a three-joint complex is taken past the normal range of movement, but not so far as to dislocate or damage the joint. Its defining factor is a dynamic thrust, which is a sudden force that causes an audible release and attempts to increase a joint's range of motion.
High-velocity, low-amplitude spinal manipulation (HVLA-SM) thrusts produce physiological effects, signaling neural discharge from paraspinal muscle tissues; the duration and amplitude of the thrust are factors in the degree of paraspinal muscle spindle activation. Clinical skill in employing HVLA-SM thrusts depends on the ability of the practitioner to handle the duration and magnitude of the load. More generally, spinal manipulative therapy (SMT) describes techniques where the hands are used to manipulate, massage, mobilize, adjust, stimulate, apply traction to, or otherwise influence the spine and related tissues.", "title": "Treatments" }, { "paragraph_id": 33, "text": "There are several schools of chiropractic adjustive techniques, although most chiropractors mix techniques from several schools. The following adjustive procedures were received by more than 10% of patients of licensed US chiropractors in a 2003 survey: Diversified technique (full-spine manipulation, employing various techniques), extremity adjusting, Activator technique (which uses a spring-loaded tool to deliver precise adjustments to the spine), Thompson Technique (which relies on a drop table and detailed procedural protocols), Gonstead (which emphasizes evaluating the spine along with specific adjustment that avoids rotational vectors), Cox/flexion-distraction (a gentle, low-force adjusting procedure which mixes chiropractic with osteopathic principles and utilizes specialized adjusting tables with movable parts), adjustive instrument, Sacro-Occipital Technique (which models the spine as a torsion bar), Nimmo Receptor-Tonus Technique, applied kinesiology (which emphasises \"muscle testing\" as a diagnostic tool), and cranial. Chiropractic biophysics technique uses inverse functions of rotations during spinal manipulation. Practitioners of the Koren Specific Technique (KST) may use their hands, or an electric device known as an \"ArthroStim\", for assessment and spinal manipulations. Insurers in the US and UK that cover other chiropractic techniques exclude KST from coverage because they consider it to be \"experimental and investigational\". Medicine-assisted manipulation, such as manipulation under anesthesia, involves sedation or local anesthetic and is done by a team that includes an anesthesiologist; a 2008 systematic review did not find enough evidence to make recommendations about its use for chronic low back pain.", "title": "Treatments" }, { "paragraph_id": 34, "text": "Many other procedures are used by chiropractors for treating the spine, other joints and tissues, and general health issues. The following procedures were received by more than one-third of patients of licensed US chiropractors in a 2003 survey: Diversified technique (full-spine manipulation; mentioned in previous paragraph), physical fitness/exercise promotion, corrective or therapeutic exercise, ergonomic/postural advice, self-care strategies, activities of daily living, changing risky/unhealthy behaviors, nutritional/dietary recommendations, relaxation/stress reduction recommendations, ice pack/cryotherapy, extremity adjusting (also mentioned in previous paragraph), trigger point therapy, and disease prevention/early screening advice.", "title": "Treatments" }, { "paragraph_id": 35, "text": "A 2010 study describing Belgian chiropractors and their patients found that chiropractors in Belgium mostly focus on neuromusculoskeletal complaints in adult patients, with emphasis on the spine.
The diversified technique is the most commonly applied technique (93%), followed by the Activator mechanical-assisted technique (41%). A 2009 study assessing chiropractic students giving or receiving spinal manipulations while attending a United States chiropractic college found that Diversified, Gonstead, and upper cervical manipulations were frequently used methods.", "title": "Treatments" }, { "paragraph_id": 36, "text": "Reviews of research studies within the chiropractic community have been used to generate practice guidelines outlining standards that specify which chiropractic treatments are legitimate (i.e. supported by evidence) and conceivably reimbursable under managed care health payment systems. Evidence-based guidelines are supported by one end of an ideological continuum among chiropractors; the other end employs antiscientific reasoning and makes unsubstantiated claims. Chiropractic remains at a crossroads: in order to progress, it would need to embrace science, and the promotion by some of chiropractic as a cure-all has been called both \"misguided and irrational\". A 2007 survey of Alberta chiropractors found that they do not consistently apply research in practice, which may have resulted from a lack of research education and skills. Specific guidelines concerning the treatment of nonspecific (i.e., unknown cause) low back pain are inconsistent between countries.", "title": "Treatments" }, { "paragraph_id": 37, "text": "Numerous controlled clinical studies of treatments used by chiropractors have been conducted, with varied results. There is no conclusive evidence that chiropractic manipulative treatment is effective for the treatment of any medical condition, except perhaps for certain kinds of back pain.", "title": "Treatments" }, { "paragraph_id": 38, "text": "Generally, the research carried out into the effectiveness of chiropractic has been of poor quality. Research published by chiropractors is distinctly biased: reviews of SM for back pain tended to find positive conclusions when authored by chiropractors, while reviews by mainstream authors did not.", "title": "Treatments" }, { "paragraph_id": 39, "text": "There is a wide range of ways to measure treatment outcomes. Chiropractic care benefits from the placebo response, but it is difficult to construct a trustworthy placebo for clinical trials of spinal manipulative therapy (SMT). The efficacy of maintenance care in chiropractic is unknown.", "title": "Treatments" }, { "paragraph_id": 40, "text": "Available evidence covers the following conditions:", "title": "Treatments" }, { "paragraph_id": 41, "text": "The World Health Organization found chiropractic care in general is safe when employed skillfully and appropriately. There is not sufficient data to establish the safety of chiropractic manipulations. Manipulation is regarded as relatively safe but complications can arise, and it has known adverse effects, risks and contraindications. Absolute contraindications to spinal manipulative therapy are conditions that should not be manipulated; these contraindications include rheumatoid arthritis and conditions known to result in unstable joints. Relative contraindications are conditions where increased risk is acceptable in some situations and where low-force and soft-tissue techniques are treatments of choice; these contraindications include osteoporosis.
Although most contraindications apply only to manipulation of the affected region, some neurological signs indicate referral to emergency medical services; these include sudden and severe headache or neck pain unlike that previously experienced. Indirect risks of chiropractic involve delayed or missed diagnoses resulting from consulting a chiropractor.", "title": "Treatments" }, { "paragraph_id": 42, "text": "Spinal manipulation is associated with frequent, mild and temporary adverse effects, including new or worsening pain or stiffness in the affected region. They have been estimated to occur in 33% to 61% of patients, and frequently occur within an hour of treatment and disappear within 24 to 48 hours; adverse reactions appear to be more common following manipulation than mobilization. The most frequently stated adverse effects are mild headache, soreness, and briefly elevated pain and fatigue. Chiropractic is correlated with a very high incidence of minor adverse effects. Rarely, spinal manipulation, particularly on the upper spine, can also result in complications that can lead to permanent disability or death; these can occur in adults and children. Estimates vary widely for the incidence of these complications, and the actual incidence is unknown, due to high levels of underreporting and to the difficulty of linking manipulation to adverse effects such as stroke, which is a particular concern. Adverse effects are poorly reported in recent studies investigating chiropractic manipulations; a 2016 systematic review concluded that the level of reporting is unsuitable and unacceptable. Serious adverse events resulting from spinal manipulation therapy of the lumbopelvic region have been reported. Estimates for serious adverse events vary from 5 strokes per 100,000 manipulations to 1.46 serious adverse events per 10 million manipulations and 2.68 deaths per 10 million manipulations, though it was determined that there was inadequate data to be conclusive. Several case reports show temporal associations between interventions and potentially serious complications. The published medical literature contains reports of 26 deaths since 1934 following chiropractic manipulations, and many more seem to remain unpublished.", "title": "Treatments" }, { "paragraph_id": 43, "text": "Vertebrobasilar artery stroke (VAS) is statistically associated with chiropractic services in persons under 45 years of age, but it is similarly associated with general practitioner services, suggesting that these associations are likely explained by preexisting conditions. Weak to moderately strong evidence supports causation (as opposed to statistical association) between cervical manipulative therapy (CMT) and VAS. There is insufficient evidence to support a strong association or no association between cervical manipulation and stroke. While the biomechanical evidence is not sufficient to support the statement that CMT causes cervical artery dissection (CD), clinical reports suggest that mechanical forces have a part in a substantial number of CDs, and the majority of population-controlled studies found an association between CMT and VAS in young people. It is strongly recommended that practitioners consider the plausibility of CD as a symptom, and people can be informed of the association between CD and CMT before administering manipulation of the cervical spine. There is controversy regarding the degree of risk of stroke from cervical manipulation.
Many chiropractors state that the association between chiropractic therapy and vertebral arterial dissection is not proven. However, it has been suggested that the causality between chiropractic cervical manipulation beyond the normal range of motion and vascular accidents is probable or definite. There is very low-quality evidence supporting a small association between internal carotid artery dissection and chiropractic neck manipulation. The incidence of internal carotid artery dissection following cervical spine manipulation is unknown. The literature infrequently reports helpful data to better understand the association between cervical manipulative therapy, cervical artery dissection and stroke. The limited evidence is inconclusive as to whether chiropractic spinal manipulation therapy is a cause of intracranial hypotension. Cervical intradural disc herniation is very rare following spinal manipulation therapy.", "title": "Treatments" }, { "paragraph_id": 44, "text": "Chiropractors sometimes employ diagnostic imaging techniques such as X-rays and CT scans that rely on ionizing radiation. Although there is no clear evidence to justify the practice, some chiropractors still X-ray a patient several times a year. Practice guidelines aim to reduce unnecessary radiation exposure, which increases cancer risk in proportion to the amount of radiation received. Research suggests that radiology instruction given at chiropractic schools worldwide seems to be evidence-based. However, there seems to be a disparity between some schools and the available evidence regarding radiography for patients with acute low back pain without an indication of serious disease, which may contribute to chiropractic overuse of radiography for low back pain.", "title": "Treatments" }, { "paragraph_id": 45, "text": "A 2012 systematic review concluded that no accurate assessment of risk-benefit exists for cervical manipulation. A 2010 systematic review stated that there is no good evidence to assume that neck manipulation is an effective treatment for any medical condition and suggested a precautionary principle in healthcare for chiropractic intervention even if a causality with vertebral artery dissection after neck manipulation were merely a remote possibility. The same review concluded that the risk of death from manipulations to the neck outweighs the benefits. Chiropractors have criticized this conclusion, claiming that the author did not evaluate the potential benefits of spinal manipulation. Edzard Ernst stated \"This detail was not the subject of my review. I do, however, refer to such evaluations and should add that a report recently commissioned by the General Chiropractic Council did not support many of the outlandish claims made by many chiropractors across the world.\" A 1999 review of 177 previously reported cases published between 1925 and 1997 in which injuries were attributed to manipulation of the cervical spine (MCS) concluded that \"The literature does not demonstrate that the benefits of MCS outweigh the risks.\" The professions associated with each injury were assessed. Physical therapists (PT) were involved in less than 2% of all cases, with no deaths caused by PTs.
Chiropractors were involved in a little more than 60% of all cases, including 32 deaths.", "title": "Treatments" }, { "paragraph_id": 46, "text": "A 2009 review evaluating maintenance chiropractic care found that spinal manipulation is associated with considerable harm and that no compelling evidence exists to indicate that it adequately prevents symptoms or diseases; thus the risk-benefit ratio is not evidently favorable.", "title": "Treatments" }, { "paragraph_id": 47, "text": "A 2012 systematic review suggested that the use of spine manipulation in clinical practice is a cost-effective treatment when used alone or in combination with other treatment approaches. A 2011 systematic review found evidence supporting the cost-effectiveness of using spinal manipulation for the treatment of sub-acute or chronic low back pain; the results for acute low back pain were insufficient.", "title": "Treatments" }, { "paragraph_id": 48, "text": "A 2006 systematic cost-effectiveness review found that the reported cost-effectiveness of spinal manipulation in the United Kingdom compared favorably with other treatments for back pain, but that reports were based on data from clinical trials without placebo controls and that the specific cost-effectiveness of the treatment (as opposed to non-specific effects) remains uncertain. A 2005 American systematic review of economic evaluations of conservative treatments for low back pain found that significant quality problems in available studies meant that definite conclusions could not be drawn about the most cost-effective intervention. The cost-effectiveness of maintenance chiropractic care is unknown.", "title": "Treatments" }, { "paragraph_id": 49, "text": "An analysis of clinical and cost utilization data from the years 2003 to 2005 by an integrative medicine independent physician association (IPA), covering 70,274 chiropractic member-months over a 7-year period, found that patients whose primary care provider was a chiropractor incurred lower costs than under conventional medicine (visit to a medical doctor primary care provider) IPA performance for the same health maintenance organization product in the same geography and time frame: 60% lower for in-hospital admissions, 59% lower for hospital days, 62% lower for outpatient surgeries and procedures, and 85% lower for pharmaceutical costs.", "title": "Treatments" }, { "paragraph_id": 50, "text": "Requirements vary between countries. In the U.S., chiropractors obtain a non-medical accredited diploma in the field of chiropractic. Chiropractic education in the U.S. has been criticized for failing to meet generally accepted standards of evidence-based medicine. The curriculum content of North American chiropractic and medical colleges with regard to basic and clinical sciences has little similarity, both in the kinds of subjects offered and in the time assigned to each subject. Accredited chiropractic programs in the U.S. require that applicants have 90 semester hours of undergraduate education with a grade point average of at least 3.0 on a 4.0 scale. Many programs require at least three years of undergraduate education, and more are requiring a bachelor's degree. Canada requires a minimum of three years of undergraduate education for applicants, and at least 4200 instructional hours (or the equivalent) of full-time chiropractic education for matriculation through an accredited chiropractic program.
Graduates of the Canadian Memorial Chiropractic College (CMCC) are formally recognized to have at least 7–8 years of university-level education. The World Health Organization (WHO) guidelines suggest three major full-time educational paths culminating in a DC, DCM, BSc, or MSc degree. Besides the full-time paths, they also suggest a conversion program for people with other health care education and limited training programs for regions where no legislation governs chiropractic.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 51, "text": "Upon graduation, there may be a requirement to pass national, state, or provincial board examinations before being licensed to practice in a particular jurisdiction. Depending on the location, continuing education may be required to renew these licenses. Specialty training is available through part-time postgraduate education programs such as chiropractic orthopedics and sports chiropractic, and through full-time residency programs such as radiology or orthopedics.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 52, "text": "In the U.S., chiropractic schools are accredited through the Council on Chiropractic Education (CCE) while the General Chiropractic Council (GCC) is the statutory governmental body responsible for the regulation of chiropractic in the UK. The U.S. CCE requires a mixing curriculum, which means a straight-educated chiropractor may not be eligible for licensing in states requiring CCE accreditation. CCEs in the U.S., Canada, Australia and Europe have joined to form CCE-International (CCE-I) as a model of accreditation standards with the goal of having credentials portable internationally. Today, there are 18 accredited Doctor of Chiropractic programs in the U.S., 2 in Canada, 6 in Australasia, and 5 in Europe. All but one of the chiropractic colleges in the U.S. are privately funded, but in several other countries they are in government-sponsored universities and colleges. Of the two chiropractic colleges in Canada, one is publicly funded (UQTR) and one is privately funded (CMCC). In 2005, CMCC was granted the privilege of offering a professional health care degree under the Post-secondary Education Choice and Excellence Act, which sets the program within the hierarchy of education in Canada as comparable to that of other primary contact health care professions such as medicine, dentistry and optometry.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 53, "text": "Regulatory colleges and chiropractic boards in the U.S., Canada, Mexico, and Australia are responsible for protecting the public, standards of practice, disciplinary issues, quality assurance and maintenance of competency. There are an estimated 49,000 chiropractors in the U.S. (2008), 6,500 in Canada (2010), 2,500 in Australia (2000), and 1,500 in the UK (2000).", "title": "Education, licensing, and regulation" }, { "paragraph_id": 54, "text": "Chiropractors often argue that this education is as good as or better than medical physicians', but most chiropractic training is confined to classrooms with much time spent learning theory, adjustment, and marketing. The fourth year of chiropractic education has consistently shown the highest stress levels; every student, irrespective of year, experienced different ranges of stress when studying. The chiropractic leaders and colleges have had internal struggles. Rather than cooperation, there has been infighting between different factions.
A number of these actions amounted to posturing, reflecting the secretive manner in which chiropractic colleges competed to enroll students.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 55, "text": "The chiropractic oath is a modern variation of the classical Hippocratic Oath historically taken by physicians and other healthcare professionals swearing to practice their professions ethically. The American Chiropractic Association (ACA) has an ethical code \"based upon the acknowledgement that the social contract dictates the profession's responsibilities to the patient, the public, and the profession; and upholds the fundamental principle that the paramount purpose of the chiropractic doctor's professional services shall be to benefit the patient.\" The International Chiropractor's Association (ICA) also has a set of professional canons.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 56, "text": "A 2008 commentary proposed that the chiropractic profession actively regulate itself to combat abuse, fraud, and quackery, which are more prevalent in chiropractic than in other health care professions, violating the social contract between patients and physicians. According to a 2015 Gallup poll of U.S. adults, the perception of chiropractors is generally favorable; two-thirds of American adults agree that chiropractors have their patients' best interests in mind and more than half also agree that most chiropractors are trustworthy. Less than 10% of US adults disagreed with the statement that chiropractors were trustworthy.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 57, "text": "Chiropractors, especially in America, have a reputation for unnecessarily treating patients. In many circumstances, the focus seems to be on economics rather than on health care. Sustained chiropractic care is promoted as a preventive tool, but unnecessary manipulation could possibly present a risk to patients. Some chiropractors are concerned by the routine unjustified claims other chiropractors have made. A 2010 analysis of chiropractic websites found that the majority of chiropractors and their associations made claims of effectiveness not supported by scientific evidence, while 28% of chiropractor websites advocated lower back pain care, which has some sound evidence.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 58, "text": "The US Office of the Inspector General (OIG) estimated that for calendar year 2013, 82% of payments to chiropractors under Medicare Part B, a total of $359 million, did not comply with Medicare requirements.
There have been at least 15 OIG reports about chiropractic billing irregularities since 1986.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 59, "text": "In 2009, a backlash against the libel suit filed by the British Chiropractic Association (BCA) against Simon Singh inspired the filing of formal complaints of false advertising against more than 500 individual chiropractors within one 24-hour period. This prompted the McTimoney Chiropractic Association to write to its members, advising them to remove leaflets making claims about whiplash and colic from their practices, to be wary of new patients and telephone inquiries, and telling them: \"If you have a website, take it down NOW\" and \"Finally, we strongly suggest you do NOT discuss this with others, especially patients.\" An editorial in Nature suggested that the BCA may have been trying to suppress debate and that this use of English libel law was a burden on the right to freedom of expression, which is protected by the European Convention on Human Rights. The libel case ended with the BCA withdrawing its suit in 2010.", "title": "Education, licensing, and regulation" }, { "paragraph_id": 60, "text": "Chiropractic is established in the U.S., Canada, and Australia, and is present to a lesser extent in many other countries. It is viewed as a marginal, clinically unproven form of complementary and alternative medicine, which has not integrated into mainstream medicine.", "title": "Reception" }, { "paragraph_id": 61, "text": "In Australia, there are approximately 2488 chiropractors, or one chiropractor for every 7980 people. Most private health insurance funds in Australia cover chiropractic care, and the federal government funds chiropractic care when the patient is referred by a medical practitioner. In 2014, the chiropractic profession had a registered workforce of 4,684 practitioners in Australia represented by two major organizations – the Chiropractors' Association of Australia (CAA) and the Chiropractic and Osteopathic College of Australasia (COCA). Annual expenditure on chiropractic care (alone or combined with osteopathy) in Australia is estimated at AUD$750–988 million, with musculoskeletal complaints such as back and neck pain making up the bulk of consultations; proportional expenditure is similar to that found in other countries. While Medicare (the Australian publicly funded universal health care system) coverage of chiropractic services is limited to only those directed by a medical referral to assist chronic disease management, most private health insurers in Australia do provide partial reimbursement for a wider range of chiropractic services in addition to limited third party payments for workers compensation and motor vehicle accidents.", "title": "Reception" }, { "paragraph_id": 62, "text": "Of the 2,005 chiropractors who participated in a 2015 survey, 62.4% were male and the average age was 42.1 (SD = 12.1) years. Nearly all chiropractors (97.1%) had a bachelor's degree or higher, the most common highest professional qualification being a bachelor's or double bachelor's degree (34.6%), followed by a master's degree (32.7%), Doctor of Chiropractic (28.9%) or PhD (0.9%). Only a small number reported a diploma (2.1%) or advanced diploma (0.8%) as their highest professional qualification.", "title": "Reception" }, { "paragraph_id": 63, "text": "In Germany, chiropractic may be offered by medical doctors and alternative practitioners.
Chiropractors qualified abroad must obtain a German non-medical practitioner license. Authorities have routinely required a comprehensive knowledge test for this, but recently some administrative courts have ruled that training abroad should be recognised.", "title": "Reception" }, { "paragraph_id": 64, "text": "In Switzerland, only trained medical professionals are allowed to offer chiropractic. There are 300 chiropractors in Switzerland.", "title": "Reception" }, { "paragraph_id": 65, "text": "In the United Kingdom, there are over 2,000 chiropractors, representing one chiropractor per 29,206 people. Chiropractic is available on the National Health Service in some areas, such as Cornwall, where the treatment is only available for neck or back pain.", "title": "Reception" }, { "paragraph_id": 66, "text": "A 2010 study by questionnaire presented to UK chiropractors indicated that only 45% of chiropractors disclosed to patients the serious risk associated with manipulation of the cervical spine, and that 46% believed there was a possibility that patients would refuse treatment if the risks were correctly explained. However, 80% acknowledged the ethical/moral responsibility to disclose risk to patients.", "title": "Reception" }, { "paragraph_id": 67, "text": "The percentage of the population that utilizes chiropractic care at any given time generally falls into a range from 6% to 12% in the U.S. and Canada, with a global high of 20% in Alberta in 2006. In 2008, chiropractors were reported to be the most common CAM providers for children and adolescents, with these patients representing up to 14% of all visits to chiropractors.", "title": "Reception" }, { "paragraph_id": 68, "text": "There were around 50,330 chiropractors practicing in North America in 2000. In 2008, this had increased by almost 20% to around 60,000 chiropractors. In 2002–03, the majority of those who sought chiropractic did so for relief from back and neck pain and other neuromusculoskeletal complaints; most did so specifically for low back pain. The majority of U.S. chiropractors participate in some form of managed care. Although the majority of U.S. chiropractors view themselves as specialists in neuromusculoskeletal conditions, many also consider chiropractic a type of primary care. In the majority of cases, the care that chiropractors and physicians provide divides the market; for some, however, their care is complementary.", "title": "Reception" }, { "paragraph_id": 69, "text": "In the U.S., chiropractors perform over 90% of all manipulative treatments. Satisfaction rates are typically higher for chiropractic care compared to medical care, with a 1998 U.S. survey reporting 83% of respondents satisfied or very satisfied with their care; quality of communication seems to be a consistent predictor of patient satisfaction with chiropractors.", "title": "Reception" }, { "paragraph_id": 70, "text": "Utilization of chiropractic care is sensitive to the co-payment costs incurred by the patient. The use of chiropractic declined from 9.9% of U.S. adults in 1997 to 7.4% in 2002; this was the largest relative decrease among CAM professions, which overall had a stable use rate. As of 2007, 7% of the U.S. population was receiving chiropractic care. Chiropractors were the third largest medical profession in the US in 2002, following physicians and dentists.
chiropractors was expected to increase 14% between 2006 and 2016, faster than the average for all occupations.", "title": "Reception" }, { "paragraph_id": 71, "text": "In the U.S., most states require insurers to cover chiropractic care, and most HMOs cover these services.", "title": "Reception" }, { "paragraph_id": 72, "text": "Chiropractic's origins lie in the folk medicine practice of bonesetting, in which untrained practitioners engaged in joint manipulation or resetting fractured bones. Chiropractic was founded in 1895 by Daniel David (D. D.) Palmer in Davenport, Iowa. Palmer, a magnetic healer, hypothesized that manual manipulation of the spine could cure disease. The first chiropractic patient of D. D. Palmer was Harvey Lillard, a worker in the building where Palmer's office was located. Lillard claimed that his hearing had been severely reduced for 17 years, starting shortly after a \"pop\" in his spine. A few days following his adjustment, Lillard claimed his hearing was almost completely restored. Another of Palmer's patients, Samuel Weed, coined the term chiropractic, from Greek χειρο- chiro- 'hand' (itself from χείρ cheir 'hand') and πρακτικός praktikos 'practical'. Chiropractic is classified as a field of pseudomedicine on account of its esoteric origins.", "title": "History" }, { "paragraph_id": 73, "text": "Chiropractic competed with its predecessor osteopathy, another medical system based on magnetic healing; both systems were founded by charismatic midwesterners in opposition to the conventional medicine of the day, and both postulated that manipulation improved health. Although Palmer initially kept chiropractic a family secret, in 1898 he began teaching it to a few students at his new Palmer School of Chiropractic. One student, his son Bartlett Joshua (B. J.) Palmer, became committed to promoting chiropractic, took over the Palmer School in 1906, and rapidly expanded its enrollment.", "title": "History" }, { "paragraph_id": 74, "text": "Early chiropractors believed that all disease was caused by interruptions in the flow of innate intelligence, a vitalistic nervous energy or life force that represented God's presence in man; chiropractic leaders often invoked religious imagery and moral traditions. D. D. Palmer said he \"received chiropractic from the other world\". D. D. and B. J. both seriously considered declaring chiropractic a religion, which might have provided legal protection under the U.S. Constitution, but decided against it partly to avoid confusion with Christian Science. Early chiropractors also tapped into the Populist movement, emphasizing craft, hard work, competition, and advertisement, aligning themselves with the common man against intellectuals and trusts, among which they included the American Medical Association (AMA).", "title": "History" }, { "paragraph_id": 75, "text": "Chiropractic has seen considerable controversy and criticism. Although D. D. and B. J. were \"straight\" and disdained the use of instruments, some early chiropractors, whom B. J. scornfully called \"mixers\", advocated the use of instruments. In 1910, B. J. changed course and endorsed X-rays as necessary for diagnosis; this resulted in a significant exodus of the more conservative faculty and students from the Palmer School. The mixer camp grew until by 1924 B. J. estimated that only 3,000 of the United States' 25,000 chiropractors remained straight. That year, B. J.'s invention and promotion of the neurocalometer, a temperature-sensing device, was highly controversial among B.
J.'s fellow straights. By the 1930s, chiropractic was the largest alternative healing profession in the U.S.", "title": "History" }, { "paragraph_id": 76, "text": "Chiropractors faced heavy opposition from organized medicine. Thousands of chiropractors were prosecuted for practicing medicine without a license, and many, including D. D. Palmer (jailed in 1907), served jail time. To defend against medical statutes, B. J. argued that chiropractic was separate and distinct from medicine, asserting that chiropractors \"analyzed\" rather than \"diagnosed\", and \"adjusted\" subluxations rather than \"treated\" disease. B. J. cofounded the Universal Chiropractors' Association (UCA) to provide legal services to arrested chiropractors. Although the UCA won its first test case in Wisconsin in 1907, prosecutions instigated by state medical boards became increasingly common and in many cases were successful. In response, chiropractors conducted political campaigns to secure separate licensing statutes, eventually succeeding in all fifty states, from Kansas in 1913 through Louisiana in 1974. The longstanding feud between chiropractors and medical doctors continued for decades. The AMA labeled chiropractic an \"unscientific cult\" in 1966, and until 1980 advised its members that it was unethical for medical doctors to associate with \"unscientific practitioners\". This culminated in a landmark 1987 decision, Wilk v. AMA, in which the court found that the AMA had engaged in unreasonable restraint of trade and conspiracy, and which ended the AMA's de facto boycott of chiropractic.", "title": "History" }, { "paragraph_id": 77, "text": "Serious research to test chiropractic theories did not begin until the 1970s, and continues to be hampered by the antiscientific and pseudoscientific ideas that sustained the profession in its long battle with organized medicine. By the mid-1990s there was a growing scholarly interest in chiropractic, which helped efforts to improve service quality and establish clinical guidelines that recommended manual therapies for acute low back pain. In recent decades chiropractic gained legitimacy and greater acceptance by medical physicians and health plans, and enjoyed a strong political base and sustained demand for services. However, its future seemed uncertain: as the number of practitioners grew, evidence-based medicine insisted on treatments with demonstrated value, managed care restricted payment, and competition grew from massage therapists and other health professions. The profession responded by marketing natural products and devices more aggressively, and by reaching deeper into alternative medicine and primary care.", "title": "History" }, { "paragraph_id": 78, "text": "Some chiropractors oppose vaccination and water fluoridation, which are common public health practices. Within the chiropractic community there are significant disagreements about vaccination, one of the most cost-effective public health interventions available. Most chiropractic writings on vaccination focus on its negative aspects, claiming that it is hazardous, ineffective, and unnecessary. Some chiropractors have embraced vaccination, but a significant portion of the profession rejects it, as original chiropractic philosophy traces diseases to causes in the spine and states that vaccines interfere with healing. The extent to which anti-vaccination views persist within the current chiropractic profession is uncertain.
The American Chiropractic Association and the International Chiropractors Association support individual exemptions to compulsory vaccination laws, and a 1995 survey of U.S. chiropractors found that about a third believed there was no scientific proof that immunization prevents disease. The Canadian Chiropractic Association supports vaccination; a survey in Alberta in 2002 found that 25% of chiropractors advised patients for, and 27% against, vaccinating themselves or their children.", "title": "Public health" }, { "paragraph_id": 79, "text": "Early opposition to water fluoridation included chiropractors, some of whom continue to oppose it as being incompatible with chiropractic philosophy and an infringement of personal freedom. Other chiropractors have actively promoted fluoridation, and several chiropractic organizations have endorsed scientific principles of public health. In addition to traditional chiropractic opposition to water fluoridation and vaccination, chiropractors' attempts to establish a positive reputation for their public health role are also compromised by their reputation for recommending repetitive lifelong chiropractic treatment.", "title": "Public health" }, { "paragraph_id": 80, "text": "Throughout its history chiropractic has been the subject of internal and external controversy and criticism. According to Daniel D. Palmer, the founder of chiropractic, subluxation is the sole cause of disease and manipulation is the cure for all diseases of the human race. A 2003 profession-wide survey found \"most chiropractors (whether 'straights' or 'mixers') still hold views of innate intelligence and of the cause and cure of disease (not just back pain) consistent with those of the Palmers.\" A critical evaluation stated \"Chiropractic is rooted in mystical concepts. This led to an internal conflict within the chiropractic profession, which continues today.\" Chiropractors, including D. D. Palmer, were jailed for practicing medicine without a license. For most of its existence, chiropractic has battled with mainstream medicine, sustained by antiscientific and pseudoscientific ideas such as subluxation. Collectively, systematic reviews have not demonstrated that spinal manipulation, the main treatment method employed by chiropractors, is effective for any medical condition, with the possible exception of treatment for back pain. Chiropractic remains controversial, though to a lesser extent than in past years.", "title": "Controversy" } ]
Chiropractic is a form of alternative medicine concerned with the diagnosis, treatment and prevention of mechanical disorders of the musculoskeletal system, especially of the spine. It has esoteric origins and is based on several pseudoscientific ideas. Many chiropractors, especially those in the field's early history, have proposed that mechanical disorders of the joints, especially of the spine, affect general health, and that regular manipulation of the spine improves general health. The main chiropractic treatment technique involves manual therapy, especially manipulation of the spine, other joints, and soft tissues, but may also include exercises and health and lifestyle counseling. A chiropractor may have a Doctor of Chiropractic (D.C.) degree and be referred to as "doctor" but is not a Doctor of Medicine (M.D.) or a Doctor of Osteopathic Medicine (D.O.). While many chiropractors view themselves as primary care providers, chiropractic clinical training does not meet the requirements for that designation. Systematic reviews of controlled clinical studies of treatments used by chiropractors have found no evidence that chiropractic manipulation is effective, with the possible exception of treatment for back pain. A 2011 critical evaluation of 45 systematic reviews concluded that the data included in the study "fail[ed] to demonstrate convincingly that spinal manipulation is an effective intervention for any condition." Spinal manipulation may be cost-effective for sub-acute or chronic low back pain, but the results for acute low back pain were insufficient. No compelling evidence exists to indicate that maintenance chiropractic care adequately prevents symptoms or diseases. There is not sufficient data to establish the safety of chiropractic manipulations. It is frequently associated with mild to moderate adverse effects, with serious or fatal complications in rare cases. There is controversy regarding the degree of risk of vertebral artery dissection, which can lead to stroke and death, from cervical manipulation. Several deaths have been associated with this technique and it has been suggested that the relationship is causative, a claim which is disputed by many chiropractors. Chiropractic is well established in the United States, Canada, and Australia. It overlaps with other manual-therapy professions such as osteopathy and physical therapy. Most who seek chiropractic care do so for low back pain. Back and neck pain are considered the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. Chiropractic has two main groups: "straights", now the minority, emphasize vitalism, "Innate Intelligence", and consider vertebral subluxations to be the cause of all disease; and "mixers", the majority, are more open to mainstream views and conventional medical techniques, such as exercise, massage, and ice therapy. D. D. Palmer founded chiropractic in the 1890s, after saying he received it from "the other world"; Palmer maintained that the tenets of chiropractic were passed along to him by a doctor who had died 50 years previously. His son B. J. Palmer helped to expand chiropractic in the early 20th century. Throughout its history, chiropractic has been controversial. Its foundation is at odds with evidence-based medicine, and has been sustained by pseudoscientific ideas such as vertebral subluxation and Innate Intelligence. 
Despite the overwhelming evidence that vaccination is an effective public health intervention, among chiropractors there are significant disagreements over the subject, which has led to negative impacts on both public vaccination and mainstream acceptance of chiropractic. The American Medical Association called chiropractic an "unscientific cult" in 1966 and boycotted it until losing an antitrust case in 1987. Chiropractic has had a strong political base and sustained demand for services. In the last decades of the twentieth century, it gained more legitimacy and greater acceptance among conventional physicians and health plans in the United States. During the COVID-19 pandemic, chiropractic professional associations advised chiropractors to adhere to CDC, WHO, and local health department guidance. Despite these recommendations, a small but vocal and influential number of chiropractors spread vaccine misinformation.
2002-02-25T15:43:11Z
2023-12-17T09:14:32Z
[ "Template:Cite journal", "Template:ISBN", "Template:Sisterlinks", "Template:Authority control", "Template:Pseudomedicine sidebar", "Template:Distinguish", "Template:Cite book", "Template:Page needed", "Template:Bulleted list", "Template:Portal", "Template:Chiropractic", "Template:Pp-vandalism", "Template:Clarify", "Template:Fcn", "Template:Cite web", "Template:Cite magazine", "Template:Cite news", "Template:CC-notice", "Template:Prone to spam", "Template:Short description", "Template:Lang", "Template:Pseudoscience", "Template:Primary source inline", "Template:Reflist", "Template:Blockquote", "Template:Use American English", "Template:Nbsp", "Template:Main", "Template:Citation", "Template:Webarchive", "Template:Cite encyclopedia", "Template:Infobox alternative medicine", "Template:Further" ]
https://en.wikipedia.org/wiki/Chiropractic
7,739
Carbide
In chemistry, a carbide usually describes a compound composed of carbon and a metal. In metallurgy, carbiding or carburizing is the process for producing carbide coatings on a metal piece. The carbides of the group 4, 5 and 6 transition metals (with the exception of chromium) are often described as interstitial compounds. These carbides have metallic properties and are refractory. Some exhibit a range of stoichiometries, being a non-stoichiometric mixture of various carbides arising due to crystal defects. Some of them, including titanium carbide and tungsten carbide, are important industrially and are used to coat metals in cutting tools. The long-held view is that the carbon atoms fit into octahedral interstices in a close-packed metal lattice when the metal atom radius is greater than approximately 135 pm. The following table shows structures of the metals and their carbides. (N.B. the body centered cubic structure adopted by vanadium, niobium, tantalum, chromium, molybdenum and tungsten is not a close-packed lattice.) The notation "h/2" refers to the M2C type structure described above, which is only an approximate description of the actual structures. The simple view that the lattice of the pure metal "absorbs" carbon atoms can be seen to be untrue, as the packing of the metal atom lattice in the carbides is different from the packing in the pure metal, although it is technically correct that the carbon atoms fit into the octahedral interstices of a close-packed metal lattice. For a long time the non-stoichiometric phases were believed to be disordered with a random filling of the interstices; however, short- and longer-range ordering has been detected. Iron forms a number of carbides: Fe3C, Fe7C3 and Fe2C. The best known is cementite, Fe3C, which is present in steels. These carbides are more reactive than the interstitial carbides; for example, the carbides of Cr, Mn, Fe, Co and Ni are all hydrolysed by dilute acids and sometimes by water, to give a mixture of hydrogen and hydrocarbons. These compounds share features with both the inert interstitials and the more reactive salt-like carbides. Some metals, such as lead and tin, are believed not to form carbides under any circumstances. There exists, however, a mixed titanium-tin carbide, which is a two-dimensional conductor. Carbides can be generally classified by chemical bond type as follows: Examples include calcium carbide (CaC2), silicon carbide (SiC), tungsten carbide (WC; often called, simply, carbide when referring to machine tooling), and cementite (Fe3C), each used in key industrial applications. The naming of ionic carbides is not systematic. Salt-like carbides are composed of highly electropositive elements such as the alkali metals, alkaline earth metals, lanthanides, actinides, and group 3 metals (scandium, yttrium, and lutetium). Aluminium from group 13 forms carbides, but gallium, indium, and thallium do not. These materials feature isolated carbon centers, often described as "C⁴⁻", in the methanides or methides; two-atom units, "C2²⁻", in the acetylides; and three-atom units, "C3⁴⁻", in the allylides. The graphite intercalation compound KC8, prepared from vapour of potassium and graphite, and the alkali metal derivatives of C60 are not usually classified as carbides. Methanides are a subset of carbides distinguished by their tendency to decompose in water producing methane. Three examples are aluminium carbide Al4C3, magnesium carbide Mg2C and beryllium carbide Be2C.
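As an illustration of the bond-type contrast just described, balanced hydrolysis equations can be written for a representative methanide and a representative acetylide. This is a minimal worked sketch using standard textbook stoichiometry, not equations drawn from this article's sources:

\[ \mathrm{Al_4C_3 + 12\,H_2O \longrightarrow 4\,Al(OH)_3 + 3\,CH_4} \quad \text{(methanide: isolated C}^{4-}\text{ centers give methane)} \]

\[ \mathrm{CaC_2 + 2\,H_2O \longrightarrow Ca(OH)_2 + C_2H_2} \quad \text{(acetylide: C}_2^{2-}\text{ units give acetylene)} \]

In each case the charge on the carbide anion fixes the proton count: four protons convert C⁴⁻ to CH4, while two protons convert C2²⁻ to HC≡CH; counting atoms on both sides (e.g., 4 Al, 3 C, 24 H, 12 O in the first equation) confirms the balance.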
Transition metal carbides are not saline: their reaction with water is very slow and is usually neglected. For example, depending on surface porosity, 5–30 atomic layers of titanium carbide are hydrolyzed, forming methane within 5 minutes at ambient conditions, followed by saturation of the reaction. Note that methanide in this context is a trivial historical name. According to the IUPAC systematic naming conventions, a compound such as NaCH3 would be termed a "methanide", although this compound is often called methylsodium. See Methyl group#Methyl anion for more information about the CH3⁻ anion. Several carbides are assumed to be salts of the acetylide anion C2²⁻ (also called percarbide, by analogy with peroxide), which has a triple bond between the two carbon atoms. Alkali metals, alkaline earth metals, and lanthanoid metals form acetylides, for example, sodium carbide Na2C2, calcium carbide CaC2, and LaC2. Lanthanides also form carbides (sesquicarbides, see below) with formula M2C3. Metals from group 11 also tend to form acetylides, such as copper(I) acetylide and silver acetylide. Carbides of the actinide elements, which have stoichiometry MC2 and M2C3, are also described as salt-like derivatives of C2²⁻. The C–C triple bond length ranges from 119.2 pm in CaC2 (similar to ethyne) to 130.3 pm in LaC2 and 134 pm in UC2. The bonding in LaC2 has been described in terms of La(III) with the extra electron delocalised into the antibonding orbital on C2²⁻, explaining the metallic conduction. The polyatomic ion C3⁴⁻, sometimes called allylide, is found in Li4C3 and Mg2C3. The ion is linear and is isoelectronic with CO2. The C–C distance in Mg2C3 is 133.2 pm. Mg2C3 yields methylacetylene, CH3CCH, and propadiene, CH2CCH2, on hydrolysis, which was the first indication that it contains C3⁴⁻. The carbides of silicon and boron are described as "covalent carbides", although virtually all compounds of carbon exhibit some covalent character. Silicon carbide has two similar crystalline forms, which are both related to the diamond structure. Boron carbide, B4C, on the other hand, has an unusual structure which includes icosahedral boron units linked by carbon atoms. In this respect boron carbide is similar to the boron rich borides. Both silicon carbide (also known as carborundum) and boron carbide are very hard materials and refractory. Both materials are important industrially. Boron also forms other covalent carbides, such as B25C. Metal complexes containing C are known as metal carbido complexes. Most common are carbon-centered octahedral clusters, such as [Au6C(PPh3)6]²⁺ (where "Ph" represents a phenyl group) and [Fe6C(CO)16]²⁻. Similar species are known for the metal carbonyls and the early metal halides. A few terminal carbides have been isolated, such as [CRuCl2{P(C6H11)3}2]. Metallocarbohedrynes (or "met-cars") are stable clusters with the general formula M8C12 where M is a transition metal (Ti, Zr, V, etc.). In addition to the carbides, other groups of related carbon compounds exist.
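As a worked complement to the allylide hydrolysis mentioned above, the reaction of Mg2C3 with water balances as follows. This is a sketch using standard stoichiometry; the two isomeric C3H4 products reported above (methylacetylene and propadiene) are written collectively as C3H4:

\[ \mathrm{Mg_2C_3 + 4\,H_2O \longrightarrow 2\,Mg(OH)_2 + C_3H_4} \]

Here the 4− charge on the linear C3⁴⁻ unit again fixes the proton count: four protons from water are consumed per carbide ion, with magnesium hydroxide as the by-product.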
[ { "paragraph_id": 0, "text": "In chemistry, a carbide usually describes a compound composed of carbon and a metal. In metallurgy, carbiding or carburizing is the process for producing carbide coatings on a metal piece.", "title": "" }, { "paragraph_id": 1, "text": "The carbides of the group 4, 5 and 6 transition metals (with the exception of chromium) are often described as interstitial compounds. These carbides have metallic properties and are refractory. Some exhibit a range of stoichiometries, being a non-stoichiometric mixture of various carbides arising due to crystal defects. Some of them, including titanium carbide and tungsten carbide, are important industrially and are used to coat metals in cutting tools.", "title": "Interstitial / Metallic carbides" }, { "paragraph_id": 2, "text": "The long-held view is that the carbon atoms fit into octahedral interstices in a close-packed metal lattice when the metal atom radius is greater than approximately 135 pm:", "title": "Interstitial / Metallic carbides" }, { "paragraph_id": 3, "text": "The following table shows structures of the metals and their carbides. (N.B. the body centered cubic structure adopted by vanadium, niobium, tantalum, chromium, molybdenum and tungsten is not a close-packed lattice.) The notation \"h/2\" refers to the M2C type structure described above, which is only an approximate description of the actual structures. The simple view that the lattice of the pure metal \"absorbs\" carbon atoms can be seen to be untrue as the packing of the metal atom lattice in the carbides is different from the packing in the pure metal, although it is technically correct that the carbon atoms fit into the octahedral interstices of a close-packed metal lattice.", "title": "Interstitial / Metallic carbides" }, { "paragraph_id": 4, "text": "For a long time the non-stoichiometric phases were believed to be disordered with a random filling of the interstices, however short and longer range ordering has been detected.", "title": "Interstitial / Metallic carbides" }, { "paragraph_id": 5, "text": "Iron forms a number of carbides, Fe3C, Fe7C3 and Fe2C. The best known is cementite, Fe3C, which is present in steels. These carbides are more reactive than the interstitial carbides; for example, the carbides of Cr, Mn, Fe, Co and Ni are all hydrolysed by dilute acids and sometimes by water, to give a mixture of hydrogen and hydrocarbons. These compounds share features with both the inert interstitials and the more reactive salt-like carbides.", "title": "Interstitial / Metallic carbides" }, { "paragraph_id": 6, "text": "Some metals, such as lead and tin, are believed not to form carbides under any circumstances. There exists however a mixed titanium-tin carbide, which is a two-dimensional conductor.", "title": "Interstitial / Metallic carbides" }, { "paragraph_id": 7, "text": "Carbides can be generally classified by the chemical bonds type as follows:", "title": "Chemical classification of carbides" }, { "paragraph_id": 8, "text": "Examples include calcium carbide (CaC2), silicon carbide (SiC), tungsten carbide (WC; often called, simply, carbide when referring to machine tooling), and cementite (Fe3C), each used in key industrial applications. 
The naming of ionic carbides is not systematic.", "title": "Chemical classification of carbides" }, { "paragraph_id": 9, "text": "Salt-like carbides are composed of highly electropositive elements such as the alkali metals, alkaline earth metals, lanthanides, actinides, and group 3 metals (scandium, yttrium, and lutetium). Aluminium from group 13 forms carbides, but gallium, indium, and thallium do not. These materials feature isolated carbon centers, often described as \"C⁴⁻\", in the methanides or methides; two-atom units, \"C2²⁻\", in the acetylides; and three-atom units, \"C3⁴⁻\", in the allylides. The graphite intercalation compound KC8, prepared from vapour of potassium and graphite, and the alkali metal derivatives of C60 are not usually classified as carbides.", "title": "Chemical classification of carbides" }, { "paragraph_id": 10, "text": "Methanides are a subset of carbides distinguished by their tendency to decompose in water producing methane. Three examples are aluminium carbide Al4C3, magnesium carbide Mg2C and beryllium carbide Be2C.", "title": "Chemical classification of carbides" }, { "paragraph_id": 11, "text": "Transition metal carbides are not saline: their reaction with water is very slow and is usually neglected. For example, depending on surface porosity, 5–30 atomic layers of titanium carbide are hydrolyzed, forming methane within 5 minutes at ambient conditions, followed by saturation of the reaction.", "title": "Chemical classification of carbides" }, { "paragraph_id": 12, "text": "Note that methanide in this context is a trivial historical name. According to the IUPAC systematic naming conventions, a compound such as NaCH3 would be termed a \"methanide\", although this compound is often called methylsodium. See Methyl group#Methyl anion for more information about the CH3⁻ anion.", "title": "Chemical classification of carbides" }, { "paragraph_id": 13, "text": "Several carbides are assumed to be salts of the acetylide anion C2²⁻ (also called percarbide, by analogy with peroxide), which has a triple bond between the two carbon atoms. Alkali metals, alkaline earth metals, and lanthanoid metals form acetylides, for example, sodium carbide Na2C2, calcium carbide CaC2, and LaC2. Lanthanides also form carbides (sesquicarbides, see below) with formula M2C3. Metals from group 11 also tend to form acetylides, such as copper(I) acetylide and silver acetylide. Carbides of the actinide elements, which have stoichiometry MC2 and M2C3, are also described as salt-like derivatives of C2²⁻.", "title": "Chemical classification of carbides" }, { "paragraph_id": 14, "text": "The C–C triple bond length ranges from 119.2 pm in CaC2 (similar to ethyne) to 130.3 pm in LaC2 and 134 pm in UC2. The bonding in LaC2 has been described in terms of La(III) with the extra electron delocalised into the antibonding orbital on C2²⁻, explaining the metallic conduction.", "title": "Chemical classification of carbides" }, { "paragraph_id": 15, "text": "The polyatomic ion C3⁴⁻, sometimes called allylide, is found in Li4C3 and Mg2C3. The ion is linear and is isoelectronic with CO2. The C–C distance in Mg2C3 is 133.2 pm. Mg2C3 yields methylacetylene, CH3CCH, and propadiene, CH2CCH2, on hydrolysis, which was the first indication that it contains C3⁴⁻.", "title": "Chemical classification of carbides" }, { "paragraph_id": 16, "text": "The carbides of silicon and boron are described as \"covalent carbides\", although virtually all compounds of carbon exhibit some covalent character. 
Silicon carbide has two similar crystalline forms, which are both related to the diamond structure. Boron carbide, B4C, on the other hand, has an unusual structure which includes icosahedral boron units linked by carbon atoms. In this respect boron carbide is similar to the boron rich borides. Both silicon carbide (also known as carborundum) and boron carbide are very hard materials and refractory. Both materials are important industrially. Boron also forms other covalent carbides, such as B25C.", "title": "Chemical classification of carbides" }, { "paragraph_id": 17, "text": "Metal complexes containing C are known as metal carbido complexes. Most common are carbon-centered octahedral clusters, such as [Au6C(PPh3)6]²⁺ (where \"Ph\" represents a phenyl group) and [Fe6C(CO)16]²⁻. Similar species are known for the metal carbonyls and the early metal halides. A few terminal carbides have been isolated, such as [CRuCl2{P(C6H11)3}2].", "title": "Chemical classification of carbides" }, { "paragraph_id": 18, "text": "Metallocarbohedrynes (or \"met-cars\") are stable clusters with the general formula M8C12 where M is a transition metal (Ti, Zr, V, etc.).", "title": "Chemical classification of carbides" }, { "paragraph_id": 19, "text": "In addition to the carbides, other groups of related carbon compounds exist.", "title": "Related materials" } ]
In chemistry, a carbide usually describes a compound composed of carbon and a metal. In metallurgy, carbiding or carburizing is the process for producing carbide coatings on a metal piece.
2002-01-13T20:46:45Z
2023-11-08T06:55:49Z
[ "Template:Reflist", "Template:Cite book", "Template:Cite journal", "Template:Monatomic anion compounds", "Template:Short description", "Template:CO2", "Template:Ullmann", "Template:Greenwood&Earnshaw1st", "Template:Inorganic compounds of carbon", "Template:Authority control", "Template:About", "Template:Chem2" ]
https://en.wikipedia.org/wiki/Carbide
7,740
Charles C. Krulak
Charles Chandler Krulak (born March 4, 1942) is a retired United States Marine Corps four-star general who served as the 31st Commandant of the Marine Corps from July 1, 1995 to June 30, 1999. He is the son of Lieutenant General Victor H. "Brute" Krulak, who served in World War II, Korea, and Vietnam. He was the 13th President of Birmingham-Southern College after his stint as a non-executive director of English association football club Aston Villa. Krulak was born in Quantico, Virginia, on March 4, 1942, the son of Amy (née Chandler) and Victor H. Krulak. He graduated from Phillips Exeter Academy in Exeter, New Hampshire, in 1960, where he was classmates with novelist John Irving. Krulak then attended the United States Naval Academy, graduating in 1964 with a bachelor's degree. Krulak also holds a master's degree in labor relations from George Washington University (1973). He is a graduate of the Amphibious Warfare School (1968); the Army Command and General Staff College (1976); and the National War College (1982). After his commissioning and graduation from The Basic School at Marine Corps Base Quantico, Krulak held a variety of command and staff positions. His command positions included: commanding officer of a platoon and two rifle companies during two tours of duty in Vietnam; commanding officer of Special Training Branch and Recruit Series at Marine Corps Recruit Depot San Diego, California (1966–1968); commanding officer of Counter-Guerilla Warfare School, Northern Training Area on Okinawa (1970); company officer at the United States Naval Academy (1970–1973); commanding officer of the Marine Barracks at Naval Air Station North Island, California (1973–1976); and commanding officer, 3rd Battalion, 3rd Marines (1983–1985). Krulak's staff assignments included: operations officer, 2nd Battalion, 9th Marines (1977–1978); chief of the Combat Arms Monitor Section at Headquarters Marine Corps, Washington, D.C. (1978–1979); executive assistant to the Director of Personnel Management, Headquarters Marine Corps (1979–1981); Plans Office, Fleet Marine Forces Pacific, Camp H.M. Smith, Hawaii (1982–1983); executive officer, 3rd Marine Regiment, 1st Marine Expeditionary Brigade; assistant chief of staff, maritime pre-positioning ships, 1st MEB; assistant chief of staff for operations, 1st Marine Expeditionary Brigade; and the military assistant to the assistant secretary of defense for command, control, communications and intelligence, Office of the Secretary of Defense. Krulak was assigned duty as the deputy director of the White House Military Office in September 1987. While serving in this capacity, he was selected for promotion to brigadier general in November 1988. He was advanced to that grade on June 5, 1989, and assigned duties as the commanding general, 10th MEB/Assistant division commander, 2nd Marine Division, Fleet Marine Force Atlantic, at Marine Corps Base Camp Lejeune, North Carolina on July 10, 1989. On June 1, 1990, he assumed duties as the commanding general, 2nd Force Service Support Group/Commanding general, 6th Marine Expeditionary Brigade, Fleet Marine Force Atlantic and commanded the 2d FSSG during the Gulf War. He served in this capacity until July 12, 1991, and was assigned duty as assistant deputy chief of staff for manpower and reserve affairs (Personnel Management/Personnel Procurement), Headquarters Marine Corps on August 5, 1991. He was advanced to major general on March 20, 1992. 
Krulak was assigned as commanding general, Marine Corps Combat Development Command, Quantico, on August 24, 1992, and was promoted to lieutenant general on September 1, 1992. On July 22, 1994, he was assigned as commander of Marine Forces Pacific/commanding general, Fleet Marine Force Pacific, and in March 1995 he was nominated to serve as the Commandant of the Marine Corps. On June 29, he was promoted to general and assumed duties as the 31st commandant on June 30, 1995. He was relieved on June 30, 1999, by General James L. Jones. In 1997, Krulak became a Life Member of the Sons of the Revolution in the State of California. Citation: The President of the United States of America takes pleasure in presenting the Silver Star to Captain Charles Chandler Krulak, United States Marine Corps, for conspicuous gallantry and intrepidity in action while serving as Commanding Officer of Company L, Third Battalion, Third Marines, Third Marine Division, during combat operations against the enemy in the Republic of Vietnam. On 3 June 1969, during Operation Virginia Ridge, Company L was occupying ambush positions near the Demilitarized Zone west of Con Thien when the Marines came under a heavy volume of mortar fire and sustained several casualties. Although seriously wounded himself, Captain Krulak unhesitatingly left his covered position and, thinking only of the welfare of his men, fearlessly maneuvered across the fire-swept terrain to ensure that his Marines were in effective defensive locations and capable of repelling an expected ground attack. Shortly after the initial mortar attack, the Company was subjected to a second intense mortar barrage. Realizing that the determined enemy soldiers had accurate range on the Marine emplacements, and unwilling to incur additional casualties, he commenced maneuvering his men to an alternate location. Simultaneously, undaunted by the fierce barrage, Captain Krulak fearlessly moved to a dangerously exposed vantage point from which he pinpointed the principal sources of hostile fire and skillfully coordinated fixed-wing air strikes and supporting artillery fire on the enemy positions, silencing the fire. By this time, both the platoon commander and a platoon sergeant of one of his platoons had been seriously wounded. After repeatedly exposing himself to the relentless fire to supervise the evacuation of the casualties, he then personally led the platoon back to the main body of his Company across 3,000 meters of rugged mountain terrain to another patrol base and, although weak from loss of blood and the pain of his injuries, steadfastly refused medical evacuation until the arrival of another officer on the following morning. By his courage, dynamic leadership, and inspiring devotion to duty in the face of grave personal danger, Captain Krulak minimized Marine casualties and upheld the highest traditions of the Marine Corps and of the United States Naval Service. Krulak received the Golden Plate Award of the American Academy of Achievement in 1996. The Golden Plate was presented by Awards Council member and Chairman of the Joint Chiefs of Staff, General John M. Shalikashvili, USA. Krulak joined MBNA America in September 1999 as chief administrative officer, responsible for personnel, benefits, compensation, education, and other administrative services. Krulak has served as the Senior Vice Chairman and Chief Executive Officer of MBNA Europe (2001–2005) and was based at the Chester campus in the UK. 
He was the executive vice chairman and chief administrative officer of MBNA Corporation (2004–2005). He retired from MBNA in 2005. Following the takeover of English football club Aston Villa by MBNA Chairman Randy Lerner in August 2006, Krulak joined the board of Aston Villa as a non-executive director on September 19, 2006, where he posted on several fan forums. Krulak was generally referred to as "The General" by fans on these boards. Krulak also serves on the boards of ConocoPhillips, Freeport-McMoran (formerly known as Phelps Dodge Corporation) and Union Pacific Corporation. In addition, he serves on the advisory council of Hope For The Warriors, a national non-profit dedicated to providing a full cycle of non-medical care to combat wounded service members, their families, and families of the fallen from each military branch. Krulak was elected as the 13th President of Birmingham–Southern College in Birmingham, Alabama on March 21, 2011, and retired June 1, 2015. He received an honorary doctorate of Humane Letters from Birmingham-Southern College. The Krulak Institute for Leadership, Experiential Learning, and Civic Engagement at Birmingham-Southern College is named for him. Krulak was the Vice Chair of the Sweet Briar College Board of Directors. He joined the Board in the summer of 2015. General Krulak's decorations and medals include: Krulak famously referred to the "Strategic Corporal" and the Three Block War as two of the key lessons identified from the deployments in Somalia, Haiti and Bosnia. These concepts are still considered vital in understanding the increasing complexity of modern battlefields. Krulak explained some of his warfighting philosophy in an interview with Tom Clancy in Clancy's nonfiction book Marine. Clancy referred to Krulak as "Warrior Prince of the Corps." Krulak also rewrote the Marine Corps' basic combat study text, MCDP 1: Warfighting, incorporating his theories on operations in the modern battlefield. Krulak is married to Zandi Meyers from Annapolis. They have two sons: CAPT David C. Krulak, the former Commanding Officer for Naval Hospital Okinawa, Japan, and Dr. Todd C. Krulak, PhD, a retired freelance rave DJ who is now a professor at Samford University; and five grandchildren: Capt Brian Krulak (USMC), Katie, Mary, Matthew, and Charles. He is the son of Lieutenant General Victor H. Krulak Sr., and the younger brother of Commander Victor H. Krulak Jr., Navy Chaplain Corps, and Colonel William Krulak, United States Marine Corps Reserve. Krulak's godfather was USMC general Holland McTyeire "Howlin' Mad" Smith. This article incorporates public domain material from websites or documents of the United States Marine Corps.
[ { "paragraph_id": 0, "text": "Charles Chandler Krulak (born March 4, 1942) is a retired United States Marine Corps four-star general who served as the 31st Commandant of the Marine Corps from July 1, 1995 to June 30, 1999. He is the son of Lieutenant General Victor H. \"Brute\" Krulak, who served in World War II, Korea, and Vietnam. He was the 13th President of Birmingham-Southern College after his stint as a non-executive director of English association football club Aston Villa.", "title": "" }, { "paragraph_id": 1, "text": "Krulak was born in Quantico, Virginia, on March 4, 1942, the son of Amy (née Chandler) and Victor H. Krulak. He graduated from Phillips Exeter Academy in Exeter, New Hampshire, in 1960, where he was classmates with novelist John Irving. Krulak then attended the United States Naval Academy, graduating in 1964 with a bachelor's degree. Krulak also holds a master's degree in labor relations from George Washington University (1973). He is a graduate of the Amphibious Warfare School (1968); the Army Command and General Staff College (1976); and the National War College (1982).", "title": "Early life and education" }, { "paragraph_id": 2, "text": "After his commissioning and graduation from The Basic School at Marine Corps Base Quantico, Krulak held a variety of command and staff positions. His command positions included: commanding officer of a platoon and two rifle companies during two tours of duty in Vietnam; commanding officer of Special Training Branch and Recruit Series at Marine Corps Recruit Depot San Diego, California (1966–1968); commanding officer of Counter-Guerilla Warfare School, Northern Training Area on Okinawa (1970), Company officer at the United States Naval Academy (1970–1973); commanding officer of the Marine Barracks at Naval Air Station North Island, California (1973–1976), and commanding officer, 3rd Battalion, 3rd Marines (1983–1985).", "title": "Marine career" }, { "paragraph_id": 3, "text": "Krulak's staff assignments included: operations officer, 2nd Battalion, 9th Marines (1977–1978); chief of the Combat Arms Monitor Section at Headquarters Marine Corps, Washington, D.C. (1978–1979); executive assistant to the Director of Personnel Management, Headquarters Marine Corps (1979–1981); Plans Office, Fleet Marine Forces Pacific, Camp H.M. Smith, Hawaii (1982–1983); executive officer, 3rd Marine Regiment, 1st Marine Expeditionary Brigade; assistant chief of staff, maritime pre-positioning ships, 1st MEB; assistant chief of staff for operations, 1st Marine Expeditionary Brigade; and the military assistant to the assistant secretary of defense for command, control, communications and intelligence, Office of the Secretary of Defense.", "title": "Marine career" }, { "paragraph_id": 4, "text": "Krulak was assigned duty as the deputy director of the White House Military Office in September 1987. While serving in this capacity, he was selected for promotion to brigadier general in November 1988. He was advanced to that grade on June 5, 1989, and assigned duties as the commanding general, 10th MEB/Assistant division commander, 2nd Marine Division, Fleet Marine Force Atlantic, at Marine Corps Base Camp Lejeune, North Carolina on July 10, 1989. On June 1, 1990, he assumed duties as the commanding general, 2nd Force Service Support Group Group/Commanding general, 6th Marine Expeditionary Brigade, Fleet Marine Force Atlantic and commanded the 2d FSSG during the Gulf War. 
He served in this capacity until July 12, 1991, and was assigned duty as assistant deputy chief of staff for manpower and reserve affairs (Personnel Management/Personnel Procurement), Headquarters Marine Corps on August 5, 1991. He was advanced to major general on March 20, 1992. Krulak was assigned as commanding general, Marine Corps Combat Development Command, Quantico, on August 24, 1992, and was promoted to lieutenant general on September 1, 1992. On July 22, 1994, he was assigned as commander of Marine Forces Pacific/commanding general, Fleet Marine Force Pacific, and in March 1995 he was nominated to serve as the Commandant of the Marine Corps. On June 29, he was promoted to general and assumed duties as the 31st commandant on June 30, 1995. He was relieved on June 30, 1999, by General James L. Jones.", "title": "Marine career" }, { "paragraph_id": 5, "text": "In 1997, Krulak became a Life Member of the Sons of the Revolution in the State of California.", "title": "Marine career" }, { "paragraph_id": 6, "text": "Citation:", "title": "Marine career" }, { "paragraph_id": 7, "text": "The President of the United States of America takes pleasure in presenting the Silver Star to Captain Charles Chandler Krulak, United States Marine Corps, for conspicuous gallantry and intrepidity in action while serving as Commanding Officer of Company L, Third Battalion, Third Marines, Third Marine Division, during combat operations against the enemy in the Republic of Vietnam. On 3 June 1969, during Operation Virginia Ridge, Company L was occupying ambush positions near the Demilitarized Zone west of Con Thien when the Marines came under a heavy volume of mortar fire and sustained several casualties. Although seriously wounded himself, Captain Krulak unhesitatingly left his covered position and, thinking only of the welfare of his men, fearlessly maneuvered across the fire-swept terrain to ensure that his Marines were in effective defensive locations and capable of repelling an expected ground attack. Shortly after the initial mortar attack, the Company was subjected to a second intense mortar barrage. Realizing that the determined enemy soldiers had accurate range on the Marine emplacements, and unwilling to incur additional casualties, he commenced maneuvering his men to an alternate location. Simultaneously, undaunted by the fierce barrage, Captain Krulak fearlessly moved to a dangerously exposed vantage point from which he pinpointed the principal sources of hostile fire and skillfully coordinated fixed-wing air strikes and supporting artillery fire on the enemy positions, silencing the fire. By this time, both the platoon commander and a platoon sergeant of one of his platoons had been seriously wounded. After repeatedly exposing himself to the relentless fire to supervise the evacuation of the casualties, he then personally led the platoon back to the main body of his Company across 3,000 meters of rugged mountain terrain to another patrol base and, although weak from loss of blood and the pain of his injuries, steadfastly refused medical evacuation until the arrival of another officer on the following morning. By his courage, dynamic leadership, and inspiring devotion to duty in the face of grave personal danger, Captain Krulak minimized Marine casualties and upheld the highest traditions of the Marine Corps and of the United States Naval Service.", "title": "Marine career" }, { "paragraph_id": 8, "text": "Krulak received the Golden Plate Award of the American Academy of Achievement in 1996. 
The Golden Plate was presented by Awards Council member and Chairman of the Joint Chiefs of Staff, General John M. Shalikashvili, USA.", "title": "Personal life" }, { "paragraph_id": 9, "text": "Krulak joined MBNA America in September 1999 as chief administrative officer, responsible for personnel, benefits, compensation, education, and other administrative services. Krulak has served as the Senior Vice Chairman and Chief Executive Officer of MBNA Europe (2001–2005) and was based at the Chester campus in the UK. He was the executive vice chairman and chief administrative officer of MBNA Corporation (2004–2005). He retired from MBNA in 2005.", "title": "Personal life" }, { "paragraph_id": 10, "text": "Following the takeover of English football club Aston Villa by MBNA Chairman Randy Lerner in August 2006, Krulak joined the board of Aston Villa as a non-executive director on September 19, 2006, where he posted on several fan forums. Krulak was generally referred to as \"The General\" by fans on these boards.", "title": "Personal life" }, { "paragraph_id": 11, "text": "Krulak also serves on the boards of ConocoPhillips, Freeport-McMoran (formerly known as Phelps Dodge Corporation) and Union Pacific Corporation. In addition, he serves on the advisory council of Hope For The Warriors, a national non-profit dedicated to providing a full cycle of non-medical care to combat wounded service members, their families, and families of the fallen from each military branch.", "title": "Personal life" }, { "paragraph_id": 12, "text": "Krulak was elected as the 13th President of Birmingham–Southern College in Birmingham, Alabama on March 21, 2011, and retired June 1, 2015. He received an honorary doctorate of Humane Letters from Birmingham-Southern College. The Krulak Institute for Leadership, Experiential Learning, and Civic Engagement at Birmingham-Southern College is named for him.", "title": "Personal life" }, { "paragraph_id": 13, "text": "Krulak was the Vice Chair of the Sweet Briar College Board of Directors. He joined the Board in the summer of 2015.", "title": "Personal life" }, { "paragraph_id": 14, "text": "General Krulak's decorations and medals include:", "title": "Awards and decorations" }, { "paragraph_id": 15, "text": "Krulak famously referred to the \"Strategic Corporal\" and the Three Block War as two of the key lessons identified from the deployments in Somalia, Haiti and Bosnia. These concepts are still considered vital in understanding the increasing complexity of modern battlefields.", "title": "Legacy" }, { "paragraph_id": 16, "text": "Krulak explained some of his warfighting philosophy in an interview with Tom Clancy in Clancy's nonfiction book Marine. Clancy referred to Krulak as \"Warrior Prince of the Corps.\" Krulak also rewrote the Marine Corps' basic combat study text, MCDP 1: Warfighting, incorporating his theories on operations in the modern battlefield.", "title": "Legacy" }, { "paragraph_id": 17, "text": "Krulak is married to Zandi Meyers from Annapolis. They have two sons: CAPT David C. Krulak, the former Commanding Officer for Naval Hospital Okinawa, Japan, and Dr. Todd C. Krulak, PhD, a retired freelance rave DJ who is now a professor at Samford University; and five grandchildren: Capt Brian Krulak (USMC), Katie, Mary, Matthew, and Charles. He is the son of Lieutenant General Victor H. Krulak Sr., and the younger brother of Commander Victor H. Krulak Jr., Navy Chaplain Corps, and Colonel William Krulak, United States Marine Corps Reserve. 
Krulak's godfather was USMC general Holland McTyeire \"Howlin' Mad\" Smith.", "title": "Family" }, { "paragraph_id": 18, "text": "This article incorporates public domain material from websites or documents of the United States Marine Corps.", "title": "References" } ]
Charles Chandler Krulak is a retired United States Marine Corps four-star general who served as the 31st Commandant of the Marine Corps from July 1, 1995 to June 30, 1999. He is the son of Lieutenant General Victor H. "Brute" Krulak, who served in World War II, Korea, and Vietnam. He was the 13th President of Birmingham-Southern College after his stint as a non-executive director of English association football club Aston Villa.
2002-01-13T20:51:40Z
2023-12-30T21:25:13Z
[ "Template:Infobox military person", "Template:Cite news", "Template:S-mil", "Template:S-end", "Template:Ribbon devices", "Template:Dead link", "Template:Marine Corps", "Template:Portal", "Template:Cite journal", "Template:S-start", "Template:US Marine Corps navbox", "Template:CMC", "Template:Authority control", "Template:Short description", "Template:Nee", "Template:Cite web", "Template:Citation", "Template:C-SPAN", "Template:S-ttl", "Template:Reflist", "Template:S-bef", "Template:S-aft" ]
https://en.wikipedia.org/wiki/Charles_C._Krulak
7,742
Compaq
Compaq Computer Corporation (sometimes abbreviated to CQ prior to the 2007 rebranding) was an American information technology company founded in 1982 that developed, sold, and supported computers and related products and services. Compaq produced some of the first IBM PC compatible computers, being the second company after Columbia Data Products to legally reverse engineer the BIOS of the IBM Personal Computer. It rose to become the largest supplier of PC systems during the 1990s before being overtaken by Dell in 2001. Struggling to keep up in the price wars against Dell, as well as with a risky acquisition of DEC, Compaq was acquired for US$25 billion by HP in 2002. The Compaq brand remained in use by HP for lower-end systems until 2013, when it was discontinued. Since 2013, the brand has been licensed to third parties for use on electronics in Brazil and India. The company was formed by Rod Canion, Jim Harris, and Bill Murto, all of whom were former Texas Instruments senior managers. Murto (SVP of sales) departed Compaq in 1987, while Canion (president and CEO) and Harris (SVP of engineering) left under a shakeup in 1991, which saw Eckhard Pfeiffer appointed president and CEO. Pfeiffer served through the 1990s. Ben Rosen provided the venture capital financing for the fledgling company and served as chairman of the board for 17 years from 1983 until September 28, 2000, when he retired and was succeeded by Michael Capellas, who served as the last chairman and CEO until its merger with HP. Prior to its merger, the company was headquartered in northwest unincorporated Harris County, Texas, which now continues as HP's largest United States facility. Compaq was founded in February 1982 by Rod Canion, Jim Harris, and Bill Murto, three senior managers from semiconductor manufacturer Texas Instruments. The three had left due to a loss of confidence in TI's management, and initially considered but ultimately decided against starting a chain of Mexican restaurants. Each invested $1,000 to form the company, which was founded with the temporary name Gateway Technology. The name "COMPAQ" was said to be derived from "Compatibility and Quality", but this explanation was an afterthought. The name was chosen from many suggested by Ogilvy & Mather, it being the name least rejected. The first Compaq PC was sketched out on a placemat by Ted Papajohn while dining with the founders in a pie shop (named House of Pies in Houston). Their first venture capital came from Benjamin M. Rosen and Sevin Rosen Funds, who helped the fledgling company secure $1.5 million to produce their initial computer. Overall, the founders managed to raise $25 million from venture capitalists, which gave stability to the new company as well as providing assurances to the dealers or middlemen. Unlike many startups, Compaq differentiated its offerings from the many other IBM PC clones by not focusing mainly on price, but instead concentrating on new features, such as portability and better graphics displays as well as performance—and all at prices comparable to those of IBM's PCs. In contrast to Dell and Gateway 2000, Compaq hired veteran engineers with an average of 15 years' experience, which lent credibility to Compaq's reputation for reliability among customers. Due to its partnership with Intel, Compaq was able to maintain a technological lead in the marketplace, as it was the first to bring out computers based on each new generation of Intel processor. 
Under Canion's direction, Compaq sold computers only through dealers to avoid potential competition that a direct sales channel would foster, which helped foster loyalty among resellers. By giving dealers considerable leeway in pricing Compaq's offerings, either a significant markup for more profits or discount for more sales, dealers had a major incentive to advertise Compaq. During its first year of sales (second year of operation), the company sold 53,000 PCs for sales of $111 million, the first start-up to hit the $100 million mark that fast. Compaq went public in 1983 on the NYSE and raised $67 million. In 1984, it enjoyed record sales of $329 million from 150,000 PCs. In 1985, sales reached $504 million, and in 1986 Compaq became the youngest-ever firm to make the Fortune 500. In 1987, Compaq hit the $1 billion revenue mark, taking the least amount of time to reach that milestone. By 1991, Compaq held fifth place in the PC market with $3 billion in sales that year. Two key marketing executives in Compaq's early years, Jim D'Arezzo and Sparky Sparks, had come from IBM's PC Group. Other key executives responsible for the company's meteoric growth in the late 1980s and early 1990s were Ross A. Cooley, another former IBM associate, who served for many years as SVP of GM North America; Michael Swavely, who was the company's chief marketing officer in the early years, and eventually ran the North America organization, later passing along that responsibility to Cooley when Swavely retired. In the United States, Brendan A. "Mac" McLoughlin (another long-time IBM executive) led the company's field sales organization after starting up the Western U.S. Area of Operations. These executives, along with other key contributors, including Kevin Ellington, Douglas Johns, Steven Flannigan, and Gary Stimac, helped the company compete against the IBM Corporation in all personal computer sales categories, after many predicted that none could compete with the behemoth. The soft-spoken Canion was popular with employees and the culture that he built helped Compaq to attract the best talent. Instead of headquartering the company in a downtown Houston skyscraper, Canion chose a West Coast-style campus surrounded by forests, where every employee had similar offices and no one (not even the CEO) had a reserved parking spot. At semi-annual meetings, turnout was high as any employee could ask questions of senior managers. In 1987, company co-founder Bill Murto resigned to study at a religious education program at the University of St. Thomas. Murto had helped to organize the company's marketing and authorized-dealer distribution strategy, and had held the post of senior vice president of sales since June 1985. Murto was succeeded by Ross A. Cooley, director of corporate sales. Cooley would report to Michael S. Swavely, vice president for marketing, who was given increased responsibility and the title of vice president for sales and marketing. In November 1982, Compaq announced its first product, the Compaq Portable, a portable IBM PC compatible personal computer. It was released in March 1983 at $2,995. The Compaq Portable was one of the progenitors of today's laptop; some called it a "suitcase computer" for its size and the look of its case. It was the second IBM PC compatible, being capable of running all software that would run on an IBM PC. It was a commercial success, selling 53,000 units in its first year and generating $111 million in sales revenue. 
The Compaq Portable was the first in the range of the Compaq Portable series. Compaq was able to market a legal IBM clone because IBM mostly used "off the shelf" parts for its PC. Furthermore, Microsoft had kept the right to license MS-DOS, the most popular and de facto standard operating system for the IBM PC, to other computer manufacturers. The only part which had to be duplicated was the BIOS, which Compaq did legally by using clean room design at a cost of $1 million. Unlike other companies, Compaq did not bundle application software with its computers. Vice President of Sales and Service H. L. Sparks said in early 1984: "We've considered it, and every time we consider it we reject it. I don't believe and our dealer network doesn't believe that bundling is the best way to merchandise those products. You remove the freedom from the dealers to really merchandise when you bundle in software. It is perceived by a lot of people as a marketing gimmick. You know, when you advertise a $3,000 computer with $3,000 worth of free software, it obviously can't be true. The software should stand on its merits and be supported and so should the hardware. Why should you be constrained to use the software that comes with a piece of hardware? I think it can tend to inhibit sales over the long run." Compaq instead emphasized PC compatibility; in May 1983, Future Computing ranked Compaq among the "Best" examples of it. "Many industry observers think [Compaq] is poised for meteoric growth", The New York Times reported in March of that year. By October, when the company announced the Compaq Plus with a 10 MB hard drive, PC Magazine wrote of "the reputation for compatibility it built with its highly regarded floppy disk portable". Compaq's computers remained the most compatible PC clones into 1984, and the company maintained its reputation for compatibility for years, even as clone BIOSes became available from Phoenix Technologies and other companies that also reverse engineered IBM's design, then sold their versions to clone manufacturers. On June 28, 1984, Compaq released the Deskpro, a 16-bit desktop computer using an Intel 8086 microprocessor running at 7.14 MHz. It was considerably faster than an IBM PC and was, like the original Compaq Portable, also capable of running IBM software. It was Compaq's first non-portable computer and began the Deskpro line of computers. In 1986, Compaq introduced the Deskpro 386, the first PC based on Intel's new 80386 microprocessor. Bill Gates of Microsoft later said: "The folks at IBM didn't trust the 386. They didn't think it would get done. So we encouraged Compaq to go ahead and just do a 386 machine. That was the first time people started to get a sense that it wasn't just IBM setting the standards, that this industry had a life of its own, and that companies like Compaq and Intel were in there doing new things that people should pay attention to." The Compaq 386 computer marked the first CPU change to the PC platform that was not initiated by IBM. An IBM-made 386 machine reached the market almost a year later, but by that time Compaq was the 386 supplier of choice and IBM had lost some of its prestige. For the first three months after its announcement, the Deskpro 386 shipped with Windows/386, a version of Windows 2.1 adapted for the 80386 processor; support for the processor's virtual 8086 mode was added by Compaq engineers. (Windows, running on top of the MS-DOS operating system, would not become a popular "operating environment" until at least the release of Windows 3.0 in 1990.)
Compaq's technical leadership and its rivalry with IBM were emphasized when the SystemPro server was launched in late 1989. This was a true server product, with standard support for a second CPU and RAID, and also the first product to feature the EISA bus, designed in reaction to IBM's MCA (Micro Channel Architecture), which was incompatible with the original AT bus. Although Compaq had become successful by being 100 percent IBM-compatible, it decided to continue with the original AT bus, which it renamed ISA, instead of licensing IBM's MCA. Prior to developing EISA, Compaq had invested significant resources into reverse engineering MCA, but its executives correctly calculated that the $80 billion already spent by corporations on IBM-compatible technology would make it difficult for even IBM to force manufacturers to adopt the new MCA design. Instead of cloning MCA, Compaq led an alliance with Hewlett Packard and seven other major manufacturers, known collectively as the "Gang of Nine", to develop EISA. Development of a truly mobile successor to the Portable line began in 1986, with the company releasing two stopgap products in the meantime: the SLT (Compaq's first laptop) and the Compaq Portable III (a lighter-weight, lunchbox-sized entry in the Portable line). In 1989, Compaq introduced the LTE, its first notebook-sized laptop, which competed with NEC's UltraLite and Zenith Data Systems' MinisPort. However, whereas the UltraLite and MinisPort failed to gain much uptake due to their novel but nonstandard data storage technologies, the LTE succeeded on account of its use of a conventional floppy drive and spinning hard drive, allowing users to transfer data to and from their desktop computers without any hassle. Compaq also began offering docking stations with the release of the LTE/386s in 1990, providing performance comparable to then-current desktop machines. Thus, the LTE was the first commercially successful notebook computer, helping launch the burgeoning industry. It was a direct influence on both Apple and IBM in the development of their own notebook computers, the PowerBook and ThinkPad, respectively. By 1989, The New York Times wrote that being the first to release an 80386-based personal computer had made Compaq the leader of the industry and "hurt no company more - in prestige as well as dollars - than" IBM. The company was so influential that observers and its executives spoke of "Compaq compatible". InfoWorld reported that "In the [ISA market] Compaq is already IBM's equal in being seen as a safe bet", quoting a sell-side analyst describing it as "now the safe choice in personal computers". Even rival Tandy Corporation acknowledged Compaq's leadership, stating that within the Gang of Nine, "when you have 10 people sit down before a table to write a letter to the president, someone has to write the letter. Compaq is sitting down at the typewriter". Michael S. Swavely, president of Compaq's North American division since May 1989, took a six-month sabbatical in January 1991 (which eventually became retirement, effective July 12, 1991). Eckhard Pfeiffer, then president of Compaq International, was named to succeed him. Pfeiffer also received the title of chief operating officer, with responsibility for the company's operations on a worldwide basis, so that Canion could devote more time to strategy.
Swavely's abrupt departure in January led to rumors of turmoil in Compaq's executive suite, including friction between Canion and Swavely, likely because Swavely's rival Pfeiffer had received the number-two leadership position. Swavely's U.S. marketing organization was losing ground, with only 4% growth for Compaq versus 7% in the market, likely due to short supplies of the LTE 386s from component shortages, rivals that undercut Compaq's prices by as much as 35%, and large customers who did not like Compaq's dealer-only policy. Pfeiffer became president and CEO of Compaq later that year, as a result of a boardroom coup led by board chairman Ben Rosen that forced co-founder Rod Canion to resign as president and CEO. Pfeiffer had joined Compaq from Texas Instruments and established operations from scratch in both Europe and Asia. Given US$20,000 to start up Compaq Europe, he opened Compaq's first overseas office in Munich in 1984. By 1990, Compaq Europe was a $2 billion business and number two behind IBM in that region, and foreign sales contributed 54 percent of Compaq's revenues. While transplanting Compaq's U.S. strategy of dealer-only distribution to Europe, Pfeiffer was more selective in signing up dealers than Compaq had been in the U.S., such that European dealers were more qualified to handle its increasingly complex products. During the 1980s, under Canion's direction, Compaq had focused on engineering, research, and quality control, producing high-end, high-performance machines with high profit margins that allowed Compaq to continue investing in engineering and next-generation technology. This strategy was successful, as Compaq was considered a trusted brand while many other IBM clones were distrusted due to poor reliability. However, by the end of the 1980s, many manufacturers had improved their quality and were able to produce inexpensive PCs with off-the-shelf components, incurring none of the R&D costs, which allowed them to undercut Compaq's expensive computers. Faced with lower-cost rivals such as Dell, AST Research, and Gateway 2000, Compaq suffered a $71 million quarterly loss in 1991, its first loss as a company, while its stock dropped by over two-thirds. An analyst stated that "Compaq has made a lot of tactical errors in the last year and a half. They were trend-setters, now they are lagging". Canion initially believed that the early-1990s recession was responsible for Compaq's declining sales and insisted that they would recover once the economy improved; however, Pfeiffer's observation of the European market indicated that the real problem was competition, as rivals could match Compaq at a fraction of the cost. Staff had ballooned at the Houston headquarters despite falling U.S. sales, while the number of non-U.S. employees had stayed constant; under pressure from Compaq's board to control costs, the company made its first-ever layoffs (1,400 employees, 12% of its workforce), and Pfeiffer was promoted to EVP and COO. Rosen and Canion had disagreed about how to counter the cheaper Asian PC imports: Canion wanted Compaq to build lower-cost PCs with components developed in-house in order to preserve Compaq's reputation for engineering and quality, while Rosen believed that Compaq needed to buy standard components from suppliers and reach the market faster.
While Canion developed an 18-month plan to create a line of low-priced computers, Rosen sent his own Compaq engineering team to Comdex without Canion's knowledge and discovered that a low-priced PC could be made in half the time and at lower cost than Canion's initiative envisioned. It was also believed that Canion's consensus-style management slowed the company's ability to react in the market, whereas Pfeiffer's autocratic style was suited to price and product competition. Rosen initiated a 14-hour board meeting, and the directors also interviewed Pfeiffer for several hours without informing Canion. At its conclusion, the board was unanimous in picking Pfeiffer over Canion. As Canion was popular with company workers, 150 employees staged an impromptu protest with signs stating "We love you, Rod" and took out a newspaper ad saying "Rod, you are the wind beneath our wings. We love you." Canion declined an offer to remain on Compaq's board and was bitter about his ouster, not speaking to Rosen for years, although their relationship eventually became cordial again. In 1999, Canion admitted that his ouster was justified, saying "I was burned out. I needed to leave. He [Rosen] felt I didn't have a strong sense of urgency". Two weeks after Canion's ouster, five other senior executives resigned, including the remaining company founder, James Harris, then SVP of Engineering. These departures were motivated by enhanced severance or early-retirement packages, as well as imminent demotions, as their functions were to be shifted to vice presidents. Under Pfeiffer's tenure as chief executive, Compaq entered the retail computer market with the Compaq Presario, and in the mid-1990s was one of the first manufacturers to market a sub-$1,000 PC. To maintain the price points it wanted, Compaq became the first first-tier computer manufacturer to utilize CPUs from AMD and Cyrix. The two price wars resulting from Compaq's actions ultimately drove numerous competitors from the market, such as Packard Bell and AST Research. From third place in 1993, Compaq overtook Apple Computer and even surpassed IBM as the top PC manufacturer in 1994, as both IBM and Apple were struggling considerably at that time. Compaq's inventory and gross margins were better than those of its rivals, which enabled it to wage the price wars. Compaq had decided to make a foray into printers in 1989, and the first models were released to positive reviews in 1992. However, Pfeiffer saw that the prospects of taking on market leader Hewlett-Packard (which had 60% market share) were poor, as doing so would force Compaq to devote more funds and people to that project than originally budgeted. Compaq ended up selling the printer business to Xerox and took a charge of $50 million. On June 26, 1995, Compaq reached an agreement with Cisco Systems Inc. to get into networking, including digital modems, routers, and the switches favored by small businesses and corporate departments; networking was by then a $4 billion business and the fastest-growing part of the computer hardware market. Compaq also built up a network engineering and marketing staff. In 1996, despite record sales and profits at Compaq, Pfeiffer initiated a major management shakeup in the senior ranks. John T. Rose, who previously ran Compaq's desktop PC division, took over the corporate server business from SVP Gary Stimac, who had resigned.
Rose had joined Compaq in 1993 from Digital Equipment Corporation, where he had overseen the personal computer division and worldwide engineering, while Stimac had been with Compaq since 1982 and was one of its longest-serving executives. Senior Vice-President for North America Ross Cooley announced his resignation, effective at the end of 1996. CFO Daryl J. White, who had joined the company in January 1983, resigned in May 1996 after eight years as CFO. Michael Winkler, who joined Compaq in 1995 to run its portable computer division, was promoted to general manager of the new PC products group. Earl Mason, hired from Inland Steel effective in May 1996, immediately made an impact as the new CFO. Under Mason's guidance, Compaq utilized its assets more efficiently instead of focusing just on income and profits, which increased Compaq's cash from $700 million to nearly $5 billion in one year. Additionally, Compaq's return on invested capital (after-tax operating profit divided by operating assets) doubled to 50 percent from 25 percent in that period (an illustrative calculation follows below). Compaq had been producing the PC chassis at its plant in Shenzhen, China, to cut costs. In 1996, instead of expanding its own plant, Compaq asked a Taiwanese supplier to set up a new factory nearby to produce the mechanicals, with the Taiwanese supplier owning the inventory until it reached Compaq in Houston. Pfeiffer also introduced a new distribution strategy of building PCs made to order, which would eliminate the stockpile of computers in warehouses and cut component inventory down to two weeks, with the supply chain from supplier to dealer linked by complex software. Vice-President for Corporate Development Kenneth E. Kurtzman assembled five teams to examine Compaq's businesses and assess each unit's strategy and that of key rivals. Kurtzman's teams recommended to Pfeiffer that each business unit had to be first or second in its market within three years, or else Compaq should exit that line. Also, the company should no longer use profits from high-margin businesses to carry marginally profitable ones; instead, each unit had to show a return on investment. Pfeiffer's vision was to make Compaq a full-fledged computer company, moving beyond its main business of manufacturing retail PCs and into the more lucrative business services and solutions at which IBM did well, such as computer servers, which would also require more "customer handholding" from either the dealers or Compaq staff themselves. Unlike IBM and HP, Compaq would not build up field technicians and programmers in-house, as those could be costly assets; instead, Compaq would leverage its partnerships (including those with Andersen Consulting and software maker SAP) to install and maintain corporate systems. This allowed Compaq to compete in the "big-iron market" without incurring the costs of running its own services or software businesses. Most of Compaq's server sales were for systems that would be running Microsoft's Windows NT operating system, and indeed Compaq was the largest hardware supplier for Windows NT. However, some 20 percent of Compaq servers were sold for systems that would be running the Unix operating system. This was exemplified by a strategic alliance formed in 1997 between Compaq and the Santa Cruz Operation (SCO), which was known for its server Unix operating system products on Intel-architecture-based hardware. Compaq was also the largest hardware supplier for SCO's Unix products, and some 10 percent of Compaq's ProLiant servers ran SCO's UnixWare.
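The return-on-invested-capital figure above can be made concrete with a short worked example. The following Python sketch is illustrative only: the ROIC definition (after-tax operating profit divided by operating assets) and the doubling from 25 to 50 percent come from the text, but the dollar amounts are hypothetical placeholders, not Compaq's actual financials.

    # Illustrative ROIC calculation; amounts are hypothetical.
    def roic(after_tax_operating_profit: float, operating_assets: float) -> float:
        """Return on invested capital: after-tax operating profit / operating assets."""
        return after_tax_operating_profit / operating_assets

    # The same profit earned on half the asset base doubles ROIC, which is
    # how more efficient asset utilization (e.g. the two-week component
    # inventory described above) lifts the ratio from 25% to 50%.
    before = roic(after_tax_operating_profit=1.0, operating_assets=4.0)  # 0.25
    after = roic(after_tax_operating_profit=1.0, operating_assets=2.0)   # 0.50
    print(f"ROIC before: {before:.0%}, after: {after:.0%}")

The point of the sketch is that ROIC can rise either by earning more profit or by holding fewer operating assets; the text attributes Compaq's improvement to the latter.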
In January 1998, Compaq was at its height. CEO Pfeiffer boldly predicted that the Microsoft/Intel "Wintel" duopoly would be replaced by "Wintelpaq". Pfeiffer also made several major and some minor acquisitions. In 1997, Compaq bought Tandem Computers, known for its NonStop server line. This acquisition instantly gave Compaq a presence in the higher-end business computing market. The alliance between Compaq and SCO took advantage of this to put out the UnixWare NonStop Clusters product in 1998. Minor acquisitions centered on building a networking arm and included NetWorth (1998), based in Irving, Texas, and Thomas-Conrad (1998), based in Austin, Texas. In 1997, Compaq also acquired Microcom, based in Norwood, Massachusetts, which brought a line of modems, remote access servers (RAS), and the popular Carbon Copy software. In 1998, Compaq acquired Digital Equipment Corporation for a then-industry record of $9.6 billion. The merger made Compaq, at the time, the world's second-largest computer maker in terms of revenue, behind IBM. Digital Equipment, which had nearly twice as many employees as Compaq while generating half the revenue, had been a leading computer company during the 1970s and early 1980s. However, Digital had struggled during the 1990s, with high operating costs; for nine years the company had lost money or barely broken even, and it had recently refocused itself as a "network solutions company". In 1995, Compaq had considered a bid for Digital, but only became seriously interested in 1997, after Digital's major divestments and its refocusing on the Internet. At the time of the acquisition, services accounted for 45 percent of Digital's revenues (about $6 billion), and its gross margins on services averaged 34 percent, considerably higher than Compaq's 25 percent margins on PC sales; the services business also satisfied customers who had demanded more services from Compaq for years. Compaq had originally wanted to purchase only Digital's services business, but that offer was turned down. When the announcement was made, it was initially viewed as a master stroke, as it immediately gave Compaq a 22,000-person global service operation to help corporations handle major technological purchases (by 2001, services made up over 20% of Compaq's revenues, largely due to the Digital employees inherited from the merger), enabling it to compete with IBM. However, it was also a risky merger, as the combined company would have to lay off 2,000 employees from Compaq and 15,000 from Digital, which would potentially hurt morale. Furthermore, Compaq fell behind schedule in integrating Digital's operations, which also distracted the company from its strength in low-end PCs, where it had led the market in rolling out next-generation systems; this let rival Dell grab market share. Reportedly, Compaq had three consulting firms working on the integration of Digital alone. Moreover, Pfeiffer had little vision for what the combined companies should do, or indeed how the three dramatically different cultures could work as a single entity, and Compaq suffered from strategic indecision and lost focus, as a result being caught between the low end and high end of the market. Mark Anderson, president of Strategic News Service, a research firm based in Friday Harbor, Wash., was quoted as saying, "The kind of goals he had sounded good to shareholders – like being a $50 billion company by the year 2000, or to beat I.B.M. – but they didn't have anything to do with customers. The new C.E.O.
should look at everything Eckhard acquired and ask: did the customer benefit from that. If the answer isn't yes, they should get rid of it." On one hand, Compaq had previously dominated the PC market with its price war, but was now struggling against Dell, which sold directly to buyers, avoiding the dealer channel and its markup, and built each machine to order to keep inventories and costs at a minimum. At the same time, Compaq, through its acquisitions of Digital Equipment Corporation in 1998 and Tandem Computers in 1997, had tried to become a major systems company like IBM and Hewlett-Packard. While IBM and HP were able to generate repeat business from corporate customers to drive sales of their different divisions, Compaq had not yet managed to make its newly acquired sales and services organizations work as seamlessly. In early 1998, Compaq had the problem of bloated PC inventories. By summer 1998, Compaq was suffering from product-quality problems. Robert W. Stearns, SVP of Business Development, said, "In [Pfeiffer's] quest for bigness, he lost an understanding of the customer and built what I call empty market share—large but not profitable", while Jim Moore, a technology strategy consultant with GeoPartners Research in Cambridge, Mass., said Pfeiffer "raced to scale without having economies of scale." The "colossus" that Pfeiffer built up was not nimble enough to adapt to the fast-changing computer industry. That year Compaq forecast demand poorly and shipped too many PCs, causing resellers to dump them at fire-sale prices; since Compaq protected resellers from heavy losses, the episode cost it two quarters of operating profits. Pfeiffer also refused to develop a potential successor, rebuffing Rosen's suggestion to recruit a few executives to create the separate position of Compaq president. The board complained that Pfeiffer was too removed from management and the rank-and-file, as he surrounded himself with a "clique" of Chief Financial Officer Earl Mason, Senior Vice-President John T. Rose, and Senior Vice-President of Human Resources Hans Gutsch. Current and former Compaq employees complained that Gutsch was part of a group of senior executives, dubbed the "A team", who controlled access to Pfeiffer. Gutsch was said to be a "master of corporate politics, pitting senior vice presidents against each other and inserting himself into parts of the company that normally would not be under his purview". Gutsch, who oversaw security, had an extensive security system and guard station installed on the eighth floor of CCA-1, where the company's senior vice presidents worked. There were accusations that Gutsch and others sought to divide top management, although this was regarded by others as sour grapes on the part of executives who were shut out of the planning that involved the acquisitions of Tandem Computers and Digital Equipment Corp. Pfeiffer had reduced the size of the group working on those deals due to news leaks, saying "We cut the team down to the minimum number of people—those who would have to be directly involved, and not one person more". Stearns, who as senior vice president for business development had responsibility for mergers and acquisitions, had opposed the acquisition of Digital, believing the cultural differences between the two companies were too great, and complained that he was placed on the "B team" as a result. Compaq entered 1999 with strong expectations.
Fourth-quarter 1998 earnings reported in January 1999 beat expectations by six cents a share, with record 48 percent growth. The company launched Compaq.com as the key to its new direct sales strategy, and planned an IPO for AltaVista toward the end of 1999 in order to capitalize on the dot-com bubble. However, by February 1999, analysts were skeptical of Compaq's plan to sell both direct and through resellers. Compaq was hit with two class-action lawsuits as a result of CFO Earl Mason, SVP John Rose, and other executives selling US$50 million of stock before a conference call with analysts in which they noted that demand for PCs was slowing down. On April 17, 1999, just nine days after Compaq reported first-quarter profit at half of what analysts had expected, the latest in a string of earnings disappointments, Pfeiffer was forced to resign as CEO in a coup led by board chairman Ben Rosen. Reportedly, at the special board meeting held on April 15, 1999, the directors were unanimous in dismissing Pfeiffer. The company's stock had fallen 50 percent since its all-time high in January 1999. Compaq shares, which had traded as high as $51.25 early in 1999, dropped 23 percent on April 12, 1999, the first day of trading after the first-quarter announcement, and closed the following Friday at $23.62. During three of the last six quarters of Pfeiffer's tenure, the company's revenues or earnings had missed expectations. While rival Dell had 55% growth in U.S. PC sales in the first quarter of 1999, Compaq could only manage 10%. Rosen suggested that the accelerating change brought about by the Internet had overtaken Compaq's management team, saying "As a company engaged in transforming its industry for the Internet era, we must have the organizational flexibility necessary to move at Internet speed." In a statement, Pfeiffer said "Compaq has come a long way since I joined the company in 1983" and "under Ben's guidance, I know this company will realize its potential." Rosen's priority was to have Compaq catch up as an e-commerce competitor, and he also moved to streamline operations and reduce the indecision that had plagued the company. Roger Kay, an analyst at International Data Corporation, observed that Compaq's behavior at times seemed like a personal vendetta, noting that "Eckhard has been so obsessed with staying ahead of Dell that they focused too hard on market share and stopped paying attention to profitability and liquidity. They got whacked in a price war that they started." Subsequent earnings releases from Compaq's rivals Dell, Gateway, IBM, and Hewlett-Packard suggested that the problems were not affecting the whole PC industry, as Pfeiffer had claimed. Dell and Gateway sold direct, which helped them avoid Compaq's inventory problems and compete on price without dealer markups; in addition, Gateway sold web access and a broad range of software tailored to small businesses. Hewlett-Packard's PC business faced challenges similar to Compaq's, but these were offset by HP's extremely lucrative printer business, while IBM sold PCs at a loss but used them to lock in multi-year services contracts with customers. After Pfeiffer's resignation, the board established an office of the CEO with a triumvirate of directors: Rosen as interim CEO, and vice chairmen Frank P. Doyle and Robert Ted Enloe III. They began "cleaning house": shortly afterward, many of Pfeiffer's top executives resigned or were pushed out, including John J. Rando, Earl L. Mason, and John T. Rose.
Rando, senior vice president and general manager of Compaq Services, was a key player during the merger discussions and the most senior executive from Digital to remain with Compaq after the acquisition closed, and he had been touted by some as the heir apparent to Pfeiffer. Rando's division had performed strongly, with sales of $1.6 billion for the first quarter compared to $113 million a year earlier, which met expectations, and it was anticipated to post accelerated and profitable growth going forward. At the time of Rando's departure, Compaq Services ranked third behind the services arms of IBM and EDS, while slightly ahead of those of Hewlett-Packard and Andersen Consulting; however, customers were switching from Digital technology-based workstations to those of HP, IBM, and Sun Microsystems. Mason, senior vice president and chief financial officer, had previously been offered the job of chief executive of Alliant Foodservice, Inc., a foodservice distributor based in Chicago, and he informed Compaq's board that he had accepted the offer. Rose, senior vice president and general manager of Compaq's Enterprise Computing group, resigned effective June 3. Rose was reportedly upset that he was not considered for the CEO vacancy, which became apparent once Michael Capellas was named COO. While Enterprise Computing, responsible for the engineering and marketing of network servers, workstations, and data-storage products, reportedly accounted for one-third of Compaq's revenues and likely the largest part of its profits, it was responsible for the earnings shortfall in Q1 of 1999. In addition, Rose was part of the "old guard" close to former CEO Pfeiffer, and he and other Compaq executives had been criticized at the company's annual meeting for selling stock before reporting the sales slowdown. Rose was succeeded by SVP Enrico Pesatori, a Tandem veteran who had previously worked as a senior executive at Olivetti, Zenith Data Systems, Digital Equipment Corporation, and Tandem Computers. Capellas was appointed COO after pressure mounted on Rosen to find a permanent CEO; however, it was reported that potential candidates did not want to work under Rosen as chairman. Around the same time, Pesatori was placed in charge of the newly created Enterprise Solutions and Services Group, making him Compaq's second most powerful executive in operational responsibility after Capellas. Pfeiffer's permanent replacement was Michael Capellas, who had been serving as Compaq's SVP and CIO for under a year. A couple of months after Pfeiffer's ouster, on June 2, 1999, Capellas was elevated to interim chief operating officer, and he was soon appointed president and CEO. Capellas also assumed the title of chairman on September 28, 2000, when Rosen stepped down from the board of directors. At his retirement, Rosen proclaimed "These are great achievements—to create 65,000 jobs, $40 billion in sales and $40 billion in market value, all starting with a sketch and a dream". In 1998, Compaq had signed a new sales and equipment alliance with NaviSite. Under the pact, Compaq agreed to promote and sell NaviSite web hosting services; in return, NaviSite took Compaq as a preferred provider for its storage and Intel-based servers. In November 1999, Compaq began to work with Microsoft to create the first in a line of small-scale, web-based computer systems called MSN Companions.
Capellas was able to restore some of the luster lost in the latter part of the Pfeiffer era, and he repaired the relationship with Microsoft, which had deteriorated under his predecessor's tenure. However, Compaq still struggled against lower-cost competitors with direct sales channels, such as Dell, which took over the top spot among PC manufacturers from Compaq in 2001. Because Compaq relied significantly on reseller channels, their criticism caused it to retreat from its proposed direct sales plan, although Capellas maintained that he would use the middlemen to provide value-added services. Despite falling to No. 2 among PC manufacturers, Capellas proclaimed "We are No. 2 in the traditional PC market, but we're focused on industry leadership in the next generation of Internet access devices and wireless mobility. That's where the growth and the profitability will be." The company's longer-term strategy involved extending its services to servers and storage products, as well as handheld computers such as the iPAQ Pocket PC, which accounted for 11 percent of total unit volume. Compaq struggled as a result of the collapse of the dot-com bubble, which hurt sales of its high-end systems in 2001 and 2002, and it managed only a small profit in a few quarters during these years. It also accumulated $1.7 billion in short-term debt around this time. The stock price of Compaq, which was around $25 when Capellas became CEO, was trading at half that by 2002. In 2002, Compaq signed a merger agreement with Hewlett-Packard for US$24.2 billion, including US$14.45 billion for goodwill, under which each Compaq share would be exchanged for 0.6325 of a Hewlett-Packard share. There would be a termination fee of US$675 million that either company would have to pay the other to break the merger. Compaq shareholders would own 36% of the combined company while HP's would own 64% (an illustrative sketch of this exchange arithmetic follows at the end of this section). Hewlett-Packard had reported yearly revenues of $47 billion, while Compaq's were $40 billion, and the combined company would have been close to IBM's $90 billion in revenues. It was projected to have $2.5 billion in annual cost savings by mid-2004. The expected layoffs at Compaq and HP, 8,500 and 9,000 jobs respectively, would leave the combined company with a workforce of 145,000. The companies would dole out a combined $634.5 million in bonuses to prevent key employees from leaving if shareholders approved the proposed merger, with $370.1 million for HP employees and $264.4 million for Compaq employees. Both companies had to seek approval from their shareholders through separate special meetings. While Compaq shareholders overwhelmingly approved the deal, there was a public proxy battle within HP, as the deal was strongly opposed by numerous large HP shareholders, including the sons of the company founders, Walter Hewlett and David W. Packard, as well as the California Public Employees' Retirement System (CalPERS) and the Ontario Teachers' Pension Plan. Walter Hewlett only reluctantly approved the merger, in his capacity as a member of the board of directors, since the merger agreement "called for unanimous board approval in order to ensure the best possible shareholder reception". While supporters of the merger argued that there would be economies of scale and that the sales of PCs would drive sales of printers and cameras, Walter Hewlett was convinced that PCs were a low-margin but risky business that would contribute little and would likely dilute the old HP's traditionally profitable Imaging and Printing division. David W.
Packard, in his opposition to the deal, "[cited] massive layoffs as an example of this departure from HP's core values...[arguing] that although the founders never guaranteed job security, 'Bill and Dave never developed a premeditated business strategy that treated HP employees as expendable.'" Packard further stated that "[Carly] Fiorina's high-handed management and her efforts to reinvent the company ran counter to the company's core values as established by the founders". The founders' families, who controlled a significant amount of HP shares, were further irked because Fiorina had made no attempt to reach out to them and consult about the merger; instead, they received the same standard roadshow presentation as other investors. Analysts on Wall Street were generally critical of the merger, as both companies had been struggling before the announcement, and the stock prices of both companies dropped in the months after the merger agreement was made public. Rival Dell in particular made gains from defecting HP and Compaq customers who were wary of the merger. Carly Fiorina, initially seen as HP's savior when she was hired as CEO back in 1999, had seen the company's stock price drop to less than half since she assumed the position, and her job was said to be on shaky ground before the merger announcement. HP's offer was regarded by analysts as overvaluing Compaq, due to Compaq's shaky financial performance in recent years (there were rumors that it could have run out of money within 12 months and been forced to cease business operations had it stayed independent), as well as Compaq's own more conservative valuation of its assets. Detractors of the deal noted that buying Compaq was a "distraction" that would not directly help HP take on IBM's breadth or Dell Computer's direct sales model. There were also significant cultural differences between HP, which made decisions by consensus, and Compaq, whose style was rapid and autocratic. One of Compaq's few bright spots was its services business, which was outperforming HP's own services division. The merger was approved by HP shareholders by only the narrowest of margins, and allegations of vote buying (primarily involving an alleged last-second back-room deal with Deutsche Bank) haunted the new company. It was subsequently disclosed that HP had retained Deutsche Bank's investment banking division in January 2002 to assist in the merger. HP had agreed to pay Deutsche Bank $1 million guaranteed, and another $1 million contingent upon approval of the merger. On August 19, 2003, the U.S. SEC charged Deutsche Bank with failing to disclose a material conflict of interest in its voting of client proxies for the merger and imposed a civil penalty of $750,000. Deutsche Bank consented without admitting or denying the findings. Hewlett-Packard announced the completion of the merger on May 3, 2002, and the merged HP-Compaq company was officially launched on May 7. Compaq's pre-merger ticker symbol was CPQ; this was combined with Hewlett-Packard's ticker symbol (HWP) to create the current ticker symbol (HPQ), which was announced on May 6. Capellas, Compaq's last chairman and CEO, became president of the post-merger Hewlett-Packard, under chairman and CEO Carly Fiorina, to ease the integration of the two companies. However, Capellas was reportedly unhappy with his role, said to be underutilized and unlikely to become CEO, as the board supported Fiorina.
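The share-exchange arithmetic referenced above can be sketched in a few lines of Python. This is a hedged illustration, not a record of the actual transaction: the 0.6325 exchange ratio and the approximate 36%/64% ownership split come from the text, while the share counts below are hypothetical round numbers chosen only to show how such a ratio produces such a split.

    # Illustrative sketch of the HP-Compaq share exchange; share counts
    # are hypothetical placeholders, not the companies' actual figures.
    EXCHANGE_RATIO = 0.6325  # HP shares issued per Compaq share (from the text)

    hp_shares = 1_900_000_000      # hypothetical HP shares outstanding
    compaq_shares = 1_700_000_000  # hypothetical Compaq shares outstanding

    new_hp_shares = compaq_shares * EXCHANGE_RATIO
    total_shares = hp_shares + new_hp_shares

    compaq_stake = new_hp_shares / total_shares
    print(f"Former Compaq holders: {compaq_stake:.1%} of the combined company")
    # With these placeholder counts, the stake lands near the 36% cited above.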
Capellas stepped down as HP president on November 12, 2002, after just six months on the job, to become CEO of MCI WorldCom, where he would lead its acquisition by Verizon. Capellas' former role of president was not filled, as the executives who had reported to him then reported directly to the CEO. Fiorina helmed the post-merger HP for nearly three years after Capellas left. HP laid off thousands of former Compaq, DEC, HP, and Tandem employees; its stock price generally declined, and profits did not recover. Several senior executives from the Compaq side, including Jeff Clarke and Peter Blackmore, resigned or were ousted from the post-merger HP. Though the combination of both companies' PC manufacturing capacity initially made the post-merger HP number one, it soon lost the lead, and further market share, to Dell, which squeezed HP on low-end PCs. HP was also unable to compete effectively with IBM in the high-end server market. In addition, the merging of the stagnant Compaq computer assembly business with HP's lucrative printing and imaging division was criticized for obstructing the profitability of the printing/imaging segment. Overall, it has been suggested that the purchase of Compaq was not a good move for HP, due to the narrow profit margins in the commoditized PC business, especially in light of IBM's 2004 announcement that it would sell its PC division to Lenovo. The Inquirer noted that the continued low return on investment and small margins of HP's personal computer manufacturing business, now named the Personal Systems Group, "continues to be what it was in the individual companies, not much more than a job creation scheme for its employees". One of the few positives was Compaq's sales approach and enterprise focus, which influenced the newly combined company's strategy and philosophy. In February 2005, the board of directors ousted Fiorina, with CFO Robert Wayman being named interim CEO. Former Compaq CEO Capellas was mentioned by some as a potential successor, but several months afterwards Mark Hurd was hired as president and CEO of HP. Hurd separated the PC division from the imaging and printing division and renamed it the Personal Systems Group, placing it under the leadership of EVP Todd R. Bradley. Hewlett-Packard's PC business has since been reinvigorated by Hurd's restructuring and now generates more revenue than the traditionally more profitable printers. By late 2006, HP had retaken the No. 1 sales position in PCs from Dell, which struggled with missed estimates and poor quality, and HP held that rank until supplanted by Lenovo in the mid-2010s. Most Compaq products have been re-branded with the HP nameplate, such as the company's market-leading ProLiant server line (now owned by Hewlett Packard Enterprise, which spun off from HP in 2015), while the Compaq brand was repurposed for some of HP's consumer-oriented and budget products, notably Compaq Presario PCs. HP's own business computer line was discontinued in favor of the Compaq Evo line, which was initially rebranded HP Compaq; HP's business machines have since used brands such as EliteBook and ProBook, among others. HP's Jornada PDAs were replaced by Compaq iPAQ PDAs, which were renamed HP iPAQ. Following the merger, all Compaq computers were shipped with HP software. In May 2007, HP announced in a press release a new logo for its Compaq division, to be placed on new-model Compaq Presarios. In 2008, HP reshuffled its business notebook lines. The "Compaq" name in its "HP Compaq" series had originally been used for all of HP's business and budget notebooks.
However, the HP EliteBook line became the top of the business notebook lineup, while the HP Compaq B series became its middle business line. As of early 2009, the "HP ProBook" filled out HP's low-end business lineup. On August 18, 2011, then-CEO of HP Léo Apotheker announced plans for a partial or full spinoff of the Personal Systems Group. The PC unit had the lowest profit margin, although it accounted for nearly a third of HP's overall revenues in 2010. HP was still selling more PCs than any other vendor, shipping 14.9 million PCs in the second quarter of 2011 (17.5% of the market, according to Gartner), while Dell and Lenovo were tied for second place, each with more than a 12% share of the market and shipments of over 10 million units. However, the announcement of the PC spinoff (concurrent with the discontinuation of webOS and the purchase of Autonomy Corp. for $10 billion) was poorly received by the market, and after Apotheker's ouster, plans for a divestiture were cancelled. In March 2012, the printing and imaging division was merged into the PC unit. In October 2012, according to Gartner, Lenovo took the lead from HP as the number-one PC manufacturer, while IDC ranked Lenovo just behind HP. In Q2 2013, Forbes reported that Lenovo ranked ahead of HP as the world's number-one PC supplier. HP discontinued the Compaq brand name in the United States in 2013. Around the same time, Globalk (a Brazil-based retailer and licensing management firm) started a partnership with HP to re-introduce the brand with a new line of desktop and laptop computers. In 2015, Grupo Newsan (an Argentine company) acquired the brand's license, along with a $3 million investment, and developed two new lines of Presario notebooks for the local market over the course of the year. However, Compaq's Argentine web site went offline in March 2019; the last archived copy of the site, made in October 2018, featured the same models introduced in 2016. In 2018, Ossify Industries (an India-based company) entered into a licensing agreement with HP to use the Compaq brand name for the distribution and manufacture of smart TV sets. The Compaq World Headquarters (now HP United States) campus consisted of 80 acres (320,000 m²) of land which contained 15 office buildings, 7 manufacturing buildings, a product conference center, an employee cafeteria, mechanical laboratories, warehouses, and chemical handling facilities. As noted above, rather than headquartering the company in a downtown Houston skyscraper, Canion had chosen a West Coast-style campus surrounded by forests, where every employee had similar offices and no one (not even the CEO) had a reserved parking spot. As it grew, Compaq became so important to Houston that it negotiated the expansion of Highway 249 in the late 1980s, and many other technology companies appeared in what became known as the "249 Corridor". After Canion's ouster, senior vice-president of human resources Hans W. Gutsch oversaw the company's facilities and security; it was in this capacity that he had the extensive security system and guard station installed on the eighth floor of CCA-1, where the company's senior vice presidents had their offices. Eckhard Pfeiffer, as president and CEO, introduced a whole series of executive perks to a company that had always had an egalitarian culture; for instance, he oversaw the construction of an executive parking garage, where previously parking places had never been reserved.
On August 31, 1998, the Compaq Commons was opened on the headquarters campus, featuring a conference center, an employee convenience store, a wellness center, and an employee cafeteria. In 2009, HP sold part of Compaq's former headquarters to the Lone Star College System. Hewlett Packard Buildings #7 & #8, two eight-story reinforced concrete buildings totaling 450,000 square feet, plus a 1,200-car parking garage and a central chiller plant, were all deemed by the college to be too robust and costly to maintain, and so they were demolished by implosion on September 18, 2011. As of January 2013, the site was one of HP's largest campuses, with 7,000 employees in all six of HP's divisions. In 2018, HP announced the sale of the entire former Compaq HQ campus. Compaq originally competed directly against IBM, manufacturing computer systems compatible with the IBM PC, and also competed against Apple Computer. In the 1990s, as IBM's own PC division declined, Compaq faced other IBM PC compatible manufacturers such as Dell, Packard Bell, AST Research, and Gateway 2000. By the mid-1990s, Compaq's price war had enabled it to overtake IBM and Apple, while other IBM PC compatible manufacturers such as Packard Bell and AST were driven out of the market. Dell overtook Compaq and became the number-one supplier of PCs in 2001. At the time of their 2002 merger, Compaq and HP were the second- and third-largest PC manufacturers, so their combination made them number one. However, the combined HP-Compaq struggled and fell to second place behind Dell from 2003 to 2006. Due to Dell's struggles in late 2006, HP led all PC vendors from 2007 to 2012. During its existence as a division of HP, Compaq primarily competed against other budget-oriented personal computer series from manufacturers including Acer, Lenovo, and Toshiba. Most of Compaq's competitors except Dell were later acquired by bigger rivals: Acer bought Gateway 2000 and Packard Bell, while Lenovo absorbed IBM's PC division. From 2013 onwards, Lenovo has been the world leader in PCs. Before its merger with HP, Compaq sponsored the Williams Formula One team when it was powered by BMW engines. HP inherited and continued the sponsorship deal for a few years. Compaq sponsored Queens Park Rangers F.C. for the 1994–95 and 1995–96 seasons.
[ { "paragraph_id": 0, "text": "Compaq Computer Corporation (sometimes abbreviated to CQ prior to the 2007 rebranding) was an American information technology company founded in 1982 that developed, sold, and supported computers and related products and services. Compaq produced some of the first IBM PC compatible computers, being the second company after Columbia Data Products to legally reverse engineer the BIOS of the IBM Personal Computer. It rose to become the largest supplier of PC systems during the 1990s before being overtaken by Dell in 2001. Struggling to keep up in the price wars against Dell, as well as with a risky acquisition of DEC, Compaq was acquired for US$25 billion by HP in 2002. The Compaq brand remained in use by HP for lower-end systems until 2013 when it was discontinued. Since 2013, the brand is currently licensed to third parties for use on electronics in Brazil and India.", "title": "" }, { "paragraph_id": 1, "text": "The company was formed by Rod Canion, Jim Harris, and Bill Murto, all of whom were former Texas Instruments senior managers. Murto (SVP of sales) departed Compaq in 1987, while Canion (president and CEO) and Harris (SVP of engineering) left under a shakeup in 1991, which saw Eckhard Pfeiffer appointed president and CEO. Pfeiffer served through the 1990s. Ben Rosen provided the venture capital financing for the fledgling company and served as chairman of the board for 17 years from 1983 until September 28, 2000, when he retired and was succeeded by Michael Capellas, who served as the last chairman and CEO until its merger with HP.", "title": "" }, { "paragraph_id": 2, "text": "Prior to its merger, the company was headquartered in northwest unincorporated Harris County, Texas, which now continues as HP's largest United States facility.", "title": "" }, { "paragraph_id": 3, "text": "Compaq was founded in February 1982 by Rod Canion, Jim Harris, and Bill Murto, three senior managers from semiconductor manufacturer Texas Instruments. The three of them had left due to lack of faith and loss of confidence in TI's management, and initially considered but ultimately decided against starting a chain of Mexican restaurants. Each invested $1,000 to form the company, which was founded with the temporary name Gateway Technology. The name \"COMPAQ\" was said to be derived from \"Compatibility and Quality\" but this explanation was an afterthought. The name was chosen from many suggested by Ogilvy & Mather, it being the name least rejected. The first Compaq PC was sketched out on a placemat by Ted Papajohn while dining with the founders in a pie shop, (named House of Pies in Houston). Their first venture capital came from Benjamin M. Rosen and Sevin Rosen Funds, who helped the fledgling company secure $1.5 million to produce their initial computer. Overall, the founders managed to raise $25 million from venture capitalists, as this gave stability to the new company as well as providing assurances to the dealers or middlemen.", "title": "History" }, { "paragraph_id": 4, "text": "Unlike many startups, Compaq differentiated its offerings from the many other IBM PC clones by not focusing mainly on price, but instead concentrating on new features, such as portability and better graphics displays as well as performance—and all at prices comparable to those of IBM's PCs. In contrast to Dell and Gateway 2000, Compaq hired veteran engineers with an average of 15 years experience, which lent credibility to Compaq's reputation of reliability among customers. 
Due to its partnership with Intel, Compaq was able to maintain a technological lead in the market place as it was the first one to come out with computers containing the next generation of each Intel processor.", "title": "History" }, { "paragraph_id": 5, "text": "Under Canion's direction, Compaq sold computers only through dealers to avoid potential competition that a direct sales channel would foster, which helped foster loyalty among resellers. By giving dealers considerable leeway in pricing Compaq's offerings, either a significant markup for more profits or discount for more sales, dealers had a major incentive to advertise Compaq.", "title": "History" }, { "paragraph_id": 6, "text": "During its first year of sales (second year of operation), the company sold 53,000 PCs for sales of $111 million, the first start-up to hit the $100 million mark that fast. Compaq went public in 1983 on the NYSE and raised $67 million. In 1986, it enjoyed record sales of $329 million from 150,000 PCs, and became the youngest-ever firm to make the Fortune 500. In 1985, sales reached $504 million. In 1987, Compaq hit the $1 billion revenue mark, taking the least amount of time to reach that milestone. By 1991, Compaq held the fifth place spot in the PC market with $3 billion in sales that year.", "title": "History" }, { "paragraph_id": 7, "text": "Two key marketing executives in Compaq's early years, Jim D'Arezzo and Sparky Sparks, had come from IBM's PC Group. Other key executives responsible for the company's meteoric growth in the late 1980s and early 1990s were Ross A. Cooley, another former IBM associate, who served for many years as SVP of GM North America; Michael Swavely, who was the company's chief marketing officer in the early years, and eventually ran the North America organization, later passing along that responsibility to Cooley when Swavely retired. In the United States, Brendan A. \"Mac\" McLoughlin (another long time IBM executive) led the company's field sales organization after starting up the Western U.S. Area of Operations. These executives, along with other key contributors, including Kevin Ellington, Douglas Johns, Steven Flannigan, and Gary Stimac, helped the company compete against the IBM Corporation in all personal computer sales categories, after many predicted that none could compete with the behemoth.", "title": "History" }, { "paragraph_id": 8, "text": "The soft-spoken Canion was popular with employees and the culture that he built helped Compaq to attract the best talent. Instead of headquartering the company in a downtown Houston skyscraper, Canion chose a West Coast-style campus surrounded by forests, where every employee had similar offices and no-one (not even the CEO) had a reserved parking spot. At semi-annual meetings, turnout was high as any employee could ask questions to senior managers.", "title": "History" }, { "paragraph_id": 9, "text": "In 1987, company co-founder Bill Murto resigned to study at a religious education program at the University of St. Thomas. Murto had helped to organize the company's marketing and authorized-dealer distribution strategy, and held the post of senior vice president of sales since June 1985. Murto was succeeded by Ross A. Cooley, director of corporate sales. Cooley would report to Michael S. 
Swavely, vice president for marketing, who was given increased responsibility and the title of vice president for sales and marketing.", "title": "History" }, { "paragraph_id": 10, "text": "In November 1982, Compaq announced their first product, the Compaq Portable, a portable IBM PC compatible personal computer. It was released in March 1983 at $2,995. The Compaq Portable was one of the progenitors of today's laptop; some called it a \"suitcase computer\" for its size and the look of its case. It was the second IBM PC compatible, being capable of running all software that would run on an IBM PC. It was a commercial success, selling 53,000 units in its first year and generating $111 million in sales revenue. The Compaq Portable was the first in the range of the Compaq Portable series. Compaq was able to market a legal IBM clone because IBM mostly used \"off the shelf\" parts for their PC. Furthermore, Microsoft had kept the right to license MS-DOS, the most popular and de facto standard operating system for the IBM PC, to other computer manufacturers. The only part which had to be duplicated was the BIOS, which Compaq did legally by using clean room design at a cost of $1 million.", "title": "History" }, { "paragraph_id": 11, "text": "Unlike other companies, Compaq did not bundle application software with its computers. Vice President of Sales and Service H. L. Sparks said in early 1984:", "title": "History" }, { "paragraph_id": 12, "text": "We've considered it, and every time we consider it we reject it. I don't believe and our dealer network doesn't believe that bundling is the best way to merchandise those products.", "title": "History" }, { "paragraph_id": 13, "text": "You remove the freedom from the dealers to really merchandise when you bundle in software. It is perceived by a lot of people as a marketing gimmick. You know, when you advertise a $3,000 computer with $3,000 worth of free software, it obviously can't be true.", "title": "History" }, { "paragraph_id": 14, "text": "The software should stand on its merits and be supported and so should the hardware. Why should you be constrained to use the software that comes with a piece of hardware? I think it can tend to inhibit sales over the long run.", "title": "History" }, { "paragraph_id": 15, "text": "Compaq instead emphasized PC compatibility, of which Future Computing in May 1983 ranked Compaq as among the \"Best\" examples. \"Many industry observers think [Compaq] is poised for meteoric growth\", The New York Times reported in March of that year. By October, when the company announced the Compaq Plus with a 10 MB hard drive, PC Magazine wrote of \"the reputation for compatibility it built with its highly regarded floppy disk portable\". Compaq computers remained the most compatible PC clones into 1984, and maintained its reputation for compatibility for years, even as clone BIOSes became available from Phoenix Technologies and other companies that also reverse engineered IBM's design, then sold their version to clone manufacturers.", "title": "History" }, { "paragraph_id": 16, "text": "On June 28, 1984, Compaq released the Deskpro, a 16-bit desktop computer using an Intel 8086 microprocessor running at 7.14 MHz. It was considerably faster than an IBM PC and was, like the original Compaq Portable, also capable of running IBM software. 
It was Compaq's first non-portable computer and began the Deskpro line of computers.", "title": "History" }, { "paragraph_id": 17, "text": "In 1986, Compaq introduced the Deskpro 386, the first PC based on Intel's new 80386 microprocessor. Bill Gates of Microsoft later said", "title": "History" }, { "paragraph_id": 18, "text": "the folks at IBM didn't trust the 386. They didn't think it would get done. So we encouraged Compaq to go ahead and just do a 386 machine. That was the first time people started to get a sense that it wasn't just IBM setting the standards, that this industry had a life of its own, and that companies like Compaq and Intel were in there doing new things that people should pay attention to.", "title": "History" }, { "paragraph_id": 19, "text": "The Compaq 386 computer marked the first CPU change to the PC platform that was not initiated by IBM. An IBM-made 386 machine reached the market almost a year later, but by that time Compaq was the 386 supplier of choice and IBM had lost some of its prestige.", "title": "History" }, { "paragraph_id": 20, "text": "For the first three months after announcement, the Deskpro 386 shipped with Windows/386. This was a version of Windows 2.1 adapted for the 80386 processor. Support for the virtual 8086 mode was added by Compaq engineers. (Windows, running on top of the MS-DOS operating system, would not become a popular \"operating environment\" until at least the release of Windows 3.0 in 1990.)", "title": "History" }, { "paragraph_id": 21, "text": "Compaq's technical leadership and its rivalry with IBM were emphasized when the SystemPro server was launched in late 1989 – this was a true server product with standard support for a second CPU and RAID, but also the first product to feature the EISA bus, designed in reaction to IBM's MCA (Micro Channel Architecture), which was incompatible with the original AT bus.", "title": "History" }, { "paragraph_id": 22, "text": "Although Compaq had become successful by being 100 percent IBM-compatible, it decided to continue with the original AT bus—which it renamed ISA—instead of licensing IBM's MCA. Prior to developing EISA, Compaq had invested significant resources into reverse engineering MCA, but its executives correctly calculated that the $80 billion already spent by corporations on IBM-compatible technology would make it difficult for even IBM to force manufacturers to adopt the new MCA design. Instead of cloning MCA, Compaq led an alliance with Hewlett-Packard and seven other major manufacturers, known collectively as the \"Gang of Nine\", to develop EISA.", "title": "History" }, { "paragraph_id": 23, "text": "Development of a truly mobile successor to the Portable line began in 1986, with the company releasing two stopgap products in the meantime: the SLT (Compaq's first laptop) and the Compaq Portable III (a lighter-weight, lunchbox-sized entry in the Portable line). In 1989, the company introduced the LTE, its first notebook-sized laptop, which competed with NEC's UltraLite and Zenith Data Systems' MinisPort. However, whereas the UltraLite and MinisPort failed to gain much uptake due to their novel but nonstandard data storage technologies, the LTE succeeded on account of its use of a conventional floppy drive and spinning hard drive, allowing users to transfer data to and from their desktop computers without any hassle. Compaq also began offering docking stations with the release of the LTE/386s in 1990, providing performance comparable to then-current desktop machines. 
Thus, the LTE was the first commercially successful notebook computer, helping launch the burgeoning industry. It was a direct influence on both Apple and IBM for the development of their own notebook computers, the PowerBook and ThinkPad, respectively.", "title": "History" }, { "paragraph_id": 24, "text": "By 1989, The New York Times wrote that being the first to release an 80386-based personal computer made Compaq the leader of the industry and \"hurt no company more - in prestige as well as dollars - than\" IBM. The company was so influential that observers and its executives spoke of \"Compaq compatible\". InfoWorld reported that \"In the [ISA market] Compaq is already IBM's equal in being seen as a safe bet\", quoting a sell-side analyst describing it as \"now the safe choice in personal computers\". Even rival Tandy Corporation acknowledged Compaq's leadership, stating that within the Gang of Nine \"when you have 10 people sit down before a table to write a letter to the president, someone has to write the letter. Compaq is sitting down at the typewriter\".", "title": "History" }, { "paragraph_id": 25, "text": "Michael S. Swavely, president of Compaq's North American division since May 1989, took a six-month sabbatical in January 1991 (which would eventually become retirement effective on July 12, 1991). Eckhard Pfeiffer, then president of Compaq International, was named to succeed him. Pfeiffer also received the title of chief operating officer, with responsibility for the company's operations on a worldwide basis, so that Canion could devote more time to strategy. Swavely's abrupt departure in January led to rumors of turmoil in Compaq's executive suite, including friction between Canion and Swavely, likely because Swavely's rival Pfeiffer had received the number two leadership position. Swavely's U.S. marketing organization was losing ground, with only 4% growth for Compaq versus 7% in the market, likely due to short supplies of the LTE 386s from component shortages, rivals that undercut Compaq's prices by as much as 35%, and large customers who did not like Compaq's dealer-only policy. Pfeiffer became president and CEO of Compaq later that year, as a result of a boardroom coup led by board chairman Ben Rosen that forced co-founder Rod Canion to resign as president and CEO.", "title": "History" }, { "paragraph_id": 26, "text": "Pfeiffer had joined Compaq from Texas Instruments, and established operations from scratch in both Europe and Asia. Given US$20,000 to start up Compaq Europe, Pfeiffer opened the company's first overseas office in Munich in 1984. By 1990, Compaq Europe was a $2 billion business and number two behind IBM in that region, and foreign sales contributed 54 percent of Compaq's revenues. While transplanting Compaq's U.S. strategy of dealer-only distribution to Europe, Pfeiffer was more selective in signing up dealers than Compaq had been in the U.S., such that European dealers were more qualified to handle its increasingly complex products.", "title": "History" }, { "paragraph_id": 27, "text": "During the 1980s, under Canion's direction, Compaq had focused on engineering, research, and quality control, producing high-end, high-performance machines with high profit margins that allowed Compaq to continue investing in engineering and next-generation technology. This strategy was successful, as Compaq was considered a trusted brand while many other IBM clones were distrusted due to their poor reliability. 
However, by the end of the 1980s, many manufacturers had improved their quality and were able to produce inexpensive PCs with off-the-shelf components, incurring none of the R&D costs, which allowed them to undercut Compaq's expensive computers. Faced with lower-cost rivals such as Dell, AST Research, and Gateway 2000, Compaq suffered a $71 million quarterly loss in 1991, its first loss as a company, while its stock dropped by over two-thirds. An analyst stated that \"Compaq has made a lot of tactical errors in the last year and a half. They were trend-setters, now they are lagging\". Canion initially believed that the 1990s recession was responsible for Compaq's declining sales and insisted that they would recover once the economy improved; however, Pfeiffer's observation of the European market suggested that the real problem was competition, as rivals could match Compaq at a fraction of the cost. Under pressure from Compaq's board to control costs (staff at the Houston headquarters had ballooned despite falling U.S. sales, while the number of non-U.S. employees had stayed constant), Compaq made its first-ever layoffs of 1,400 employees, 12% of its workforce, and Pfeiffer was promoted to EVP and COO.", "title": "History" }, { "paragraph_id": 28, "text": "Rosen and Canion had disagreed about how to counter the cheaper Asian PC imports: Canion wanted Compaq to build lower-cost PCs with components developed in-house in order to preserve Compaq's reputation for engineering and quality, while Rosen believed that Compaq needed to buy standard components from suppliers and reach the market faster. While Canion developed an 18-month plan to create a line of low-priced computers, Rosen sent his own Compaq engineering team to Comdex without Canion's knowledge and discovered that a low-priced PC could be made in half the time and at lower cost than under Canion's initiative. It was also believed that Canion's consensus-style management slowed the company's ability to react in the market, whereas Pfeiffer's autocratic style would be suited to price and product competition.", "title": "History" }, { "paragraph_id": 29, "text": "Rosen initiated a 14-hour board meeting, and the directors also interviewed Pfeiffer for several hours without informing Canion. At the conclusion, the board was unanimous in picking Pfeiffer over Canion. As Canion was popular with company workers, 150 employees staged an impromptu protest with signs stating \"We love you, Rod.\" and took out a newspaper ad saying \"Rod, you are the wind beneath our wings. We love you.\" Canion declined an offer to remain on Compaq's board and was bitter about his ouster; he did not speak to Rosen for years, although their relationship eventually became cordial again. In 1999, Canion admitted that his ouster was justified, saying \"I was burned out. I needed to leave. He [Rosen] felt I didn't have a strong sense of urgency\". Two weeks after Canion's ouster, five other senior executives resigned, including the remaining company co-founder, James Harris, SVP of engineering. These departures were motivated by enhanced severance or early retirement packages, as well as imminent demotions, as their functions were to be shifted to vice presidents.", "title": "History" }, { "paragraph_id": 30, "text": "Under Pfeiffer's tenure as chief executive, Compaq entered the retail computer market with the Compaq Presario as one of the first manufacturers in the mid-1990s to market a sub-$1,000 PC. 
In order to maintain the prices it wanted, Compaq became the first tier-one computer manufacturer to utilize CPUs from AMD and Cyrix. The two price wars resulting from Compaq's actions ultimately drove numerous competitors from the market, such as Packard Bell and AST Research. From third place in 1993, Compaq overtook Apple Computer and even surpassed IBM as the top PC manufacturer in 1994, as both IBM and Apple were struggling considerably during that time. Compaq's inventory and gross margins were better than those of its rivals, which enabled it to wage the price wars.", "title": "History" }, { "paragraph_id": 31, "text": "Compaq had decided to make a foray into printers in 1989, and the first models were released to positive reviews in 1992. However, Pfeiffer saw that the prospect of taking on market leader Hewlett-Packard (which had 60% market share) was daunting, as that would force Compaq to devote more funds and people to the project than originally budgeted. Compaq ended up selling the printer business to Xerox and took a charge of $50 million.", "title": "History" }, { "paragraph_id": 32, "text": "On June 26, 1995, Compaq reached an agreement with Cisco Systems Inc. in order to get into networking, including the digital modems, routers, and switches favored by small businesses and corporate departments, by then a $4 billion business and the fastest-growing part of the computer hardware market. Compaq also built up a network engineering and marketing staff.", "title": "History" }, { "paragraph_id": 33, "text": "In 1996, despite record sales and profits at Compaq, Pfeiffer initiated a major management shakeup in the senior ranks. John T. Rose, who previously ran Compaq's desktop PC division, took over the corporate server business from SVP Gary Stimac, who had resigned. Rose had joined Compaq in 1993 from Digital Equipment Corporation, where he oversaw the personal computer division and worldwide engineering, while Stimac had been with Compaq since 1982 and was one of the longest-serving executives. Senior Vice-president for North America Ross Cooley announced his resignation effective at the end of 1996. CFO Daryl J. White, who had joined the company in January 1983, resigned in May 1996 after eight years as CFO. Michael Winkler, who joined Compaq in 1995 to run its portable computer division, was promoted to general manager of the new PC products group. Earl Mason, hired from Inland Steel effective in May 1996, immediately made an impact as the new CFO. Under Mason's guidance, Compaq utilized its assets more efficiently instead of focusing just on income and profits, which increased Compaq's cash from $700 million to nearly $5 billion in one year. Additionally, Compaq's return on invested capital (after-tax operating profit divided by operating assets) doubled to 50 percent from 25 percent in that period.", "title": "History" }, { "paragraph_id": 34, "text": "Compaq had been producing the PC chassis at its plant in Shenzhen, China, to cut costs. In 1996, instead of expanding its own plant, Compaq asked a Taiwanese supplier to set up a new factory nearby to produce the mechanicals, with the Taiwanese supplier owning the inventory until it reached Compaq in Houston. 
Pfeiffer also introduced a new distribution strategy: building PCs made to order, which would eliminate the stockpile of computers in warehouses and cut the components inventory down to two weeks, with the supply chain from supplier to dealer linked by complex software.", "title": "History" }, { "paragraph_id": 35, "text": "Vice-president for Corporate Development Kenneth E. Kurtzman assembled five teams to examine Compaq's businesses and assess each unit's strategy and that of key rivals. Kurtzman's teams recommended to Pfeiffer that each business unit had to be first or second in its market within three years—or else Compaq should exit that line. Also, the company should no longer use profits from high-margin businesses to carry marginally profitable ones; instead, each unit had to show a return on investment. Pfeiffer's vision was to make Compaq a full-fledged computer company, moving beyond its main business of manufacturing retail PCs and into the more lucrative business services and solutions that IBM did well at, such as computer servers, which would also require more \"customer handholding\" from either the dealers or Compaq staff themselves. Unlike IBM and HP, Compaq would not build up field technicians and programmers in-house, as those could be costly assets; instead, Compaq would leverage its partnerships (including those with Andersen Consulting and software maker SAP) to install and maintain corporate systems. This allowed Compaq to compete in the \"big-iron market\" without incurring the costs of running its own services or software businesses.", "title": "History" }, { "paragraph_id": 36, "text": "Most of Compaq's server sales were for systems that would be running Microsoft's Windows NT operating system, and indeed Compaq was the largest hardware supplier for Windows NT. However, some 20 percent of Compaq servers went for systems that would be running the Unix operating system. This was exemplified by a strategic alliance formed in 1997 between Compaq and the Santa Cruz Operation (SCO), which was known for its server Unix operating system products on Intel-architecture-based hardware. Compaq was also the largest hardware supplier for SCO's Unix products, and some 10 percent of Compaq's ProLiant servers ran SCO's UnixWare.", "title": "History" }, { "paragraph_id": 37, "text": "In January 1998, Compaq was at its height. CEO Pfeiffer boldly predicted that the Microsoft/Intel \"Wintel\" duopoly would be replaced by \"Wintelpaq\".", "title": "History" }, { "paragraph_id": 38, "text": "Pfeiffer also made several major and some minor acquisitions. In 1997, Compaq bought Tandem Computers, known for their NonStop server line. This acquisition instantly gave Compaq a presence in the higher-end business computing market. The alliance between Compaq and SCO took advantage of this to put out the UnixWare NonStop Clusters product in 1998.", "title": "History" }, { "paragraph_id": 39, "text": "Minor acquisitions centered on building a networking arm and included NetWorth (1998), based in Irving, Texas, and Thomas-Conrad (1998), based in Austin, Texas. In 1997, Microcom, based in Norwood, MA, was also acquired, bringing a line of modems, Remote Access Servers (RAS), and the popular Carbon Copy software.", "title": "History" }, { "paragraph_id": 40, "text": "In 1998, Compaq acquired Digital Equipment Corporation for a then-industry record of $9.6 billion. The merger made Compaq, at the time, the second largest computer maker in the world in terms of revenue, behind IBM. 
Digital Equipment, which had nearly twice as many employees as Compaq while generating half the revenue, had been a leading computer company during the 1970s and early 1980s. However, Digital had struggled during the 1990s, with high operating costs. For nine years the company had lost money or barely broken even, and it had recently refocused itself as a \"network solutions company\". In 1995, Compaq had considered a bid for Digital but only became seriously interested in 1997, after Digital's major divestments and its refocusing on the Internet. At the time of the acquisition, services accounted for 45 percent of Digital's revenues (about $6 billion), and their gross margins on services averaged 34 percent, considerably higher than Compaq's 25% margins on PC sales, while also satisfying customers who had demanded more services from Compaq for years. Compaq had originally wanted to purchase only Digital's services business, but that was turned down. When the announcement was made, it was initially viewed as a master stroke, as it immediately gave Compaq a 22,000-person global service operation to help corporations handle major technological purchases (by 2001, services made up over 20% of Compaq's revenues, largely due to the Digital employees inherited from the merger), in order to compete with IBM. However, it was also a risky merger, as the combined company would have to lay off 2,000 employees from Compaq and 15,000 from Digital, which would potentially hurt morale. Furthermore, Compaq fell behind schedule in integrating Digital's operations, which also distracted the company from its strength in low-end PCs, where it used to lead the market in rolling out next-generation systems, and let rival Dell grab market share. Reportedly, Compaq had three consulting firms working on the Digital integration alone.", "title": "History" }, { "paragraph_id": 41, "text": "However, Pfeiffer had little vision for what the combined companies should do, or indeed how the three dramatically different cultures could work as a single entity, and Compaq struggled from strategic indecisiveness and lost focus, as a result being caught between the low end and high end of the market. Mark Anderson, president of Strategic News Service, a research firm based in Friday Harbor, Wash., was quoted as saying, \"The kind of goals he had sounded good to shareholders – like being a $50 billion company by the year 2000, or to beat I.B.M. – but they didn't have anything to do with customers. The new C.E.O. should look at everything Eckhard acquired and ask: did the customer benefit from that. If the answer isn't yes, they should get rid of it.\" On one hand, Compaq had previously dominated the PC market with its price war but was now struggling against Dell, which sold directly to buyers, avoiding the dealer channel and its markup, and built each machine to order to keep inventories and costs at a minimum. At the same time, Compaq, through its acquisitions of the Digital Equipment Corporation in 1998 and Tandem Computers in 1997, had tried to become a major systems company, like IBM and Hewlett-Packard. While IBM and HP were able to generate repeat business from corporate customers to drive sales of their different divisions, Compaq had not yet managed to make its newly acquired sales and services organizations work as seamlessly.", "title": "History" }, { "paragraph_id": 42, "text": "In early 1998, Compaq had the problem of bloated PC inventories. By summer 1998, Compaq was suffering from product-quality problems. Robert W. 
Stearns, SVP of Business Development, said \"In [Pfeiffer's] quest for bigness, he lost an understanding of the customer and built what I call empty market share—large but not profitable\", while Jim Moore, a technology strategy consultant with GeoPartners Research in Cambridge, Mass., said Pfeiffer \"raced to scale without having economies of scale.\" The \"colossus\" that Pfeiffer built up was not nimble enough to adapt to the fast-changing computer industry. That year Compaq forecast demand poorly and shipped too many PCs, causing resellers to dump them at fire-sale prices; since Compaq protected resellers from heavy losses, the episode cost the company two quarters of operating profits.", "title": "History" }, { "paragraph_id": 43, "text": "Pfeiffer also refused to develop a potential successor, rebuffing Rosen's suggestion to recruit a few executives to create the separate position of Compaq president. The board complained that Pfeiffer was too removed from management and the rank-and-file, as he surrounded himself with a \"clique\" of Chief Financial Officer Earl Mason, Senior Vice-President John T. Rose, and Senior Vice-President of Human Resources Hans Gutsch. Current and former Compaq employees complained that Gutsch was part of a group of senior executives, dubbed the \"A team\", who controlled access to Pfeiffer. Gutsch was said to be a \"master of corporate politics, pitting senior vice presidents against each other and inserting himself into parts of the company that normally would not be under his purview\". Gutsch, who oversaw security, had an extensive security system and guard station installed on the eighth floor of CCA-11, where the company's senior vice presidents worked. There were accusations that Gutsch and others sought to divide top management, although this was regarded by others as sour grapes on the part of executives who were shut out of planning that involved the acquisitions of Tandem Computers and Digital Equipment Corp. Pfeiffer reduced the size of the group working on the deal due to news leaks, saying \"We cut the team down to the minimum number of people—those who would have to be directly involved, and not one person more\". Robert W. Stearns, Compaq's senior vice president for business development, with responsibility for mergers and acquisitions, had opposed the acquisition of Digital, as the cultural differences between the two companies were too great, and complained that he was placed on the \"B team\" as a result.", "title": "History" }, { "paragraph_id": 44, "text": "Compaq entered 1999 with strong expectations. Fourth-quarter 1998 earnings reported in January 1999 beat expectations by six cents a share, with a record 48 percent growth. The company launched Compaq.com as the key to its new direct sales strategy, and planned an IPO for AltaVista toward the end of 1999 in order to capitalize on the dotcom bubble. However, by February 1999, analysts were skeptical of Compaq's plan to sell both direct and through resellers. Compaq was hit with two class-action lawsuits as a result of CFO Earl Mason, SVP John Rose, and other executives selling US$50 million of stock before a conference call with analysts, where they noted that demand for PCs was slowing down.", "title": "History" }, { "paragraph_id": 45, "text": "On April 17, 1999, just nine days after Compaq reported first-quarter profit at half of what analysts had expected, the latest in a string of earnings disappointments, Pfeiffer was forced to resign as CEO in a coup led by board chairman Ben Rosen. 
Reportedly, at the special board meeting held on April 15, 1999, the directors were unanimous in dismissing Pfeiffer. The company's stock had fallen 50 percent since its all-time high in January 1999. Compaq shares, which traded as high as $51.25 early in 1999, dropped 23 percent on April 12, 1999, the first day of trading after the first-quarter announcement, and closed the following Friday at $23.62. During three of the last six quarters of Pfeiffer's tenure, the company's revenues or earnings had missed expectations. While rival Dell had 55% growth in U.S. PC sales in the first quarter of 1999, Compaq could only manage 10%. Rosen suggested that the accelerating change brought about by the Internet had overtaken Compaq's management team, saying \"As a company engaged in transforming its industry for the Internet era, we must have the organizational flexibility necessary to move at Internet speed.\" In a statement, Pfeiffer said \"Compaq has come a long way since I joined the company in 1983\" and \"under Ben's guidance, I know this company will realize its potential.\" Rosen's priority was to have Compaq catch up as an e-commerce competitor, and he also moved to streamline operations and reduce the indecision that plagued the company.", "title": "History" }, { "paragraph_id": 46, "text": "Roger Kay, an analyst at International Data Corporation, observed that Compaq's behavior at times seemed like a personal vendetta, noting that \"Eckhard has been so obsessed with staying ahead of Dell that they focused too hard on market share and stopped paying attention to profitability and liquidity. They got whacked in a price war that they started.\" Subsequent earnings releases from Compaq's rivals Dell, Gateway, IBM, and Hewlett-Packard suggested that the problems were not affecting the whole PC industry as Pfeiffer had suggested. Dell and Gateway sold direct, which helped them avoid Compaq's inventory problems and compete on price without dealer markups; in addition, Gateway sold web access and a broad range of software tailored to small businesses. Hewlett-Packard's PC business faced challenges similar to Compaq's, but these were offset by HP's extremely lucrative printer business, while IBM sold PCs at a loss but used them to lock in multi-year services contracts with customers.", "title": "History" }, { "paragraph_id": 47, "text": "After Pfeiffer's resignation, the board established an office of the CEO with a triumvirate of directors: Rosen as interim CEO and vice chairmen Frank P. Doyle and Robert Ted Enloe III. They began \"cleaning house\", as shortly afterward many of Pfeiffer's top executives resigned or were pushed out, including John J. Rando, Earl L. Mason, and John T. Rose. Rando, senior vice president and general manager of Compaq Services, was a key player during the merger discussions and the most senior executive from Digital to remain with Compaq after the acquisition closed, and he had been touted by some as the heir apparent to Pfeiffer. Rando's division had performed strongly, with sales of $1.6 billion for the first quarter compared to $113 million in 1998; it met expectations and was anticipated to post accelerated and profitable growth going forward. At the time of Rando's departure, Compaq Services ranked third behind those of IBM and EDS, while slightly ahead of those of Hewlett-Packard and Andersen Consulting; however, customers were switching from Digital technology-based workstations to those of HP, IBM, and Sun Microsystems. 
Mason, senior vice president and chief financial officer, had previously been offered the job of chief executive of Alliant Foodservice, Inc., a foodservice distributor based in Chicago, and he informed Compaq's board that he had accepted the offer. Rose, senior vice president and general manager of Compaq's Enterprise Computing group, resigned effective June 3 and was succeeded by SVP Enrico Pesatori, a Tandem veteran who had previously worked as a senior executive at Olivetti, Zenith Data Systems, Digital Equipment Corporation, and Tandem Computers. Rose was reportedly upset that he was not considered for the CEO vacancy, which became apparent once Michael Capellas was named COO. While Enterprise Computing, responsible for engineering and marketing of network servers, workstations, and data-storage products, reportedly accounted for one third of Compaq's revenues and likely the largest part of its profits, it was responsible for the earnings shortfall in Q1 of 1999. In addition, Rose was part of the \"old guard\" close to former CEO Pfeiffer, and he and other Compaq executives had been criticized at the company's annual meeting for selling stock before reporting the sales slowdown. Capellas was appointed COO after pressure mounted on Rosen to find a permanent CEO; however, it was reported that potential candidates did not want to work under Rosen as chairman. Around the same time, Pesatori was placed in charge of the newly created Enterprise Solutions and Services Group, making him Compaq's second most powerful executive in operational responsibility after Capellas.", "title": "History" }, { "paragraph_id": 48, "text": "Pfeiffer's permanent replacement was Michael Capellas, who had been serving as Compaq's SVP and CIO for under a year. A couple of months after Pfeiffer's ouster, Capellas was elevated to interim chief operating officer on June 2, 1999, and was soon appointed president and CEO. Capellas also assumed the title of chairman on September 28, 2000, when Rosen stepped down from the board of directors. At his retirement, Rosen proclaimed \"These are great achievements—to create 65,000 jobs, $40 billion in sales and $40 billion in market value, all starting with a sketch and a dream\".", "title": "History" }, { "paragraph_id": 49, "text": "In 1998, Compaq signed a new sales and equipment alliance with NaviSite. Under the pact, Compaq agreed to promote and sell NaviSite Web hosting services. In return, NaviSite took Compaq as a preferred provider for its storage and Intel-based servers.", "title": "History" }, { "paragraph_id": 50, "text": "During November 1999, Compaq began to work with Microsoft to create the first in a line of small-scale, web-based computer systems called MSN Companions.", "title": "History" }, { "paragraph_id": 51, "text": "Capellas was able to restore some of the luster lost in the latter part of the Pfeiffer era, and he repaired the relationship with Microsoft, which had deteriorated under his predecessor's tenure.", "title": "History" }, { "paragraph_id": 52, "text": "However, Compaq still struggled against lower-cost competitors with direct sales channels, such as Dell, which took over the top spot among PC manufacturers from Compaq in 2001. Compaq relied significantly on reseller channels, so resellers' criticism caused Compaq to retreat from its proposed direct sales plan, although Capellas maintained that he would use the middlemen to provide value-added services. Despite falling to No. 
2 among PC manufacturers, Capellas proclaimed \"We are No. 2 in the traditional PC market, but we're focused on industry leadership in the next generation of Internet access devices and wireless mobility. That's where the growth and the profitability will be.\" The company's longer-term strategy involved extending its services to servers and storage products, as well as handheld computers such as the iPAQ Pocket PC, which accounted for 11 percent of total unit volume.", "title": "History" }, { "paragraph_id": 53, "text": "Compaq struggled as a result of the collapse of the dot-com bubble, which hurt sales of its high-end systems in 2001 and 2002, and it managed only a small profit in a few quarters during these years. It also accumulated $1.7 billion in short-term debt around this time. The stock price of Compaq, which was around $25 when Capellas became CEO, was trading at half that by 2002.", "title": "History" }, { "paragraph_id": 54, "text": "In 2002, Compaq signed a merger agreement with Hewlett-Packard for US$24.2 billion, including US$14.45 billion for goodwill, under which each Compaq share would be exchanged for 0.6325 of a Hewlett-Packard share. There would be a termination fee of US$675 million that either company would have to pay the other to break the merger. Compaq shareholders would own 36% of the combined company while HP's would have 64%. Hewlett-Packard had reported yearly revenues of $47 billion, while Compaq's were $40 billion, and the combined company would have been close to IBM's $90 billion in revenues. The merger was projected to yield $2.5 billion in annual cost savings by mid-2004. The expected layoffs at Compaq and HP, 8,500 and 9,000 jobs respectively, would leave the combined company with a workforce of 145,000. The companies would dole out a combined $634.5 million in bonuses to prevent key employees from leaving if shareholders approved the proposed merger, with $370.1 million for HP employees and $264.4 million for Compaq employees.", "title": "History" }, { "paragraph_id": 55, "text": "Both companies had to seek approval from their shareholders through separate special meetings. While Compaq shareholders unanimously approved the deal, there was a public proxy battle within HP, as the deal was strongly opposed by numerous large HP shareholders, including the sons of the company founders, Walter Hewlett and David W. Packard, as well as the California Public Employees' Retirement System (CalPERS) and the Ontario Teachers' Pension Plan. Walter Hewlett only reluctantly approved the merger, in his duty as a member of the board of directors, since the merger agreement \"called for unanimous board approval in order to ensure the best possible shareholder reception\". While supporters of the merger argued that there would be economies of scale and that the sales of PCs would drive sales of printers and cameras, Walter Hewlett was convinced that PCs were a low-margin but risky business that would not contribute to earnings and would likely dilute the old HP's traditionally profitable Imaging and Printing division. David W. Packard, in his opposition to the deal, \"[cited] massive layoffs as an example of this departure from HP's core values...[arguing] that although the founders never guaranteed job security, 'Bill and Dave never developed a premeditated business strategy that treated HP employees as expendable.'\" Packard further stated that \"[Carly] Fiorina's high-handed management and her efforts to reinvent the company ran counter to the company's core values as established by the founders\". 
The founders' families, who controlled a significant amount of HP shares, were further irked because Fiorina had made no attempt to reach out to them and consult about the merger; instead, they received the same standard roadshow presentation as other investors.", "title": "History" }, { "paragraph_id": 56, "text": "Analysts on Wall Street were generally critical of the merger, as both companies had been struggling before the announcement, and the stock prices of both companies dropped in the months after the merger agreement was made public. Rival Dell in particular made gains from defecting HP and Compaq customers who were wary of the merger. Carly Fiorina, initially seen as HP's savior when she was hired as CEO back in 1999, had seen the company's stock price drop to less than half since she assumed the position, and her job was said to be on shaky ground before the merger announcement. HP's offer was regarded by analysts as overvaluing Compaq, due to Compaq's shaky financial performance over the preceding years (there were rumors that it could run out of money within 12 months and be forced to cease business operations had it stayed independent), as well as Compaq's own more conservative valuation of its assets. Detractors of the deal noted that buying Compaq was a \"distraction\" that would not directly help HP take on IBM's breadth or Dell Computer's direct sales model. There were also significant cultural differences between HP and Compaq, which made decisions by consensus and by rapid autocratic direction, respectively. One of Compaq's few bright spots was its services business, which was outperforming HP's own services division.", "title": "History" }, { "paragraph_id": 57, "text": "The merger was approved by HP shareholders only by the narrowest of margins, and allegations of vote buying (primarily involving an alleged last-second back-room deal with Deutsche Bank) haunted the new company. It was subsequently disclosed that HP had retained Deutsche Bank's investment banking division in January 2002 to assist in the merger. HP had agreed to pay Deutsche Bank $1 million guaranteed, and another $1 million contingent upon approval of the merger. On August 19, 2003, the U.S. SEC charged Deutsche Bank with failing to disclose a material conflict of interest in its voting of client proxies for the merger and imposed a civil penalty of $750,000. Deutsche Bank consented without admitting or denying the findings.", "title": "History" }, { "paragraph_id": 58, "text": "Hewlett-Packard announced the completion of the merger on May 3, 2002, and the merged HP-Compaq company was officially launched on May 7. Compaq's pre-merger ticker symbol was CPQ. This was combined with Hewlett-Packard's ticker symbol (HWP) to create the current ticker symbol (HPQ), which was announced on May 6.", "title": "History" }, { "paragraph_id": 59, "text": "Capellas, Compaq's last chairman and CEO, became president of the post-merger Hewlett-Packard, under chairman and CEO Carly Fiorina, to ease the integration of the two companies. However, Capellas was reportedly unhappy with his role, said to be underutilized and unlikely to become CEO, as the board supported Fiorina. Capellas stepped down as HP president on November 12, 2002, after just six months on the job, to become CEO of MCI WorldCom, where he would lead its acquisition by Verizon. 
Capellas' former role of president was not filled, as the executives who had reported to him then reported directly to the CEO.", "title": "History" }, { "paragraph_id": 60, "text": "Fiorina helmed the post-merger HP for nearly three years after Capellas left. HP laid off thousands of former Compaq, DEC, HP, and Tandem employees, its stock price generally declined, and profits did not perk up. Several senior executives from the Compaq side, including Jeff Clarke and Peter Blackmore, would resign or be ousted from the post-merger HP. Though the combination of both companies' PC manufacturing capacity initially made the post-merger HP number one, it soon lost the lead and further market share to Dell, which squeezed HP on low-end PCs. HP was also unable to compete effectively with IBM in the high-end server market. In addition, the merging of the stagnant Compaq computer assembly business with HP's lucrative printing and imaging division was criticized for obstructing the profitability of the printing/imaging segment. Overall, it has been suggested that the purchase of Compaq was not a good move for HP, due to the narrow profit margins in the commoditized PC business, especially in light of IBM's 2004 announcement that it would sell its PC division to Lenovo. The Inquirer noted that the continued low return on investment and small margins of HP's personal computer manufacturing business, now named the Personal Systems Group, \"continues to be what it was in the individual companies, not much more than a job creation scheme for its employees\". One of the few positives was Compaq's sales approach and enterprise focus, which influenced the newly combined company's strategy and philosophy.", "title": "History" }, { "paragraph_id": 61, "text": "In February 2005, the board of directors ousted Fiorina, with CFO Robert Wayman being named interim CEO. Former Compaq CEO Capellas was mentioned by some as a potential successor, but several months afterwards, Mark Hurd was hired as president and CEO of HP. Hurd separated the PC division from the imaging and printing division and renamed it the Personal Systems Group, placing it under the leadership of EVP Todd R. Bradley. Hewlett-Packard's PC business has since been reinvigorated by Hurd's restructuring and now generates more revenue than the traditionally more profitable printers. By late 2006, HP had retaken the #1 sales position in PCs from Dell, which struggled with missed estimates and poor quality, and held that rank until supplanted in the mid-2010s by Lenovo.", "title": "History" }, { "paragraph_id": 62, "text": "Most Compaq products have been re-branded with the HP nameplate, such as the company's market-leading ProLiant server line (now owned by Hewlett Packard Enterprise, which spun off from HP in 2015), while the Compaq brand was repurposed for some of HP's consumer-oriented and budget products, notably Compaq Presario PCs. HP's own business computer line was discontinued in favor of the Compaq Evo line, which was initially rebranded HP Compaq and later replaced by brands such as EliteBook and ProBook, among others. HP's Jornada PDAs were replaced by Compaq iPAQ PDAs, which were renamed HP iPAQ. Following the merger, all Compaq computers were shipped with HP software.", "title": "History" }, { "paragraph_id": 63, "text": "In May 2007, HP announced in a press release a new logo for its Compaq Division, to be placed on the new model Compaq Presarios.", "title": "History" }, { "paragraph_id": 64, "text": "In 2008, HP reshuffled its business notebook lines. 
The \"Compaq\" name from its \"HP Compaq\" series was originally used for all of HP's business and budget notebooks. However, the HP EliteBook line became the top of the business notebook lineup while the HP Compaq B series became its middle business line. As of early 2009, the \"HP ProBook\" filled out HP's low end business lineup.", "title": "History" }, { "paragraph_id": 65, "text": "In 2009, HP sold part of Compaq's former headquarters to the Lone Star College System.", "title": "History" }, { "paragraph_id": 66, "text": "On August 18, 2011, then-CEO of HP Léo Apotheker announced plans for a partial or full spinoff of the Personal Systems Group. The PC unit had the lowest profit margin although it accounted for nearly a third of HP's overall revenues in 2010. HP was still selling more PCs than any other vendor, shipping 14.9 million PCs in the second quarter of 2011 (17.5% of the market according to Gartner), while Dell and Lenovo were tied for second place, each with more than a 12% share of the market and shipments of over 10 million units. However, the announcement of the PC spinoff (concurrent with the discontinuation of WebOS, and the purchase of Autonomy Corp. for $10 billion) was poorly received by the market, and after Apotheker's ouster, plans for a divestiture were cancelled. In March 2012, the printing and imaging division was merged into the PC unit. In October 2012, according to Gartner, Lenovo took the lead as the number one PC manufacturer from HP, while IDC ranked Lenovo just right behind HP. In Q2 2013, Forbes reported that Lenovo ranked ahead of HP as the world's number-one PC supplier.", "title": "History" }, { "paragraph_id": 67, "text": "HP discontinued the Compaq brand name in the United States in 2013. Around that same year, Globalk (a Brazilian-based retailer and licensing management firm) started a partnership with HP to re-introduce the brand with a new line of desktop and laptop computers.", "title": "History" }, { "paragraph_id": 68, "text": "In 2015, Grupo Newsan (an Argentinian-based company) acquired the brand's license, along with a $3 million investment, and developed two new lines of Presario notebooks for the local market over the course of the year. However, Compaq's Argentine web site went offline in March 2019. The last archived copy of the site was made in October 2018, which featured the same models introduced in 2016.", "title": "History" }, { "paragraph_id": 69, "text": "In 2018, Ossify Industries (an Indian-based company) entered a licensing agreement with HP to use the Compaq brand name for the distribution and manufacturing of Smart TV sets.", "title": "History" }, { "paragraph_id": 70, "text": "The Compaq World Headquarters (now HP United States) campus consisted of 80 acres (320,000 m) of land which contained 15 office buildings, 7 manufacturing buildings, a product conference center, an employee cafeteria, mechanical laboratories, warehouses, and chemical handling facilities.", "title": "Headquarters" }, { "paragraph_id": 71, "text": "Instead of headquartering the company in a downtown Houston skyscraper, then-CEO Rod Canion chose a West Coast-style campus surrounded by forests, where every employee had similar offices and no-one (not even the CEO) had a reserved parking spot. 
As it grew, Compaq became so important to Houston that it negotiated the expansion of Highway 249 in the late 1980s, and many other technology companies appeared in what became known as the \"249 Corridor\".", "title": "Headquarters" }, { "paragraph_id": 72, "text": "After Canion's ouster, senior vice-president of human resources Hans W. Gutsch oversaw the company's facilities and security. Gutsch had an extensive security system and guard station installed on the eighth floor of CCA-1, where the company's senior vice presidents had their offices. Eckhard Pfeiffer, president and CEO, introduced a whole series of executive perks to a company that had always had an egalitarian culture; for instance, he oversaw the construction of an executive parking garage; previously, parking places had never been reserved.", "title": "Headquarters" }, { "paragraph_id": 73, "text": "On August 31, 1998, the Compaq Commons was opened in the headquarters campus, featuring a conference center, an employee convenience store, a wellness center, and an employee cafeteria.", "title": "Headquarters" }, { "paragraph_id": 74, "text": "In 2009, HP sold part of Compaq's former headquarters to the Lone Star College System. Hewlett Packard Buildings #7 & #8, two eight-story reinforced concrete buildings totaling 450,000 square feet, plus a 1,200-car parking garage and a central chiller plant, were all deemed by the college to be too robust and costly to maintain, and so they were demolished by implosion on September 18, 2011.", "title": "Headquarters" }, { "paragraph_id": 75, "text": "As of January 2013, the site is one of HP's largest campuses, with 7,000 employees in all six of HP's divisions. In 2018, HP announced the sale of the entire former Compaq HQ campus.", "title": "Headquarters" }, { "paragraph_id": 76, "text": "Compaq originally competed directly against IBM, manufacturing computer systems equivalent to the IBM PC, and also against Apple Computer. In the 1990s, as IBM's own PC division declined, Compaq faced other IBM PC compatible manufacturers like Dell, Packard Bell, AST Research, and Gateway 2000.", "title": "Competitors" }, { "paragraph_id": 77, "text": "By the mid-1990s, Compaq's price war had enabled it to overtake IBM and Apple, while other IBM PC compatible manufacturers such as Packard Bell and AST were driven from the market.", "title": "Competitors" }, { "paragraph_id": 78, "text": "Dell overtook Compaq and became the number-one supplier of PCs in 2001.", "title": "Competitors" }, { "paragraph_id": 79, "text": "At the time of their 2002 merger, Compaq and HP were the second and third largest PC manufacturers, so their combination made them number one. However, the combined HP-Compaq struggled and fell to second place behind Dell from 2003 to 2006. Due to Dell's struggles in late 2006, HP led all PC vendors from 2007 to 2012.", "title": "Competitors" }, { "paragraph_id": 80, "text": "During its existence as a division of HP, Compaq primarily competed against other budget-oriented personal computer series from manufacturers including Acer, Lenovo, and Toshiba. Most of Compaq's competitors except Dell were later acquired by bigger rivals: Acer took over Gateway 2000 and Packard Bell, while Lenovo absorbed IBM's PC division. From 2013 onwards, Lenovo has been the world leader for PCs.", "title": "Competitors" }, { "paragraph_id": 81, "text": "Before its merger with HP, Compaq sponsored the Williams Formula One team when it was still powered by BMW engines. 
HP inherited and continued the sponsorship deal for a few years. Compaq sponsored Queens Park Rangers F.C. for the 1994–95 and 1995–96 seasons.", "title": "Sponsorship" } ]
Compaq Computer Corporation was an American information technology company founded in 1982 that developed, sold, and supported computers and related products and services. Compaq produced some of the first IBM PC compatible computers, being the second company after Columbia Data Products to legally reverse engineer the BIOS of the IBM Personal Computer. It rose to become the largest supplier of PC systems during the 1990s before being overtaken by Dell in 2001. Struggling to keep up in the price wars against Dell, and weighed down by its risky acquisition of DEC, Compaq was acquired for US$25 billion by HP in 2002. The Compaq brand remained in use by HP for lower-end systems until 2013, when it was discontinued; since then, the brand has been licensed to third parties for use on electronics in Brazil and India. The company was formed by Rod Canion, Jim Harris, and Bill Murto, all of whom were former Texas Instruments senior managers. Murto departed Compaq in 1987, while Canion and Harris left under a shakeup in 1991, which saw Eckhard Pfeiffer appointed president and CEO. Pfeiffer served through the 1990s. Ben Rosen provided the venture capital financing for the fledgling company and served as chairman of the board for 17 years, from 1983 until September 28, 2000, when he retired and was succeeded by Michael Capellas, who served as the last chairman and CEO until the company's merger with HP. Prior to the merger, the company was headquartered in northwest unincorporated Harris County, Texas, at a campus that now continues as HP's largest United States facility.
2002-02-25T15:43:11Z
2023-12-25T19:28:56Z
[ "Template:Pic", "Template:Asof", "Template:Dead link", "Template:ProQuest", "Template:Cite book", "Template:Use mdy dates", "Template:By whom", "Template:Convert", "Template:Cite press release", "Template:Cite news", "Template:Cite interview", "Template:Citation needed", "Template:Blockquote", "Template:Val", "Template:Cite encyclopedia", "Template:Queens Park Rangers F.C. shirt sponsors", "Template:Portal", "Template:Cite web", "Template:Short description", "Template:Infobox company", "Template:Multiple image", "Template:Notelist", "Template:Cite magazine", "Template:Compaq", "Template:US$", "Template:Clarify", "Template:Authority control", "Template:HP", "Template:Distinguish", "Template:Use American English", "Template:R", "Template:Reflist", "Template:Webarchive", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Compaq
7,751
CPSU (disambiguation)
CPSU is the Communist Party of the Soviet Union, the sole governing party of the Soviet Union until 1990. CPSU may also refer to:
[ { "paragraph_id": 0, "text": "CPSU is the Communist Party of the Soviet Union, the sole governing party of the Soviet Union until 1990.", "title": "" }, { "paragraph_id": 1, "text": "CPSU may also refer to:", "title": "" } ]
CPSU is the Communist Party of the Soviet Union, the sole governing party of the Soviet Union until 1990. CPSU may also refer to:
2022-06-22T04:55:17Z
[ "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/CPSU_(disambiguation)
7,755
Cluny
Cluny (French pronunciation: [klyni]) is a commune in the eastern French department of Saône-et-Loire, in the region of Bourgogne-Franche-Comté. It is 20 km (12 mi) northwest of Mâcon. The town grew up around the Benedictine Abbey of Cluny, founded by Duke William I of Aquitaine in 910. The height of Cluniac influence was from the second half of the 10th century through the early 12th. The abbey was sacked by the Huguenots in 1562, and many of its valuable manuscripts were destroyed or removed. The river Grosne flows northward through the commune and crosses the town.
[ { "paragraph_id": 0, "text": "Cluny (French pronunciation: [klyni]) is a commune in the eastern French department of Saône-et-Loire, in the region of Bourgogne-Franche-Comté. It is 20 km (12 mi) northwest of Mâcon.", "title": "" }, { "paragraph_id": 1, "text": "The town grew up around the Benedictine Abbey of Cluny, founded by Duke William I of Aquitaine in 910. The height of Cluniac influence was from the second half of the 10th century through the early 12th. The abbey was sacked by the Huguenots in 1562, and many of its valuable manuscripts were destroyed or removed.", "title": "" }, { "paragraph_id": 2, "text": "The river Grosne flows northward through the commune and crosses the town.", "title": "Geography" }, { "paragraph_id": 3, "text": "", "title": "External links" } ]
Cluny is a commune in the eastern French department of Saône-et-Loire, in the region of Bourgogne-Franche-Comté. It is 20 km (12 mi) northwest of Mâcon. The town grew up around the Benedictine Abbey of Cluny, founded by Duke William I of Aquitaine in 910. The height of Cluniac influence was from the second half of the 10th century through the early 12th. The abbey was sacked by the Huguenots in 1562, and many of its valuable manuscripts were destroyed or removed.
2002-01-14T21:18:48Z
2023-11-09T11:49:27Z
[ "Template:SaôneLoire-geo-stub", "Template:Other uses", "Template:Infobox French commune", "Template:IPA-fr", "Template:Convert", "Template:Reflist", "Template:Cite EB9", "Template:Saône-et-Loire communes", "Template:Commons category", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Cluny
7,756
Chet Atkins
Chester Burton Atkins (June 20, 1924 – June 30, 2001), also known as "Mr. Guitar" and "The Country Gentleman", was an American musician who, along with Owen Bradley and Bob Ferguson, helped create the Nashville sound, the country music style which expanded its appeal to adult pop music fans. He was primarily a guitarist, but he also played the mandolin, fiddle, banjo, and ukulele, and occasionally sang. Atkins's signature picking style was inspired by Merle Travis. Other major guitar influences were Django Reinhardt, George Barnes, Les Paul, and, later, Jerry Reed. His distinctive picking style and musicianship brought him admirers inside and outside the country scene, both in the United States and abroad. Atkins spent most of his career at RCA Victor and produced records for the Browns, Hank Snow, Porter Wagoner, Norma Jean, Dolly Parton, Dottie West, Perry Como, Floyd Cramer, Elvis Presley, the Everly Brothers, Eddy Arnold, Don Gibson, Jim Reeves, Jerry Reed, Skeeter Davis, Waylon Jennings, Roger Whittaker, Ann-Margret and many others. Rolling Stone credited Atkins with inventing the "popwise 'Nashville sound' that rescued country music from a commercial slump" and ranked him number 21 on their list of "The 100 Greatest Guitarists of All Time". In 2023, Atkins was named the 39th best guitarist of all time. Among many other honors, Atkins received 14 Grammy Awards and the Grammy Lifetime Achievement Award. He also received nine Country Music Association awards for Instrumentalist of the Year. He was inducted into the Rock & Roll Hall of Fame, the Country Music Hall of Fame and Museum, and the Musicians Hall of Fame and Museum. George Harrison was also inspired by Chet Atkins; early Beatles songs such as "All My Loving" show the influence. Atkins was born on June 20, 1924, in Luttrell, Tennessee, near Clinch Mountain. His parents divorced when he was six years old, after which he was raised by his mother. He was the youngest of three boys and a girl. He started out on the ukulele, later moving on to the fiddle, but he made a swap with his brother Lowell when he was nine: an old pistol and some chores for a guitar. He stated in his 1974 autobiography, "We were so poor and everybody around us was so poor that it was the forties before anyone even knew there had been a depression." Forced to relocate to Fortson, Georgia, outside of Columbus to live with his father because of a critical asthma condition, Atkins was a sensitive youth who became obsessed with music. Because of his illness, he was forced to sleep in a straight-back chair to breathe comfortably. On those nights, he played his guitar until he fell asleep holding it, a habit that lasted his whole life. While living in Fortson, Atkins attended the historic Mountain Hill School. He returned in the 1990s to play a series of charity concerts to save the school from demolition. Stories have been told about the very young Chet who, when a friend or relative would come to visit and play guitar, crowded the musician and put his ear so close to the instrument that it became difficult for the visitor to play. Atkins became an accomplished guitarist while he was in high school. He used the restroom in the school to practice because it had good acoustics. His first guitar had a nail for a nut and was so bowed that only the first few frets could be used. He later purchased a semi-acoustic electric guitar and amp, but he had to travel many miles to find an electrical outlet, since his home didn't have electricity. 
Later in life, he lightheartedly gave himself (along with John Knowles, Tommy Emmanuel, Steve Wariner, and Jerry Reed) the honorary degree CGP ("Certified Guitar Player"). In 2011, his daughter Merle Atkins Russell bestowed the CGP degree on his longtime sideman Paul Yandell. She then declared no more CGPs would be allowed by the Atkins estate. His half-brother Jim was a successful guitarist who worked with the Les Paul Trio in New York. Atkins did not have a strong style of his own until 1939 when (while still living in Georgia) he heard Merle Travis picking over WLW radio. This early influence dramatically shaped his unique playing style. Whereas Travis used his index finger on his right hand for the melody and his thumb for the bass notes, Atkins expanded his right-hand style to include picking with his first three fingers, with the thumb on bass. Chet Atkins was an amateur radio general class licensee. Formerly using the call sign WA4CZD, he obtained the vanity call sign W4CGP in 1998 to include the CGP designation, which supposedly stood for "Certified Guitar Picker". He was a member of the American Radio Relay League. After dropping out of high school in 1942, Atkins landed a job at WNOX (AM) (now WNML) radio in Knoxville, where he played fiddle and guitar with the singer Bill Carlisle and the comic Archie Campbell and became a member of the station's Dixieland Swingsters, a small swing instrumental combo. After three years, he moved to WLW-AM in Cincinnati, Ohio, where Merle Travis had formerly worked. After six months, he moved to Raleigh and worked with Johnnie and Jack before heading for Richmond, Virginia, where he performed with Sunshine Sue Workman. Atkins's shy personality worked against him, as did the fact that his sophisticated style led many to doubt he was truly "country". He was fired often but was soon able to land another job at another radio station on account of his unique playing ability. Atkins and Jethro Burns (of Homer and Jethro) married twin sisters Leona and Lois Johnson, who sang as Laverne and Fern Johnson, the Johnson Sisters. Leona Atkins outlived her husband by eight years, dying in 2009 at the age of 85. Travelling to Chicago, Atkins auditioned for Red Foley, who was leaving his star position on WLS-AM's National Barn Dance to join the Grand Ole Opry. Atkins made his first appearance at the Opry in 1946 as a member of Foley's band. He also recorded a single for Nashville-based Bullet Records that year. That single, "Guitar Blues", was fairly progressive, including a clarinet solo by the Nashville dance band musician Dutch McMillan, with Owen Bradley on piano. He had a solo spot on the Opry, but when that was cut, Atkins moved on to KWTO in Springfield, Missouri. Despite the support of executive Si Siman, however, he soon was fired for not sounding "country enough". While working with a Western band in Denver, Colorado, Atkins came to the attention of RCA Victor. Siman had been encouraging Steve Sholes to sign Atkins, as his style (with the success of Merle Travis as a hit recording artist) was suddenly in vogue. Sholes, A&R director of country music at RCA, tracked Atkins down in Denver. He made his first RCA Victor recordings in Chicago in 1947, but they did not sell. He did some studio work for RCA that year, but had relocated to Knoxville again where he worked with Homer and Jethro on WNOX's new Saturday night radio show The Tennessee Barn Dance and the popular Midday Merry Go Round. 
In 1949, he left WNOX to join June Carter with Mother Maybelle and the Carter Sisters on KWTO. This incarnation of the Carter Family featured Maybelle Carter and daughters June, Helen, and Anita. Their work soon attracted attention from the Grand Ole Opry. The group relocated to Nashville in the mid-1950s. Atkins began working on recording sessions and performing on WSM-AM and the Opry. Atkins became a member of the Opry in the 1950s. While he had not yet had a hit record for RCA Victor, his stature was growing. He began assisting Sholes as a session leader when the New York–based producer needed help organizing Nashville sessions for RCA Victor artists. Atkins's first hit single was "Mr. Sandman", followed by "Silver Bell", which he recorded as a duet with Hank Snow. His albums also became more popular. He was featured on ABC-TV's The Eddy Arnold Show in the summer of 1956 and on Country Music Jubilee in 1957 and 1958 (by then renamed Jubilee USA). In addition to recording, Atkins was a design consultant for Gretsch, which manufactured a popular Chet Atkins line of electric guitars from 1955 to 1980. He became manager of RCA Victor's Nashville studios, eventually inspiring and seeing the completion of the legendary RCA Studio B, the first purpose-built recording studio on the now-famous Music Row. Later, Atkins and Owen Bradley would also be instrumental in the creation of RCA Studio A, the building adjacent to Studio B. When Sholes took over pop production in 1957—a result of his success with Elvis Presley—he put Atkins in charge of RCA Victor's Nashville division. With country music record sales declining as rock and roll became more popular, Atkins and Bob Ferguson took their cue from Owen Bradley and eliminated fiddles and steel guitar as a means of making country singers appeal to pop fans. This became known as the Nashville sound, which Atkins said was a label created by the media for a style of recording during that period intended to keep country music (and their jobs) viable. Atkins used the Jordanaires and a rhythm section on hits such as Jim Reeves's "Four Walls" and "He'll Have to Go" and Don Gibson's "Oh Lonesome Me" and "Blue Blue Day". The once-rare phenomenon of having a country hit cross over to pop success became more common. He and Bradley had essentially put the producer in the driver's seat, guiding an artist's choice of material and the musical background. Atkins made his own records, which usually visited pop standards and jazz, in a sophisticated home studio, often recording the rhythm tracks at RCA and adding his solo parts at home, refining the tracks until the results satisfied him. Guitarists of all styles came to admire various Atkins albums for their unique musical ideas and in some cases experimental electronic ideas. In this period, he became known internationally as "Mister Guitar", inspiring an album, Mister Guitar, engineered by both Bob Ferris and Bill Porter, Ferris's replacement. At the end of March 1959, Porter took over as chief engineer at RCA's Nashville studio, in the space eventually known as Studio B after the facility expanded with a second studio in 1960. (At the time, RCA's sole Nashville studio had no letter designation.) Porter soon helped Atkins get a better reverberation sound from the studio's German effects device, an EMT plate reverb.
With his golden ear, Porter found the studio's acoustics to be problematic, and he devised a set of acoustic baffles to hang from the ceiling, then selected positions for microphones based on resonant room modes. The sound of the recordings improved significantly, and the studio achieved a string of successes. The Nashville sound became more dynamic. In later years, when Bradley asked how he achieved his sound, Atkins told him "it was Porter." Porter described Atkins as respectful of musicians when recording—if someone was out of tune, he would not single that person out by name. Instead, he would say something like, "we got a little tuning problem ... Everybody check and see what's going on." If that did not work, Atkins would instruct Porter to turn the offending player down in the mix. When Porter left RCA in late 1964, Atkins said, "the sound was never the same, never as great." Atkins's trademark "Atkins style" of playing uses the thumb and first two or sometimes three fingers of the right hand. He developed this style from listening to Merle Travis, heard occasionally over a primitive radio. He was sure no one could play that articulately with just the thumb and index finger (which was exactly how Travis played), and he assumed it required the thumb and two fingers—and that was the style he pioneered and mastered. He enjoyed jamming with fellow studio musicians, and they were asked to perform at the Newport Jazz Festival in 1960. That performance was cancelled because of rioting, but a live recording of the group (After the Riot at Newport) was released. Atkins performed by invitation at the White House for every U.S. president from John F. Kennedy through George H. W. Bush. Atkins was a member of the Million Dollar Band during the 1980s. He is also well known for his song "Yankee Doodle Dixie", in which he played "Yankee Doodle" and "Dixie" simultaneously on the same guitar. Before his mentor Sholes died in 1968, Atkins had become vice president of RCA's country division. In 1987, he told Nine-O-One Network magazine that he was "ashamed" of his promotion: "I wanted to be known as a guitarist and I know, too, that they give you titles like that in lieu of money. So beware when they want to make you vice president." He had brought Waylon Jennings, Willie Nelson, Connie Smith, Bobby Bare, Dolly Parton, Jerry Reed, and John Hartford to the label in the 1960s and inspired and helped countless others. He took a considerable risk during the mid-1960s, when the civil rights movement sparked violence throughout the South, by signing country music's first African-American singer, Charley Pride, who sang rawer country than the smoother music Atkins had pioneered. Atkins's biggest hit single came in 1965 with "Yakety Axe", an adaptation of "Yakety Sax" by his friend, the saxophonist Boots Randolph. He rarely performed in those days and eventually hired other RCA producers, such as Bob Ferguson and Felton Jarvis, to lessen his workload. In the 1970s, Atkins became increasingly stressed by his executive duties. He produced fewer records, but could still turn out hits such as Perry Como's 1973 pop hit "And I Love You So". He recorded extensively with close friend and fellow picker Jerry Reed, who had become a hit artist in his own right.
A 1973 diagnosis of colon cancer, however, led Atkins to redefine his role at RCA Records, to allow others to handle administration while he went back to his first love, the guitar, often recording with Reed or even Jethro Burns from Homer and Jethro (his brother-in-law) after Homer died in 1971. Atkins would turn over his administrative duties to Jerry Bradley, son of Owen, in 1973 at RCA. Atkins did little production work at RCA after stepping down and, in fact, had hired producers at the label in the 1960s, among them Bob Ferguson and Felton Jarvis. As a recording artist, Atkins grew disillusioned with RCA in the late 1970s. He felt stifled because the record company would not let him branch into jazz. He had also produced late '60s jazz recordings by Canadian guitarist Lenny Breau, a friend and protégé. His mid-1970s collaborations with one of his influences, Les Paul, Chester & Lester and Guitar Monsters, had already reflected that interest; Chester & Lester was one of the best-selling recordings of Atkins's career. At the same time, he grew dissatisfied with the direction Gretsch (no longer family-owned) was taking, withdrew his authorization for them to use his name, and began designing guitars with Gibson. In 1982, Atkins ended his 35-year association with RCA Records and signed with rival Columbia Records. He produced his first album for Columbia in 1983. Atkins had always been an ardent lover of jazz and throughout his career he was often criticized by "pure" country musicians for his jazz influences. He also said on many occasions that he did not like being referred to as a "country guitarist", insisting that he was "a guitarist, period." Although he played by ear and was a masterful improviser, he was able to read music and even performed some classical guitar pieces. When Roger C. Field, a friend, suggested to him in 1991 that he record and perform with a female singer, he did so with Suzy Bogguss. Atkins returned to his country roots for albums he recorded with Mark Knopfler and Jerry Reed. Knopfler had long mentioned Atkins as one of his earliest influences. Atkins also collaborated with Australian guitar legend Tommy Emmanuel. On being asked to name the ten most influential guitarists of the twentieth century, he named Django Reinhardt to the first position and also placed himself on the list. In later years, he returned to radio, appearing on Garrison Keillor's Prairie Home Companion program, on American Public Media radio, even picking up a fiddle from time to time, and performing songs such as Bob Wills's "Corrina, Corrina" and Willie Nelson's "Seven Spanish Angels" with Nelson on a 1985 broadcast of the show at the Bridges Auditorium on the campus of Pomona College. Atkins received numerous awards, including 14 Grammy awards and nine Country Music Association awards for Instrumentalist of the Year. In 1993, he was honored with the Grammy Lifetime Achievement Award. Billboard magazine awarded him its Century Award, its "highest honor for distinguished creative achievement", in December 1997. Atkins is notable for his broad influence. His love for numerous styles of music can be traced from his early recording of the stride pianist James P. Johnson's "Johnson Rag" all the way to the rock stylings of Eric Johnson, an invited guest on Atkins's recording sessions; Atkins's attempt to copy Johnson's influential rocker "Cliffs of Dover" led him to create a unique arrangement of "Londonderry Air (Danny Boy)".
The classical guitar selections included on almost all his albums were, for many American artists working in the field today, the first classical guitar they ever heard. He recorded smooth jazz guitar still played on American airwaves today. Atkins continued performing in the 1990s, but his health declined after he was diagnosed again with colon cancer in 1996. He died on June 30, 2001, at his home in Nashville, Tennessee, at the age of 77. His memorial service was held at Ryman Auditorium in Nashville. He was buried at Harpeth Hills Memory Gardens in Nashville. A stretch of Interstate 185 in southwest Georgia (between LaGrange and Columbus) is named "Chet Atkins Parkway". This stretch of interstate runs through Fortson, where Atkins spent much of his childhood. In 2002, Atkins was posthumously inducted into the Rock and Roll Hall of Fame. His award was presented by Marty Stuart and Brian Setzer and accepted by Atkins's grandson, Jonathan Russell. The following year, Atkins ranked number 28 in Country Music Television's "40 Greatest Men of Country Music". At the age of 13, the future jazz guitarist Earl Klugh was captivated watching Atkins's guitar playing on The Perry Como Show. Similarly, Atkins was a big influence on Doyle Dykes. Atkins also inspired Drexl Jonez and Tommy Emmanuel. Johnny Winter's thumb-picking style came from Atkins's playing. Clint Black's album Nothin' but the Taillights includes the song "Ode to Chet", which includes the lyrics "'Cause I can win her over like Romeo did Juliet, if I can only show her I can almost pick that legato lick like Chet" and "It'll take more than Mel Bay 1, 2, & 3 if I'm ever gonna play like CGP." Atkins played guitar on the track. At the end of the song, Black and Atkins had a brief conversation. Atkins's song "Jam Man" is currently used in commercials for Esurance. In 1967, a tribute song, "Chet's Tune", was produced for Atkins's birthday, with contributions by a long list of RCA Victor artists, including Eddy Arnold, Connie Smith, Jerry Reed, Willie Nelson, Hank Snow, and others. The song was written by the Nashville songwriter Cy Coben, a friend of Atkins. The single reached number 38 on the country charts. In 2009, Steve Wariner released an album titled My Tribute to Chet Atkins. One song from that record, "Producer's Medley", featured Wariner's recreation of several famous songs that Atkins both produced and performed. "Producer's Medley" won the Grammy for Best Country Instrumental Performance in 2010. In November 2011, Rolling Stone ranked Atkins number 21 on their list of the "100 Greatest Guitarists of All Time".
Country Music Association
Country Music Hall of Fame and Museum
Grammy Awards
Rock and Roll Hall of Fame
[ { "paragraph_id": 0, "text": "Chester Burton Atkins (June 20, 1924 – June 30, 2001), also known as \"Mr. Guitar\" and \"The Country Gentleman\", was an American musician who, along with Owen Bradley and Bob Ferguson, helped create the Nashville sound, the country music style which expanded its appeal to adult pop music fans. He was primarily a guitarist, but he also played the mandolin, fiddle, banjo, and ukulele, and occasionally sang.", "title": "" }, { "paragraph_id": 1, "text": "Atkins's signature picking style was inspired by Merle Travis. Other major guitar influences were Django Reinhardt, George Barnes, Les Paul, and, later, Jerry Reed. His distinctive picking style and musicianship brought him admirers inside and outside the country scene, both in the United States and abroad. Atkins spent most of his career at RCA Victor and produced records for the Browns, Hank Snow, Porter Wagoner, Norma Jean, Dolly Parton, Dottie West, Perry Como, Floyd Cramer, Elvis Presley, the Everly Brothers, Eddy Arnold, Don Gibson, Jim Reeves, Jerry Reed, Skeeter Davis, Waylon Jennings, Roger Whittaker, Ann-Margret and many others.", "title": "" }, { "paragraph_id": 2, "text": "Rolling Stone credited Atkins with inventing the \"popwise 'Nashville sound' that rescued country music from a commercial slump\" and ranked him number 21 on their list of \"The 100 Greatest Guitarists of All Time\". In 2023, Atkins was named the 39th best guitarist of all time. Among many other honors, Atkins received 14 Grammy Awards and the Grammy Lifetime Achievement Award. He also received nine Country Music Association awards for Instrumentalist of the Year. He was inducted into the Rock & Roll Hall of Fame, the Country Music Hall of Fame and Museum, and the Musicians Hall of Fame and Museum. George Harrison was also inspired by Chet Atkins; early Beatles songs such as \"All My Loving\" show the influence.", "title": "" }, { "paragraph_id": 3, "text": "Atkins was born on June 20, 1924, in Luttrell, Tennessee, near Clinch Mountain. His parents divorced when he was six years old, after which he was raised by his mother. He was the youngest of three boys and a girl. He started out on the ukulele, later moving on to the fiddle, but he made a swap with his brother Lowell when he was nine: an old pistol and some chores for a guitar. He stated in his 1974 autobiography, \"We were so poor and everybody around us was so poor that it was the forties before anyone even knew there had been a depression.\" Forced to relocate to Fortson, Georgia, outside of Columbus to live with his father because of a critical asthma condition, Atkins was a sensitive youth who became obsessed with music. Because of his illness, he was forced to sleep in a straight-back chair to breathe comfortably. On those nights, he played his guitar until he fell asleep holding it, a habit that lasted his whole life. While living in Fortson, Atkins attended the historic Mountain Hill School. He returned in the 1990s to play a series of charity concerts to save the school from demolition. Stories have been told about the very young Chet who, when a friend or relative would come to visit and play guitar, crowded the musician and put his ear so close to the instrument that it became difficult for the visitor to play.", "title": "Biography" }, { "paragraph_id": 4, "text": "Atkins became an accomplished guitarist while he was in high school. He used the restroom in the school to practice because it had good acoustics. 
His first guitar had a nail for a nut and was so bowed that only the first few frets could be used. He later purchased a semi-acoustic electric guitar and amp, but he had to travel many miles to find an electrical outlet, since his home didn't have electricity.", "title": "Biography" }, { "paragraph_id": 5, "text": "Later in life, he lightheartedly gave himself (along with John Knowles, Tommy Emmanuel, Steve Wariner, and Jerry Reed) the honorary degree CGP (\"Certified Guitar Player\"). In 2011, his daughter Merle Atkins Russell bestowed the CGP degree on his longtime sideman Paul Yandell. She then declared no more CGPs would be allowed by the Atkins estate.", "title": "Biography" }, { "paragraph_id": 6, "text": "His half-brother Jim was a successful guitarist who worked with the Les Paul Trio in New York.", "title": "Biography" }, { "paragraph_id": 7, "text": "Atkins did not have a strong style of his own until 1939 when (while still living in Georgia) he heard Merle Travis picking over WLW radio. This early influence dramatically shaped his unique playing style. Whereas Travis used his index finger on his right hand for the melody and his thumb for the bass notes, Atkins expanded his right-hand style to include picking with his first three fingers, with the thumb on bass.", "title": "Biography" }, { "paragraph_id": 8, "text": "Chet Atkins was an amateur radio general class licensee. Formerly using the call sign WA4CZD, he obtained the vanity call sign W4CGP in 1998 to include the CGP designation, which supposedly stood for \"Certified Guitar Picker\". He was a member of the American Radio Relay League.", "title": "Biography" }, { "paragraph_id": 9, "text": "After dropping out of high school in 1942, Atkins landed a job at WNOX (AM) (now WNML) radio in Knoxville, where he played fiddle and guitar with the singer Bill Carlisle and the comic Archie Campbell and became a member of the station's Dixieland Swingsters, a small swing instrumental combo. After three years, he moved to WLW-AM in Cincinnati, Ohio, where Merle Travis had formerly worked.", "title": "Biography" }, { "paragraph_id": 10, "text": "After six months, he moved to Raleigh and worked with Johnnie and Jack before heading for Richmond, Virginia, where he performed with Sunshine Sue Workman. Atkins's shy personality worked against him, as did the fact that his sophisticated style led many to doubt he was truly \"country\". He was fired often but was soon able to land another job at another radio station on account of his unique playing ability.", "title": "Biography" }, { "paragraph_id": 11, "text": "Atkins and Jethro Burns (of Homer and Jethro) married twin sisters Leona and Lois Johnson, who sang as Laverne and Fern Johnson, the Johnson Sisters. Leona Atkins outlived her husband by eight years, dying in 2009 at the age of 85.", "title": "Biography" }, { "paragraph_id": 12, "text": "Travelling to Chicago, Atkins auditioned for Red Foley, who was leaving his star position on WLS-AM's National Barn Dance to join the Grand Ole Opry. Atkins made his first appearance at the Opry in 1946 as a member of Foley's band. He also recorded a single for Nashville-based Bullet Records that year. That single, \"Guitar Blues\", was fairly progressive, including a clarinet solo by the Nashville dance band musician Dutch McMillan, with Owen Bradley on piano. He had a solo spot on the Opry, but when that was cut, Atkins moved on to KWTO in Springfield, Missouri. 
Despite the support of executive Si Siman, however, he soon was fired for not sounding \"country enough\".", "title": "Biography" }, { "paragraph_id": 13, "text": "While working with a Western band in Denver, Colorado, Atkins came to the attention of RCA Victor. Siman had been encouraging Steve Sholes to sign Atkins, as his style (with the success of Merle Travis as a hit recording artist) was suddenly in vogue. Sholes, A&R director of country music at RCA, tracked Atkins down in Denver.", "title": "Biography" }, { "paragraph_id": 14, "text": "He made his first RCA Victor recordings in Chicago in 1947, but they did not sell. He did some studio work for RCA that year, but had relocated to Knoxville again where he worked with Homer and Jethro on WNOX's new Saturday night radio show The Tennessee Barn Dance and the popular Midday Merry Go Round.", "title": "Biography" }, { "paragraph_id": 15, "text": "In 1949, he left WNOX to join June Carter with Mother Maybelle and the Carter Sisters on KWTO. This incarnation of the Carter Family featured Maybelle Carter and daughters June, Helen, and Anita. Their work soon attracted attention from the Grand Ole Opry. The group relocated to Nashville in the mid-1950s. Atkins began working on recording sessions and performing on WSM-AM and the Opry. Atkins became a member of the Opry in the 1950s.", "title": "Biography" }, { "paragraph_id": 16, "text": "While he had not yet had a hit record for RCA Victor, his stature was growing. He began assisting Sholes as a session leader when the New York–based producer needed help organizing Nashville sessions for RCA Victor artists. Atkins's first hit single was \"Mr. Sandman\", followed by \"Silver Bell\", which he recorded as a duet with Hank Snow. His albums also became more popular. He was featured on ABC-TV's The Eddy Arnold Show in the summer of 1956 and on Country Music Jubilee in 1957 and 1958 (by then renamed Jubilee USA).", "title": "Biography" }, { "paragraph_id": 17, "text": "In addition to recording, Atkins was a design consultant for Gretsch, which manufactured a popular Chet Atkins line of electric guitars from 1955 to 1980. He became manager of RCA Victor's Nashville studios, eventually inspiring and seeing the completion of the legendary RCA Studio B, the first studio built specifically for the purpose of recording on the now-famous Music Row. Also later on, Chet and Owen Bradley would become instrumental in the creation of studio B's adjacent building RCA Studio A as well.", "title": "Biography" }, { "paragraph_id": 18, "text": "When Sholes took over pop production in 1957—a result of his success with Elvis Presley—he put Atkins in charge of RCA Victor's Nashville division. With country music record sales declining as rock and roll became more popular, Atkins and Bob Ferguson took their cue from Owen Bradley and eliminated fiddles and steel guitar as a means of making country singers appeal to pop fans. This became known as the Nashville sound, which Atkins said was a label created by the media for a style of recording during that period intended to keep country (and their jobs) viable.", "title": "Biography" }, { "paragraph_id": 19, "text": "Atkins used the Jordanaires and a rhythm section on hits such as Jim Reeves's \"Four Walls\" and \"He'll Have to Go\" and Don Gibson's \"Oh Lonesome Me\" and \"Blue Blue Day\". The once-rare phenomenon of having a country hit cross over to pop success became more common. 
He and Bradley had essentially put the producer in the driver's seat, guiding an artist's choice of material and the musical background.", "title": "Biography" }, { "paragraph_id": 20, "text": "Atkins made his own records, which usually visited pop standards and jazz, in a sophisticated home studio, often recording the rhythm tracks at RCA and adding his solo parts at home, refining the tracks until the results satisfied him. Guitarists of all styles came to admire various Atkins albums for their unique musical ideas and in some cases experimental electronic ideas. In this period, he became known internationally as \"Mister Guitar\", inspiring an album, Mister Guitar, engineered by both Bob Ferris and Bill Porter, Ferris's replacement.", "title": "Biography" }, { "paragraph_id": 21, "text": "At the end of March 1959, Porter took over as chief engineer at RCA's Nashville studio, in the space eventually known as Studio B after the facility expanded with a second studio in 1960. (At the time, RCA's sole Nashville studio had no letter designation.) Porter soon helped Atkins get a better reverberation sound from the studio's German effects device, an EMT plate reverb. With his golden ear, Porter found the studio's acoustics to be problematic, and he devised a set of acoustic baffles to hang from the ceiling, then selected positions for microphones based on resonant room modes. The sound of the recordings improved significantly, and the studio achieved a string of successes. The Nashville sound became more dynamic. In later years, when Bradley asked how he achieved his sound, Atkins told him \"it was Porter.\" Porter described Atkins as respectful of musicians when recording—if someone was out of tune, he would not single that person out by name. Instead, he would say something like, \"we got a little tuning problem ... Everybody check and see what's going on.\" If that did not work, Atkins would instruct Porter to turn the offending player down in the mix. When Porter left RCA in late-1964, Atkins said, \"the sound was never the same, never as great.\"", "title": "Biography" }, { "paragraph_id": 22, "text": "Atkins's trademark \"Atkins style\" of playing uses the thumb and first two or sometimes three fingers of the right hand. He developed this style from listening to Merle Travis, occasionally on a primitive radio. He was sure no one could play that articulately with just the thumb and index finger (which was exactly how Travis played), and he assumed it required the thumb and two fingers—and that was the style he pioneered and mastered.", "title": "Biography" }, { "paragraph_id": 23, "text": "He enjoyed jamming with fellow studio musicians, and they were asked to perform at the Newport Jazz Festival in 1960. That performance was cancelled because of rioting, but a live recording of the group (After the Riot at Newport) was released. Atkins performed by invitation at the White House for every U.S. president from John F. Kennedy through to George H. W. Bush. Atkins was a member of the Million Dollar Band during the 1980s. He is also well known for his song \"Yankee Doodle Dixie\", in which he played \"Yankee Doodle\" and \"Dixie\" simultaneously, on the same guitar.", "title": "Biography" }, { "paragraph_id": 24, "text": "Before his mentor Sholes died in 1968, Atkins had become vice president of RCA's country division. 
In 1987, he told Nine-O-One Network magazine that he was \"ashamed\" of his promotion: \"I wanted to be known as a guitarist and I know, too, that they give you titles like that in lieu of money. So beware when they want to make you vice president.\" He had brought Waylon Jennings, Willie Nelson, Connie Smith, Bobby Bare, Dolly Parton, Jerry Reed, and John Hartford to the label in the 1960s and inspired and helped countless others. He took a considerable risk during the mid-1960s, when the civil rights movement sparked violence throughout the South, by signing country music's first African-American singer, Charley Pride, who sang rawer country than the smoother music Atkins had pioneered.", "title": "Biography" }, { "paragraph_id": 25, "text": "Atkins's biggest hit single came in 1965, with \"Yakety Axe\", an adaptation of \"Yakety Sax\", by his friend, the saxophonist Boots Randolph. He rarely performed in those days and eventually hired other RCA producers, such as Bob Ferguson and Felton Jarvis, to lessen his workload.", "title": "Biography" }, { "paragraph_id": 26, "text": "In the 1970s, Atkins became increasingly stressed by his executive duties. He produced fewer records, but could still turn out hits such as Perry Como's 1973 pop hit \"And I Love You So\". He recorded extensively with close friend and fellow picker Jerry Reed, who had become a hit artist in his own right. A 1973 diagnosis of colon cancer, however, led Atkins to redefine his role at RCA Records, to allow others to handle administration while he went back to his first love, the guitar, often recording with Reed or even Jethro Burns from Homer and Jethro (his brother-in-law) after Homer died in 1971. Atkins would turn over his administrative duties to Jerry Bradley, son of Owen, in 1973 at RCA.", "title": "Biography" }, { "paragraph_id": 27, "text": "Atkins did little production work at RCA after stepping down and in fact, had hired producers at the label in the 1960s, among them Bob Ferguson and Felton Jarvis. As a recording artist, Atkins grew disillusioned with RCA in the late 1970s. He felt stifled because the record company would not let him branch into jazz. He had also produced late '60s jazz recordings by Canadian guitarist Lenny Breau, a friend and protege. His mid-1970s collaborations with one of his influences, Les Paul, Chester & Lester and Guitar Monsters, had already reflected that interest; Chester & Lester was one of the best-selling recordings of Atkins's career. At the same time, he grew dissatisfied with the direction Gretsch (no longer family-owned) was going and withdrew his authorization for them to use his name and began designing guitars with Gibson. In 1982, Atkins ended his 35-year association with RCA Records and signed with rival Columbia Records. He produced his first album for Columbia in 1983.", "title": "Biography" }, { "paragraph_id": 28, "text": "Atkins had always been an ardent lover of jazz and throughout his career he was often criticized by \"pure\" country musicians for his jazz influences. He also said on many occasions that he did not like being referred to as a \"country guitarist\", insisting that he was \"a guitarist, period.\" Although he played by ear and was a masterful improviser, he was able to read music and even performed some classical guitar pieces. When Roger C. 
Field, a friend, suggested to him in 1991 that he record and perform with a female singer, he did so with Suzy Bogguss.", "title": "Biography" }, { "paragraph_id": 29, "text": "Atkins returned to his country roots for albums he recorded with Mark Knopfler and Jerry Reed. Knopfler had long mentioned Atkins as one of his earliest influences. Atkins also collaborated with Australian guitar legend Tommy Emmanuel. On being asked to name the ten most influential guitarists of the twentieth century, he named Django Reinhardt to the first position, and also placed himself on the list.", "title": "Biography" }, { "paragraph_id": 30, "text": "In later years, he returned to radio, appearing on Garrison Keillor's Prairie Home Companion program, on American Public Media radio, even picking up a fiddle from time to time, and performing songs such as Bob Wills's \"Corrina, Corrina\" and Willie Nelson's \"Seven Spanish Angels\" with Nelson on a 1985 broadcast of the show at the Bridges Auditorium on the campus of Pomona College.", "title": "Biography" }, { "paragraph_id": 31, "text": "Atkins received numerous awards, including 14 Grammy awards and nine Country Music Association awards for Instrumentalist of the Year. In 1993, he was honored with the Grammy Lifetime Achievement Award. Billboard magazine awarded him its Century Award, its \"highest honor for distinguished creative achievement\", in December 1997.", "title": "Death and legacy" }, { "paragraph_id": 32, "text": "Atkins is notable for his broad influence. His love for numerous styles of music can be traced from his early recording of the stride pianist James P. Johnson's \"Johnson Rag\", all the way to the rock stylings of Eric Johnson, an invited guest on Atkins's recording sessions, who, when Atkins attempted to copy his influential rocker \"Cliffs of Dover\", led to Atkins's creation of a unique arrangement of \"Londonderry Air (Danny Boy)\".", "title": "Death and legacy" }, { "paragraph_id": 33, "text": "The classical guitar selections included on almost all his albums were, for many American artists working in the field today, the first classical guitar they ever heard. He recorded smooth jazz guitar still played on American airwaves today.", "title": "Death and legacy" }, { "paragraph_id": 34, "text": "Atkins continued performing in the 1990s, but his health declined after he was diagnosed again with colon cancer in 1996. He died on June 30, 2001, at his home in Nashville, Tennessee, at the age of 77. His memorial service was held at Ryman Auditorium in Nashville. He was buried at Harpeth Hills Memory Gardens in Nashville.", "title": "Death and legacy" }, { "paragraph_id": 35, "text": "A stretch of Interstate 185 in southwest Georgia (between LaGrange and Columbus) is named \"Chet Atkins Parkway\". This stretch of interstate runs through Fortson, where Atkins spent much of his childhood.", "title": "Death and legacy" }, { "paragraph_id": 36, "text": "In 2002, Atkins was posthumously inducted into the Rock and Roll Hall of Fame. His award was presented by Marty Stuart and Brian Setzer and accepted by Atkins's grandson, Jonathan Russell. The following year, Atkins ranked number 28 in Country Music Television's \"40 Greatest Men of Country Music\".", "title": "Death and legacy" }, { "paragraph_id": 37, "text": "At the age of 13, the future jazz guitarist Earl Klugh was captivated watching Atkins's guitar playing on The Perry Como Show. Similarly, he was a big influence on Doyle Dykes. 
Atkins also inspired Drexl Jonez and Tommy Emmanuel.", "title": "Death and legacy" }, { "paragraph_id": 38, "text": "Johnny Winter's thumb-picking style came from Atkins's playing.", "title": "Death and legacy" }, { "paragraph_id": 39, "text": "Clint Black's album Nothin' but the Taillights includes the song \"Ode to Chet\", which includes the lyrics \"'Cause I can win her over like Romeo did Juliet, if I can only show her I can almost pick that legato lick like Chet\" and \"It'll take more than Mel Bay 1, 2, & 3 if I'm ever gonna play like CGP.\" Atkins played guitar on the track. At the end of the song, Black and Atkins had a brief conversation.", "title": "Death and legacy" }, { "paragraph_id": 40, "text": "Atkins's song \"Jam Man\" is currently used in commercials for Esurance.", "title": "Death and legacy" }, { "paragraph_id": 41, "text": "In 1967, a tribute song, \"Chet's Tune\", was produced for Atkins's birthday, with contributions by a long list of RCA Victor artists, including Eddy Arnold, Connie Smith, Jerry Reed, Willie Nelson, Hank Snow, and others. The song was written by the Nashville songwriter Cy Coben, a friend of Atkins. The single reached number 38 on the country charts.", "title": "Death and legacy" }, { "paragraph_id": 42, "text": "In 2009, Steve Wariner released an album titled My Tribute to Chet Atkins. One song from that record, \"Producer's Medley\", featured Wariner's recreation of several famous songs that Atkins both produced and performed. \"Producer's Medley\" won the Grammy for Best Country Instrumental Performance in 2010.", "title": "Death and legacy" }, { "paragraph_id": 43, "text": "In November 2011, Rolling Stone ranked Atkins number 21 on their list of the \"100 Greatest Guitarists of All Time\".", "title": "Death and legacy" }, { "paragraph_id": 44, "text": "Country Music Association", "title": "Industry awards" }, { "paragraph_id": 45, "text": "Country Music Hall of Fame and Museum", "title": "Industry awards" }, { "paragraph_id": 46, "text": "Grammy Awards", "title": "Industry awards" }, { "paragraph_id": 47, "text": "Rock and Roll Hall of Fame", "title": "Industry awards" } ]
Chester Burton Atkins, also known as "Mr. Guitar" and "The Country Gentleman", was an American musician who, along with Owen Bradley and Bob Ferguson, helped create the Nashville sound, the country music style which expanded its appeal to adult pop music fans. He was primarily a guitarist, but he also played the mandolin, fiddle, banjo, and ukulele, and occasionally sang. Atkins's signature picking style was inspired by Merle Travis. Other major guitar influences were Django Reinhardt, George Barnes, Les Paul, and, later, Jerry Reed. His distinctive picking style and musicianship brought him admirers inside and outside the country scene, both in the United States and abroad. Atkins spent most of his career at RCA Victor and produced records for the Browns, Hank Snow, Porter Wagoner, Norma Jean, Dolly Parton, Dottie West, Perry Como, Floyd Cramer, Elvis Presley, the Everly Brothers, Eddy Arnold, Don Gibson, Jim Reeves, Jerry Reed, Skeeter Davis, Waylon Jennings, Roger Whittaker, Ann-Margret and many others. Rolling Stone credited Atkins with inventing the "popwise 'Nashville sound' that rescued country music from a commercial slump" and ranked him number 21 on their list of "The 100 Greatest Guitarists of All Time". In 2023, Atkins was named the 39th best guitarist of all time. Among many other honors, Atkins received 14 Grammy Awards and the Grammy Lifetime Achievement Award. He also received nine Country Music Association awards for Instrumentalist of the Year. He was inducted into the Rock & Roll Hall of Fame, the Country Music Hall of Fame and Museum, and the Musicians Hall of Fame and Museum. George Harrison was also inspired by Chet Atkins; early Beatles songs such as "All My Loving" show the influence.
2002-01-15T04:40:59Z
2023-11-15T04:02:21Z
[ "Template:Won", "Template:Official website", "Template:Pop Chronicles", "Template:Citation needed", "Template:Further", "Template:Webarchive", "Template:Navboxes", "Template:Use American English", "Template:Use mdy dates", "Template:Redirect", "Template:Cite web", "Template:Dead link", "Template:Cbignore", "Template:Rockhall", "Template:Short description", "Template:Infobox musical artist", "Template:Reflist", "Template:Gilliland", "Template:Commons category", "Template:Cite news", "Template:IMDb name", "Template:Chet Atkins", "Template:Grand Ole Opry members", "Template:Authority control", "Template:When", "Template:Honoured", "Template:ISBN", "Template:YouTube", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Chet_Atkins
7,757
Conrad II (disambiguation)
Conrad II may refer to:
[ { "paragraph_id": 0, "text": "Conrad II may refer to:", "title": "" } ]
Conrad II may refer to:
Conrad II, Duke of Transjurane Burgundy
Conrad II, Holy Roman Emperor
Conrad II, Duke of Carinthia
Conrad II of Italy (1074–1101)
Conrad II of Dachau
Conrad II of Znojmo
Conrad II of Bohemia
Conrad II, Margrave of Lusatia
Conrad II
Conrad II of Jerusalem (1228–1254)
Conrad II of Teck (1235–1292)
Conrad II the Hunchback (1252/65–1304)
Konrad II the Gray
2023-06-27T04:05:45Z
[ "Template:Floruit", "Template:Hndis" ]
https://en.wikipedia.org/wiki/Conrad_II_(disambiguation)
7,765
Cahiers du Cinéma
Cahiers du Cinéma (French pronunciation: [kaje dy sinema], lit. 'notebooks on cinema') is a French film magazine co-founded in 1951 by André Bazin, Jacques Doniol-Valcroze, and Joseph-Marie Lo Duca. It developed from the earlier magazine Revue du Cinéma (lit. 'review of cinema'; established in 1928) involving members of two Paris film clubs—Objectif 49 (Robert Bresson, Jean Cocteau, and Alexandre Astruc, among others; lit. 'objective 49') and Ciné-Club du Quartier Latin (lit. 'cinema club of the Latin Quarter'). Initially edited by Doniol-Valcroze and, after 1957, by Éric Rohmer (a.k.a. Maurice Scherer), it included amongst its writers Jacques Rivette, Jean-Luc Godard, Claude Chabrol, and François Truffaut, who went on to become highly influential filmmakers. It is the oldest French-language film magazine in publication. The first issue of Cahiers appeared in April 1951. Much of its head staff, including Bazin, Doniol-Valcroze, Lo Duca, and the various younger, less-established critics, had met and shared their beliefs about film through their involvement in the publication of Revue du Cinéma from 1946 until its final issue in 1948; Cahiers was created as a successor to this earlier magazine. Early issues of Cahiers were small journals of thirty pages which bore minimalist covers, distinctive for their lack of headlines in favor of film stills on a bright yellow background. Each issue contained four or five articles (with at least one piece by Bazin in most issues), most of which were reviews of specific films or appreciations of directors, supplemented on occasion by longer theoretical essays. The first few years of the magazine's publication were dominated by Bazin, who was the de facto head of the editorial board. Bazin intended Cahiers to be a continuation of the intellectual form of criticism that Revue had printed, which prominently featured his articles advocating for realism as the most valuable quality of cinema. As more issues of Cahiers were published, however, Bazin found that a group of young protégés and critics serving as editors underneath him were beginning to disagree with him in the pages of the magazine. Godard would voice his discontent with Bazin as early as 1952, when he challenged Bazin's views on editing in an article for the September issue of Cahiers. Gradually, the tastes of these young critics drifted away from those of Bazin, as members of the group began to write critical appreciations of more commercial American filmmakers such as Alfred Hitchcock and Howard Hawks rather than the canonized French and Italian filmmakers that interested Bazin. The younger critics broke completely with Bazin by 1954, when an article in the January issue by Truffaut attacked what he called La qualité française (lit. 'the French quality', usually translated as "The Tradition of Quality"), denouncing many critically respected French films of the time as being unimaginative, oversimplified, and even immoral adaptations of literary works. The article became the manifesto for the politique des auteurs (lit. 'the policy of the authors'), which became the label for the younger Cahiers critics' emphasis on the importance of the director in the creation of a film—as a film's "author"—and their re-evaluation of Hollywood films and directors such as Hitchcock, Hawks, Jerry Lewis, Robert Aldrich, Nicholas Ray, and Fritz Lang.
Subsequently, the American critic Andrew Sarris latched onto the word "auteur" and paired it with the English word "theory", coining the phrase "auteur theory", by which this critical approach is known in English-language film criticism. After the publication of Truffaut's article, Doniol-Valcroze and most of the Cahiers editors besides Bazin and Lo Duca rallied behind the rebellious authors; Lo Duca left Cahiers a year later, while Bazin, in failing health, gave editorial control of the magazine to Rohmer and largely left Paris, though he continued to write for the magazine. Now with control over the magazine's ideological approaches to film, the younger critics (minus Godard, who had left Paris in 1952, not to return until 1956) changed the format of Cahiers somewhat, frequently conducting interviews with directors deemed "auteurs" and voting on films in a "Council" of ten core critics. These critics came to champion non-American directors as well, writing on the mise en scène (the "dominant object of study" at the magazine) of such filmmakers as Jean Renoir, Roberto Rossellini, Kenji Mizoguchi, Max Ophüls, and Jean Cocteau, many of whom Bazin had introduced them to. By the end of the 1950s, many of the remaining editors of Cahiers, however, were becoming increasingly dissatisfied with the mere act of writing film criticism. Spurred on by the return of Godard to Paris in 1956 (who in the interim had made a short film himself), many of the younger critics became interested in making films themselves. Godard, Truffaut, Chabrol, Doniol-Valcroze, and even Rohmer, who had officially succeeded Doniol-Valcroze as head editor in 1958, began to divide their time between making films and writing about them. The films that these critics made were experimental explorations of various theoretical, artistic, and ideological aspects of the film form, and would, along with the films of young French filmmakers outside the Cahiers circle, form the basis for the cinematic movement known as the French New Wave. Meanwhile, Cahiers underwent staff changes, as Rohmer hired new editors such as Jean Douchet to fill the roles of those editors who were now making films, while other existing editors, particularly Jacques Rivette, began to write even more for the magazine. Many of the newer critical voices (except for Rivette) largely ignored the films of the New Wave in favor of Hollywood when they were not outright criticizing them, creating friction between much of the directorial side of the younger critics and the head editor Rohmer. A group of five Cahiers editors, including Godard and Doniol-Valcroze and led by Rivette, urged Rohmer to refocus the magazine's content on newer films such as their own. When he refused, the "gang of five" forced Rohmer out and installed Rivette as his replacement in 1963. Rivette shifted political and social concerns farther to the left, and began a trend in the magazine of paying more attention to non-Hollywood films. The style of the journal moved through literary modernism in the early 1960s to radicalism and dialectical materialism by 1970. Moreover, during the mid-1970s the magazine was run by a Maoist editorial collective. In the mid-1970s, a review of the American film Jaws marked the magazine's return to more commercial perspectives, and an editorial turnover (Serge Daney, Serge Toubiana, Thierry Jousse, Antoine de Baecque, and Charles Tesson).
It led to the rehabilitation of some of the old Cahiers favourites, as well as some new filmmakers like Manoel de Oliveira, Raoul Ruiz, Hou Hsiao-hsien, Youssef Chahine, and Maurice Pialat. Recent writers have included Daney, André Téchiné, Léos Carax, Olivier Assayas, Danièle Dubroux, and Serge Le Péron. In 1998, the Editions de l'Etoile (the company publishing Cahiers) was acquired by the press group Le Monde. Traditionally losing money, the magazine attempted a makeover in 1999 to gain new readers, leading to a first split among writers and resulting in a magazine addressing all visual arts from a post-modernist perspective. This version of the magazine printed ill-received opinion pieces on reality TV and video games that confused the magazine's traditional readership. Le Monde took full editorial control of the magazine in 2003, appointing Jean-Michel Frodon as editor-in-chief. In February 2009, Cahiers was acquired from Le Monde by Richard Schlagman, also owner of Phaidon Press, a worldwide publishing group which specialises in books on the visual arts. In July 2009, Stéphane Delorme and Jean-Philippe Tessé were promoted respectively to the positions of editor-in-chief and deputy chief editor. In February 2020, the magazine was bought by several French entrepreneurs, including Xavier Niel and Alain Weill. The entire editorial staff resigned, saying the change posed a threat to their editorial independence. The magazine has compiled a list of the top 10 films of each year for much of its existence.
[ { "paragraph_id": 0, "text": "Cahiers du Cinéma (French pronunciation: [kaje dy sinema], lit. 'notebooks on cinema') is a French film magazine co-founded in 1951 by André Bazin, Jacques Doniol-Valcroze, and Joseph-Marie Lo Duca. It developed from the earlier magazine Revue du Cinéma (lit. 'review of cinema' established in 1928) involving members of two Paris film clubs—Objectif 49 (Robert Bresson, Jean Cocteau, and Alexandre Astruc, among others; lit. 'objective 49') and Ciné-Club du Quartier Latin (lit. 'cinema club of the Latin Quarter').", "title": "" }, { "paragraph_id": 1, "text": "Initially edited by Doniol-Valcroze and, after 1957, by Éric Rohmer (aka, Maurice Scherer), it included amongst its writers Jacques Rivette, Jean-Luc Godard, Claude Chabrol, and François Truffaut, who went on to become highly influential filmmakers. It is the oldest French-language film magazine in publication.", "title": "" }, { "paragraph_id": 2, "text": "The first issue of Cahiers appeared in April 1951. Much of its head staff, including Bazin, Doniol-Valcroze, Lo Duca, and the various younger, less-established critics, had met and shared their beliefs about film through their involvement in the publication of Revue du Cinéma from 1946 until its final issue in 1948; Cahiers was created as a successor to this earlier magazine.", "title": "History" }, { "paragraph_id": 3, "text": "Early issues of Cahiers were small journals of thirty pages which bore minimalist covers, distinctive for their lack of headlines in favor of film stills on a distinctive bright yellow background. Each issue contained four or five articles (with at least one piece by Bazin in most issues), most of which were reviews of specific films or appreciations of directors, supplemented on occasion by longer theoretical essays. The first few years of the magazine's publication were dominated by Bazin, who was the de facto head of the editorial board.", "title": "History" }, { "paragraph_id": 4, "text": "Bazin intended Cahiers to be a continuation of the intellectual form of criticism that Revue had printed, which prominently featured his articles advocating for realism as the most valuable quality of cinema. As more issues of Cahiers were published, however, Bazin found that a group of young proteges and critics serving as editors underneath him were beginning to disagree with him in the pages of the magazine. Godard would voice his discontent with Bazin as early as 1952, when he challenged Bazin's views on editing in an article for the September issue of Cahiers. Gradually, the tastes of these young critics drifted away from those of Bazin, as members of the group began to write critical appreciations of more commercial American filmmakers such as Alfred Hitchcock and Howard Hawks rather than the canonized French and Italian filmmakers that interested Bazin.", "title": "History" }, { "paragraph_id": 5, "text": "The younger critics broke completely with Bazin by 1954, when an article in the January issue by Truffaut attacked what he called La qualité française (lit. ' the French quality', usually translated as \"The Tradition of Quality\"), denouncing many critically respected French films of the time as being unimaginative, oversimplified, and even immoral adaptations of literary works. The article became the manifesto for the politique des auteurs (lit. 
' the policy of the authors '), which became the label for Cahiers younger critics' emphasis on the importance of the director in the creation of a film—as a film's \"author\"—and their re-evaluation of Hollywood films and directors such as Hitchcock, Hawks, Jerry Lewis, Robert Aldrich, Nicholas Ray, and Fritz Lang. Subsequently, American critic Andrew Sarris latched onto the word, \"auteur\", and paired it with the English word, \"theory\"; hence coining the phrase the \"auteur theory\" by which this critical approach is known in English-language film criticism.", "title": "History" }, { "paragraph_id": 6, "text": "After the publication of Truffaut's article, Doniol-Valcroze and most of the Cahiers editors besides Bazin and Lo Duca rallied behind the rebellious authors; Lo Duca left Cahiers a year later, while Bazin, in failing health, gave editorial control of the magazine to Rohmer and largely left Paris, though he continued to write for the magazine. Now with control over the magazine's ideological approaches to film, the younger critics (minus Godard, who had left Paris in 1952, not to return until 1956) changed the format of Cahiers somewhat, frequently conducting interviews with directors deemed \"auteurs\" and voting on films in a \"Council\" of ten core critics. These critics came to champion non-American directors as well, writing on the mise en scène (the \"dominant object of study\" at the magazine) of such filmmakers as Jean Renoir, Roberto Rossellini, Kenji Mizoguchi, Max Ophüls, and Jean Cocteau, many of whom Bazin had introduced them to.", "title": "History" }, { "paragraph_id": 7, "text": "By the end of the 1950s, many of the remaining editors of Cahiers, however, were becoming increasingly dissatisfied with the mere act of writing film criticism. Spurred on by the return of Godard to Paris in 1956 (who in the interim had made a short film himself), many of the younger critics became interested in making films themselves. Godard, Truffaut, Chabrol, Doniol-Valcroze, and even Rohmer, who had officially succeeded Doniol-Valcroze as head editor in 1958, began to divide their time between making films and writing about them. The films that these critics made were experimental explorations of various theoretical, artistic, and ideological aspects of the film form, and would, along with the films of young French filmmakers outside the Cahiers circle, form the basis for the cinematic movement known as the French New Wave. Meanwhile, Cahiers underwent staff changes, as Rohmer hired new editors such as Jean Douchet to fill the roles of those editors who were now making films, while other existing editors, particularly Jacques Rivette, began to write even more for the magazine. Many of the newer critical voices (except for Rivette) largely ignored the films of the New Wave for Hollywood when they were not outright criticizing them, creating friction between much of the directorial side of the younger critics and the head editor Rohmer. A group of five Cahiers editors, including Godard and Doniol-Valcroze and led by Rivette, urged Rohmer to refocus the magazine's content on newer films such as their own. When he refused, the \"gang of five\" forced Rohmer out and installed Rivette as his replacement in 1963.", "title": "History" }, { "paragraph_id": 8, "text": "Rivette shifted political and social concerns farther to the left, and began a trend in the magazine of paying more attention to non-Hollywood films. 
The style of the journal moved through literary modernism in the early 1960s to radicalism and dialectical materialism by 1970. Moreover, during the mid-1970s the magazine was run by a Maoist editorial collective. A review of the American film Jaws then marked the magazine's return to more commercial perspectives and an editorial turnover (Serge Daney, Serge Toubiana, Thierry Jousse, Antoine de Baecque, and Charles Tesson). It led to the rehabilitation of some of the old Cahiers favourites, as well as to the championing of new filmmakers like Manoel de Oliveira, Raoul Ruiz, Hou Hsiao-hsien, Youssef Chahine, and Maurice Pialat. Recent writers have included Daney, André Téchiné, Léos Carax, Olivier Assayas, Danièle Dubroux, and Serge Le Péron.", "title": "History" }, { "paragraph_id": 9, "text": "In 1998, the Editions de l'Etoile (the company publishing Cahiers) was acquired by the press group Le Monde. Traditionally loss-making, the magazine attempted a makeover in 1999 to gain new readers, leading to a first split among the writers and resulting in a magazine addressing all visual arts in a postmodernist approach. This version of the magazine printed ill-received opinion pieces on reality TV and video games that confused the magazine's traditional readership.", "title": "History" }, { "paragraph_id": 10, "text": "Le Monde took full editorial control of the magazine in 2003, appointing Jean-Michel Frodon as editor-in-chief. In February 2009, Cahiers was acquired from Le Monde by Richard Schlagman, also owner of Phaidon Press, a worldwide publishing group which specialises in books on the visual arts. In July 2009, Stéphane Delorme and Jean-Philippe Tessé were promoted respectively to the positions of editor-in-chief and deputy chief editor.", "title": "History" }, { "paragraph_id": 11, "text": "In February 2020, the magazine was bought by several French entrepreneurs, including Xavier Niel and Alain Weill. The entire editorial staff resigned, saying the change posed a threat to their editorial independence.", "title": "History" }, { "paragraph_id": 12, "text": "The magazine has compiled a list of the top 10 films of each year for much of its existence.", "title": "Annual top 10 films list" } ]
Cahiers du Cinéma is a French film magazine co-founded in 1951 by André Bazin, Jacques Doniol-Valcroze, and Joseph-Marie Lo Duca. It developed from the earlier magazine Revue du Cinéma involving members of two Paris film clubs—Objectif 49 and Ciné-Club du Quartier Latin. Initially edited by Doniol-Valcroze and, after 1957, by Éric Rohmer, it included amongst its writers Jacques Rivette, Jean-Luc Godard, Claude Chabrol, and François Truffaut, who went on to become highly influential filmmakers. It is the oldest French-language film magazine in publication.
2002-01-15T12:05:48Z
2023-12-04T01:15:35Z
[ "Template:Official website", "Template:Cahiers du Cinéma's Top Ten Films", "Template:French New Wave", "Template:Infobox magazine", "Template:Lang", "Template:Literal translation", "Template:Main article", "Template:Cite book", "Template:IPA-fr", "Template:Cite magazine", "Template:Cite web", "Template:Authority control", "Template:Use dmy dates", "Template:Cite news", "Template:Cite journal", "Template:Short description", "Template:--", "Template:Reflist", "Template:Portal bar" ]
https://en.wikipedia.org/wiki/Cahiers_du_Cin%C3%A9ma
7,767
Circuit Zandvoort
Circuit Zandvoort (Dutch pronunciation: [sɪrˈkʋi ˈzɑntˌfoːrt]), known for sponsorship reasons as CM.com Circuit Zandvoort, previously known as Circuit Park Zandvoort until 2017, is a 4.259 km (2.646 mi) motorsport race track located in the dunes north of Zandvoort, the Netherlands, near the North Sea coast line. It returned to the Formula One calendar in 2021 as the location of the revived Dutch Grand Prix. There were plans for races at Zandvoort before World War II: the first street race was held on 3 June 1939. However, a permanent race track was not constructed until after the war, using communications roads built by the occupying German army. Contrary to popular belief, John Hugenholtz cannot be credited with the design of the Zandvoort track, although he was involved as the chairman of the Nederlandse Automobiel Ren Club (Dutch Auto Racing Club) before becoming the first track director in 1949. Instead, it was the 1927 Le Mans winner S. C. H. "Sammy" Davis who was brought in as a track design advisor in July 1946, although the layout was partly dictated by the existing roads. The first race on the circuit, the Prijs van Zandvoort, took place on 7 August 1948. The race was renamed the Grote Prijs van Zandvoort (Zandvoort Grand Prix) in 1949, then the Grote Prijs van Nederland (Dutch Grand Prix) in 1950. The 1952 race was the first to be run as a round of the World Championship, albeit to Formula Two regulations rather than Formula One regulations like all the European rounds of the championship that year; a similar situation also applied to the 1953 race. There was no Dutch Grand Prix in 1954, 1956 or 1957, but 1955 saw the first true Formula One race as part of the Drivers' Championship. The Dutch Grand Prix returned in 1958 and remained a permanent fixture on the F1 calendar (with the exception of 1972) through 1985, when it was held for the last time in the 20th century. To solve a number of problems that had made it impossible to develop and upgrade the circuit, most importantly noise pollution for Zandvoort inhabitants living closest to the track, the track management developed and adopted a plan to move the southernmost part of the track away from the nearby housing estate, and rebuild a more compact track in the remaining former 'infield'. In January 1987 this plan got the necessary 'green light' when it was formally approved by the Provincial Council of North Holland. However, only a couple of months later a new problem arose: the company that commercially ran the circuit (CENAV) called in the receiver and went out of business, marking the end of 'Circuit Zandvoort'. Again the track, owned by the municipality of Zandvoort, was in danger of being permanently lost for motorsports. However, a new operating foundation, the "Stichting Exploitatie Circuit Park", was formed and started work on the realization of the track's reconstruction plans. Circuit Park Zandvoort was born and in the summer of 1989 the track was remodeled to an interim Club Circuit of 2.526 km (1.570 mi), while the disposed southern part of the track was used to build a Vendorado Bungalow Park and new premises for the local football and field-hockey clubs. In 1995, CPZ (Circuit Park Zandvoort) got the "A Status" of the government of the Netherlands and began building an international Grand Prix Circuit.
This project was finished in 2001 when, after the track was redesigned to a 4.307 km (2.676 mi) long circuit and a new pits building was realized (by HPG, the development company of John Hugenholtz Jr., son of the former director), a new grandstand was situated along the long straight. One of the major events held at the circuit, along with DTM and A1GP, is the RTL Masters of Formula 3, where Formula Three cars of several national racing series compete with each other (originally called the Marlboro Masters, before the tobacco advertising ban). A noise restriction order was responsible for this event moving to the Belgian Circuit Zolder for 2007 and 2008. However, the race returned to its historical home in 2009. Circuit Park Zandvoort played host to the first race in the 2006/07 season of A1 Grand Prix from 29 September–1 October 2006. On 21 August 2008, the official A1GP site reported that the 2008/09 season's first race had moved from the Mugello Circuit, Italy to Zandvoort on 4–5 October 2008 due to the delay in building the new chassis for the new race cars. The Dutch round moved to TT Circuit Assen in 2010. A1GP went bankrupt before its fifth season and the Dutch round was replaced with Superleague Formula. In November 2018, it was reported that Formula One Management (FOM) had invited the owners of the Zandvoort race track to make a proposal to stage a Grand Prix race in 2020. In March 2019, it was confirmed that a letter of intent had been signed between Zandvoort and FOM to stage the Dutch Grand Prix, dependent on private funding being secured to cover the cost of hosting the race. A deadline of 31 March 2019 was set for a final decision to be made. On 14 May 2019 it was confirmed that Zandvoort would host the Dutch Grand Prix for 2020 and beyond for a duration of at least three years, with the option to host another two years beyond that. Several alterations were made to the track by Jarno Zaffelli to bring it up to date with F1 standards, including adding banking to turn 14 (Arie Luyendijkbocht) and turn 3 (Hugenholtzbocht), but the layout as a whole remained the same. The municipality of Zandvoort invested four million euros into the infrastructure around the circuit to improve the accessibility to the track. On 29 August 2019, the 2020 Dutch Grand Prix at Zandvoort was included as the fifth race on the provisional schedule, scheduled for 3 May 2020, between the Chinese Grand Prix and the Spanish Grand Prix. The 2020 scheduled appearance was canceled due to the COVID-19 pandemic; however, F1 racing finally returned to the circuit on 5 September 2021. On 17 September 2019, it was announced that Zandvoort would host the FIA Formula 2 Championship and FIA Formula 3 Championship, replacing the series' support races at Circuit Paul Ricard. The circuit gained popularity because of its fast, sweeping corners such as Scheivlak as well as the "Tarzanbocht" (Tarzan corner) hairpin at the end of the start/finish straight. Tarzanbocht is the most famous corner of the circuit. Since there is a camber in the corner, it provides excellent overtaking opportunities. It is possible to pass around the outside as well as along the easier inside line. This corner is reportedly named after a local character who had earned the nickname of Tarzan and only wanted to give up his vegetable garden in the dunes if the track's designers named a nearby corner after him. However, many different stories about Tarzan Corner are known.
The circuit design has been modified and altered several times: The corners are named as follows (the numbers correspond to the present map, starting at the start/finish line): The elevation difference is 8.9 m (29 ft). Turns 3 and 13/14 are extremely cambered corners; turn 3 has a 19-degree bank while turns 13/14 have an 18-degree bank. The official lap record for the current circuit layout is 1:11.097, set by Lewis Hamilton driving for Mercedes in the 2021 Dutch Grand Prix. The all-time fastest official track record set during a race weekend for the current Grand Prix Circuit layout is 1:08.885, set by Max Verstappen during qualifying for the aforementioned Grand Prix. As of October 2023, the official race lap records at the Circuit Zandvoort are listed as: In the history of the circuit, several fatal accidents have occurred. Motor racer Willy Koppen was the first woman to participate in motor trials on the circuit, in the early fifties. In August 1959 the UCI Road World Championships men's race was held at Zandvoort. André Darrigade of France won the 180 mi (290 km) race; Tom Simpson (Britain) was 4th. In 1994 a large interregional amateur cycling race was organised by HSV De Kampioen in Haarlem. Since 2008, the course has been used as the venue for the Runner's World Zandvoort Circuit Run, a 5-kilometre road running competition. The 2010 edition of the race attracted Lornah Kiplagat, a multiple world champion, who won the ladies' 5 km race. The Cycling Zandvoort 24h race was first held on 25–26 May 2013. It is open to the public for soloists and teams of up to 8 riders. A 6-hour race was added to the event in 2016. On 13–14 June 2015, starting at 12:00, the Cycling Zandvoort 24-hour race took place over 4,307 m laps.
[ { "paragraph_id": 0, "text": "Circuit Zandvoort (Dutch pronunciation: [sɪrˈkʋi ˈzɑntˌfoːrt]), known for sponsorship reasons as CM.com Circuit Zandvoort, previously known as Circuit Park Zandvoort until 2017, is a 4.259 km (2.646 mi) motorsport race track located in the dunes north of Zandvoort, the Netherlands, near the North Sea coast line. It returned to the Formula One calendar in 2021 as the location of the revived Dutch Grand Prix.", "title": "" }, { "paragraph_id": 1, "text": "There were plans for races at Zandvoort before World War II: the first street race was held on 3 June 1939. However, a permanent race track was not constructed until after the war, using communications roads built by the occupying German army. Contrary to popular belief John Hugenholtz cannot be credited with the design of the Zandvoort track, although he was involved as the chairman of the Nederlandse Automobiel Ren Club (Dutch Auto Racing Club) before becoming the first track director in 1949. Instead, it was 1927 Le Mans winner, S. C. H. \"Sammy\" Davis who was brought in as a track design advisor in July 1946 although the layout was partly dictated by the existing roads.", "title": "History" }, { "paragraph_id": 2, "text": "The first race on the circuit, the Prijs van Zandvoort, took place on 7 August 1948. The race was renamed the Grote Prijs van Zandvoort (Zandvoort Grand Prix) in 1949, then the Grote Prijs van Nederland (Dutch Grand Prix) in 1950. The 1952 race was the first to be run as a round of the World Championship, albeit to Formula Two regulations rather than Formula One regulations like all the European rounds of the championship that year; a similar situation also applied to the 1953. There was no Dutch Grand Prix in 1954, 1956 or 1957, but 1955 saw the first true Formula One race as part of the Drivers' Championship. The Dutch Grand Prix returned in 1958 and remained a permanent fixture on the F1 calendar (with the exception of 1972) through 1985, when it was held for the last time in the 20th century.", "title": "History" }, { "paragraph_id": 3, "text": "To solve a number of problems that had made it impossible to develop and upgrade the circuit, most importantly noise pollution for Zandvoort inhabitants living closest to the track, the track management developed and adopted a plan to move the most southern part of the track away from the nearby housing estate, and rebuild a more compact track in the remaining former 'infield'. In January 1987 this plan got the necessary 'green light' when it was formally approved by the Provincial Council of North Holland. However, only a couple of months later a new problem arose: the company that commercially ran the circuit (CENAV), called in the receiver and went out of business, marking the end of 'Circuit Zandvoort'. Again the track, owned by the municipality of Zandvoort, was in danger of being permanently lost for motorsports. However, a new operating foundation, the \"Stichting Exploitatie Circuit Park\", was formed and started work at the realization of the track's reconstruction plans. 
Circuit Park Zandvoort was born and in the summer of 1989 the track was remodeled to an interim Club Circuit of 2.526 km (1.570 mi), while the disposed southern part of the track was used to build a Vendorado Bungalow Park and new premises for the local football and field-hockey clubs.", "title": "History" }, { "paragraph_id": 4, "text": "In 1995, CPZ (Circuit Park Zandvoort) got the \"A Status\" of the government of the Netherlands and began building an international Grand Prix Circuit. This project was finished in 2001 when, after the track was redesigned to a 4.307 km (2.676 mi) long circuit and a new pits building was realized (by HPG, the development company of John Hugenholtz Jr., son of the former director), a new grandstand was situated along the long straight. One of the major events held at the circuit, along with DTM and A1GP, is the RTL Masters of Formula 3, where Formula Three cars of several national racing series compete with each other (originally called the Marlboro Masters, before the tobacco advertising ban). A noise restriction order was responsible for this event moving to the Belgian Circuit Zolder for 2007 and 2008. However, the race returned to its historical home in 2009.", "title": "History" }, { "paragraph_id": 5, "text": "Circuit Park Zandvoort played host to the first race in the 2006/07 season of A1 Grand Prix from 29 September–1 October 2006. On 21 August 2008, the official A1GP site reported that the 2008/09 season's first race had moved from the Mugello Circuit, Italy to Zandvoort on 4–5 October 2008 due to the delay in building the new chassis for the new race cars. The Dutch round moved to TT Circuit Assen in 2010. A1GP went bankrupt before its fifth season and the Dutch round was replaced with Superleague Formula.", "title": "History" }, { "paragraph_id": 6, "text": "In November 2018, it was reported that Formula One Management (FOM) had invited the owners of the Zandvoort race track to make a proposal to stage a Grand Prix race in 2020. In March 2019, it was confirmed that a letter of intent had been signed between Zandvoort and FOM to stage the Dutch Grand Prix, dependent on private funding being secured to cover the cost of hosting the race. A deadline of 31 March 2019 was set for a final decision to be made. On 14 May 2019 it was confirmed that Zandvoort would host the Dutch Grand Prix for 2020 and beyond for a duration of at least three years, with the option to host another two years beyond that.", "title": "History" }, { "paragraph_id": 7, "text": "Several alterations were made to the track by Jarno Zaffelli to bring it up to date with F1 standards, including adding banking to turn 14 (Arie Luyendijkbocht) and turn 3 (Hugenholtzbocht), but the layout as a whole remained the same. The municipality of Zandvoort invested four million euros into the infrastructure around the circuit to improve the accessibility to the track. On 29 August 2019, the 2020 Dutch Grand Prix at Zandvoort was included as the fifth race on the provisional schedule, scheduled for 3 May 2020, between the Chinese Grand Prix and the Spanish Grand Prix. The 2020 scheduled appearance was canceled due to the COVID-19 pandemic; however, F1 racing finally returned to the circuit on 5 September 2021.
On 17 September 2019, it was announced that Zandvoort would host the FIA Formula 2 Championship and FIA Formula 3 Championship, replacing the series' support races at Circuit Paul Ricard.", "title": "History" }, { "paragraph_id": 8, "text": "The circuit gained popularity because of its fast, sweeping corners such as Scheivlak as well as the \"Tarzanbocht\" (Tarzan corner) hairpin at the end of the start/finish straight. Tarzanbocht is the most famous corner of the circuit. Since there is a camber in the corner, it provides excellent overtaking opportunities. It is possible to pass around the outside as well as along the easier inside line. This corner is reportedly named after a local character who had earned the nickname of Tarzan and only wanted to give up his vegetable garden in the dunes if the track's designers named a nearby corner after him. However, many different stories about Tarzan Corner are known.", "title": "The circuit" }, { "paragraph_id": 9, "text": "The circuit design has been modified and altered several times:", "title": "The circuit" }, { "paragraph_id": 10, "text": "The corners are named as follows (the numbers correspond to the present map, starting at the start/finish line):", "title": "The circuit" }, { "paragraph_id": 11, "text": "The elevation difference is 8.9 m (29 ft).", "title": "The circuit" }, { "paragraph_id": 12, "text": "Turns 3 and 13/14 are extremely cambered corners; turn 3 has a 19-degree bank while turns 13/14 have an 18-degree bank.", "title": "The circuit" }, { "paragraph_id": 13, "text": "", "title": "The circuit" }, { "paragraph_id": 14, "text": "The official lap record for the current circuit layout is 1:11.097, set by Lewis Hamilton driving for Mercedes in the 2021 Dutch Grand Prix. The all-time fastest official track record set during a race weekend for the current Grand Prix Circuit layout is 1:08.885, set by Max Verstappen during qualifying for the aforementioned Grand Prix. As of October 2023, the official race lap records at the Circuit Zandvoort are listed as:", "title": "Lap records" }, { "paragraph_id": 15, "text": "In the history of the circuit, several fatal accidents have occurred.", "title": "Fatal accidents" }, { "paragraph_id": 16, "text": "Motor racer Willy Koppen was the first woman to participate in motor trials on the circuit, in the early fifties. In August 1959 the UCI Road World Championships men's race was held at Zandvoort. André Darrigade of France won the 180 mi (290 km) race; Tom Simpson (Britain) was 4th. In 1994 a large interregional amateur cycling race was organised by HSV De Kampioen in Haarlem. Since 2008, the course has been used as the venue for the Runner's World Zandvoort Circuit Run, a 5-kilometre road running competition. The 2010 edition of the race attracted Lornah Kiplagat, a multiple world champion, who won the ladies' 5 km race.", "title": "Cycling and running competitions" }, { "paragraph_id": 17, "text": "The Cycling Zandvoort 24h race was first held on 25–26 May 2013. It is open to the public for soloists and teams of up to 8 riders. A 6-hour race was added to the event in 2016. On 13–14 June 2015, starting at 12:00, the Cycling Zandvoort 24-hour race took place over 4,307 m laps.", "title": "Cycling and running competitions" } ]
Circuit Zandvoort, known for sponsorship reasons as CM.com Circuit Zandvoort, previously known as Circuit Park Zandvoort until 2017, is a 4.259 km (2.646 mi) motorsport race track located in the dunes north of Zandvoort, the Netherlands, near the North Sea coast line. It returned to the Formula One calendar in 2021 as the location of the revived Dutch Grand Prix.
2002-02-25T15:51:15Z
2023-12-10T02:03:59Z
[ "Template:F1", "Template:Clarify", "Template:Ill", "Template:Which", "Template:Center", "Template:Commons category", "Template:Use dmy dates", "Template:IPA-nl", "Template:Citation needed", "Template:Reflist", "Template:Cite web", "Template:Cite news", "Template:Cvt", "Template:Short description", "Template:Infobox motorsport venue", "Template:Convert", "Template:Navboxes", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Circuit_Zandvoort
7,768
Crete Senesi
The Crete Senesi refers to an area of the Italian region of Tuscany immediately to the south of Siena. It consists of a range of hills and woods among villages and includes the comuni of Asciano, Buonconvento, Monteroni d'Arbia, Rapolano Terme and San Giovanni d'Asso, all within the province of Siena. They border to the north with the Chianti Senese area, to the east with Val di Chiana and to the south-west with Val d'Orcia. Nearby is also the semi-arid area known as the Accona Desert. Crete Senesi are literally the "clays of Siena": the distinctive grey colouration of the soil gives the landscape an appearance often described as lunar. This characteristic clay, known as mattaione, represents the sediments of the Pliocene sea which covered the area between 2.5 and 4.5 million years ago. The landscape is characterized by barren and gently undulating hills, solitary oaks and cypresses, isolated farms at the top of the heights, stretches of wood and ponds of rainwater (commonly referred to as fontoni, literally "big springs") in the valleys. Badlands and biancane are typical conformations of the land. Perhaps the most notable edifice of this area is the Abbey of Monte Oliveto Maggiore, located 10 km south of Asciano. The region is known for its production of white truffles, and hosts a festival and a museum dedicated to the rare fungus (genus Tuber). 43°12′N 11°34′E / 43.200°N 11.567°E / 43.200; 11.567
[ { "paragraph_id": 0, "text": "The Crete Senesi refers to an area of the Italian region of Tuscany immediately to the south of Siena. It consists of a range of hills and woods among villages and includes the comuni of Asciano, Buonconvento, Monteroni d'Arbia, Rapolano Terme and San Giovanni d'Asso, all within the province of Siena. They border to the north with the Chianti Senese area, to the east with Val di Chiana and to the south-west with Val d'Orcia. Nearby is also the semi-arid area known as the Accona Desert.", "title": "" }, { "paragraph_id": 1, "text": "Crete Senesi are literally the \"clays of Siena\": the distinctive grey colouration of the soil gives the landscape an appearance often described as lunar. This characteristic clay, known as mattaione, represents the sediments of the Pliocene sea which covered the area between 2.5 and 4.5 million years ago. The landscape is characterized by barren and gently undulating hills, solitary oaks and cypresses, isolated farms at the top of the heights, stretches of wood and ponds of rainwater (commonly referred as fontoni, literally \"big springs\") in the valleys. Badlands and biancane [it] are typical conformations of the land.", "title": "" }, { "paragraph_id": 2, "text": "Perhaps the most notable edifice of this area is the Abbey of Monte Oliveto Maggiore, located 10 km south of Asciano.", "title": "" }, { "paragraph_id": 3, "text": "The region is known for its production of white truffles, and hosts a festival and a museum dedicated to the rare fungus (genus Tuber).", "title": "" }, { "paragraph_id": 4, "text": "43°12′N 11°34′E / 43.200°N 11.567°E / 43.200; 11.567", "title": "External links" }, { "paragraph_id": 5, "text": "", "title": "External links" } ]
The Crete Senesi refers to an area of the Italian region of Tuscany immediately to the south of Siena. It consists of a range of hills and woods among villages and includes the comuni of Asciano, Buonconvento, Monteroni d'Arbia, Rapolano Terme and San Giovanni d'Asso, all within the province of Siena. They border to the north with the Chianti Senese area, to the east with Val di Chiana and to the south-west with Val d'Orcia. Nearby is also the semi-arid area known as the Accona Desert. Crete Senesi are literally the "clays of Siena": the distinctive grey colouration of the soil gives the landscape an appearance often described as lunar. This characteristic clay, known as mattaione, represents the sediments of the Pliocene sea which covered the area between 2.5 and 4.5 million years ago. The landscape is characterized by barren and gently undulating hills, solitary oaks and cypresses, isolated farms at the top of the heights, stretches of wood and ponds of rainwater in the valleys. Badlands and biancane are typical conformations of the land. Perhaps the most notable edifice of this area is the Abbey of Monte Oliveto Maggiore, located 10 km south of Asciano. The region is known for its production of white truffles, and hosts a festival and a museum dedicated to the rare fungus.
2022-02-04T05:23:35Z
[ "Template:Short description", "Template:Infobox valley", "Template:Authority control", "Template:Siena-geo-stub", "Template:Interlanguage link", "Template:Wide image", "Template:Reflist", "Template:Cite web", "Template:Coord" ]
https://en.wikipedia.org/wiki/Crete_Senesi
7,770
Christmas tree
A Christmas tree is a decorated tree, usually an evergreen conifer, such as a spruce, pine or fir, or an artificial tree of similar appearance, associated with the celebration of Christmas. The custom was developed in Central Europe and the Baltic states, particularly Estonia, Germany and Livonia (now Latvia), where Protestant Christians brought decorated trees into their homes. The tree was traditionally decorated with "roses made of colored paper, apples, wafers, tinsel, [and] sweetmeats". Moravian Christians began to illuminate Christmas trees with candles, which were often replaced by Christmas lights after the advent of electrification. Today, there is a wide variety of traditional and modern ornaments, such as garlands, baubles, tinsel, and candy canes. An angel or star might be placed at the top of the tree to represent the Angel Gabriel or the Star of Bethlehem, respectively, from the Nativity. Edible items such as gingerbread, chocolate, and other sweets are also popular and are tied to or hung from the tree's branches with ribbons. The Christmas tree has been historically regarded as a custom of the Lutheran Churches and only in 1982 did the Catholic Church erect the Vatican Christmas Tree. In the Western Christian tradition, Christmas trees are variously erected on days such as the first day of Advent or even as late as Christmas Eve depending on the country; customs of the same faith hold that the two traditional days when Christmas decorations, such as the Christmas tree, are removed are Twelfth Night and, if they are not taken down on that day, Candlemas, the latter of which ends the Christmas-Epiphany season in some denominations. The Christmas tree is sometimes compared with the "Yule-tree", especially in discussions of its folkloric origins. Modern Christmas trees originated in Central Europe and the Baltic states, particularly Estonia, Germany and Livonia (now Latvia) during the Renaissance in early modern Europe. Its 16th-century origins are sometimes associated with Protestant Christian reformer Martin Luther, who is said to have first added lighted candles to an evergreen tree. The Christmas tree was first recorded to be used by German Lutherans in the 16th century, with records indicating that a Christmas tree was placed in the Cathedral of Strasbourg in 1539, under the leadership of the Protestant Reformer Martin Bucer. The Moravian Christians put lighted candles on those trees. The earliest known firmly dated representation of a Christmas tree is on the keystone sculpture of a private home in Turckheim, Alsace (then part of the Holy Roman Empire of the German Nation, today France), with the date 1576. Modern Christmas trees have been related to the "tree of paradise" of medieval mystery plays that were given on 24 December, the commemoration and name day of Adam and Eve in various countries. In such plays, a tree decorated with apples (representing fruit from the tree of the knowledge of good and evil and thus the original sin that Christ took away) and round white wafers (to represent the Eucharist and redemption) was used as a setting for the play. Like the Christmas crib, the Paradise tree was later placed in homes. The apples were replaced by round objects such as shiny red balls.
Fir trees decorated with apples served as the central prop for the paradise play, a kind of folk religious drama often performed on December 24. These props were called paradise trees, and some researchers believe they were the forerunners of the Christmas tree. At the end of the Middle Ages, an early predecessor is referred to in the 15th-century Regiment of the Cistercian Alcobaça Monastery in Portugal. The Regiment of the local high-Sacristans of the Cistercian Order contains what may be considered among the oldest references to the Christmas tree: "Note on how to put the Christmas branch, scilicet: On the Christmas eve, you will look for a large Branch of green laurel, and you shall reap many red oranges, and place them on the branches that come of the laurel, specifically as you have seen, and in every orange you shall put a candle, and hang the Branch by a rope in the pole, which shall be by the candle of the high altar." Other sources have offered a connection between the symbolism of the first documented Christmas trees in Germany around 1600 and the trees of pre-Christian traditions, though this claim has been disputed. According to the Encyclopædia Britannica, "The use of evergreen trees, wreaths, and garlands to symbolize eternal life was a custom of the ancient Egyptians, Chinese, and Hebrews. Tree worship was common among the pagan Europeans and survived their conversion to Christianity in the Scandinavian customs of decorating the house and barn with evergreens at the New Year to scare away the devil and of setting up a tree for the birds during Christmas time." It is commonly believed that ancient Romans used to decorate their houses with evergreen trees to celebrate Saturnalia, although there are no historical records of that. In his poem Epithalamium, Catullus tells of the gods decorating the home of Peleus with trees, including laurel and cypress. Later Libanius, Tertullian, and Chrysostom speak of the use of evergreen trees to adorn Christian houses. The Vikings and Saxons worshiped trees. The story of Saint Boniface cutting down Donar's Oak illustrates the pagan practices of the 8th century among the Germans. A later folk version of the story adds the detail that an evergreen tree grew in place of the felled oak, and tells how its triangular shape reminds humanity of the Trinity and how it points to heaven. Customs of erecting decorated trees in winter time can be traced to Christmas celebrations in Renaissance-era guilds in Northern Germany and Livonia. The first evidence of decorated trees associated with Christmas Day are trees in guildhalls decorated with sweets to be enjoyed by the apprentices and children. In Livonia (present-day Estonia and Latvia), in 1441, 1442, 1510, and 1514, the Brotherhood of Blackheads erected a tree for the holidays in their guild houses in Reval (now Tallinn) and Riga. On the last night of the celebrations leading up to the holidays, the tree was taken to the Town Hall Square, where the members of the brotherhood danced around it. A Bremen guild chronicle of 1570 reports that a small tree decorated with "apples, nuts, dates, pretzels, and paper flowers" was erected in the guild-house for the benefit of the guild members' children, who collected the dainties on Christmas Day.
The pastor and chronicler Balthasar Russow, in his Chronica der Provinz Lyfflandt (1584), wrote of an established tradition of setting up a decorated spruce at the market square, where the young men "went with a flock of maidens and women, first sang and danced there and then set the tree aflame". After the Protestant Reformation, such trees are seen in the houses of upper-class Protestant families as a counterpart to the Catholic Christmas cribs. This transition from the guild hall to the bourgeois family homes in the Protestant parts of Germany ultimately gave rise to the modern tradition as it developed in the 18th and 19th centuries. In the present day, the churches and homes of Protestants and Catholics feature both Christmas cribs and Christmas trees. In Poland, there is a folk tradition dating back to an old Slavic pre-Christian custom of suspending a branch of fir, spruce, or pine from the ceiling rafters, called podłaźniczka, during the time of the Koliada winter festival. The branches were decorated with apples, nuts, acorns, and stars made of straw. In more recent times, the decorations also included colored paper cutouts (wycinanki), wafers, cookies, and Christmas baubles. According to old pagan beliefs, the branch's powers were linked to good harvest and prosperity. The custom was practiced by the peasants until the early 20th century, particularly in the regions of Lesser Poland and Upper Silesia. Most often the branches were hung above the wigilia dinner table on Christmas Eve. Beginning in the mid-19th century, the tradition over time was almost completely replaced by the later German practice of decorating a standing Christmas tree. In the early 19th century, the custom became popular among the nobility and spread to royal courts as far as Russia. Introduced by Fanny von Arnstein and popularized by Princess Henrietta of Nassau-Weilburg, the Christmas tree reached Vienna in 1814 during the Congress of Vienna, and the custom spread across Austria in the following years. In France, the first Christmas tree was introduced in 1840 by the duchesse d'Orléans. In Denmark, a newspaper claims that the first attested Christmas tree was lit in 1808 by countess Wilhemine of Holsteinborg. It was the aging countess who told the story of the first Danish Christmas tree to the Danish writer Hans Christian Andersen in 1865. He had published a fairy tale called The Fir-Tree in 1844, recounting the fate of a fir tree being used as a Christmas tree. By the early 18th century, the custom had become common in towns of the upper Rhineland, but it had not yet spread to rural areas. Wax candles, expensive items at the time, are found in attestations from the late 18th century. Along the lower Rhine, an area of Roman Catholic majority, the Christmas tree was largely regarded as a Protestant custom. As a result, it remained confined to the upper Rhineland for a relatively long period of time. The custom did eventually gain wider acceptance beginning around 1815 by way of Prussian officials who emigrated there following the Congress of Vienna. In the 19th century, the Christmas tree was taken to be an expression of German culture and of Gemütlichkeit, especially among emigrants overseas. A decisive factor in winning general popularity was the German army's decision to place Christmas trees in its barracks and military hospitals during the Franco-Prussian War. Only at the start of the 20th century did Christmas trees appear inside churches, this time in a new brightly lit form.
An early Slovenian custom, dating back to around the 17th century, was to suspend the tree either upright or upside-down above the well, a corner of the dinner table, in the backyard, or from the fences, modestly decorated with fruits or not decorated at all. German brewer Peter Luelsdorf brought the first Christmas tree of the current tradition to Slovenia in 1845. He set it up in his small brewery inn in Ljubljana, the Slovenian capital. German officials, craftsmen and merchants quickly spread the tradition among the bourgeois population. The trees were typically decorated with walnuts, golden apples, carobs, and candles. At first the Catholic majority rejected this custom because they considered it a typical Protestant tradition. The first decorated Christmas market was organized in Ljubljana as early as 1859. However, this tradition was almost unknown to the rural population until World War I, after which everyone started decorating trees. Spruce trees have a centuries-long tradition in Slovenia. After World War II, during the Yugoslav period, trees set up in public places (towns, squares, and markets) were politically replaced with fir trees, a symbol of socialism and Slavic mythology strongly associated with loyalty, courage, and dignity. However, spruce retained its popularity in Slovenian homes during those years and came back to public places after independence. Although the tradition of decorating churches and homes with evergreens at Christmas was long established, the custom of decorating an entire small tree was unknown in Britain until some two centuries ago. The German-born Queen Charlotte introduced a Christmas tree at a party she gave for children in 1800. The custom did not at first spread much beyond the royal family. Queen Victoria as a child was familiar with it, and a tree was placed in her room every Christmas. In her journal for Christmas Eve 1832, the delighted 13-year-old princess wrote: After dinner [...] we then went into the drawing room near the dining room [...] There were two large round tables on which were placed two trees hung with lights and sugar ornaments. All the presents being placed round the trees [...] After Victoria's marriage to her German cousin Prince Albert, the custom became even more widespread by 1841, as wealthier middle-class families followed the fashion. In 1842 a newspaper advert for Christmas trees makes clear their smart cachet, German origins and association with children and gift-giving. An illustrated book, The Christmas Tree, describing their use and origins in detail, was on sale in December 1844. On 2 January 1846 Elizabeth Fielding (née Fox Strangways) wrote from Lacock Abbey to William Henry Fox-Talbot: "Constance is extremely busy preparing the Bohemian Xmas Tree. It is made from Caroline's description of those she saw in Germany". In 1847 Prince Albert wrote: "I must now seek in the children an echo of what Ernest [his brother] and I were in the old time, of what we felt and thought; and their delight in the Christmas trees is not less than ours used to be". A boost to the trend was given in 1848 when The Illustrated London News, in a report picked up by other papers, described the trees in Windsor Castle in detail and showed the main tree, surrounded by the royal family, on its cover. In fewer than ten years their use in better-off homes was widespread.
By 1856 a northern provincial newspaper contained an advert alluding casually to them, as well as reporting the accidental death of a woman whose dress caught fire as she lit the tapers on a Christmas tree. They had not yet spread down the social scale though, as a report from Berlin in 1858 contrasts the situation there, where "Every family has its own", with that of Britain, where Christmas trees were still the preserve of the wealthy or the "romantic". Their use at public entertainments, charity bazaars and in hospitals made them increasingly familiar, however, and in 1906 a charity was set up specifically to ensure even poor children in London slums "who had never seen a Christmas tree" would enjoy one that year. Anti-German sentiment after World War I briefly reduced their popularity, but the effect was short-lived, and by the mid-1920s the use of Christmas trees had spread to all classes. In 1933 a restriction on the importation of foreign trees led to the "rapid growth of a new industry" as the growing of Christmas trees within Britain became commercially viable due to the size of demand. By 2013 the number of trees grown in Britain for the Christmas market was approximately eight million and their display in homes, shops and public spaces a normal part of the Christmas season. Georgians have their own traditional Christmas tree called Chichilaki, made from dried-up hazelnut or walnut branches that are shaped to form a small coniferous tree. These pale-colored ornaments vary in height from 20 cm (7.9 in) to 3 meters (9.8 ft). Chichilakis are most common in the Guria and Samegrelo regions of Georgia near the Black Sea, but they can also be found in some stores around the capital of Tbilisi. Georgians believe that the Chichilaki resembles the famous beard of St. Basil the Great, because the Eastern Orthodox Church commemorates St. Basil on 1 January. The earliest reference to Christmas trees being used in The Bahamas dates to January 1864 and is associated with the Anglican Sunday Schools in Nassau, New Providence: "After prayers and a sermon from the Rev. R. Swann, the teachers and children of St. Agnes', accompanied by those of St. Mary's, marched to the Parsonage of Rev. J. H. Fisher, in front of which a large Christmas tree had been planted for their gratification. The delighted little ones formed a circle around it singing "Come follow me to the Christmas tree"." The gifts decorated the trees as ornaments and the children were given tickets with numbers that matched the gifts. This appears to be the typical way of decorating the trees in the 1860s Bahamas. At Christmas 1864, a Christmas tree was put up in the Ladies' Saloon in the Royal Victoria Hotel for the respectable children of the neighbourhood. The tree was ornamented with gifts for the children, who formed a circle about it and sang the song "Oats and Beans". The gifts were later given to the children in the name of Santa Claus. The tradition was introduced to North America in the winter of 1781 by Hessian soldiers stationed in the Province of Québec (1763–1791) to garrison the colony against American attack. General Friedrich Adolf Riedesel and his wife, the Baroness von Riedesel, held a Christmas party for the officers at Sorel, Quebec, delighting their guests with a fir tree decorated with candles and fruits. The Christmas tree became very common in the United States of America in the early nineteenth century.
Dating from late 1812 or early 1813, the watercolor sketchbooks of John Lewis Krimmel contain perhaps the earliest depictions of a Christmas tree in American art, representing a family celebrating Christmas Eve in the Moravian tradition. The first published image of a Christmas tree appeared in 1836 as the frontispiece to The Stranger's Gift by Hermann Bokum. The first mention of the Christmas tree in American literature was in a story in the 1836 edition of The Token and Atlantic Souvenir, titled "New Year's Day", by Catherine Maria Sedgwick, where she tells the story of a German maid decorating her mistress's tree. Also, a woodcut of the British royal family with their Christmas tree at Windsor Castle, initially published in The Illustrated London News in December 1848, was copied in the United States at Christmas 1850, in Godey's Lady's Book. Godey's copied it exactly, except for the removal of the Queen's tiara and Prince Albert's moustache, to remake the engraving into an American scene. The republished Godey's image became the first widely circulated picture of a decorated evergreen Christmas tree in America. Art historian Karal Ann Marling called Prince Albert and Queen Victoria, shorn of their royal trappings, "the first influential American Christmas tree". Folk-culture historian Alfred Lewis Shoemaker states, "In all of America there was no more important medium in spreading the Christmas tree in the decade 1850–60 than Godey's Lady's Book". The image was reprinted in 1860, and by the 1870s, putting up a Christmas tree had become even more common in America. President Benjamin Harrison and his wife Caroline put up the first White House Christmas tree in 1889. Several cities in the United States with German connections lay claim to that country's first Christmas tree: Windsor Locks, Connecticut, claims that a Hessian soldier put up a Christmas tree in 1777 while imprisoned at the Noden-Reed House, while the "First Christmas Tree in America" is also claimed by Easton, Pennsylvania, where German settlers purportedly erected a Christmas tree in 1816. In his diary, Matthew Zahm of Lancaster, Pennsylvania, recorded the use of a Christmas tree in 1821, leading Lancaster to also lay claim to the first Christmas tree in America. Other accounts credit Charles Follen, a German immigrant to Boston, for being the first to introduce to America the custom of decorating a Christmas tree. In 1847, August Imgard, a German immigrant living in Wooster, Ohio, cut a blue spruce tree from the woods outside town, had the Wooster village tinsmith construct a star, and placed the tree in his house, decorating it with paper ornaments, gilded nuts and Kuchen. German immigrant Charles Minnigerode accepted a position as a professor of humanities at the College of William & Mary in Williamsburg, Virginia, in 1842, where he taught Latin and Greek. Entering into the social life of the Virginia Tidewater, Minnigerode introduced the German custom of decorating an evergreen tree at Christmas at the home of law professor St. George Tucker, thereby becoming another of many influences that prompted Americans to adopt the practice at about that time. An 1853 article on Christmas customs in Pennsylvania defines them as mostly "German in origin", including the Christmas tree, which is "planted in a flower pot filled with earth, and its branches are covered with presents, chiefly of confectionary, for the younger members of the family."
The article distinguishes between customs in different states, however, claiming that in New England generally "Christmas is not much celebrated", whereas in Pennsylvania and New York it is. When Edward H. Johnson was vice president of the Edison Electric Light Company, a predecessor of Con Edison, he created the first known electrically illuminated Christmas tree at his home in New York City in 1882. Johnson became the "Father of Electric Christmas Tree Lights". The lyrics sung in the United States to the German tune O Tannenbaum begin "O Christmas tree...", giving rise to the mistaken idea that the German word Tannenbaum (fir tree) means "Christmas tree", the German word for which is instead Weihnachtsbaum. Under the state atheism of the Soviet Union, the Christmas tree, along with the entire celebration of the Christian holiday, was banned in that country after the October Revolution. However, the government then introduced a New-year spruce (Russian: Новогодняя ёлка, romanized: Novogodnyaya yolka) in 1935 for the New Year holiday. It became a fully secular icon of the New Year holiday: for example, the crowning star was regarded not as a symbol of the Star of Bethlehem, but as the Red Star. Decorations, such as figurines of airplanes, bicycles, space rockets, cosmonauts, and characters of Russian fairy tales, were produced. This tradition persists after the fall of the USSR, with the New Year holiday outweighing Christmas (7 January) for a wide majority of Russian people. The Peanuts TV special A Charlie Brown Christmas (1965) was influential on the pop culture surrounding the Christmas tree. Aluminum Christmas trees were popular during the early 1960s in the US. They were satirized in the TV special and came to be seen as symbolizing the commercialization of Christmas. The term Charlie Brown Christmas tree, describing any poor-looking or malformed little tree, also derives from the 1965 TV special, based on the appearance of Charlie Brown's Christmas tree. Since the early 20th century, it has become common in many cities, towns, and department stores to put up public Christmas trees outdoors, such as the Macy's Great Tree in Atlanta (since 1948), the Rockefeller Center Christmas Tree in New York City, and the large Christmas tree at Victoria Square in Adelaide. The use of fire retardant allows many indoor public areas to place real trees and be compliant with code. Licensed applicants of fire retardant solution spray the tree, tag the tree, and provide a certificate for inspection. The United States' National Christmas Tree has been lit each year since 1923 on the South Lawn of the White House, becoming part of what evolved into a major holiday event at the White House. President Jimmy Carter lit only the crowning star atop the tree in 1979 in honor of the Americans being held hostage in Iran. The same was true in 1980, except the tree was fully lit for 417 seconds, one second for each day the hostages had been in captivity. During most of the 1970s and 1980s, the largest decorated Christmas tree in the world was put up every year on the property of the National Enquirer in Lantana, Florida. This tradition grew into one of the most spectacular and celebrated events in the history of southern Florida, but was discontinued on the death of the paper's founder in the late 1980s. In some cities, a charity event called the Festival of Trees is organized, in which multiple trees are decorated and displayed. The giving of Christmas trees has also often been associated with the end of hostilities.
After the signing of the Armistice in 1918, the city of Manchester sent a tree, and £500 to buy chocolate and cakes, for the children of the much-bombarded town of Lille in northern France. In some cases the trees represent special commemorative gifts, such as in Trafalgar Square in London, where the city of Oslo, Norway, presents a tree to the people of London as a token of appreciation for the British support of Norwegian resistance during the Second World War; in Boston, where the tree is a gift from the province of Nova Scotia, in thanks for rapid deployment of supplies and rescuers to the 1917 ammunition ship explosion that leveled the city of Halifax; and in Newcastle upon Tyne, where the main civic Christmas tree is an annual gift from the city of Bergen, in thanks for the part played by soldiers from Newcastle in liberating Bergen from Nazi occupation. Norway also annually gifts a Christmas tree to Washington, D.C. as a symbol of friendship between Norway and the US and as an expression of gratitude from Norway for the help received from the US during World War II. Both setting up and taking down a Christmas tree are associated with specific dates; liturgically, this is done through the hanging of the greens ceremony. In many areas, it has become customary to set up one's Christmas tree on Advent Sunday, the first day of the Advent season. Traditionally, however, Christmas trees were not brought in and decorated until the evening of Christmas Eve (24 December), the end of the Advent season and the start of the twelve days of Christmastide. It is customary for Christians in many localities to remove their Christmas decorations on the last day of the twelve days of Christmastide, which falls on 5 January (Epiphany Eve, or Twelfth Night), although those in other Christian countries remove them on Candlemas, the conclusion of the extended Christmas-Epiphany season (Epiphanytide). According to the first tradition, those who fail to remember to remove their Christmas decorations on Epiphany Eve must leave them untouched until Candlemas, the second opportunity to remove them; failure to observe this custom is considered inauspicious. Christmas ornaments are decorations (usually made of glass, metal, wood, or ceramics) that are used to decorate a Christmas tree. The first decorated trees were adorned with apples, white candy canes and pastries in the shapes of stars, hearts and flowers. Glass baubles were first made in Lauscha, Germany, along with garlands of glass beads and tin figures that could be hung on trees. The popularity of these decorations fueled the production of glass figures made by highly skilled artisans with clay molds. Tinsel and several types of garland or ribbon are commonly used as Christmas tree decorations. Silvered saran-based tinsel was introduced later. Delicate mold-blown and painted colored glass Christmas ornaments were a specialty of the glass factories in the Thuringian Forest, especially in Lauscha in the late 19th century, and have since become a large industry, complete with famous-name designers. Baubles are another common decoration, consisting of small hollow glass or plastic spheres coated with a thin metallic layer to make them reflective, with a further coating of a thin pigmented polymer in order to provide coloration. Trees are commonly lit with electric lights (Christmas lights or, in the United Kingdom, fairy lights). A tree-topper, sometimes an angel but more frequently a star, completes the decoration.
In the late 1800s, home-made white Christmas trees were made by wrapping strips of cotton batting around leafless branches, creating the appearance of a snow-laden tree. Flocking, popularized by Hollywood films in the late 1930s, was very popular on the West Coast of the United States in the 1940s and 1950s. There were home flocking kits that could be used with vacuum cleaners. In the 1980s some trees were sprayed with fluffy white flocking to simulate snow. The earliest legend of the origin of a fir tree becoming a Christian symbol dates back to 723 AD, involving Saint Boniface as he was evangelizing Germany. It is said that at a pagan gathering in Geismar, where a group of people dancing under a decorated oak tree were about to sacrifice a baby in the name of Thor, Saint Boniface took an axe and called on the name of Jesus. In one swipe, he managed to take down the entire oak tree, to the crowd's astonishment. Behind the fallen tree was a baby fir tree. Boniface said, "let this tree be the symbol of the true God, its leaves are ever green and will not die." The tree's needles pointed to heaven and it was shaped triangularly to represent the Holy Trinity. When decorating the Christmas tree, many individuals place a star at the top of the tree symbolizing the Star of Bethlehem. It became popular for people to also use an angel to top the Christmas tree in order to symbolize the angels mentioned in the accounts of the Nativity of Jesus. Additionally, in the context of a Christian celebration of Christmas, the evergreen Christmas tree symbolizes eternal life; the candles or lights on the tree represent Christ as the light of the world. Each year, 33 to 36 million Christmas trees are produced in America, and 50 to 60 million are produced in Europe. In 1998, there were about 15,000 growers in America (a third of them "choose and cut" farms). In that same year, it was estimated that Americans spent $1.5 billion on Christmas trees. By 2016 that had climbed to $2.04 billion for natural trees and a further $1.86 billion for artificial trees. In Europe, 75 million trees worth €2.4 billion ($3.2 billion) are harvested annually. The most commonly used species are fir (Abies), which have the benefit of not shedding their needles when they dry out, as well as retaining good foliage color and scent; but species in other genera are also used. In northern Europe most commonly used are: In North America, Central America, South America and Australia most commonly used are: Several other species are used to a lesser extent. Less-traditional conifers are sometimes used, such as giant sequoia, Leyland cypress, Monterey cypress, and eastern juniper. Various types of spruce tree are also used for Christmas trees (including the blue spruce and, less commonly, the white spruce); but spruces begin to lose their needles rapidly upon being cut, and spruce needles are often sharp, making decorating uncomfortable. Virginia pine is still available on some tree farms in the southeastern United States; however, its winter color is faded. The long-needled eastern white pine is also used there, though it is an unpopular Christmas tree in most parts of the country, owing also to its faded winter coloration and limp branches, making decorating difficult with all but the lightest ornaments. Norfolk Island pine is sometimes used, particularly in Oceania, and in Australia, some species of the genera Casuarina and Allocasuarina are also occasionally used as Christmas trees.
By far the most common tree, however, is the Monterey pine (Pinus radiata). Adenanthos sericeus, or Albany woolly bush, is commonly sold in southern Australia as a potted living Christmas tree. Hemlock species are generally considered unsuitable as Christmas trees because of their poor needle retention and inability to support the weight of lights and ornaments.

Some trees, frequently referred to as "living Christmas trees", are sold live with roots and soil, often from a plant nursery, to be stored at nurseries in planters or planted later outdoors and enjoyed (and often decorated) for years or decades. Others are produced in a container, sometimes as topiary for a porch or patio. When done improperly, however, the combination of root loss from digging and the high temperature and low humidity of the indoor environment is very detrimental to the tree's health; additionally, the warmth of an indoor climate brings the tree out of its natural winter dormancy, leaving it with little protection when it is put back outside into a cold climate. Christmas trees also often attract animals, including mice and spiders. The survival rate of these trees is therefore low, although replanting done properly provides higher survival rates.

European tradition prefers the open aspect of naturally grown, unsheared trees, while in North America (outside western areas, where trees are often wild-harvested on public lands) there is a preference for close-sheared trees with denser foliage but less space to hang decorations.

In the past, Christmas trees were often harvested from wild forests, but now almost all are commercially grown on tree farms. Almost all Christmas trees in the United States are grown on Christmas tree farms, where they are cut after about ten years of growth and new trees are planted. According to the United States Department of Agriculture's 2007 agriculture census, 21,537 farms were producing conifers for the cut Christmas tree market in America, with 5,717.09 square kilometres (1,412,724 acres) planted in Christmas trees; the unit conversion is checked in the short sketch after this passage.

The life cycle of a Christmas tree, from seed to a 2-metre (7 ft) tree, takes between eight and twelve years, depending on species and treatment in cultivation. First, the seed is extracted from cones harvested from older trees. These seeds are usually grown in nurseries and then sold to Christmas tree farms at an age of three to four years. The tree's remaining development depends greatly on the climate and soil quality, as well as on how the trees are cultivated and tended by the Christmas tree farmer.

The first artificial Christmas trees were developed in Germany during the 19th century, though earlier examples exist. These "trees" were made from goose feathers dyed green, in part a German response to continued deforestation. Feather Christmas trees ranged widely in size, from a small 5-centimeter (2 in) tree to a large 2.5-meter (98 in) tree sold in department stores during the 1920s. Often the tree branches were tipped with artificial red berries, which acted as candle holders.

Over the years, other styles of artificial Christmas trees have evolved and become popular. In 1930 the U.S.-based Addis Brush Company created the first artificial Christmas tree made from brush bristles. Another type of artificial tree is the aluminum Christmas tree, first manufactured in Chicago in 1958 and later in Manitowoc, Wisconsin, where the majority of such trees were produced.
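As a quick cross-check of the USDA census area quoted above, the square-kilometre figure can be converted to acres directly. The following is a minimal sketch in Python; the only assumption beyond the quoted figures is the standard conversion factor of roughly 247.105 acres per square kilometre.

```python
# Cross-check of the 2007 USDA census area quoted above:
# does 5,717.09 km^2 correspond to the quoted 1,412,724 acres?
ACRES_PER_KM2 = 247.10538  # standard conversion factor (1 km^2 ~ 247.105 acres)

area_km2 = 5717.09
area_acres = area_km2 * ACRES_PER_KM2
print(f"{area_km2:,.2f} km^2 = {area_acres:,.0f} acres")
# Prints: 5,717.09 km^2 = 1,412,724 acres -- matching the census figure
# to within rounding of the conversion factor.
```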
Most modern artificial Christmas trees are made from plastic recycled from used packaging materials, such as polyvinyl chloride (PVC); approximately 10% of artificial Christmas trees are made using virgin suspension PVC resin. Despite being plastic, most artificial trees are neither recyclable nor biodegradable.

Other trends developed in the early 2000s. Optical fiber Christmas trees come in two major varieties, one of which resembles a traditional Christmas tree. One Dallas-based company offers "holographic mylar" trees in many hues. Tree-shaped objects made from cardboard, glass, ceramic, or other materials can be found in use as tabletop decorations. Upside-down artificial Christmas trees became popular for a short time; originally introduced as a marketing gimmick, they allowed consumers to get closer to ornaments for sale in retail stores and opened up floor space for more products.

Artificial trees became increasingly popular during the late 20th century. Users of artificial Christmas trees assert that they are more convenient and, because they are reusable, much cheaper than their natural alternative. They are also considered much safer, as natural trees can be a significant fire hazard. Between 2001 and 2007, artificial Christmas tree sales in the U.S. jumped from 7.3 million to 17.4 million. It is currently estimated that around 58% of Christmas trees used in the United States are artificial, while the figure for the United Kingdom is around 66%.

The debate about the environmental impact of artificial trees is ongoing. Natural tree growers generally contend that artificial trees are more environmentally harmful than their natural counterparts, while trade groups such as the American Christmas Tree Association claim that the PVC used in Christmas trees is chemically and mechanically stable, does not affect human health, and has excellent recyclable properties. Live trees are typically grown as a crop and replanted in rotation after cutting, often providing suitable habitat for wildlife. Alternatively, live trees can be donated to livestock farmers, who find that such trees, being uncontaminated by chemical additives, make excellent fodder. In some cases, however, management of Christmas tree crops can result in poor habitat, since it sometimes involves heavy input of pesticides. Arborists have also raised concerns about people cutting down old and rare conifers, such as Keteleeria evelyniana, for use as Christmas trees.

Real or cut trees are used only for a short time but can be recycled and used as mulch or wildlife habitat, or to prevent erosion. Real trees are carbon-neutral: they emit no more carbon dioxide by being cut down and disposed of than they absorb while growing. However, emissions can occur from farming activities and transportation. An independent life-cycle assessment study, conducted by a firm of experts in sustainable development, states that a natural tree will generate 3.1 kg (6.8 lb) of greenhouse gases every year (based on a purchase made 5 km (3.1 mi) from home), whereas an artificial tree will produce 48.3 kg (106 lb) over its lifetime; the break-even arithmetic implied by these figures is sketched below. Some people use living or potted Christmas trees for several seasons, providing a longer life cycle for each tree. Living Christmas trees can be purchased or rented from local market growers; rentals are picked up after the holidays, while purchased trees can be planted by the owner after use or donated to local tree adoption or urban reforestation services.
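Taking those two figures at face value, the break-even point between an artificial tree and a yearly natural tree is a one-line calculation. The sketch below, in Python, hard-codes the study's numbers as assumptions; it illustrates the arithmetic and is not part of the study itself.

```python
# Break-even estimate from the life-cycle figures quoted above
# (both constants are the cited study's numbers, assumed here as given).
NATURAL_KG_CO2E_PER_YEAR = 3.1      # natural tree, per year, incl. a 5 km trip
ARTIFICIAL_KG_CO2E_LIFETIME = 48.3  # artificial tree, total over its lifetime

# Years of reuse before the artificial tree's one-off footprint falls below
# the cumulative footprint of buying a natural tree every year.
break_even_years = ARTIFICIAL_KG_CO2E_LIFETIME / NATURAL_KG_CO2E_PER_YEAR
print(f"break-even after ~{break_even_years:.1f} years of reuse")
# Prints: break-even after ~15.6 years of reuse
```

The naive ratio, about sixteen years, is of the same order as the twenty-year break-even reported by the professional life-cycle study mentioned below; the gap presumably reflects factors that this simple division ignores.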
Smaller and younger trees may be replanted after each season, allowing the tree to put on further growth over the following year in the run-up to the next Christmas.

The use of lead stabilizer in trees imported from China has been an issue of concern among politicians and scientists in recent years. A 2004 study found that, while artificial trees in general pose little health risk from lead contamination, there are "worst-case scenarios" in which major health risks to young children exist. A 2008 United States Environmental Protection Agency report found that the PVC in artificial Christmas trees degrades as it ages; the report determined that, of the fifty million artificial trees in the United States, approximately twenty million were nine or more years old, the age at which dangerous lead contamination levels are reached. A professional study on the life-cycle assessment of both real and artificial Christmas trees concluded that an artificial Christmas tree must be used for at least twenty years to leave an environmental footprint as small as that of a natural Christmas tree.

Under the Marxist-Leninist doctrine of state atheism in the Soviet Union, after its foundation in 1917, Christmas celebrations, along with other religious holidays, were prohibited as a result of the Soviet anti-religious campaign. The League of Militant Atheists encouraged school pupils to campaign against Christmas traditions, among them the Christmas tree, as well as other Christian holidays, including Easter; the League established an anti-religious holiday on the 31st of each month as a replacement. With the Christmas tree prohibited in accordance with Soviet anti-religious legislation, people replaced the former Christmas custom with New Year's trees. In 1935 the tree was brought back as a New Year tree, and the celebration became a secular rather than a religious holiday.

Pope John Paul II introduced the Christmas tree custom to the Vatican in 1982. Although at first disapproved of by some as out of place at the centre of the Roman Catholic Church, the Vatican Christmas Tree has become an integral part of the Vatican Christmas celebrations, and in 2005 Pope Benedict XVI spoke of it as part of the normal Christmas decorations in Catholic homes. In 2004 Pope John Paul II called the Christmas tree a symbol of Christ: this very ancient custom, he said, exalts the value of life, as in winter what is evergreen becomes a sign of undying life, and it reminds Christians of the "tree of life", an image of Christ, the supreme gift of God to humanity. The previous year he had said: "Beside the crib, the Christmas tree, with its twinkling lights, reminds us that with the birth of Jesus the tree of life has blossomed anew in the desert of humanity. The crib and the tree: precious symbols, which hand down in time the true meaning of Christmas." The Catholic Church's official Book of Blessings includes a service for the blessing of the Christmas tree in a home. The Episcopal Church's Anglican Family Prayer Book, which has the imprimatur of the Rt. Rev. Catherine S. Roskam of the Anglican Communion, has long had a ritual titled Blessing of a Christmas Tree, as well as Blessing of a Crèche, for use in the church and the home; family services and public liturgies for the blessing of Christmas trees are common in other Christian denominations as well.
Chrismon trees, which originated in the Lutheran Christian tradition but are now used in many Christian denominations, including the Catholic Church and the Methodist Church, are used to decorate churches during the liturgical season of Advent; during Christmastide, Christian churches display the traditional Christmas tree in their sanctuaries.

In 2005 the city of Boston renamed the spruce tree used to decorate the Boston Common a "Holiday Tree" rather than a "Christmas Tree". The name change was reversed after the city was threatened with several lawsuits.
[ { "paragraph_id": 0, "text": "A Christmas tree is a decorated tree, usually an evergreen conifer, such as a spruce, pine or fir, or an artificial tree of similar appearance, associated with the celebration of Christmas.", "title": "" }, { "paragraph_id": 1, "text": "The custom was developed in Central Europe and the Baltic states, particularly Estonia, Germany and Livonia (now Latvia), where Protestant Christians brought decorated trees into their homes. The tree was traditionally decorated with \"roses made of colored paper, apples, wafers, tinsel, [and] sweetmeats\". Moravian Christians began to illuminate Christmas trees with candles, which were often replaced by Christmas lights after the advent of electrification. Today, there is a wide variety of traditional and modern ornaments, such as garlands, baubles, tinsel, and candy canes. An angel or star might be placed at the top of the tree to represent the Angel Gabriel or the Star of Bethlehem, respectively, from the Nativity. Edible items such as gingerbread, chocolate, and other sweets are also popular and are tied to or hung from the tree's branches with ribbons. The Christmas tree has been historically regarded as a custom of the Lutheran Churches and only in 1982 did the Catholic Church erect the Vatican Christmas Tree.", "title": "" }, { "paragraph_id": 2, "text": "In the Western Christian tradition, Christmas trees are variously erected on days such as the first day of Advent or even as late as Christmas Eve depending on the country; customs of the same faith hold that the two traditional days when Christmas decorations, such as the Christmas tree, are removed are Twelfth Night and, if they are not taken down on that day, Candlemas, the latter of which ends the Christmas-Epiphany season in some denominations.", "title": "" }, { "paragraph_id": 3, "text": "The Christmas tree is sometimes compared with the \"Yule-tree\", especially in discussions of its folkloric origins.", "title": "" }, { "paragraph_id": 4, "text": "Modern Christmas trees originated in Central Europe and the Baltic states, particularly Estonia, Germany and Livonia (now Latvia) during the Renaissance in early modern Europe. Its 16th-century origins are sometimes associated with Protestant Christian reformer Martin Luther, who is said to have first added lighted candles to an evergreen tree. The Christmas tree was first recorded to be used by German Lutherans in the 16th century, with records indicating that a Christmas tree was placed in the Cathedral of Strasbourg in 1539, under the leadership of the Protestant Reformer Martin Bucer. The Moravian Christians put lighted candles on those trees.\" The earliest known firmly dated representation of a Christmas tree is on the keystone sculpture of a private home in Turckheim, Alsace (then part of the Holy Roman Empire of the German Nation, today France), with the date 1576.", "title": "History" }, { "paragraph_id": 5, "text": "Modern Christmas trees have been related to the \"tree of paradise\" of medieval mystery plays that were given on 24 December, the commemoration and name day of Adam and Eve in various countries. In such plays, a tree decorated with apples (representing fruit from the tree of the knowledge of good and evil and thus to the original sin that Christ took away) and round white wafers (to represent the Eucharist and redemption) was used as a setting for the play. Like the Christmas crib, the Paradise tree was later placed in homes. 
The apples were replaced by round objects such as shiny red balls.", "title": "History" }, { "paragraph_id": 6, "text": "Fir trees decorated with apples served as the central prop for the paradise play, a kind of folk religious drama often performed on December 24 These props were called paradise trees, and some researchers believe they were the forerunners of the Christmas tree", "title": "History" }, { "paragraph_id": 7, "text": "", "title": "History" }, { "paragraph_id": 8, "text": "At the end of the Middle Ages, an early predecessor appears referred in the 15th century Regiment of the Cistercian Alcobaça Monastery in Portugal. The Regiment of the local high-Sacristans of the Cistercian Order refers to what may be considered the oldest references to the Christmas tree: \"Note on how to put the Christmas branch, scilicet: On the Christmas eve, you will look for a large Branch of green laurel, and you shall reap many red oranges, and place them on the branches that come of the laurel, specifically as you have seen, and in every orange you shall put a candle, and hang the Branch by a rope in the pole, which shall be by the candle of the high altar.\"", "title": "History" }, { "paragraph_id": 9, "text": "Other sources have offered a connection between the symbolism of the first documented Christmas trees in Germany around 1600 and the trees of pre-Christian traditions, though this claim has been disputed. According to the Encyclopædia Britannica, \"The use of evergreen trees, wreaths, and garlands to symbolize eternal life was a custom of the ancient Egyptians, Chinese, and Hebrews. Tree worship was common among the pagan Europeans and survived their conversion to Christianity in the Scandinavian customs of decorating the house and barn with evergreens at the New Year to scare away the devil and of setting up a tree for the birds during Christmas time.\"", "title": "History" }, { "paragraph_id": 10, "text": "It is commonly believed that ancient Romans used to decorate their houses with evergreen trees to celebrate Saturnalia, although there are no historical records of that. In the poem Epithalamium by Catullus, he tells of the gods decorating the home of Peleus with trees, including laurel and cypress. Later Libanius, Tertullian, and Chrysostom speak of the use of evergreen trees to adorn Christian houses.", "title": "History" }, { "paragraph_id": 11, "text": "The Vikings and Saxons worshiped trees. The story of Saint Boniface cutting down Donar's Oak illustrates the pagan practices in 8th century among the Germans. A later folk version of the story adds the detail that an evergreen tree grew in place of the felled oak, telling them about how its triangular shape reminds humanity of the Trinity and how it points to heaven.", "title": "History" }, { "paragraph_id": 12, "text": "Customs of erecting decorated trees in winter time can be traced to Christmas celebrations in Renaissance-era guilds in Northern Germany and Livonia. The first evidence of decorated trees associated with Christmas Day are trees in guildhalls decorated with sweets to be enjoyed by the apprentices and children. In Livonia (present-day Estonia and Latvia), in 1441, 1442, 1510, and 1514, the Brotherhood of Blackheads erected a tree for the holidays in their guild houses in Reval (now Tallinn) and Riga. 
On the last night of the celebrations leading up to the holidays, the tree was taken to the Town Hall Square, where the members of the brotherhood danced around it.", "title": "History" }, { "paragraph_id": 13, "text": "A Bremen guild chronicle of 1570 reports that a small tree decorated with \"apples, nuts, dates, pretzels, and paper flowers\" was erected in the guild-house for the benefit of the guild members' children, who collected the dainties on Christmas Day. In 1584, the pastor and chronicler Balthasar Russow in his Chronica der Provinz Lyfflandt (1584) wrote of an established tradition of setting up a decorated spruce at the market square, where the young men \"went with a flock of maidens and women, first sang and danced there and then set the tree aflame\".", "title": "History" }, { "paragraph_id": 14, "text": "After the Protestant Reformation, such trees are seen in the houses of upper-class Protestant families as a counterpart to the Catholic Christmas cribs. This transition from the guild hall to the bourgeois family homes in the Protestant parts of Germany ultimately gives rise to the modern tradition as it developed in the 18th and 19th centuries. In the present-day, the churches and homes of Protestants and Catholics feature both Christmas cribs and Christmas trees.", "title": "History" }, { "paragraph_id": 15, "text": "In Poland, there is a folk tradition dating back to an old Slavic pre-Christian custom of suspending a branch of fir, spruce, or pine from the ceiling rafters, called podłaźniczka, during the time of the Koliada winter festival. The branches were decorated with apples, nuts, acorns, and stars made of straw. In more recent times, the decorations also included colored paper cutouts (wycinanki), wafers, cookies, and Christmas baubles. According to old pagan beliefs, the branch's powers were linked to good harvest and prosperity.", "title": "History" }, { "paragraph_id": 16, "text": "The custom was practiced by the peasants until the early 20th century, particularly in the regions of Lesser Poland and Upper Silesia. Most often the branches were hung above the wigilia dinner table on Christmas Eve. Beginning in the mid-19th century, the tradition over time was almost completely replaced by the later German practice of decorating a standing Christmas tree.", "title": "History" }, { "paragraph_id": 17, "text": "In the early 19th century, the custom became popular among the nobility and spread to royal courts as far as Russia. Introduced by Fanny von Arnstein and popularized by Princess Henrietta of Nassau-Weilburg the Christmas tree reached Vienna in 1814 during the Congress of Vienna, and the custom spread across Austria in the following years. In France, the first Christmas tree was introduced in 1840 by the duchesse d'Orléans. In Denmark, a newspaper company claims that the first attested Christmas tree was lit in 1808 by countess Wilhemine of Holsteinborg. It was the aging countess who told the story of the first Danish Christmas tree to the Danish writer Hans Christian Andersen in 1865. He had published a fairy tale called The Fir-Tree in 1844, recounting the fate of a fir tree being used as a Christmas tree.", "title": "History" }, { "paragraph_id": 18, "text": "By the early 18th century, the custom had become common in towns of the upper Rhineland, but it had not yet spread to rural areas. 
Wax candles, expensive items at the time, are found in attestations from the late 18th century.", "title": "History" }, { "paragraph_id": 19, "text": "Along the lower Rhine, an area of Roman Catholic majority, the Christmas tree was largely regarded as a Protestant custom. As a result, it remained confined to the upper Rhineland for a relatively long period of time. The custom did eventually gain wider acceptance beginning around 1815 by way of Prussian officials who emigrated there following the Congress of Vienna.", "title": "History" }, { "paragraph_id": 20, "text": "In the 19th century, the Christmas tree was taken to be an expression of German culture and of Gemütlichkeit, especially among emigrants overseas.", "title": "History" }, { "paragraph_id": 21, "text": "A decisive factor in winning general popularity was the German army's decision to place Christmas trees in its barracks and military hospitals during the Franco-Prussian War. Only at the start of the 20th century did Christmas trees appear inside churches, this time in a new brightly lit form.", "title": "History" }, { "paragraph_id": 22, "text": "Early Slovenian custom dating back to around the 17th century was to suspend the tree either upright or upside-down above the well, a corner of the dinner table, in the backyard, or from the fences, modestly decorated with fruits or not decorated at all. German brewer Peter Luelsdorf brought the first Christmas tree of the current tradition to Slovenia in 1845. He set it up in his small brewery inn in Ljubljana, the Slovenian capital. German officials, craftsmen and merchants quickly spread the tradition among the bourgeois population. The trees were typically decorated with walnuts, golden apples, carobs, and candles. At first the Catholic majority rejected this custom because they considered it a typical Protestant tradition. The first decorated Christmas Market was organized in Ljubljana already in 1859. However, this tradition was almost unknown to the rural population until World War I, after which everyone started decorating trees. Spruce trees have a centuries-long tradition in Slovenia. After World War II during Yugoslavia period, trees set in the public places (towns, squares, and markets) were politically replaced with fir trees, a symbol of socialism and Slavic mythology strongly associated with loyalty, courage, and dignity. However, spruce retained its popularity in Slovenian homes during those years and came back to public places after independence.", "title": "History" }, { "paragraph_id": 23, "text": "Although the tradition of decorating churches and homes with evergreens at Christmas was long established, the custom of decorating an entire small tree was unknown in Britain until some two centuries ago. The German-born Queen Charlotte introduced a Christmas tree at a party she gave for children in 1800. The custom did not at first spread much beyond the royal family. Queen Victoria as a child was familiar with it and a tree was placed in her room every Christmas. In her journal for Christmas Eve 1832, the delighted 13-year-old princess wrote:", "title": "History" }, { "paragraph_id": 24, "text": "After dinner [...] we then went into the drawing room near the dining room [...] There were two large round tables on which were placed two trees hung with lights and sugar ornaments. 
All the presents being placed round the trees [...]", "title": "History" }, { "paragraph_id": 25, "text": "After Victoria's marriage to her German cousin Prince Albert, by 1841 the custom became even more widespread as wealthier middle-class families followed the fashion. In 1842 a newspaper advert for Christmas trees makes clear their smart cachet, German origins and association with children and gift-giving. An illustrated book, The Christmas Tree, describing their use and origins in detail, was on sale in December 1844. On 2 January 1846 Elizabeth Fielding (née Fox Strangways) wrote from Lacock Abbey to William Henry Fox-Talbot: \"Constance is extremely busy preparing the Bohemian Xmas Tree. It is made from Caroline's description of those she saw in Germany\". In 1847 Prince Albert wrote: \"I must now seek in the children an echo of what Ernest [his brother] and I were in the old time, of what we felt and thought; and their delight in the Christmas trees is not less than ours used to be\". A boost to the trend was given in 1848 when The Illustrated London News, in a report picked up by other papers, described the trees in Windsor Castle in detail and showed the main tree, surrounded by the royal family, on its cover. In fewer than ten years their use in better-off homes was widespread. By 1856 a northern provincial newspaper contained an advert alluding casually to them, as well as reporting the accidental death of a woman whose dress caught fire as she lit the tapers on a Christmas tree. They had not yet spread down the social scale though, as a report from Berlin in 1858 contrasts the situation there where \"Every family has its own\" with that of Britain, where Christmas trees were still the preserve of the wealthy or the \"romantic\".", "title": "History" }, { "paragraph_id": 26, "text": "Their use at public entertainments, charity bazaars and in hospitals made them increasingly familiar however, and in 1906 a charity was set up specifically to ensure even poor children in London slums \"who had never seen a Christmas tree\" would enjoy one that year. Anti-German sentiment after World War I briefly reduced their popularity but the effect was short-lived, and by the mid-1920s the use of Christmas trees had spread to all classes. In 1933 a restriction on the importation of foreign trees led to the \"rapid growth of a new industry\" as the growing of Christmas trees within Britain became commercially viable due to the size of demand. By 2013 the number of trees grown in Britain for the Christmas market was approximately eight million and their display in homes, shops and public spaces a normal part of the Christmas season.", "title": "History" }, { "paragraph_id": 27, "text": "Georgians have their own traditional Christmas tree called Chichilaki, made from dried up hazelnut or walnut branches that are shaped to form a small coniferous tree. These pale-colored ornaments differ in height from 20 cm (7.9 in) to 3 meters (9.8 ft). Chichilakis are most common in the Guria and Samegrelo regions of Georgia near the Black Sea, but they can also be found in some stores around the capital of Tbilisi. Georgians believe that Chichilaki resembles the famous beard of St. Basil the Great, because Eastern Orthodox Church commemorates St. 
Basil on 1 January.", "title": "History" }, { "paragraph_id": 28, "text": "The earliest reference of Christmas trees being used in The Bahamas dates to January 1864 and is associated with the Anglican Sunday Schools in Nassau, New Providence: \"After prayers and a sermon from the Rev. R. Swann, the teachers and children of St. Agnes', accompanied by those of St. Mary's, marched to the Parsonage of Rev. J. H. Fisher, in front of which a large Christmas tree had been planted for their gratification. The delighted little ones formed a circle around it singing \"Come follow me to the Christmas tree\".\" The gifts decorated the trees as ornaments and the children were given tickets with numbers that matched the gifts. This appears to be the typical way of decorating the trees in the 1860s Bahamas. In the Christmas of 1864, there was a Christmas tree put up in the Ladies Saloon in the Royal Victoria Hotel for the respectable children of the neighbourhood. The tree was ornamented with gifts for the children who formed a circle about it and sung the song \"Oats and Beans\". The gifts were later given to the children in the name of Santa Claus.", "title": "History" }, { "paragraph_id": 29, "text": "The tradition was introduced to North America in the winter of 1781 by Hessian soldiers stationed in the Province of Québec (1763–1791) to garrison the colony against American attack. General Friedrich Adolf Riedesel and his wife, the Baroness von Riedesel, held a Christmas party for the officers at Sorel, Quebec, delighting their guests with a fir tree decorated with candles and fruits.", "title": "History" }, { "paragraph_id": 30, "text": "The Christmas tree became very common in the United States of America in the early nineteenth century. Dating from late 1812 or early 1813, the watercolor sketchbooks of John Lewis Krimmel contain perhaps the earliest depictions of a Christmas tree in American art, representing a family celebrating Christmas Eve in the Moravian tradition. The first published image of a Christmas tree appeared in 1836 as the frontispiece to The Stranger's Gift by Hermann Bokum. The first mention of the Christmas tree in American literature was in a story in the 1836 edition of The Token and Atlantic Souvenir, titled \"New Year's Day\", by Catherine Maria Sedgwick, where she tells the story of a German maid decorating her mistress's tree. Also, a woodcut of the British royal family with their Christmas tree at Windsor Castle, initially published in The Illustrated London News December 1848, was copied in the United States at Christmas 1850, in Godey's Lady's Book. Godey's copied it exactly, except for the removal of the Queen's tiara and Prince Albert's moustache, to remake the engraving into an American scene. The republished Godey's image became the first widely circulated picture of a decorated evergreen Christmas tree in America. Art historian Karal Ann Marling called Prince Albert and Queen Victoria, shorn of their royal trappings, \"the first influential American Christmas tree\". Folk-culture historian Alfred Lewis Shoemaker states, \"In all of America there was no more important medium in spreading the Christmas tree in the decade 1850–60 than Godey's Lady's Book\". 
The image was reprinted in 1860, and by the 1870s, putting up a Christmas tree had become even more common in America.", "title": "History" }, { "paragraph_id": 31, "text": "President Benjamin Harrison and his wife Caroline put up the first White House Christmas tree in 1889.", "title": "History" }, { "paragraph_id": 32, "text": "Several cities in the United States with German connections lay claim to that country's first Christmas tree: Windsor Locks, Connecticut, claims that a Hessian soldier put up a Christmas tree in 1777 while imprisoned at the Noden-Reed House, while the \"First Christmas Tree in America\" is also claimed by Easton, Pennsylvania, where German settlers purportedly erected a Christmas tree in 1816. In his diary, Matthew Zahm of Lancaster, Pennsylvania, recorded the use of a Christmas tree in 1821, leading Lancaster to also lay claim to the first Christmas tree in America. Other accounts credit Charles Follen, a German immigrant to Boston, for being the first to introduce to America the custom of decorating a Christmas tree. In 1847, August Imgard, a German immigrant living in Wooster, Ohio cut a blue spruce tree from a woods outside town, had the Wooster village tinsmith construct a star, and placed the tree in his house, decorating it with paper ornaments, gilded nuts and Kuchen. German immigrant Charles Minnigerode accepted a position as a professor of humanities at the College of William & Mary in Williamsburg, Virginia, in 1842, where he taught Latin and Greek. Entering into the social life of the Virginia Tidewater, Minnigerode introduced the German custom of decorating an evergreen tree at Christmas at the home of law professor St. George Tucker, thereby becoming another of many influences that prompted Americans to adopt the practice at about that time. An 1853 article on Christmas customs in Pennsylvania defines them as mostly \"German in origin\", including the Christmas tree, which is \"planted in a flower pot filled with earth, and its branches are covered with presents, chiefly of confectionary, for the younger members of the family.\" The article distinguishes between customs in different states however, claiming that in New England generally \"Christmas is not much celebrated\", whereas in Pennsylvania and New York it is.", "title": "History" }, { "paragraph_id": 33, "text": "When Edward H. Johnson was vice president of the Edison Electric Light Company, a predecessor of Con Edison, he created the first known electrically illuminated Christmas tree at his home in New York City in 1882. Johnson became the \"Father of Electric Christmas Tree Lights\".", "title": "History" }, { "paragraph_id": 34, "text": "The lyrics sung in the United States to the German tune O Tannenbaum begin \"O Christmas tree...\", giving rise to the mistaken idea that the German word Tannenbaum (fir tree) means \"Christmas tree\", the German word for which is instead Weihnachtsbaum.", "title": "History" }, { "paragraph_id": 35, "text": "Under the state atheism of the Soviet Union, the Christmas tree, along with the entire celebration of the Christian holiday, was banned in that country after the October Revolution. However, the government then introduced a New-year spruce (Russian: Новогодняя ёлка, romanized: Novogodnyaya yolka) in 1935 for the New Year holiday. It became a fully secular icon of the New Year holiday: for example, the crowning star was regarded not as a symbol of Bethlehem Star, but as the Red star. 
Decorations, such as figurines of airplanes, bicycles, space rockets, cosmonauts, and characters of Russian fairy tales, were produced. This tradition persists after the fall of the USSR, with the New Year holiday outweighing the Christmas (7 January) for a wide majority of Russian people.", "title": "History" }, { "paragraph_id": 36, "text": "The Peanuts TV special A Charlie Brown Christmas (1965) was influential on the pop culture surrounding the Christmas tree. Aluminum Christmas trees were popular during the early 1960s in the US. They were satirized in the TV special and came to be seen as symbolizing the commercialization of Christmas. The term Charlie Brown Christmas tree, describing any poor-looking or malformed little tree, also derives from the 1965 TV special, based on the appearance of Charlie Brown's Christmas tree.", "title": "History" }, { "paragraph_id": 37, "text": "Since the early 20th century, it has become common in many cities, towns, and department stores to put up public Christmas trees outdoors, such as the Macy's Great Tree in Atlanta (since 1948), the Rockefeller Center Christmas Tree in New York City, and the large Christmas tree at Victoria Square in Adelaide.", "title": "History" }, { "paragraph_id": 38, "text": "The use of fire retardant allows many indoor public areas to place real trees and be compliant with code. Licensed applicants of fire retardant solution spray the tree, tag the tree, and provide a certificate for inspection.", "title": "History" }, { "paragraph_id": 39, "text": "The United States' National Christmas Tree has been lit each year since 1923 on the South Lawn of the White House, becoming part of what evolved into a major holiday event at the White House. President Jimmy Carter lit only the crowning star atop the tree in 1979 in honor of the Americans being held hostage in Iran. The same was true in 1980, except the tree was fully lit for 417 seconds, one second for each day the hostages had been in captivity.", "title": "History" }, { "paragraph_id": 40, "text": "During most of the 1970s and 1980s, the largest decorated Christmas tree in the world was put up every year on the property of the National Enquirer in Lantana, Florida. This tradition grew into one of the most spectacular and celebrated events in the history of southern Florida, but was discontinued on the death of the paper's founder in the late 1980s.", "title": "History" }, { "paragraph_id": 41, "text": "In some cities, a charity event called the Festival of Trees is organized, in which multiple trees are decorated and displayed.", "title": "History" }, { "paragraph_id": 42, "text": "The giving of Christmas trees has also often been associated with the end of hostilities. After the signing of the Armistice in 1918 the city of Manchester sent a tree, and £500 to buy chocolate and cakes, for the children of the much-bombarded town of Lille in northern France. 
In some cases the trees represent special commemorative gifts, such as in Trafalgar Square in London, where the City of Oslo, Norway presents a tree to the people of London as a token of appreciation for the British support of Norwegian resistance during the Second World War; in Boston, where the tree is a gift from the province of Nova Scotia, in thanks for rapid deployment of supplies and rescuers to the 1917 ammunition ship explosion that leveled the city of Halifax; and in Newcastle upon Tyne, where the main civic Christmas tree is an annual gift from the city of Bergen, in thanks for the part played by soldiers from Newcastle in liberating Bergen from Nazi occupation. Norway also annually gifts a Christmas tree to Washington, D.C. as a symbol of friendship between Norway and the US and as an expression of gratitude from Norway for the help received from the US during World War II.", "title": "History" }, { "paragraph_id": 43, "text": "Both setting up and taking down a Christmas tree are associated with specific dates; liturgically, this is done through the hanging of the greens ceremony. In many areas, it has become customary to set up one's Christmas tree on Advent Sunday, the first day of the Advent season. Traditionally, however, Christmas trees were not brought in and decorated until the evening of Christmas Eve (24 December), the end of the Advent season and the start of the twelve days of Christmastide. It is customary for Christians in many localities to remove their Christmas decorations on the last day of the twelve days of Christmastide that falls on 5 January—Epiphany Eve (Twelfth Night), although those in other Christian countries remove them on Candlemas, the conclusion of the extended Christmas-Epiphany season (Epiphanytide). According to the first tradition, those who fail to remember to remove their Christmas decorations on Epiphany Eve must leave them untouched until Candlemas, the second opportunity to remove them; failure to observe this custom is considered inauspicious.", "title": "Customs and traditions" }, { "paragraph_id": 44, "text": "Christmas ornaments are decorations (usually made of glass, metal, wood, or ceramics) that are used to decorate a Christmas tree. The first decorated trees were adorned with apples, white candy canes and pastries in the shapes of stars, hearts and flowers. Glass baubles were first made in Lauscha, Germany, and also garlands of glass beads and tin figures that could be hung on trees. The popularity of these decorations fueled the production of glass figures made by highly skilled artisans with clay molds.", "title": "Customs and traditions" }, { "paragraph_id": 45, "text": "Tinsel and several types of garland or ribbon are commonly used as Christmas tree decorations. Silvered saran-based tinsel was introduced later. Delicate mold-blown and painted colored glass Christmas ornaments were a specialty of the glass factories in the Thuringian Forest, especially in Lauscha in the late 19th century, and have since become a large industry, complete with famous-name designers. Baubles are another common decoration, consisting of small hollow glass or plastic spheres coated with a thin metallic layer to make them reflective, with a further coating of a thin pigmented polymer in order to provide coloration. Lighting with electric lights (Christmas lights or, in the United Kingdom, fairy lights) is commonly done. 
A tree-topper, sometimes an angel but more frequently a star, completes the decoration.", "title": "Customs and traditions" }, { "paragraph_id": 46, "text": "In the late 1800s, home-made white Christmas trees were made by wrapping strips of cotton batting around leafless branches creating the appearance of a snow-laden tree.", "title": "Customs and traditions" }, { "paragraph_id": 47, "text": "In the 1940s and 1950s, popularized by Hollywood films in the late 1930s, flocking was very popular on the West Coast of the United States. There were home flocking kits that could be used with vacuum cleaners. In the 1980s some trees were sprayed with fluffy white flocking to simulate snow.", "title": "Customs and traditions" }, { "paragraph_id": 48, "text": "The earliest legend of the origin of a fir tree becoming a Christian symbol dates back to 723 AD, involving Saint Boniface as he was evangelizing Germany. It is said that at a pagan gathering in Geismar where a group of people dancing under a decorated oak tree were about to sacrifice a baby in the name of Thor, Saint Boniface took an axe and called on the name of Jesus. In one swipe, he managed to take down the entire oak tree, to the crowd's astonishment. Behind the fallen tree was a baby fir tree. Boniface said, \"let this tree be the symbol of the true God, its leaves are ever green and will not die.\" The tree's needles pointed to heaven and it was shaped triangularly to represent the Holy Trinity.", "title": "Symbolism and interpretations" }, { "paragraph_id": 49, "text": "When decorating the Christmas tree, many individuals place a star at the top of the tree symbolizing the Star of Bethlehem. It became popular for people to also use an angel to top the Christmas tree in order to symbolize the angels mentioned in the accounts of the Nativity of Jesus. Additionally, in the context of a Christian celebration of Christmas, the evergreen Christmas tree symbolizes eternal life; the candles or lights on the tree represent Christ as the light of the world.", "title": "Symbolism and interpretations" }, { "paragraph_id": 50, "text": "Each year, 33 to 36 million Christmas trees are produced in America, and 50 to 60 million are produced in Europe. In 1998, there were about 15,000 growers in America (a third of them \"choose and cut\" farms). In that same year, it was estimated that Americans spent $1.5 billion on Christmas trees. By 2016 that had climbed to $2.04 billion for natural trees and a further $1.86 billion for artificial trees. In Europe, 75 million trees worth €2.4 billion ($3.2 billion) are harvested annually.", "title": "Production" }, { "paragraph_id": 51, "text": "The most commonly used species are fir (Abies), which have the benefit of not shedding their needles when they dry out, as well as retaining good foliage color and scent; but species in other genera are also used.", "title": "Production" }, { "paragraph_id": 52, "text": "In northern Europe most commonly used are:", "title": "Production" }, { "paragraph_id": 53, "text": "In North America, Central America, South America and Australia most commonly used are:", "title": "Production" }, { "paragraph_id": 54, "text": "Several other species are used to a lesser extent. Less-traditional conifers are sometimes used, such as giant sequoia, Leyland cypress, Monterey cypress, and eastern juniper. 
Various types of spruce tree are also used for Christmas trees (including the blue spruce and, less commonly, the white spruce); but spruces begin to lose their needles rapidly upon being cut, and spruce needles are often sharp, making decorating uncomfortable. Virginia pine is still available on some tree farms in the southeastern United States; however, its winter color is faded. The long-needled eastern white pine is also used there, though it is an unpopular Christmas tree in most parts of the country, owing also to its faded winter coloration and limp branches, making decorating difficult with all but the lightest ornaments. Norfolk Island pine is sometimes used, particularly in Oceania, and in Australia, some species of the genera Casuarina and Allocasuarina are also occasionally used as Christmas trees. But, by far, the most common tree is the Pinus radiata Monterey pine. Adenanthos sericeus or Albany woolly bush is commonly sold in southern Australia as a potted living Christmas tree. Hemlock species are generally considered unsuitable as Christmas trees due to their poor needle retention and inability to support the weight of lights and ornaments.", "title": "Production" }, { "paragraph_id": 55, "text": "Some trees, frequently referred to as \"living Christmas trees\", are sold live with roots and soil, often from a plant nursery, to be stored at nurseries in planters or planted later outdoors and enjoyed (and often decorated) for years or decades. Others are produced in a container and sometimes as topiary for a porch or patio. However, when done improperly, the combination of root loss caused by digging, and the indoor environment of high temperature and low humidity is very detrimental to the tree's health; additionally, the warmth of an indoor climate will bring the tree out of its natural winter dormancy, leaving it little protection when put back outside into a cold outdoor climate. Often Christmas trees are a large attraction for living animals, including mice and spiders. Thus, the survival rate of these trees is low. However, when done properly, replanting provides higher survival rates.", "title": "Production" }, { "paragraph_id": 56, "text": "European tradition prefers the open aspect of naturally grown, unsheared trees, while in North America (outside western areas where trees are often wild-harvested on public lands) there is a preference for close-sheared trees with denser foliage, but less space to hang decorations.", "title": "Production" }, { "paragraph_id": 57, "text": "In the past, Christmas trees were often harvested from wild forests, but now almost all are commercially grown on tree farms. Almost all Christmas trees in the United States are grown on Christmas tree farms where they are cut after about ten years of growth and new trees planted. According to the United States Department of Agriculture's agriculture census for 2007, 21,537 farms were producing conifers for the cut Christmas tree market in America, 5,717.09 square kilometres (1,412,724 acres) were planted in Christmas trees.", "title": "Production" }, { "paragraph_id": 58, "text": "The life cycle of a Christmas tree from the seed to a 2-metre (7 ft) tree takes, depending on species and treatment in cultivation, between eight and twelve years. First, the seed is extracted from cones harvested from older trees. These seeds are then usually grown in nurseries and then sold to Christmas tree farms at an age of three to four years. 
The remaining development of the tree greatly depends on the climate, soil quality, as well as the cultivation and how the trees are tended by the Christmas tree farmer.", "title": "Production" }, { "paragraph_id": 59, "text": "The first artificial Christmas trees were developed in Germany during the 19th century, though earlier examples exist. These \"trees\" were made using goose feathers that were dyed green, as one response by Germans to continued deforestation. Feather Christmas trees ranged widely in size, from a small 5-centimeter (2 in) tree to a large 2.5-meter (98 in) tree sold in department stores during the 1920s. Often, the tree branches were tipped with artificial red berries which acted as candle holders.", "title": "Production" }, { "paragraph_id": 60, "text": "Over the years, other styles of artificial Christmas trees have evolved and become popular. In 1930, the U.S.-based Addis Brush Company created the first artificial Christmas tree made from brush bristles. Another type of artificial tree is the aluminum Christmas tree, first manufactured in Chicago in 1958, and later in Manitowoc, Wisconsin, where the majority of the trees were produced. Most modern artificial Christmas trees are made from plastic recycled from used packaging materials, such as polyvinyl chloride (PVC). Approximately 10% of artificial Christmas trees are using virgin suspension PVC resin; despite being plastic most artificial trees are not recyclable or biodegradable.", "title": "Production" }, { "paragraph_id": 61, "text": "Other trends developed also in the early 2000s. Optical fiber Christmas trees come in two major varieties; one resembles a traditional Christmas tree. One Dallas-based company offers \"holographic mylar\" trees in many hues. Tree-shaped objects made from such materials as cardboard, glass, ceramic or other materials can be found in use as tabletop decorations. Upside-down artificial Christmas trees became popular for a short time and were originally introduced as a marketing gimmick; they allowed consumers to get closer to ornaments for sale in retail stores and opened up floor space for more products.", "title": "Production" }, { "paragraph_id": 62, "text": "Artificial trees became increasingly popular during the late 20th century. Users of artificial Christmas trees assert that they are more convenient, and, because they are reusable, much cheaper than their natural alternative. They are also considered much safer, as natural trees can be a significant fire hazard. Between 2001 and 2007, artificial Christmas tree sales in the U.S. jumped from 7.3 million to 17.4 million. Currently, it is estimated that around 58% of Christmas trees used in the United States are artificial, while numbers in the United Kingdom are indicated to be around 66%.", "title": "Production" }, { "paragraph_id": 63, "text": "The debate about the environmental impact of artificial trees is ongoing. Generally, natural tree growers contend that artificial trees are more environmentally harmful than their natural counterparts. However, trade groups such as the American Christmas Tree Association, claim that the PVC used in Christmas trees is chemically and mechanically stable and does not affect human health and has excellent recyclable properties. Live trees are typically grown as a crop and replanted in rotation after cutting, often providing suitable habitat for wildlife. 
Alternately, live trees can be donated to livestock farmers who find that such trees uncontaminated by chemical additives are excellent fodder. In some cases management of Christmas tree crops can result in poor habitat since it sometimes involves heavy input of pesticides. Concerns have been raised by arborists about people cutting down old and rare conifers, such as the Keteleeria evelyniana for Christmas trees.", "title": "Environmental issues" }, { "paragraph_id": 64, "text": "Real or cut trees are used only for a short time, but can be recycled and used as mulch, wildlife habitat, or used to prevent erosion. Real trees are carbon-neutral, they emit no more carbon dioxide by being cut down and disposed of than they absorb while growing. However, emissions can occur from farming activities and transportation. An independent life-cycle assessment study, conducted by a firm of experts in sustainable development, states that a natural tree will generate 3.1 kg (6.8 lb) of greenhouse gases every year (based on purchasing 5 km (3.1 mi) from home) whereas the artificial tree will produce 48.3 kg (106 lb) over its lifetime. Some people use living Christmas or potted trees for several seasons, providing a longer life cycle for each tree. Living Christmas trees can be purchased or rented from local market growers. Rentals are picked up after the holidays, while purchased trees can be planted by the owner after use or donated to local tree adoption or urban reforestation services. Smaller and younger trees may be replanted after each season, with the following year running up to the next Christmas allowing the tree to carry out further growth.", "title": "Environmental issues" }, { "paragraph_id": 65, "text": "The use of lead stabilizer in Chinese imported trees has been an issue of concern among politicians and scientists over recent years. A 2004 study found that while in general artificial trees pose little health risk from lead contamination, there do exist \"worst-case scenarios\" where major health risks to young children exist. A 2008 United States Environmental Protection Agency report found that as the PVC in artificial Christmas trees aged it began to degrade. The report determined that of the fifty million artificial trees in the United States approximately twenty million were nine or more years old, the point where dangerous lead contamination levels are reached. A professional study on the life-cycle assessment of both real and artificial Christmas trees revealed that one must use an artificial Christmas tree at least twenty years to leave an environmental footprint as small as the natural Christmas tree.", "title": "Environmental issues" }, { "paragraph_id": 66, "text": "Under the Marxist-Leninist doctrine of state atheism in the Soviet Union, after its foundation in 1917, Christmas celebrations—along with other religious holidays—were prohibited as a result of the Soviet anti-religious campaign. The League of Militant Atheists encouraged school pupils to campaign against Christmas traditions, among them being the Christmas tree, as well as other Christian holidays, including Easter; the League established an anti-religious holiday to be the 31st of each month as a replacement. With the Christmas tree being prohibited in accordance with Soviet anti-religious legislation, people supplanted the former Christmas custom with New Year's trees. 
In 1935, the tree was brought back as New Year tree and became a secular, not a religious holiday.", "title": "Religious issues" }, { "paragraph_id": 67, "text": "Pope John Paul II introduced the Christmas tree custom to the Vatican in 1982. Although at first disapproved of by some as out of place at the centre of the Roman Catholic Church, the Vatican Christmas Tree has become an integral part of the Vatican Christmas celebrations, and in 2005 Pope Benedict XVI spoke of it as part of the normal Christmas decorations in Catholic homes. In 2004, Pope John Paul called the Christmas tree a symbol of Christ. This very ancient custom, he said, exalts the value of life, as in winter what is evergreen becomes a sign of undying life, and it reminds Christians of the \"tree of life\", an image of Christ, the supreme gift of God to humanity. In the previous year he said: \"Beside the crib, the Christmas tree, with its twinkling lights, reminds us that with the birth of Jesus the tree of life has blossomed anew in the desert of humanity. The crib and the tree: precious symbols, which hand down in time the true meaning of Christmas.\" The Catholic Church's official Book of Blessings has a service for the blessing of the Christmas tree in a home. The Episcopal Church in The Anglican Family Prayer Book, which has the imprimatur of The Rt. Rev. Catherine S. Roskam of the Anglican Communion, has long had a ritual titled Blessing of a Christmas Tree, as well as Blessing of a Crèche, for use in the church and the home; family services and public liturgies for the blessing of Christmas trees are common in other Christian denominations as well.", "title": "Religious issues" }, { "paragraph_id": 68, "text": "Chrismon trees, which find their origin in the Lutheran Christian tradition though now used in many Christian denominations such as the Catholic Church and Methodist Church, are used to decorate churches during the liturgical season of Advent; during the period of Christmastide, Christian churches display the traditional Christmas tree in their sanctuaries.", "title": "Religious issues" }, { "paragraph_id": 69, "text": "In 2005, the city of Boston renamed the spruce tree used to decorate the Boston Common a \"Holiday Tree\" rather than a \"Christmas Tree\". The name change was reversed after the city was threatened with several lawsuits.", "title": "Religious issues" } ]
A Christmas tree is a decorated tree, usually an evergreen conifer, such as a spruce, pine or fir, or an artificial tree of similar appearance, associated with the celebration of Christmas. The custom was developed in Central Europe and the Baltic states, particularly Estonia, Germany and Livonia, where Protestant Christians brought decorated trees into their homes. The tree was traditionally decorated with "roses made of colored paper, apples, wafers, tinsel, [and] sweetmeats". Moravian Christians began to illuminate Christmas trees with candles, which were often replaced by Christmas lights after the advent of electrification. Today, there is a wide variety of traditional and modern ornaments, such as garlands, baubles, tinsel, and candy canes. An angel or star might be placed at the top of the tree to represent the Angel Gabriel or the Star of Bethlehem, respectively, from the Nativity. Edible items such as gingerbread, chocolate, and other sweets are also popular and are tied to or hung from the tree's branches with ribbons. The Christmas tree has been historically regarded as a custom of the Lutheran Churches and only in 1982 did the Catholic Church erect the Vatican Christmas Tree. In the Western Christian tradition, Christmas trees are variously erected on days such as the first day of Advent or even as late as Christmas Eve depending on the country; customs of the same faith hold that the two traditional days when Christmas decorations, such as the Christmas tree, are removed are Twelfth Night and, if they are not taken down on that day, Candlemas, the latter of which ends the Christmas-Epiphany season in some denominations. The Christmas tree is sometimes compared with the "Yule-tree", especially in discussions of its folkloric origins.
2002-02-25T15:51:15Z
2023-12-27T03:49:19Z
[ "Template:Efn", "Template:Christmas", "Template:Christmas trees", "Template:Short description", "Template:Webarchive", "Template:Commons category-inline", "Template:Convert", "Template:Transliteration", "Template:Self-published source", "Template:Authority control", "Template:C.", "Template:Blockquote", "Template:Nbsp", "Template:Notelist", "Template:Dead link", "Template:Lang", "Template:Main article", "Template:See also", "Template:Cite news", "Template:Bibleverse", "Template:Quote", "Template:Self-published inline", "Template:Cite journal", "Template:Main", "Template:More citations needed section", "Template:Div col end", "Template:Use dmy dates", "Template:Multiple image", "Template:Cvt", "Template:Div col", "Template:Cite web", "Template:Cite book", "Template:Pp-vandalism", "Template:Other uses", "Template:Lang-ru", "Template:Cite encyclopedia", "Template:Use American English", "Template:Portal", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Christmas_tree
7,772
Carrier battle group
A carrier battle group (CVBG) is a naval fleet consisting of an aircraft carrier, the capital ship, and a large number of escorts, which together define the group. The CV in CVBG is the United States Navy hull classification code for an aircraft carrier. The first naval task forces built around carriers appeared just prior to and during World War II. The Imperial Japanese Navy (IJN) was the first to assemble many carriers into a single task force, known as the Kido Butai. This task force was used with devastating effect in the Japanese attack on Pearl Harbor. The Kido Butai operated as the IJN's main carrier battle group until four of its carriers were sunk at the Battle of Midway. In contrast, the United States Navy deployed its large carriers in separate formations, with each carrier assigned its own cruiser and destroyer escorts. These single-carrier formations would often be paired or grouped together for certain assignments, most notably the Battles of the Coral Sea and Midway. By 1943, however, large numbers of fleet and light carriers became available, which required larger formations of three or four carriers. These groups eventually formed the Fast Carrier Task Force, which became the primary battle unit of the U.S. Third and Fifth Fleets. With the construction of the large "supercarriers" of the Cold War era, the practice of operating each carrier in a single formation was revived. During the Cold War, the main role of the CVBG in case of conflict with the Soviet Union would have been to protect Atlantic supply routes between the United States and its NATO allies in Europe, while the role of the Soviet Navy would have been to interrupt these sea lanes, a fundamentally easier task. Because the Soviet Union had no large carriers of its own, a situation of dueling aircraft carriers would have been unlikely. However, a primary mission of the Soviet Navy's attack submarines was to track every allied battle group and, on the outbreak of hostilities, sink the carriers. Understanding this threat, the CVBG expended enormous resources in its own anti-submarine warfare mission. In the late 20th and early 21st centuries, most deployments of carrier battle groups by the United States and other Western nations have been in situations where their use was uncontested by comparable forces. During the Cold War, an important battle scenario was an attack against a CVBG using numerous antiship missiles. British and French carrier battle groups were involved in the 1956 Suez Crisis. During the Indo-Pakistani War of 1971, India used its carrier strike group centered on INS Vikrant to impose a naval blockade on East Pakistan. Air strikes were carried out initially on shipping in the harbors of Chittagong and Cox's Bazar, sinking or incapacitating most ships there. Further strikes were carried out on Cox's Bazar from 60 nautical miles (110 km) offshore. On the evening of 4 December, the air group again struck Chittagong harbor. Later strikes targeted Khulna and the Port of Mongla. Air strikes continued until 10 December 1971. The first attempted use of anti-ship missiles against a carrier battle group was part of Argentina's efforts against British armed forces during the Falklands War. This was the last conflict so far in which opposing belligerents employed aircraft carriers, although Argentina made little use of its sole carrier, ARA Veinticinco de Mayo, which was originally built in the United Kingdom as HMS Venerable and later served with the Royal Netherlands Navy (1948–1968).
The United States Sixth Fleet assembled a force of three carrier battle groups and a battleship during the Lebanese Civil War in 1983. Daily reconnaissance flights were flown over the Bekaa Valley, and a strike was flown against targets in the area, resulting in the loss of an A-6 Intruder and an A-7 Corsair. Carrier battle groups routinely operated in the Gulf of Sidra inside the "Line of Death" proclaimed by Libya, resulting in aerial engagements in 1981, 1986 and 1989 between U.S. Navy Tomcats and Libyan Su-22 aircraft, SA-5 surface-to-air missiles and MiG-23 fighters. During the 1986 clashes, three carrier battle groups deployed to the Gulf of Sidra and ultimately two of them conducted strikes against Libya in Operation El Dorado Canyon. During the international military intervention in the 2011 Libyan civil war, the French Navy deployed its aircraft carrier, Charles de Gaulle, off Libya. The Charles de Gaulle was accompanied by several frigates, such as Forbin, Dupleix and Aconit, the replenishment tanker Meuse and two Rubis-class nuclear attack submarines. China plans to set up several carrier battle groups in the future. At present China's two aircraft carriers, the Liaoning and Shandong, use Type 055 destroyers for area air defense and anti-submarine warfare, Type 052C or Type 052D destroyers for air defense, Type 054A frigates for anti-submarine and anti-ship warfare, 1–2 Type 093 nuclear attack submarines, and 1 Type 901 supply ship. China's third carrier, the Fujian, was launched in June 2022 and is expected to enter active service in the mid-2020s; a nuclear-powered fourth carrier is planned for construction and expected to be completed by the late 2020s. The Type 055, a new and larger class of air defense destroyer, has also entered service. The only serving French carrier is the Charles de Gaulle, which also serves as the flagship of the Marine Nationale. The carrier battle group of the Force d'Action Navale is known as the Groupe Aéronaval (GAN) and is usually composed, in addition to the aircraft carrier, of: This group is commanded by a rear admiral (contre-amiral, in French) on board the aircraft carrier. The commanding officer of the air group (usually a capitaine de frégate—equivalent to commander) is subordinate to the commanding officer of the aircraft carrier, a senior captain. The escort destroyers (called frigates in the French denomination) are commanded by more junior captains. France also operates three Mistral-class amphibious assault ships. While incapable of operating fixed-wing aircraft, they function as helicopter carriers and form the backbone of France's amphibious force. These ships are typically accompanied by the same types of escorts the Charles de Gaulle uses. The Indian Navy has operated all types of aircraft carriers, including the CATOBAR-configured Vikrant, the STOVL-configured Viraat, and the STOBAR-configured Vikramaditya and Vikrant (2013), along with carrier battle groups centered on them. The Indian Navy has been operating carrier battle groups since 1961, with its first carrier battle group formed around the now decommissioned INS Vikrant. INS Viraat was an updated Centaur-class light carrier originally built for the Royal Navy as HMS Hermes, which was laid down in 1944 and commissioned in 1959. It was purchased by India in May 1987, and was decommissioned in March 2017. India commissioned INS Vikramaditya in 2013, followed by the new INS Vikrant in 2022. INS Vikramaditya is the modified Kiev-class aircraft carrier Admiral Gorshkov, while INS Vikrant is the first indigenous aircraft carrier built in India.
India plans to have three carrier battle groups by 2035, each centered on Vikrant, Vikramaditya and Vishal, another planned carrier. As of 2023, the Indian Navy operates two carrier battle groups, centered on INS Vikramaditya and INS Vikrant. The Indian Navy's carrier battle group centered on Viraat consisted of two destroyers, usually of the Delhi class (previously Rajputs were used), two or more frigates, usually of the Brahmaputra, Godavari or Nilgiri classes, and one support ship. The navy's new carrier battle group centered on Vikramaditya and Vikrant consists of the modern Kolkata-class guided missile destroyers, Shivalik- and Talwar-class guided missile frigates, Kamorta-class anti-submarine warfare corvettes and new tankers. INS Chakra is expected to fill the sub-surface component. The CVS-ASW (aircraft carrier with anti-submarine warfare) Giuseppe Garibaldi is Italy's first carrier. The battle group based in Taranto, called COMFORAL, is formed by the carrier Giuseppe Garibaldi, two Durand de la Penne-class destroyers, two support ships (Etna and Elettra), and three amphibious/support ships (San Giusto, San Marco and San Giorgio). After 2010, the Italian battle group was planned to be formed around the new aircraft carrier Cavour, 5–6 new warships (including Horizon-class destroyers and FREMM frigates), one new support ship, some minehunters and new submarines, with COMFORAL becoming a reserve group. Admiral Kuznetsov has been observed sailing together with a Kirov-class battlecruiser (CBGN), Slava-class cruiser (CG), Sovremenny-class destroyer (ASuW), Udaloy-class destroyer (ASW) and Krivak I/II FFG (ASW). These escorts, especially the heavily armed Kirov-class battlecruiser, use advanced sensors and carry a variety of weaponry. During Admiral Kuznetsov's deployment to Syria in November 2016 on her first combat tour, the carrier was escorted by a pair of Udaloy-class destroyers and a Kirov-class battlecruiser en route, while additional Russian Navy warships met her off Syria. Admiral Kuznetsov is designed specifically to sail alone and carries greater firepower than her U.S. counterparts. This includes 12x SS-N-19 'Shipwreck' (long range, high speed, sea-skimming) SSMs, 24x VLS units loaded with 192 SA-N-9 'Gauntlet' SAMs, 8x Kashtan CIWS with dual 30 mm guns, and 8x AK-630 CIWS. Compared to the 4x Phalanx CIWS and 4x Sea Sparrow launchers (each with 8 missiles) carried by the Nimitz class, Admiral Kuznetsov is well armed for both air-defence and offensive operations against hostile shipping. As one of the pioneers of aircraft carriers, the Royal Navy has maintained a carrier strike capability since the commissioning of HMS Argus (I49) in 1918. However, the capability was temporarily lost between 2010 and 2018, following the retirement of the Invincible-class aircraft carriers and Harrier GR9s. During this period, the Royal Navy worked to regenerate its carrier strike capability based on the Carrier-Enabled Power Projection (CEPP) concept by ordering two Queen Elizabeth-class aircraft carriers and the F-35B Lightning to operate from them. To maintain its skills and experience, the Royal Navy embedded personnel and ships with partner navies, in particular the United States Navy. In 2017, the first Queen Elizabeth-class aircraft carrier, HMS Queen Elizabeth, entered service, followed by her sister ship HMS Prince of Wales in 2019. The first carrier strike group took to sea in September 2019 as part of an exercise known as Westlant 19.
HMS Queen Elizabeth and her air group of F-35B Lightning jets operated alongside two surface escorts and a fleet tanker off the east coast of the United States. The exercise was in preparation for the first operational deployment in 2021, which was expected to involve HMS Queen Elizabeth alongside four Royal Navy escorts, two support ships and a submarine. Under current plans, a Royal Navy carrier strike group will typically comprise a Queen Elizabeth-class aircraft carrier, two air defence destroyers, two anti-submarine frigates, a submarine, a solid stores ship and a fleet tanker; however, the composition varies depending on the operational tasking. While Queen Elizabeth's initial deployment will be as part of an all-British carrier group, it is envisaged in the longer term that the UK's carriers will usually form the centre of a multi-national operation – in 2018, it was announced that the British and Dutch governments had come to an agreement that would see escort vessels of the Royal Netherlands Navy operating as part of the UK Carrier Strike Group. Command of the UK carrier strike group is the responsibility of Commander United Kingdom Carrier Strike Group. A June 2020 National Audit Office report, however, provided a critical review of the forthcoming Carrier Strike Group, especially noting the delay to the Crowsnest system. In modern United States Navy carrier air operations, a carrier strike group (CSG) normally consists of 1 aircraft carrier, 1 guided missile cruiser (for air defense), 2 LAMPS-capable warships (focusing on anti-submarine and surface warfare), and 1–2 anti-submarine destroyers or frigates. The large number of CSGs used by the United States reflects, in part, a division of roles and missions allotted during the Cold War, in which the United States assumed primary responsibility for blue-water operations and for safeguarding supply lines between the United States and Europe, while the NATO allies assumed responsibility for less costly brown- and green-water operations. The CSG has replaced the old term of carrier battle group (CVBG or CARBATGRU). The US Navy maintains 11 carrier strike groups, 9 of which are based in the United States and one forward deployed in Yokosuka, Japan. An expeditionary strike group is composed of an amphibious assault ship (LHA/LHD), a dock landing ship (LSD), an amphibious transport dock (LPD), a Marine expeditionary unit, AV-8B Harrier II or, more recently, Lockheed Martin F-35B Lightning II aircraft, and CH-53E Super Stallion and CH-46E Sea Knight helicopters or, more recently, MV-22B tiltrotors. Cruisers, destroyers and attack submarines are deployed with either an Expeditionary Strike Group or a Carrier Strike Group. During the period when the American navy recommissioned all four of its Iowa-class battleships, it sometimes used a similar formation centered on a battleship, referred to as a battleship battle group. It was alternatively referred to as a surface action group. The battleship battle group typically consisted of one modernized battleship, one Ticonderoga-class cruiser, one Kidd-class destroyer or Arleigh Burke-class destroyer, one Spruance-class destroyer, three Oliver Hazard Perry-class frigates and one auxiliary ship such as a replenishment oiler. A surface action group is "a temporary or standing organization of combatant ships, other than carriers, tailored for a specific tactical mission". Since its origins, the viability of the carrier battle group has been dependent on its ability to remain at sea for extended periods.
Specialized ships were developed to provide underway replenishment of fuel (for the carrier and its aircraft), ordnance, and other supplies necessary to sustain operations. Carrier battle groups devote a great deal of planning to conducting underway replenishment efficiently, minimizing the time spent replenishing. The carrier can also provide replenishment on a limited basis to its escorts, but typically a replenishment ship such as a fast combat support ship (AOE) or replenishment oiler (AOR) pulls alongside a carrier and conducts simultaneous operations with the carrier on its port side and one of the escorts on its starboard side. The advent of the helicopter provides the ability to speed replenishment by lifting supplies at the same time that fueling hoses and lines are delivering other goods. There is debate in naval warfare circles as to the viability of carrier battle groups in 21st century naval warfare. Proponents of the CVBG argue that it provides unmatched firepower and force projection capabilities. Opponents argue that CVBGs are increasingly vulnerable to arsenal ships and cruise missiles, especially those with supersonic or even hypersonic flight and the ability to perform radical trajectory changes to avoid anti-missile systems. It is also noted that CVBGs were designed for Cold War scenarios, and are less useful in establishing control of areas close to shore. It is argued, however, that such missiles and arsenal ships pose no serious threat, as they would be defeated by continuing improvements in ship defenses such as Cooperative Engagement Capability (CEC), directed-energy weapon (DEW) technology and missile technology. Additionally, carrier battle groups have proved vulnerable to the diesel-electric submarines operated by many smaller navies. Examples include the German U24 of the conventional Type 206 class, which in 2001 "sank" USS Enterprise during the exercise JTFEX 01-2 in the Caribbean Sea by firing flares and taking a photograph through its periscope, and the Swedish Gotland, which managed the same feat in 2006 during JTFEX 06-2 by penetrating the defensive measures of Carrier Strike Group 7 undetected and snapping several pictures of USS Ronald Reagan. However, carriers have been called upon to be first responders even when conventional land-based aircraft were employed. During Desert Shield, the U.S. Navy sortied additional carriers to augment the on-station assets, eventually maintaining six carriers for Desert Storm. Although the U.S. Air Force sent fighters such as the F-16 to the theater in Desert Shield, they had to carry bombs with them as no stores were in place for sustained operations, whereas the carriers arrived on scene with full magazines and had support ships to allow them to conduct strikes indefinitely. The Global War on Terror has shown the flexibility and responsiveness of the carrier on multiple occasions when land-based air was not feasible or able to respond in a timely fashion. After the 11 September terrorist attacks on the U.S., carriers immediately headed to the Arabian Sea to support Operation Enduring Freedom and took up station, building to a force of three carriers. Their steaming location was closer to the targets in Afghanistan than any land-based assets and thereby more responsive. USS Kitty Hawk was adapted to be a support base for special operations helicopters. Carriers were used again in Operation Iraqi Freedom, on occasion even providing aircraft to be based ashore, and have done so periodically when special capabilities are needed.
This precedent was established during World War II in the Battle of Guadalcanal. Regardless of the debate over viability, the United States has made a major investment in the development of a new carrier class—the Gerald R. Ford-class aircraft carriers (formerly designated CVN-X, or the X Carrier)—to replace the existing Nimitz-class aircraft carriers. The new Ford-class carriers are designed to be modular and are easily adaptable as technology and equipment needed on board changes.
[ { "paragraph_id": 0, "text": "A carrier battle group (CVBG) is a naval fleet consisting of an aircraft carrier capital ship and its large number of escorts, together defining the group. The CV in CVBG is the United States Navy hull classification code for an aircraft carrier.", "title": "" }, { "paragraph_id": 1, "text": "The first naval task forces built around carriers appeared just prior to and during World War II. The Imperial Japanese Navy (IJN) was the first to assemble many carriers into a single task force, known as the Kido Butai. This task force was used with devastating effect in the Japanese attack on Pearl Harbor. The Kido Butai operated as the IJN's main carrier battle group until four of its carriers were sunk at the Battle of Midway. In contrast, the United States Navy deployed its large carriers in separate formations, with each carrier assigned its own cruiser and destroyer escorts. These single-carrier formations would often be paired or grouped together for certain assignments, most notably the Battle of the Coral Sea and Midway. By 1943, however, large numbers of fleet and light carriers became available, which required larger formations of three or four carriers. These groups eventually formed the Fast Carrier Task Force, which became the primary battle unit of the U.S. Third and Fifth Fleets.", "title": "" }, { "paragraph_id": 2, "text": "With the construction of the large \"supercarriers\" of the Cold War era, the practice of operating each carrier in a single formation was revived. During the Cold War, the main role of the CVBG in case of conflict with the Soviet Union would have been to protect Atlantic supply routes between the United States and its NATO allies in Europe, while the role of the Soviet Navy would have been to interrupt these sea lanes, a fundamentally easier task. Because the Soviet Union had no large carriers of its own, a situation of dueling aircraft carriers would have been unlikely. However, a primary mission of the Soviet Navy's attack submarines was to track every allied battle group and, on the outbreak of hostilities, sink the carriers. Understanding this threat, the CVBG expended enormous resources in its own anti-submarine warfare mission.", "title": "" }, { "paragraph_id": 3, "text": "In the late 20th and early 21st centuries, most uses of carrier battle groups by the United States as well as that of other Western nations have been in situations where their use has been uncontested by other comparable forces. During the Cold War, an important battle scenario was an attack against a CVBG using numerous antiship missiles.", "title": "Carrier battle groups in crises" }, { "paragraph_id": 4, "text": "British and French carrier battle groups were involved in the 1956 Suez Crisis.", "title": "Carrier battle groups in crises" }, { "paragraph_id": 5, "text": "During the Indo-Pakistani War of 1971, India used its carrier strike group centered on INS Vikrant to impose a naval blockade on East Pakistan. Air strikes were carried out initially on shipping in the harbors of Chittagong and Cox's Bazar, sinking or incapacitating most ships there. Further strikes were carried out on Cox's Bazar from 60 nautical miles (110 km) offshore. On the evening of 4 December, the air group again struck Chittagong harbor. Later strikes targeted Khulna and the Port of Mongla. 
Air strikes continued until 10 December 1971.", "title": "Carrier battle groups in crises" }, { "paragraph_id": 6, "text": "The first attempted use of anti-ship missiles against a carrier battle group was part of Argentina's efforts against British armed forces during the Falklands War. This was the last conflict so far in which opposing belligerents employed aircraft carriers, although Argentina made little use of its sole carrier, ARA Veinticinco de Mayo, which was originally built in the United Kingdom as HMS Venerable and later served with the Royal Netherlands Navy (1948–1968).", "title": "Carrier battle groups in crises" }, { "paragraph_id": 7, "text": "The United States Sixth Fleet assembled a force of three carrier battle groups and a battleship during the Lebanese Civil War in 1983. Daily reconnaissance flights were flown over the Bekaa Valley, and a strike was flown against targets in the area, resulting in the loss of an A-6 Intruder and an A-7 Corsair.", "title": "Carrier battle groups in crises" }, { "paragraph_id": 8, "text": "Carrier battle groups routinely operated in the Gulf of Sidra inside the \"Line of Death\" proclaimed by Libya, resulting in aerial engagements in 1981, 1986 and 1989 between U.S. Navy Tomcats and Libyan Su-22 aircraft, SA-5 surface-to-air missiles and MiG-23 fighters. During the 1986 clashes, three carrier battle groups deployed to the Gulf of Sidra and ultimately two of them conducted strikes against Libya in Operation El Dorado Canyon.", "title": "Carrier battle groups in crises" }, { "paragraph_id": 9, "text": "During the international military intervention in the 2011 Libyan civil war, the French Navy deployed its aircraft carrier, Charles de Gaulle, off Libya. The Charles de Gaulle was accompanied by several frigates, such as Forbin, Dupleix and Aconit, the replenishment tanker Meuse and two Rubis-class nuclear attack submarines.", "title": "Carrier battle groups in crises" }, { "paragraph_id": 10, "text": "China plans to set up several carrier battle groups in the future. At present China's two aircraft carriers, the Liaoning and Shandong, use Type 055 destroyers for area air defense and anti-submarine warfare, Type 052C or Type 052D destroyers for air defense, Type 054A frigates for anti-submarine and anti-ship warfare, 1–2 Type 093 nuclear attack submarines, and 1 Type 901 supply ship. China's third carrier, the Fujian, was launched in June 2022 and is expected to enter active service in the mid-2020s; a nuclear-powered fourth carrier is planned for construction and expected to be completed by the late 2020s. The Type 055, a new and larger class of air defense destroyer, has also entered service.", "title": "Applications" }, { "paragraph_id": 11, "text": "The only serving French carrier is the Charles de Gaulle, which also serves as the flagship of the Marine Nationale. The carrier battle group of the Force d'Action Navale is known as the Groupe Aéronaval (GAN) and is usually composed, in addition to the aircraft carrier, of:", "title": "Applications" }, { "paragraph_id": 12, "text": "This group is commanded by a rear admiral (contre-amiral, in French) on board the aircraft carrier. The commanding officer of the air group (usually a capitaine de frégate—equivalent to commander) is subordinate to the commanding officer of the aircraft carrier, a senior captain.
The escort destroyers (called frigates in the French denomination) are commanded by more junior captains.", "title": "Applications" }, { "paragraph_id": 13, "text": "France also operates three Mistral-class amphibious assault ships. While incapable of operating fixed-wing aircraft, they function as helicopter carriers and form the backbone of France's amphibious force. These ships are typically accompanied by the same types of escorts the Charles de Gaulle uses.", "title": "Applications" }, { "paragraph_id": 14, "text": "The Indian Navy has operated all types of aircraft carriers, including the CATOBAR-configured Vikrant, the STOVL-configured Viraat, and the STOBAR-configured Vikramaditya and Vikrant (2013), along with carrier battle groups centered on them. The Indian Navy has been operating carrier battle groups since 1961, with its first carrier battle group formed around the now decommissioned INS Vikrant. INS Viraat was an updated Centaur-class light carrier originally built for the Royal Navy as HMS Hermes, which was laid down in 1944 and commissioned in 1959. It was purchased by India in May 1987, and was decommissioned in March 2017. India commissioned INS Vikramaditya in 2013, followed by the new INS Vikrant in 2022. INS Vikramaditya is the modified Kiev-class aircraft carrier Admiral Gorshkov, while INS Vikrant is the first indigenous aircraft carrier built in India. India plans to have three carrier battle groups by 2035, each centered on Vikrant, Vikramaditya and Vishal, another planned carrier. As of 2023, the Indian Navy operates two carrier battle groups, centered on INS Vikramaditya and INS Vikrant.", "title": "Applications" }, { "paragraph_id": 15, "text": "The Indian Navy's carrier battle group centered on Viraat consisted of two destroyers, usually of the Delhi class (previously Rajputs were used), two or more frigates, usually of the Brahmaputra, Godavari or Nilgiri classes, and one support ship.", "title": "Applications" }, { "paragraph_id": 16, "text": "The navy's new carrier battle group centered on Vikramaditya and Vikrant consists of the modern Kolkata-class guided missile destroyers, Shivalik- and Talwar-class guided missile frigates, Kamorta-class anti-submarine warfare corvettes and new tankers. INS Chakra is expected to fill the sub-surface component.", "title": "Applications" }, { "paragraph_id": 17, "text": "The CVS-ASW (aircraft carrier with anti-submarine warfare) Giuseppe Garibaldi is Italy's first carrier. The battle group based in Taranto, called COMFORAL, is formed by the carrier Giuseppe Garibaldi, two Durand de la Penne-class destroyers, two support ships (Etna and Elettra), and three amphibious/support ships (San Giusto, San Marco and San Giorgio).", "title": "Applications" }, { "paragraph_id": 18, "text": "After 2010, the Italian battle group was planned to be formed around the new aircraft carrier Cavour, 5–6 new warships (including Horizon-class destroyers and FREMM frigates), one new support ship, some minehunters and new submarines, with COMFORAL becoming a reserve group.", "title": "Applications" }, { "paragraph_id": 19, "text": "Admiral Kuznetsov has been observed sailing together with a Kirov-class battlecruiser (CBGN), Slava-class cruiser (CG), Sovremenny-class destroyer (ASuW), Udaloy-class destroyer (ASW) and Krivak I/II FFG (ASW). These escorts, especially the heavily armed Kirov-class battlecruiser, use advanced sensors and carry a variety of weaponry.
During Admiral Kuznetsov's deployment to Syria in November 2016 on her first combat tour, the carrier was escorted by a pair of Udaloy-class destroyers and a Kirov-class battlecruiser en route, while additional Russian Navy warships met her off Syria.", "title": "Applications" }, { "paragraph_id": 20, "text": "Admiral Kuznetsov is designed specifically to sail alone and carries greater firepower than her U.S. counterparts. This includes 12x SS-N-19 'Shipwreck' (long range, high speed, sea-skimming) SSMs, 24x VLS units loaded with 192 SA-N-9 'Gauntlet' SAMs, 8x Kashtan CIWS with dual 30 mm guns, and 8x AK-630 CIWS. Compared to the 4x Phalanx CIWS and 4x Sea Sparrow launchers (each with 8 missiles) carried by the Nimitz class, Admiral Kuznetsov is well armed for both air-defence and offensive operations against hostile shipping.", "title": "Applications" }, { "paragraph_id": 21, "text": "As one of the pioneers of aircraft carriers, the Royal Navy has maintained a carrier strike capability since the commissioning of HMS Argus (I49) in 1918. However, the capability was temporarily lost between 2010 and 2018, following the retirement of the Invincible-class aircraft carriers and Harrier GR9s. During this period, the Royal Navy worked to regenerate its carrier strike capability based on the Carrier-Enabled Power Projection (CEPP) concept by ordering two Queen Elizabeth-class aircraft carriers and the F-35B Lightning to operate from them. To maintain its skills and experience, the Royal Navy embedded personnel and ships with partner navies, in particular the United States Navy.", "title": "Applications" }, { "paragraph_id": 22, "text": "In 2017, the first Queen Elizabeth-class aircraft carrier, HMS Queen Elizabeth, entered service, followed by her sister ship HMS Prince of Wales in 2019. The first carrier strike group took to sea in September 2019 as part of an exercise known as Westlant 19. HMS Queen Elizabeth and her air group of F-35B Lightning jets operated alongside two surface escorts and a fleet tanker off the east coast of the United States. The exercise was in preparation for the first operational deployment in 2021, which was expected to involve HMS Queen Elizabeth alongside four Royal Navy escorts, two support ships and a submarine.", "title": "Applications" }, { "paragraph_id": 23, "text": "Under current plans, a Royal Navy carrier strike group will typically comprise a Queen Elizabeth-class aircraft carrier, two air defence destroyers, two anti-submarine frigates, a submarine, a solid stores ship and a fleet tanker; however, the composition varies depending on the operational tasking. While Queen Elizabeth's initial deployment will be as part of an all-British carrier group, it is envisaged in the longer term that the UK's carriers will usually form the centre of a multi-national operation – in 2018, it was announced that the British and Dutch governments had come to an agreement that would see escort vessels of the Royal Netherlands Navy operating as part of the UK Carrier Strike Group. Command of the UK carrier strike group is the responsibility of Commander United Kingdom Carrier Strike Group.
A June 2020 National Audit Office report, however, provided a critical review of the forthcoming Carrier Strike Group, especially noting the delay to the Crowsnest system.", "title": "Applications" }, { "paragraph_id": 24, "text": "In modern United States Navy carrier air operations, a carrier strike group (CSG) normally consists of 1 aircraft carrier, 1 guided missile cruiser (for air defense), 2 LAMPS-capable warships (focusing on anti-submarine and surface warfare), and 1–2 anti-submarine destroyers or frigates. The large number of CSGs used by the United States reflects, in part, a division of roles and missions allotted during the Cold War, in which the United States assumed primary responsibility for blue-water operations and for safeguarding supply lines between the United States and Europe, while the NATO allies assumed responsibility for less costly brown- and green-water operations. The CSG has replaced the old term of carrier battle group (CVBG or CARBATGRU). The US Navy maintains 11 carrier strike groups, 9 of which are based in the United States and one forward deployed in Yokosuka, Japan.", "title": "Applications" }, { "paragraph_id": 25, "text": "An expeditionary strike group is composed of an amphibious assault ship (LHA/LHD), a dock landing ship (LSD), an amphibious transport dock (LPD), a Marine expeditionary unit, AV-8B Harrier II or, more recently, Lockheed Martin F-35B Lightning II aircraft, and CH-53E Super Stallion and CH-46E Sea Knight helicopters or, more recently, MV-22B tiltrotors. Cruisers, destroyers and attack submarines are deployed with either an Expeditionary Strike Group or a Carrier Strike Group.", "title": "Applications" }, { "paragraph_id": 26, "text": "During the period when the American navy recommissioned all four of its Iowa-class battleships, it sometimes used a similar formation centered on a battleship, referred to as a battleship battle group. It was alternatively referred to as a surface action group.", "title": "Applications" }, { "paragraph_id": 27, "text": "The battleship battle group typically consisted of one modernized battleship, one Ticonderoga-class cruiser, one Kidd-class destroyer or Arleigh Burke-class destroyer, one Spruance-class destroyer, three Oliver Hazard Perry-class frigates and one auxiliary ship such as a replenishment oiler.", "title": "Applications" }, { "paragraph_id": 28, "text": "A surface action group is \"a temporary or standing organization of combatant ships, other than carriers, tailored for a specific tactical mission\".", "title": "Applications" }, { "paragraph_id": 29, "text": "Since its origins, the viability of the carrier battle group has been dependent on its ability to remain at sea for extended periods. Specialized ships were developed to provide underway replenishment of fuel (for the carrier and its aircraft), ordnance, and other supplies necessary to sustain operations. Carrier battle groups devote a great deal of planning to conducting underway replenishment efficiently, minimizing the time spent replenishing. The carrier can also provide replenishment on a limited basis to its escorts, but typically a replenishment ship such as a fast combat support ship (AOE) or replenishment oiler (AOR) pulls alongside a carrier and conducts simultaneous operations with the carrier on its port side and one of the escorts on its starboard side.
The advent of the helicopter provides the ability to speed replenishment by lifting supplies at the same time that fueling hoses and lines are delivering other goods.", "title": "Underway replenishment" }, { "paragraph_id": 30, "text": "There is debate in naval warfare circles as to the viability of carrier battle groups in 21st century naval warfare. Proponents of the CVBG argue that it provides unmatched firepower and force projection capabilities. Opponents argue that CVBGs are increasingly vulnerable to arsenal ships and cruise missiles, especially those with supersonic or even hypersonic flight and the ability to perform radical trajectory changes to avoid anti-missile systems. It is also noted that CVBGs were designed for Cold War scenarios, and are less useful in establishing control of areas close to shore. It is argued, however, that such missiles and arsenal ships pose no serious threat, as they would be defeated by continuing improvements in ship defenses such as Cooperative Engagement Capability (CEC), directed-energy weapon (DEW) technology and missile technology.", "title": "Debate on future viability" }, { "paragraph_id": 31, "text": "Additionally, carrier battle groups have proved vulnerable to the diesel-electric submarines operated by many smaller navies. Examples include the German U24 of the conventional Type 206 class, which in 2001 \"sank\" USS Enterprise during the exercise JTFEX 01-2 in the Caribbean Sea by firing flares and taking a photograph through its periscope, and the Swedish Gotland, which managed the same feat in 2006 during JTFEX 06-2 by penetrating the defensive measures of Carrier Strike Group 7 undetected and snapping several pictures of USS Ronald Reagan.", "title": "Debate on future viability" }, { "paragraph_id": 32, "text": "However, carriers have been called upon to be first responders even when conventional land-based aircraft were employed. During Desert Shield, the U.S. Navy sortied additional carriers to augment the on-station assets, eventually maintaining six carriers for Desert Storm. Although the U.S. Air Force sent fighters such as the F-16 to the theater in Desert Shield, they had to carry bombs with them as no stores were in place for sustained operations, whereas the carriers arrived on scene with full magazines and had support ships to allow them to conduct strikes indefinitely.", "title": "Debate on future viability" }, { "paragraph_id": 33, "text": "The Global War on Terror has shown the flexibility and responsiveness of the carrier on multiple occasions when land-based air was not feasible or able to respond in a timely fashion. After the 11 September terrorist attacks on the U.S., carriers immediately headed to the Arabian Sea to support Operation Enduring Freedom and took up station, building to a force of three carriers. Their steaming location was closer to the targets in Afghanistan than any land-based assets and thereby more responsive. USS Kitty Hawk was adapted to be a support base for special operations helicopters. Carriers were used again in Operation Iraqi Freedom, on occasion even providing aircraft to be based ashore, and have done so periodically when special capabilities are needed. This precedent was established during World War II in the Battle of Guadalcanal.", "title": "Debate on future viability" }, { "paragraph_id": 34, "text": "Regardless of the debate over viability, the United States has made a major investment in the development of a new carrier class—the Gerald R.
Ford-class aircraft carriers (formerly designated CVN-X, or the X Carrier)—to replace the existing Nimitz-class aircraft carriers. The new Ford-class carriers are designed to be modular and are easily adaptable as technology and equipment needed on board changes.", "title": "Debate on future viability" } ]
A carrier battle group (CVBG) is a naval fleet consisting of an aircraft carrier, the capital ship, and a large number of escorts, which together define the group. The CV in CVBG is the United States Navy hull classification code for an aircraft carrier. The first naval task forces built around carriers appeared just prior to and during World War II. The Imperial Japanese Navy (IJN) was the first to assemble many carriers into a single task force, known as the Kido Butai. This task force was used with devastating effect in the Japanese attack on Pearl Harbor. The Kido Butai operated as the IJN's main carrier battle group until four of its carriers were sunk at the Battle of Midway. In contrast, the United States Navy deployed its large carriers in separate formations, with each carrier assigned its own cruiser and destroyer escorts. These single-carrier formations would often be paired or grouped together for certain assignments, most notably the Battles of the Coral Sea and Midway. By 1943, however, large numbers of fleet and light carriers became available, which required larger formations of three or four carriers. These groups eventually formed the Fast Carrier Task Force, which became the primary battle unit of the U.S. Third and Fifth Fleets. With the construction of the large "supercarriers" of the Cold War era, the practice of operating each carrier in a single formation was revived. During the Cold War, the main role of the CVBG in case of conflict with the Soviet Union would have been to protect Atlantic supply routes between the United States and its NATO allies in Europe, while the role of the Soviet Navy would have been to interrupt these sea lanes, a fundamentally easier task. Because the Soviet Union had no large carriers of its own, a situation of dueling aircraft carriers would have been unlikely. However, a primary mission of the Soviet Navy's attack submarines was to track every allied battle group and, on the outbreak of hostilities, sink the carriers. Understanding this threat, the CVBG expended enormous resources in its own anti-submarine warfare mission.
2002-01-15T16:10:27Z
2023-12-20T22:40:02Z
[ "Template:Short description", "Template:Naval units", "Template:Multiple image", "Template:Sclass", "Template:When", "Template:Lang", "Template:Reflist", "Template:Cite journal", "Template:Use dmy dates", "Template:INS", "Template:Ship", "Template:HMS", "Template:Dead link", "Template:Cbignore", "Template:About", "Template:More citations needed", "Template:USS", "Template:Update needed", "Template:'", "Template:Main", "Template:Clarify timeframe", "Template:Cite web", "Template:Cite news", "Template:Cite book", "Template:Military and war" ]
https://en.wikipedia.org/wiki/Carrier_battle_group
7,773
Boeing Vertol CH-46 Sea Knight
The Boeing Vertol CH-46 Sea Knight is an American medium-lift tandem-rotor transport helicopter powered by twin turboshaft engines. It was designed by Vertol and manufactured by Boeing Vertol following Vertol's acquisition by Boeing. Development of the Sea Knight, which was originally designated by the firm as the Vertol Model 107, commenced during 1956. It was envisioned as a successor to the first generation of rotorcraft, such as the H-21 "Flying Banana", which had been powered by piston engines; in their place, the V-107 made use of the emergent turboshaft engine. On 22 April 1958, the V-107 prototype performed its maiden flight. During June 1958, the US Army awarded a contract for the construction of ten production-standard aircraft, designated the YHC-1A, based on the V-107, although this initial order was later cut to three YHC-1As. During 1961, the US Marine Corps (USMC), which had been studying its requirements for a medium-lift, twin-turbine cargo/troop assault helicopter, selected Boeing Vertol's Model 107M as the basis from which to manufacture a suitable rotorcraft to meet its needs. Known colloquially as the "Phrog" and formally as the "Sea Knight", it was operated across all US Marine Corps operational environments between its introduction during the Vietnam War and its frontline retirement during 2014. The Sea Knight was operated by the USMC to provide all-weather, day-or-night assault transport of combat troops, supplies and equipment until it was replaced by the MV-22 Osprey during the 2010s. The USMC also used the helicopter for combat support, search and rescue (SAR), casualty evacuation and Tactical Recovery of Aircraft and Personnel (TRAP). The Sea Knight also functioned as the US Navy's standard medium-lift utility helicopter prior to the type being phased out of service in favor of the MH-60S Knighthawk during the early 2000s. Several overseas operators acquired the rotorcraft as well. Canada operated the Sea Knight, designated CH-113; the type was used predominantly in the SAR role until 2004. Other export customers for the type included Japan, Sweden, and Saudi Arabia. The commercial version of the rotorcraft is the BV 107-II, commonly referred to simply as the "Vertol". During the 1940s and 1950s, American rotorcraft manufacturer Piasecki Helicopter emerged as a pioneering developer of tandem-rotor helicopters; perhaps the most famous of these was the piston-powered H-21 "Flying Banana", an early utility and transport helicopter. During 1955, Piasecki was officially renamed Vertol Corporation (standing for vertical take-off and landing); it was around this time that work commenced on the development of a new generation of tandem-rotor helicopter. During 1956, the new design received the internal company designation of Vertol Model 107, or simply V-107; this rotorcraft differed from its predecessors by harnessing the newly developed turboshaft engine instead of piston-based counterparts. That year, construction commenced on a prototype powered by a pair of Lycoming T53 turboshaft engines, each capable of producing 877 shp (640 kW). On 22 April 1958, the V-107 prototype performed its maiden flight. To garner publicity for the newly developed rotorcraft, the prototype was used to conduct a series of publicized flight demonstrations on a tour across the United States and several overseas nations. During June 1958, it was announced that the U.S.
Army had awarded a contract to Vertol for the construction of ten production-standard aircraft based on the V-107, which were designated YHC-1A. However, this order was later decreased to three helicopters; according to aviation author Jay P. Spenser, the cutback was made so that the U.S. Army could divert funds to the development of the rival V-114 helicopter, which was also a turbine-powered tandem-rotor design but substantially larger than the V-107. All three of the U.S. Army's YHC-1As were powered by pairs of General Electric T58 engines. During August 1959, the first YHC-1A conducted its first flight; it was shortly followed by the maiden flight of an improved model intended for the commercial and export markets, designated the 107-II. During 1960, the U.S. Marine Corps developed a requirement for a medium-lift, twin-turbine troop/cargo assault helicopter to replace the various piston-engined types that were then in widespread use with the service. That same year, American aviation company Boeing acquired Vertol, after which the group was renamed Boeing Vertol. Following a competition between several designs, in early 1961 it was announced that Boeing Vertol had been selected to manufacture its Model 107M for the U.S. Marine Corps, where it was designated HRB-1. During 1962, the U.S. Air Force placed its own order for 12 XCH-46B Sea Knight helicopters, which used the XH-49A designation; however, the service later cancelled the order due to delivery delays and opted to procure the rival Sikorsky S-61R instead. Following the Sea Knight's first flight in August 1962, the military designation was changed to CH-46A. During November 1964, the introduction of the Marines' CH-46A and the Navy's UH-46As commenced. The UH-46A was a version of the rotorcraft modified to perform the vertical replenishment mission. The CH-46A was equipped with a pair of T58-GE-8B turboshaft engines, each rated at 1,250 shp (930 kW); these allowed the Sea Knight to carry up to 17 passengers or a maximum of 4,000 pounds (1,815 kg) of cargo. During 1966, production of the improved CH-46D commenced, with deliveries following shortly thereafter. This model featured various improvements, including modified rotor blades and the adoption of more powerful T58-GE-10 turboshaft engines, rated at 1,400 shp (1,040 kW) each. The increased power of these new engines allowed the CH-46D to carry an increased payload, such as up to 25 troops or a maximum of 7,000 pounds (3,180 kg) of cargo. During late 1967, the improved model was introduced to the Vietnam theater, where it supplemented the U.S. Marine Corps' existing CH-46A fleet, which had proven to be relatively unreliable and problematic in service. Along with the USMC's CH-46Ds, the U.S. Navy also acquired a small number of UH-46Ds for ship resupply purposes. In addition, approximately 33 CH-46As were progressively re-manufactured to the CH-46D standard. Between 1968 and 1971, the U.S. Marine Corps received a number of CH-46F standard rotorcraft. This model retained the T58-GE-10 engines used on the CH-46D while featuring revised avionics and a number of other modifications. The CH-46F was the final production model of the type. During its service life, the Sea Knight received a variety of upgrades and modifications. Over time, the majority of the U.S. Marine Corps' Sea Knights were upgraded to the improved CH-46E standard.
This model featured fiberglass rotor blades and reinforcement measures throughout the airframe, along with the refitting of further uprated T58-GE-16 engines, capable of producing 1,870 shp (1,390 kW) each; in addition, several CH-46Es were modified to double their maximum fuel capacity. Starting in the mid-1990s, the Dynamic Component Upgrade (DCU) program was enacted, focusing on strengthened drive systems and modified rotor controls. The commercial variant, the BV 107-II, was first ordered by New York Airways during 1960. During July 1962, they took delivery of their first three aircraft, which were configured to seat up to 25 passengers. During 1965, Boeing Vertol sold the manufacturing rights of the 107 to Japanese conglomerate Kawasaki Heavy Industries. Under this arrangement, all Model 107 civilian and military aircraft built in Japan were referred to by the KV 107 designation. On 15 December 2006, Columbia Helicopters, Inc. acquired the type certificate for the BV 107-II; at the time, the company was reportedly in the process of acquiring a Production Certificate from the Federal Aviation Administration (FAA). Plans for actual production of the aircraft were not announced. The Boeing Vertol CH-46 Sea Knight is a medium-lift transport helicopter, furnished with a set of counter-rotating main rotors in a tandem configuration. It was typically powered by a pair of General Electric T58 turboshaft engines, which were mounted on each side of the rear rotor pedestal; power to the forward rotor was transferred from the rear-mounted engines via a drive shaft. For redundancy, both engines are coupled so that either one can power both of the main rotors in the event of a single engine failure or a similar emergency. Each of the rotors features three blades, which can be folded to better facilitate storage and naval operations. The CH-46 features fixed tricycle landing gear, complete with twin wheels on all three legs; this configuration results in a nose-up stance, helping to facilitate cargo loading and unloading. The two main landing gear legs were installed within protruding rear sponsons; the free interior space of the sponsons is also used to house fuel tanks, with a total capacity of 350 US gallons (1,438 L). The interior of the CH-46 was largely taken up by its cargo bay, complete with a rear loading ramp that could be removed or left open in flight for the carriage of extended cargoes or for parachute drops. Various furnishings were normally provided to aid its use as a utility rotorcraft, such as an internal winch mounted within the forward cabin, which could be used to assist loading by pulling external cargo on pallets into the aircraft via the ramp and rollers, and an optionally attached belly-mounted cargo hook, usually rated at 10,000 lb (4,500 kg), for carrying cargo externally underneath the Sea Knight; in practice, this rating was restricted to lower payloads for safety as the airframes aged. In a typical configuration, the CH-46 was flown by a crew of three; a larger crew could be accommodated when required, depending on mission specifics. For example, a search and rescue (SAR) variant would usually carry a crew of five (pilot, co-pilot, crew chief, swimmer, and medic) to facilitate all aspects of such operations.
For self-defense, a pintle-mounted 0.50 in (12.7 mm) Browning machine gun could be mounted on each side of the helicopter. Service in Southeast Asia resulted in the addition of armor along with the machine guns. Known colloquially as the "Phrog", the Sea Knight was used in all U.S. Marine operational environments between its introduction during the Vietnam War and its frontline retirement in 2014. The type's longevity and reputation for reliability led to mantras such as "phrogs phorever" and "never trust a helicopter under 30". CH-46s transported personnel, evacuated wounded, supplied forward arming and refueling points (FARP), performed vertical replenishment, search and rescue, recovered downed aircraft and crews and other tasks. During the Vietnam War, the CH-46 was one of the prime US Marine troop transport helicopters in the theater, slotting between the smaller Bell UH-1 Iroquois and larger Sikorsky CH-53 Sea Stallion. CH-46 operations were plagued by major technical problems: the engines, prone to foreign object damage (FOD) from debris ingested when hovering close to the ground and to the resulting compressor stalls, had a lifespan as low as 85 flight hours; on 21 July 1966, all CH-46s were grounded until more efficient filters had been fitted. On 3 May 1967, a CH-46D at Marine Corps Air Facility Santa Ana crashed, killing all four members of the crew. Within three days the accident investigators had determined that the mounting brackets of the main transmission had failed, allowing the front and rear overlapping rotors to intermesh. All CH-46s were temporarily grounded for inspection. On 13 May, a CH-46A crashed off the coast of Vietnam when the tail pylon containing the engines, main transmission and aft rotors broke off in flight. All four crew members were killed. On 20 June, another CH-46A crashed, though two of the four-man crew survived. Once again, even though the aircraft was not recovered from the water, failure of some sort in the rear pylon was suspected. On 30 June, a CH-46D at Santa Ana crashed when a rotor blade separated from the aircraft; all three of the crew survived. As a result of this latest accident, all CH-46Ds were immediately grounded, but the CH-46As continued flying. On 3 July, another CH-46A crashed in Vietnam, killing all four Marines of its crew. The cause of the crash was again traced to failure of the main transmission. On 31 August 1967, a CH-46A on a medical evacuation mission to USS Tripoli disintegrated in midair, killing all its occupants. The following day another CH-46A experienced a similar incident at Marble Mountain Air Facility, leading to the type being grounded for all except emergency situations and cutting Marine airlift capacity in half. An investigation conducted by a joint Naval Air Systems Command/Boeing Vertol accident investigation team revealed that structural failures were occurring in the area of the rear pylon, resulting in the rear rotor tearing off in flight, and that these may have been the cause of several earlier losses. The team recommended structural and systems modifications to reinforce the rear rotor mount as well as installation of an indicator to detect excessive strain on critical parts of the aircraft. 80 CH-46As were shipped to Marine Corps Air Station Futenma, Okinawa, where they received the necessary modifications from a combined force of Marine and Boeing Vertol personnel. The modified CH-46As began returning to service in December 1967 and all had been returned to service by February 1968.
During the 1972 Easter Offensive, Sea Knights were used heavily to move US and South Vietnamese ground forces to and around the front lines. By the end of US military operations in Vietnam, over a hundred Sea Knights had been lost to enemy fire. In February 1968, the Marine Corps Development and Education Command obtained several CH-46s to perform herbicide dissemination tests using HIDAL (Helicopter, Insecticide Dispersal Apparatus, Liquid) systems; testing indicated the need for redesign and further study. Tandem-rotor helicopters were often used to transport nuclear warheads; the CH-46A was evaluated to deploy Naval Special Forces with the Special Atomic Demolition Munition (SADM). Nuclear Weapon Accident Exercise 1983 (NUWAX-83), simulating the crash of a Navy CH-46E carrying three nuclear warheads, was conducted at the Nevada Test Site on behalf of several federal agencies; the exercise, which used real radiological agents, was depicted in a documentary produced by the Defense Nuclear Agency. U.S. Marine CH-46s were used to deploy the 8th Marine Regiment into Grenada during Operation Urgent Fury, evacuated the surviving crewmember of a downed AH-1 Cobra, and then carried infantry from the 75th Ranger Regiment to secure and evacuate U.S. students at the Grand Anse campus of St. George's University, though one crashed after colliding with a palm tree. CH-46E Sea Knights were also used by the U.S. Marine Corps during the 2003 invasion of Iraq. In one incident on 1 April 2003, Marine CH-46Es and CH-53Es carried U.S. Army Rangers and Special Operations troops on a mission to extract captured Army Private Jessica Lynch from an Iraqi hospital. During the subsequent occupation of Iraq and counter-insurgency operations, the CH-46E was heavily used in the CASEVAC role, being required to maintain 24/7 availability regardless of conditions. According to authors Williamson Murray and Robert H. Scales, the Sea Knight displayed serious reliability and maintenance problems during its deployment to Iraq, as well as "limited lift capabilities". Following the loss of numerous US helicopters in the Iraqi theater, the Marines opted to equip their CH-46s with more advanced anti-missile countermeasures. The U.S. Navy retired the type on 24 September 2004, replacing it with the MH-60S Seahawk; the Marine Corps maintained its fleet as the MV-22 Osprey was fielded. In March 2006, Marine Medium Helicopter Squadron 263 (HMM-263) was deactivated and redesignated VMM-263 to serve as the first MV-22 squadron. The replacement process continued through the other medium helicopter squadrons into 2014. On 5 October 2014, the Sea Knight performed its final service flight with the U.S. Marine Corps at Marine Corps Air Station Miramar. HMM-364 was the last squadron to use it outside the United States, landing it aboard USS America on her maiden transit. On 9 April 2015, the CH-46 was retired by Marine Medium Helicopter Training Squadron 164, the last Marine Corps squadron to transition to the MV-22. The USMC formally retired the CH-46 on 1 August 2015 in a ceremony at the Udvar-Hazy Center near Washington, DC. In April 2011, the Navy's Fleet Readiness Center East began refurbishing retired USMC CH-46Es for service with the United States Department of State Air Wing. A number of CH-46s from HMX-1 were transferred to the Air Wing in late 2014. In Afghanistan, the CH-46s were used by Embassy Air for the secure transport of State Department personnel. 
The CH-46s were equipped with missile warning sensors and flare dispensers and could be armed with M240D or M2 Browning machine guns. A September 2019 report by the State Department Inspector General found that a seat on a CH-46 for a seven-minute flight cost US$1,500 (~$1,702 in 2022). Seven of the former U.S. Marine Corps CH-46s refurbished for the State Department Air Wing took part in the 2021 Kabul Airlift during the evacuation from Afghanistan. Prior to the complete withdrawal of U.S. forces, all seven helicopters were rendered unusable and abandoned at Kabul International Airport, where they appeared in many videos and photographs online. One of the abandoned aircraft (BuNo 154038, c/n 2389) had also taken part in Operation Frequent Wind, the evacuation of the U.S. Embassy in Saigon, South Vietnam, exactly forty-six years earlier. The U.S. State Department drew criticism for leaving the aircraft behind; commenting on the issue, it stated that the helicopters were already being phased out of the State Department Air Wing due to their age and the inability to support them. The seven CH-46s were the only U.S. State Department aircraft left behind at Kabul International Airport. The Royal Canadian Air Force procured six CH-113 Labrador helicopters for the SAR role, and the Canadian Army acquired 12 of the similar CH-113A Voyageur for the medium-lift transport role. The RCAF Labradors were delivered first, with the first example entering service on 11 October 1963. When the larger CH-147 Chinook was procured by the Canadian Forces in the mid-1970s, the Voyageur fleet was converted to Labrador specifications to undertake SAR missions. The refurbished Voyageurs were re-designated as CH-113A Labradors; a total of 15 Labradors were thus ultimately in service. The Labrador was fitted with a watertight hull for marine landings, a 5,000 kilogram cargo hook, and an external rescue hoist mounted over the right front door. It featured a 1,110 kilometer flying range, emergency medical equipment, and an 18-person passenger capacity. On multiple occasions throughout the 1970s and 1980s, this range enabled the CH-113 to assist U.S. Coast Guard (USCG) missions or perform long-range medevacs over distances that the USCG helicopters of the time simply could not reach. In 1981, a mid-life upgrade of the fleet was carried out by Boeing Canada in Arnprior, Ontario. Known as the SAR-CUP (Search and Rescue Capability Upgrade Program), the refit included new instrumentation, a nose-mounted weather radar, a tail-mounted auxiliary power unit, a new high-speed rescue hoist mounted over the side door, and front-mounted searchlights. A total of six CH-113s and five CH-113As were upgraded, with the last delivered in 1984. As a search and rescue helicopter, the type endured heavy use and hostile weather conditions, which had begun to take their toll on the Labrador fleet by the 1990s, resulting in rising maintenance costs and a need for prompt replacement. In 1992, it was announced that the Labradors were to be replaced by 15 new helicopters, a variant of the AgustaWestland EH101 designated CH-149 Chimo. The order was subsequently cancelled by the Jean Chrétien Liberal government in 1993, resulting in cancellation penalties as well as the need to extend the service life of the Labrador fleet. 
However, in 1998, a CH-113 from CFB Greenwood crashed on Quebec's Gaspé Peninsula while returning from a SAR mission, killing all crew members on board. The crash placed pressure on the government to procure a replacement, and an order was placed with the manufacturers of the EH101 for 15 aircraft, designated CH-149 Cormorant, to perform the search-and-rescue mission. CH-149 deliveries began in 2003, allowing the last CH-113 to be retired in 2004. In October 2005, Columbia Helicopters of Aurora, Oregon, purchased eight of the retired CH-113 Labradors to add to its fleet of 15 Vertol 107-II helicopters. In 1963, Sweden procured ten UH-46Bs from the US as transport and anti-submarine helicopters for the Swedish Armed Forces, designated Hkp 4A. In 1973, a further eight Kawasaki-built KV-107s, accordingly designated Hkp 4B, were acquired to replace the older Piasecki H-21. During the Cold War, the fleet's primary missions were anti-submarine warfare and troop transport. The helicopters were also frequently employed in the search and rescue role, most famously during the rescue operation following the sinking of the MS Estonia in the Baltic Sea on 28 September 1994. In the 1980s, the Hkp 4A was phased out, having been replaced by the Eurocopter AS332 Super Puma; the later Kawasaki-built Sea Knights continued in operational service until 2011, when they were replaced by the UH-60 Black Hawk and the NH90. On 15 September 2023, Argentina's Air Force chief, Gen. Xavier Isaac, briefed the media that Argentina had sent a letter requesting US approval for the refurbishment of surplus CH-46s stored with the 309th Aerospace Maintenance and Regeneration Group in Arizona. The availability of civilian-operated CH-46s was also being explored. The aircraft would be used to support Argentina's Antarctic bases, replacing two Mil Mi-171E helicopters that were acquired in 2010 but could no longer be repaired by Russia due to sanctions stemming from the Russian invasion of Ukraine. The civilian version, designated the BV 107-II Vertol, was developed prior to the military CH-46. It was operated commercially by New York Airways, Pan American World Airways, and later by Columbia Helicopters. Its diverse tasks included commuter service in the mid-1960s from the roof of the Pan Am Building in Manhattan to JFK Airport in Queens, towing a hover barge, and constructing transmission towers for overhead power lines. In December 2006, Columbia Helicopters purchased the type certificate of the Model 107 from Boeing, with the aim of eventually producing new-build aircraft itself. Military operators of the type included Thailand. Specifications for the CH-46E were drawn from The International Directory of Military Aircraft, 2002/2003; The Complete Encyclopedia of World Aircraft: Boeing Vertol Model 107 (H-46 Sea Knight); and Encyclopedia of World Military Aircraft, Volume One.
[ { "paragraph_id": 0, "text": "The Boeing Vertol CH-46 Sea Knight is an American medium-lift tandem-rotor transport helicopter powered by twin turboshaft engines. It was designed by Vertol and manufactured by Boeing Vertol following Vertol's acquisition by Boeing.", "title": "" }, { "paragraph_id": 1, "text": "Development of the Sea Knight, which was originally designated by the firm as the Vertol Model 107, commenced during 1956. It was envisioned as a successor to the first generation of rotorcraft, such as the H-21 \"Flying Banana\", that had been powered by piston engines; in its place, the V-107 made use of the emergent turboshaft engine. On 22 April 1958, the V-107 prototype performed its maiden flight. During June 1958, the US Army awarded a contract for the construction of ten production-standard aircraft, designated as the YHC-1A, based on the V-107; this initial order was later cut down to three YHC-1As though. During 1961, the US Marine Corps (USMC), which had been studying its requirements for a medium-lift, twin-turbine cargo/troop assault helicopter, selected Boeing Vertol's Model 107M as the basis from which to manufacture a suitable rotorcraft to meet their needs. Known colloquially as the \"Phrog\" and formally as the \"Sea Knight\", it was operated across all US Marine Corps' operational environments between its introduction during the Vietnam War and its frontline retirement during 2014.", "title": "" }, { "paragraph_id": 2, "text": "The Sea Knight was operated by the USMC to provide all-weather, day-or-night assault transport of combat troops, supplies and equipment until it was replaced by the MV-22 Osprey during the 2010s. The USMC also used the helicopter for combat support, search and rescue (SAR), casualty evacuation and Tactical Recovery of Aircraft and Personnel (TRAP). The Sea Knight also functioned as the US Navy's standard medium-lift utility helicopter prior to the type being phased out of service in favor of the MH-60S Knighthawk during the early 2000s. Several overseas operators acquired the rotorcraft as well. Canada operated the Sea Knight, designated as CH-113; the type was used predominantly in the SAR role until 2004. Other export customers for the type included Japan, Sweden, and Saudi Arabia. The commercial version of the rotorcraft is the BV 107-II, commonly referred to simply as the \"Vertol\".", "title": "" }, { "paragraph_id": 3, "text": "During the 1940s and 1950s, American rotorcraft manufacturer Piasecki Helicopter emerged as a pioneering developer of tandem-rotor helicopters; perhaps the most famous of these being the piston-powered H-21 \"Flying Banana\", an early utility and transport helicopter. During 1955, Piasecki was officially renamed as Vertol Corporation (standing for vertical take-off and landing); it was around this time that work commenced on the development of a new generation of tandem rotor helicopter. During 1956, the new design received the internal company designation of Vertol Model 107, or simply V-107; this rotorcraft differed from its predecessors by harnessing the newly developed turboshaft engine instead of piston-based counterparts. During that year, construction of a prototype, powered by a pair of Lycoming T53 turboshaft engines, each one being capable of producing 877 shp (640 kW), commenced.", "title": "Development" }, { "paragraph_id": 4, "text": "On 22 April 1958, the V-107 prototype performed its maiden flight. 
In order to garner publicity for the newly developed rotorcraft, it was decided to use the prototype to conduct a series of publicised flight demonstrations during a tour across the United States and several overseas nations. During June 1958, it was announced that the U.S. Army had awarded a contract to Vertol for the construction of ten production-standard aircraft based on the V-107, which were designated YHC-1A. However, this order was later decreased to three helicopters; according to aviation author Jay P. Spenser, the cutback had been enacted in order that the U.S. Army would be able to divert funds for the development of the rival V-114 helicopter, which was also a turbine-powered tandem rotor design but substantially larger than the V-107. All of the U.S. Army's three YHC-1As were powered by pairs of GE-T-58 engines. During August 1959, the first YHC-1A-model rotorcraft conducted its first flight; independently, it was shortly followed by the maiden flight of an improved model intended for the commercial and export markets, designated 107-II.", "title": "Development" }, { "paragraph_id": 5, "text": "During 1960, the U.S. Marine Corps evolved a requirement for a medium-lift, twin-turbine troop/cargo assault helicopter to replace the various piston-engined types that were then in widespread use with the service. That same year, American aviation company Boeing acquired Vertol, after which the group was consequently renamed Boeing Vertol. Following a competition between several competing designs, during early 1961, it was announced that Boeing Vertol had been selected to manufacture its model 107M for the U.S. Marine Corps, where it was designated HRB-1. During 1962, the U.S. Air Force placed its own order for 12 XCH-46B Sea Knight helicopters, which used the XH-49A designation; however, the service later decided to cancel the order due to delays in its delivery; instead, the U.S. Air Force opted to procure the rival Sikorsky S-61R in its place.", "title": "Development" }, { "paragraph_id": 6, "text": "Following the Sea Knight's first flight in August 1962, the military designation was changed to CH-46A. During November 1964, the introduction of the Marines' CH-46A and the Navy's UH-46As commenced. The UH-46A variant was a modified version of the rotorcraft to perform the vertical replenishment mission. The CH-46A was equipped with a pair of T58-GE8-8B turboshaft engines, each being rated at 1,250 shp (930 kW); these allowed the Sea Knight to carry up to 17 passengers or a maximum of 4,000 pounds (1,815 kg) of cargo.", "title": "Development" }, { "paragraph_id": 7, "text": "During 1966, production of the improved CH-46D commenced with deliveries following shortly thereafter. This model featured various improvements, including modified rotor blades and the adoption of more powerful T58-GE-10 turboshaft engines, rated at 1,400 shp (1,040 kW) each. The increased power of these new engines allowed the CH-46D to carry an increased payload, such as up to 25 troops or a maximum of 7,000 pounds (3,180 kg) of cargo. During late 1967, the improved model was introduced to the Vietnam theater, where it supplemented the U.S. Marine Corps' existing CH-46A fleet, which had proven to be relatively unreliable and problematic in service. Along with the USMC's CH-46Ds, the U.S. Navy also acquired a small number of UH-46Ds for ship resupply purposes. 
In addition, approximately 33 CH-46As were progressively re-manufactured to the CH-46D standard.", "title": "Development" }, { "paragraph_id": 8, "text": "Between 1968 and 1971, the U.S. Marine Corps received a number of CH-46F standard rotorcraft. This model retained the T58-GE-10 engines used on the CH-46D while featuring revised avionics and featured a number of other modifications. The CH-46F was the final production model of the type. During its service life, the Sea Knight received a variety of upgrades and modifications. Over time, the majority of the U.S. Marine Corps' Sea Knights were upgraded to the improved CH-46E standard. This model featured fiberglass rotor blades, reinforcement measures throughout the airframe, along with the refitting of further uprated T58-GE-16 engines, capable of producing 1,870 shp (1,390 kW) each; in addition, several CH-46Es were modified to double their maximum fuel capacity. Starting in the mid-1990s, the Dynamic Component Upgrade (DCU) programmes was enacted, focusing on the implementation of strengthened drive systems and modified rotor controls.", "title": "Development" }, { "paragraph_id": 9, "text": "The commercial variant, the BV 107-II, was first ordered by New York Airways during 1960. During July 1962, they took delivery of their first three aircraft, which was configured to seat up to 25 passengers. During 1965, Boeing Vertol sold the manufacturing rights of the 107 to Japanese conglomerate Kawasaki Heavy Industries. Under this arrangement, all Model 107 civilian and military aircraft built in Japan were referred to by the KV 107 designation. On 15 December 2006, Columbia Helicopters, Inc acquired the type certificate for the BV 107-II; at the time, the company was reportedly in the process of acquiring a Production Certificate from the Federal Aviation Administration (FAA). Plans for actual production of the aircraft were not announced.", "title": "Development" }, { "paragraph_id": 10, "text": "The Boeing Vertol CH-46 Sea Knight is a medium-lift tandem-rotor transport helicopter, furnished with a set of counter-rotating main rotors in a tandem-rotor configuration. It was typically powered by a pair of General Electric T58 turboshaft engines, which were mounted on each side of the rear rotor pedestal; power to the forward rotor was transferred from the rear-mounted engines via a drive shaft. For redundancy, both engines are coupled so that either one would be capable of powering both of the main rotors in the event of a single engine failure or a similar emergency situation. Each of the rotors feature three blades, which can be folded to better facilitate storage and naval operations. The CH-46 features a fixed tricycle landing gear, complete with twin wheels on all three legs of the landing gear; this configuration results in a nose-up stance, helping to facilitate cargo loading and unloading. Two of the main landing gear were installed within protruding rear sponsons; the free interior space of the sponsons are also used to house fuel tanks, possessing a total capacity of 350 US gallons (1,438 L).", "title": "Design" }, { "paragraph_id": 11, "text": "The interior of the CH-46 was largely taken up by its cargo bay, complete with a rear loading ramp that could be removed or left open in flight for the carriage of extended cargoes or for parachute drops. 
Various furnishings were normally provided to aid in its use as a utility rotorcraft, such as an internal winch mounted within the forward cabin, which can be used to assisting loading by pulling external cargo on pallets into the aircraft via the ramp and rollers, and an optionally-attached belly-mounted cargo hook, which would be usually rated at 10,000 lb (4,500 kg) for carrying cargoes externally underneath the Sea Knight; despite the hook having been rated at 10,000 lb (4,500 kg), this was safety restricted to less payload as they got older. When operated in a typical configuration, the CH-46 would usually be operated by a crew of three; a larger crew could be accommodated when required, which would be dependent upon mission specifics. For example, a search and rescue (SAR) variant would usually carry a crew of five (Pilot, Co-Pilot, Crew Chief, Swimmer, and Medic) to facilitate all aspects of such operations. For self-defense, a pintle-mounted 0.50 in (12.7 mm) Browning machine gun could be mounted on each side of the helicopter. Service in southeast Asia resulted in the addition of armor along with the machine guns.", "title": "Design" }, { "paragraph_id": 12, "text": "Known colloquially as the \"Phrog\", the Sea Knight was used in all U.S. Marine operational environments between its introduction during the Vietnam War and its frontline retirement in 2014. The type's longevity and reputation for reliability led to mantras such as \"phrogs phorever\" and \"never trust a helicopter under 30\". CH-46s transported personnel, evacuated wounded, supplied forward arming and refueling points (FARP), performed vertical replenishment, search and rescue, recovered downed aircraft and crews and other tasks.", "title": "Operational history" }, { "paragraph_id": 13, "text": "During the Vietnam War, the CH-46 was one of the prime US Marine troop transport helicopters in the theater, slotting between the smaller Bell UH-1 Iroquois and larger Sikorsky CH-53 Sea Stallion. CH-46 operations were plagued by major technical problems; the engines, being prone to foreign object damage (FOD) from debris being ingested when hovering close to the ground and subsequently suffering a compressor stall, had a lifespan as low as 85 flight hours; on 21 July 1966, all CH-46s were grounded until more efficient filters had been fitted.", "title": "Operational history" }, { "paragraph_id": 14, "text": "On 3 May 1967, a CH-46D at Marine Corps Air Facility Santa Ana crashed, killing all four members of the crew. Within three days the accident investigators had determined that the mounting brackets of the main transmission had failed, allowing the front and rear overlapping rotors to intermesh. All CH-46s were temporarily grounded for inspection. On 13 May, a CH-46A crashed off the coast of Vietnam when the tail pylon containing the engines, main transmission and aft rotors broke off in flight. All four crew members were killed. On 20 June, another CH-46A crashed, though two of the four-man crew survived. Once again, even though the aircraft was not recovered from the water, failure of some sort in the rear pylon was suspected. On 30 June a CH-46D at Santa Ana crashed when a rotor blade separated from the aircraft, all three of the crew survived. As a result of this latest accident, all CH-46Ds were immediately grounded, but the CH-46As continued flying. On 3 July another CH-46A crashed in Vietnam, killing all four Marines of its crew. 
The cause of the crash again was traced to failure of the main transmission.", "title": "Operational history" }, { "paragraph_id": 15, "text": "On 31 August 1967, a CH-46A on a medical evacuation mission to USS Tripoli disintegrated in midair killing all its occupants. The following day another CH-46A experienced a similar incident at Marble Mountain Air Facility leading to the type being grounded for all except emergency situations and cutting Marine airlift capacity in half. An investigation conducted by a joint Naval Air Systems Command/Boeing Vertol accident investigation team revealed that structural failures were occurring in the area of the rear pylon resulting in the rear rotor tearing off in flight and may have been the cause of several earlier losses. The team recommended structural and systems modifications to reinforce the rear rotor mount as well as installation of an indicator to detect excessive strain on critical parts of the aircraft. 80 CH-46As were shipped to Marine Corps Air Station Futenma, Okinawa where they received the necessary modifications by a combined force of Marine and Boeing Vertol personnel. The modified CH-46As began returning to service in December 1967 and all had been returned to service by February 1968.", "title": "Operational history" }, { "paragraph_id": 16, "text": "During the 1972 Easter Offensive, Sea Knights saw heavy use to convey US and South Vietnamese ground forces to and around the front lines. By the end of US military operations in Vietnam, over a hundred Sea Knights had been lost to enemy fire.", "title": "Operational history" }, { "paragraph_id": 17, "text": "In February 1968 the Marine Corps Development and Education Command obtained several CH-46s to perform herbicide dissemination tests using HIDAL (Helicopter, Insecticide Dispersal Apparatus, Liquid) systems; testing indicated the need for redesign and further study. Tandem-rotor helicopters were often used to transport nuclear warheads; the CH-46A was evaluated to deploy Naval Special Forces with the Special Atomic Demolition Munition (SADM). Nuclear Weapon Accident Exercise 1983 (NUWAX-83), simulating the crash of a Navy CH-46E carrying 3 nuclear warheads, was conducted at the Nevada Test Site on behalf of several federal agencies; the exercise, which used real radiological agents, was depicted in a Defense Nuclear Agency-produced documentary.", "title": "Operational history" }, { "paragraph_id": 18, "text": "U.S. Marine CH-46s were used to deploy the 8th Marine Regiment into Grenada during Operation Urgent Fury, evacuated the surviving crewmember of a downed AH-1 Cobra, and then carried infantry from the 75th Ranger Regiment to secure and evacuate U.S. students at the Grand Anse campus of St. George's University, though one crashed after colliding with a palm tree.", "title": "Operational history" }, { "paragraph_id": 19, "text": "CH-46E Sea Knights were also used by the U.S. Marine Corps during the 2003 invasion of Iraq. In one incident on 1 April 2003, Marine CH-46Es and CH-53Es carried U.S. Army Rangers and Special Operations troops on an extraction mission for captured Army Private Jessica Lynch from an Iraqi hospital. During the subsequent occupation of Iraq and counter-insurgency operations, the CH-46E was heavily used in the CASEVAC role, being required to maintain 24/7 availability regardless of conditions. 
According to authors Williamson Murray and Robert H Scales, the Sea Knight displayed serious reliability and maintenance problems during its deployment to Iraq, as well as \"limited lift capabilities\". Following the loss of numerous US helicopters in the Iraqi theatre, the Marines opted to equip their CH-46s with more advanced anti-missile countermeasures.", "title": "Operational history" }, { "paragraph_id": 20, "text": "The U.S. Navy retired the type on 24 September 2004, replacing it with the MH-60S Seahawk; the Marine Corps maintained its fleet as the MV-22 Osprey was fielded. In March 2006 Marine Medium Helicopter Squadron 263 (HMM-263) was deactivated and redesignated VMM-263 to serve as the first MV-22 squadron. The replacement process continued through the other medium helicopter squadrons into 2014. On 5 October 2014, the Sea Knight performed its final service flight with the U.S. Marine Corps at Marine Corps Air Station Miramar. HMM-364 was the last squadron to use it outside the United States, landing it aboard USS America on her maiden transit. On 9 April 2015, the CH-46 was retired by the Marine Medium Helicopter Training Squadron 164, the last Marine Corps squadron to transition to the MV-22. The USMC retired the CH-46 on 1 August 2015 in a ceremony at the Udvar-Hazy Center near Washington DC.", "title": "Operational history" }, { "paragraph_id": 21, "text": "Beginning in April 2011 the Navy's Fleet Readiness Center East began refurbishing retired USMC CH-46Es for service with the United States Department of State Air Wing. A number of CH-46s from HMX-1 were transferred to the Air Wing in late 2014. In Afghanistan the CH-46s were used by Embassy Air for secure transport of State Department personnel. The CH-46s were equipped with missile warning sensors and flare dispensers and could be armed with M240D or M2 Browning machine guns. A report in September 2019 by the State Department Inspector General found that a seat on a CH-46 for a seven-minute flight cost US$1,500 (~$1,702 in 2022). Seven of the CH-46s were rendered unusable and abandoned at Kabul Airport following the 2021 evacuation from Afghanistan.", "title": "Operational history" }, { "paragraph_id": 22, "text": "Seven of the former U.S. Marine Corps CH-46s that were refurbished by the U.S. State Department Air Wing took part in the 2021 Kabul Airlift. Prior to the complete withdrawal of U.S. forces, all seven of the helicopters were rendered unusable and abandoned at Kabul International Airport and are seen in many videos and pictures online. One of the CH-46s that was abandoned (BuNo 154038, c/n 2389) also took part in Operation Frequent Wind with the evacuation of the Embassy of Saigon in South Vietnam exactly forty-six years prior. The U.S. State Department drew criticism for leaving behind the aircraft. Commenting on the issue, the U.S. State Department claimed that the helicopters were already being phased out of State Department Air Wing due to their age and the inability to support them. The seven CH-46s left behind were the only U.S. State Department aircraft left behind at Kabul International Airport.", "title": "Operational history" }, { "paragraph_id": 23, "text": "The Royal Canadian Air Force procured six CH-113 Labrador helicopters for the SAR role and the Canadian Army acquired 12 of the similar CH-113A Voyageur for the medium-lift transport role. The RCAF Labradors were delivered first with the first one entering service on 11 October 1963. 
When the larger CH-147 Chinook was procured by the Canadian Forces in the mid-1970s, the Voyageur fleet was converted to Labrador specifications to undertake SAR missions. The refurbished Voyageurs were re-designated as CH-113A Labradors, thus a total of 15 Labradors were ultimately in service.", "title": "Operational history" }, { "paragraph_id": 24, "text": "The Labrador was fitted with a watertight hull for marine landings, a 5,000 kilogram cargo hook and an external rescue hoist mounted over the right front door. It featured a 1,110 kilometer flying range, emergency medical equipment and an 18-person passenger capacity. In multiple instances throughout the 1970s and 1980s, this increased range provided the capability of the CH-113 to provide assistance to U.S. Coast Guard (USCG) missions or perform long range medevacs over distances the USCG helicopters at the time simply could not reach.", "title": "Operational history" }, { "paragraph_id": 25, "text": "In 1981, a mid-life upgrade of the fleet was carried out by Boeing Canada in Arnprior, Ontario. Known as the SAR-CUP (Search and Rescue Capability Upgrade Program), the refit scheme included new instrumentation, a nose-mounted weather radar, a tail-mounted auxiliary power unit, a new high-speed rescue hoist mounted over the side door and front-mounted searchlights. A total of six CH-113s and five CH-113As were upgraded with the last delivered in 1984. Nonetheless, as a search and rescue helicopter it endured heavy use and hostile weather conditions; which had begun to take their toll on the Labrador fleet by the 1990s, resulting in increasing maintenance costs and the need for prompt replacement.", "title": "Operational history" }, { "paragraph_id": 26, "text": "In 1992, it was announced that the Labradors were to be replaced by 15 new helicopters, a variant of the AgustaWestland EH101, designated CH-149 Chimo. The order was subsequently cancelled by the Jean Chrétien Liberal government in 1993, resulting in cancellation penalties, as well as extending the service life of the Labrador fleet. However, in 1998, a CH-113 from CFB Greenwood crashed on Quebec's Gaspé Peninsula while returning from a SAR mission, resulting in the deaths of all crewmembers on board. The crash placed pressure upon the government to procure a replacement, thus an order was placed with the manufacturers of the EH101 for 15 aircraft to perform the search-and-rescue mission, designated CH-149 Cormorant. CH-149 deliveries began in 2003, allowing the last CH-113 to be retired in 2004. In October 2005 Columbia Helicopters of Aurora, Oregon purchased eight of the retired CH-113 Labradors to add to their fleet of 15 Vertol 107-II helicopters.", "title": "Operational history" }, { "paragraph_id": 27, "text": "In 1963, Sweden procured ten UH-46Bs from the US as a transport and anti-submarine helicopter for the Swedish Armed Forces, designated Hkp 4A. In 1973, a further eight Kawasaki-built KV-107s, which were accordingly designated Hkp 4B, were acquired to replace the older Piasecki H-21. During the Cold War, the fleet's primary missions were anti-submarine warfare and troop transportation. They were also frequently employed in the search and rescue role, most famously during the rescue operation of the MS Estonia after it sank in the Baltic Sea on 28 September 1994. 
In the 1980s, the Hkp 4A was phased out, having been replaced by the Eurocopter AS332 Super Puma; the later Kawasaki-built Sea Knights continued in operational service until 2011, they were replaced by the UH-60 Black Hawk and NH90.", "title": "Operational history" }, { "paragraph_id": 28, "text": "On 15 September 2023, Argentina's Air Force chief Gen. Xavier Issac briefed the media that Argentina had sent a letter requesting the US to approve the refurbishment of surplus CH-46s currently stored with the 309th Aerospace Maintenance and Regeneration Group in Arizona. The availability of civilian-operated CH-46s was also being explored. They would be used to support Argentina's Antarctic bases. The CH-46s would replace two Mil Mi-171E helicopters acquired in 2010, but now not able to be repaired by Russia due to sanctions from the Russian invasion of Ukraine.", "title": "Operational history" }, { "paragraph_id": 29, "text": "The civilian version, designated as the BV 107-II Vertol, was developed prior to the military CH-46. It was operated commercially by New York Airways, Pan American World Airways and later on by Columbia Helicopters. Among the diversity of tasks was commuter service in the mid-1960s from the roof of the Pan Am skyscraper in Manhattan to JFK Airport in Queens, pulling a hover barge, and constructing transmission towers for overhead power lines.", "title": "Operational history" }, { "paragraph_id": 30, "text": "In December 2006, Columbia Helicopters purchased the type certificate of the Model 107 from Boeing, with the aim of eventually producing new-build aircraft themselves.", "title": "Operational history" }, { "paragraph_id": 31, "text": "Thailand", "title": "Operators" }, { "paragraph_id": 32, "text": "Data from The International Directory of Military Aircraft, 2002/2003, The Complete Encyclopedia of World Aircraft : Boeing Vertol Model 107 (H-46 Sea Knight), Encyclopedia of world military aircraft : Volume One", "title": "Specifications (CH-46E)" }, { "paragraph_id": 33, "text": "General characteristics", "title": "Specifications (CH-46E)" }, { "paragraph_id": 34, "text": "Performance", "title": "Specifications (CH-46E)" }, { "paragraph_id": 35, "text": "Armament", "title": "Specifications (CH-46E)" }, { "paragraph_id": 36, "text": "Related development", "title": "See also" }, { "paragraph_id": 37, "text": "Aircraft of comparable role, configuration, and era", "title": "See also" }, { "paragraph_id": 38, "text": "Related lists", "title": "See also" } ]
The Boeing Vertol CH-46 Sea Knight is an American medium-lift tandem-rotor transport helicopter powered by twin turboshaft engines. It was designed by Vertol and manufactured by Boeing Vertol following Vertol's acquisition by Boeing. Development of the Sea Knight, which was originally designated by the firm as the Vertol Model 107, commenced during 1956. It was envisioned as a successor to the first generation of rotorcraft, such as the H-21 "Flying Banana", that had been powered by piston engines; in its place, the V-107 made use of the emergent turboshaft engine. On 22 April 1958, the V-107 prototype performed its maiden flight. During June 1958, the US Army awarded a contract for the construction of ten production-standard aircraft, designated as the YHC-1A, based on the V-107; this initial order was later cut down to three YHC-1As though. During 1961, the US Marine Corps (USMC), which had been studying its requirements for a medium-lift, twin-turbine cargo/troop assault helicopter, selected Boeing Vertol's Model 107M as the basis from which to manufacture a suitable rotorcraft to meet their needs. Known colloquially as the "Phrog" and formally as the "Sea Knight", it was operated across all US Marine Corps' operational environments between its introduction during the Vietnam War and its frontline retirement during 2014. The Sea Knight was operated by the USMC to provide all-weather, day-or-night assault transport of combat troops, supplies and equipment until it was replaced by the MV-22 Osprey during the 2010s. The USMC also used the helicopter for combat support, search and rescue (SAR), casualty evacuation and Tactical Recovery of Aircraft and Personnel (TRAP). The Sea Knight also functioned as the US Navy's standard medium-lift utility helicopter prior to the type being phased out of service in favor of the MH-60S Knighthawk during the early 2000s. Several overseas operators acquired the rotorcraft as well. Canada operated the Sea Knight, designated as CH-113; the type was used predominantly in the SAR role until 2004. Other export customers for the type included Japan, Sweden, and Saudi Arabia. The commercial version of the rotorcraft is the BV 107-II, commonly referred to simply as the "Vertol".
2002-02-11T20:22:30Z
2023-12-07T23:00:10Z
[ "Template:Cite book", "Template:Cite report", "Template:Convert", "Template:Portal", "Template:Webarchive", "Template:Infobox aircraft begin", "Template:Multiple image", "Template:Dead link", "Template:Rp", "Template:Format price", "Template:Inflation/year", "Template:USS", "Template:Aircraft specs", "Template:Reflist", "Template:ISBN", "Template:Page needed", "Template:Short description", "Template:Redirect", "Template:Use dmy dates", "Template:Navboxes", "Template:Authority control", "Template:Commons", "Template:Cite web", "Template:Citation-attribution", "Template:Refend", "Template:Infobox aircraft type", "Template:Aircontent", "Template:Refbegin", "Template:Cite news", "Template:Cite journal", "Template:Citation needed", "Template:Flag", "Template:Cn" ]
https://en.wikipedia.org/wiki/Boeing_Vertol_CH-46_Sea_Knight
7,774
Chief of Naval Operations
The chief of naval operations (CNO) is the highest-ranking officer of the United States Navy. The position is a statutory office (10 U.S.C. § 8033) held by an admiral who is a military adviser and deputy to the secretary of the Navy. In a separate capacity, as a member of the Joint Chiefs of Staff (10 U.S.C. § 151), the CNO is a military adviser to the National Security Council, the Homeland Security Council, the secretary of defense, and the president. Despite the title, the CNO does not have operational command authority over naval forces. The CNO is an administrative position based in the Pentagon, and its holder exercises supervision of Navy organizations as the designee of the secretary of the Navy. Operational command of naval forces falls within the purview of the combatant commanders, who report to the secretary of defense. The current chief of naval operations is Lisa Franchetti, who was sworn in on November 2, 2023. The chief of naval operations is typically the highest-ranking officer on active duty in the U.S. Navy unless the chairman and/or the vice chairman of the Joint Chiefs of Staff are naval officers. The CNO is nominated for appointment by the president for a four-year term of office and must be confirmed by the Senate. A requirement for becoming chief of naval operations is significant experience in joint duty assignments, including at least one full tour of duty in a joint duty assignment as a flag officer. However, the president may waive those requirements upon determining that appointing the officer is necessary for the national interest. The chief can be reappointed to serve one additional term, but only during times of war or national emergency declared by Congress. By statute, the CNO is appointed as a four-star admiral. As per 10 U.S.C. § 8035, whenever there is a vacancy in the office of the chief of naval operations, or during the absence or disability of the chief of naval operations, the vice chief of naval operations performs the duties of the chief of naval operations until a successor is appointed or the absence or disability ceases, unless the president directs otherwise. The CNO also performs all other functions prescribed under 10 U.S.C. § 8033, such as presiding over the Office of the Chief of Naval Operations (OPNAV), exercising supervision of Navy organizations, and carrying out other duties assigned by the secretary or higher lawful authority; the CNO may delegate these duties and responsibilities to other officers in OPNAV or in subordinate organizations. Acting for the secretary of the Navy, the CNO also designates the naval personnel and naval forces available to the commanders of the unified combatant commands, subject to the approval of the secretary of defense. The CNO is a member of the Joint Chiefs of Staff as prescribed by 10 U.S.C. § 151 and 10 U.S.C. § 8033. Like the other members of the Joint Chiefs of Staff, the CNO holds an administrative position, with no operational command authority over United States Navy forces. Members of the Joint Chiefs of Staff, individually or collectively, in their capacity as military advisers, shall provide advice to the president, the National Security Council (NSC), or the secretary of defense (SECDEF) on a particular matter when the president, the NSC, or SECDEF requests such advice. 
Members of the Joint Chiefs of Staff (other than the chairman of the Joint Chiefs of Staff) may submit to the chairman advice or an opinion in disagreement with, or advice or an opinion in addition to, the advice presented by the chairman to the president, NSC, or SECDEF. When performing her JCS duties, the CNO is responsible directly to the SECDEF but keeps the SECNAV fully informed of significant military operations affecting the duties and responsibilities of the SECNAV, unless the SECDEF orders otherwise. In 1900, administrative and operational authority over the Navy was concentrated in the secretary of the Navy and the bureau chiefs, with the General Board holding only advisory powers. Critics of the lack of military command authority included Charles J. Bonaparte, Navy secretary from 1905 to 1906; then-Captain Reginald R. Belknap; and future admiral William Sims. Rear Admiral George A. Converse, commander of the Bureau of Navigation (BuNav) from 1905 to 1906, reported: [W]ith each year that passes the need is painfully apparent for a military administrative authority under the secretary, whose purpose would be to initiate and direct the steps necessary to carry out the Department’s policy, and to coordinate the work of the bureaus and direct their energies toward the effective preparation of the fleet for war. However, reorganization attempts were opposed by Congress due to fears of a Prussian-style general staff and of inadvertently increasing the powers of the Navy secretary, which risked infringing on legislative authority. Senator Eugene Hale, chairman of the Senate Committee on Naval Affairs, disliked reformers like Sims and persistently blocked attempts to bring such ideas to debate. To circumvent the opposition, George von Lengerke Meyer, Secretary of the Navy under William Howard Taft, implemented a system of "aides" on 18 November 1909. These aides lacked command authority and instead served as principal advisors to the Navy secretary. The aide for operations was deemed by Meyer to be the most important one, responsible for devoting "his entire attention and study to the operations of the fleet," and for drafting orders for the movement of ships on the advice of the General Board and with the approval of the secretary in times of war or emergency. The successes of Meyer's first operations aide, Rear Admiral Richard Wainwright, factored into Meyer's decision to make his third operations aide, Rear Admiral Bradley A. Fiske, his de facto principal advisor on 10 February 1913. Fiske retained his post under Meyer's successor, Josephus Daniels, becoming the most prominent advocate for what would become the office of CNO. In 1914, Fiske, frustrated at Daniels' ambivalence towards his opinion that the Navy was unprepared for the possibility of entry into World War I, bypassed the secretary to collaborate with Representative Richmond P. Hobson, a retired Navy admiral, to draft legislation providing for the office of "a chief of naval operations". The preliminary proposal (passed off as Hobson's own to mask Fiske's involvement) passed Hobson's subcommittee unanimously on 4 January 1915, in spite of Daniels' opposition, and cleared the full House Committee on Naval Affairs on 6 January. Fiske's younger supporters expected him to be named the first chief of naval operations, and his versions of the bill provided for the minimum rank of the officeholder to be a two-star rear admiral. 
There shall be a Chief of Naval Operations, who shall be an officer on the active list of the Navy not below the grade of Rear Admiral, appointed for a term of four years by the President, by and with the advice of the Senate, who, under the Secretary of the Navy, shall be responsible for the readiness of the Navy for war and be charged with its general direction. In contrast, Daniels' version, included in the final bill, emphasized the office's subordination to the Navy secretary, allowed for the selection of the CNO from officers of the rank of captain, and denied it authority over the Navy's general direction: There shall be a Chief of Naval Operations, who shall be an officer on the active list of the Navy appointed by the President, by and with the advice and consent of the Senate, from the officers of the line of the Navy not below the grade of Captain for a period of four years, who shall, under the direction of the Secretary, be charged with the operations of the fleet, and with the preparation and readiness of plans for its use in war. Fiske's "end-running" of Daniels eliminated any possibility of him being named the first CNO. Nevertheless, satisfied with the change he had helped enact, Fiske made a final contribution: elevating the statutory rank of the CNO to admiral with commensurate pay. The Senate passed the appropriations bill creating the CNO position and its accompanying office on 3 March 1915, simultaneously abolishing the aides system promulgated under Meyer. Captain William S. Benson was promoted to the temporary rank of rear admiral and became the first CNO on 11 May 1915. After the passage of the 1916 Naval Appropriations Bill with Fiske's amendments, he assumed the rank of admiral, second only to Admiral of the Navy George Dewey and explicitly senior to the commanders-in-chief of the Atlantic, Pacific, and Asiatic Fleets. Unlike Fiske, who had campaigned for a powerful, aggressive CNO sharing authority with the Navy secretary, Benson demonstrated personal loyalty to Secretary Daniels and subordinated himself to civilian control, yet maintained the CNO's autonomy where necessary. While this conduct alienated reformers like Sims and Fiske (who retired in 1916), it gave Daniels immense trust in his new CNO, and Benson was delegated greater resources and authority. Organizational efforts initiated or recommended by Benson included an advisory council to coordinate high-level staff activities, composed of himself, the SECNAV, and the bureau chiefs, which "worked out to the great satisfaction" of Daniels and Benson; the reestablishment of the Joint Army and Navy Board in 1918, with Benson as its Navy member; and the consolidation of all matters of naval aviation under the authority of the CNO. Benson also revamped the structure of the naval districts, transferring authority for them from the SECNAV to the Office of the Chief of Naval Operations under its Operations, Plans, Naval Districts division. This enabled closer cooperation between naval district commanders and the uniformed leadership, who could more easily handle communications between the former and the Navy's fleet commanders. In the waning years of his tenure, Benson set regulations for officers on shore duty to have temporary assignments with the Office of the Chief of Naval Operations to maintain cohesion between the higher-level staff and the fleet. Until 1916, the CNO's office was chronically understaffed. 
The formal establishment of the CNO's "general staff", the Office of the Chief of Naval Operations (OPNAV), originally called the Office for Operations, was aided by Eugene Hale's retirement from politics in 1911 and spurred by skepticism of whether the CNO's small staff could implement President Wilson's policy of "preparedness" without violating American neutrality in World War I. By June 1916, OPNAV was organized into eight divisions: Operations; Plans; Naval Districts; Regulations; Ship Movements; Communications; Publicity; and Materiel. Operations provided a link between fleet commanders and the General Board, Ship Movements coordinated the movement of Navy vessels and oversaw navy yard overhauls, Communications accounted for the Navy's developing radio network, Publicity conducted the Navy's public affairs, and the Materiel section coordinated the work of the naval bureaus. Numbering only 75 staffers in January 1917, OPNAV increased in size following the American entry into World War I, as it was deemed of great importance in managing the rapid mobilization of forces to fight in the war. By war's end, OPNAV employed over 1,462 people. The CNO and OPNAV thus gained influence over Navy administration, but at the expense of the Navy secretary and bureau chiefs. In 1918, Benson became a military advisor to Edward M. House, an advisor and confidant of President Wilson, joining him on a trip to Europe as the 1918 armistice with Germany was signed. His stance that the United States should remain equal to Great Britain in naval power was very useful to House and Wilson, enough for Wilson to insist that Benson remain in Europe until after the Treaty of Versailles was signed in June 1919. Benson's tenure as CNO was slated to end on 10 May 1919, but this was delayed by the president at Secretary Daniels' insistence; Benson instead retired on 25 September 1919. Admiral Robert Coontz replaced Benson as CNO on 1 November 1919. The CNO's office faced no significant changes in authority during the interwar period, largely because the Navy secretaries opted to keep executive authority within their own office. Innovations during this period included encouraging coordination in the war planning process, maintaining compliance with the Washington Naval Treaty while still keeping to the shipbuilding plan authorized by the Naval Act of 1916, and integrating naval aviation into naval doctrine. William V. Pratt became the fifth chief of naval operations on 17 September 1930, after the resignation of Charles F. Hughes. He had previously served as assistant chief of naval operations under CNO Benson. A premier naval policymaker and supporter of arms control under the Washington Naval Treaty, Pratt, despite otherwise good relations, clashed with President Herbert Hoover over building up naval force strength to treaty levels, with Hoover favoring restrictions in spending due to the financial difficulties caused by the Great Depression. Under Pratt, such a "treaty system" was regarded as necessary to maintain a treaty-compliant peacetime navy. Pratt opposed centralized management of the Navy and encouraged diversity of opinion among the offices of the Navy secretary, the CNO, and the Navy's General Board. To this effect, Pratt removed the CNO as an ex officio member of the General Board, concerned that the office's association with the Board could hamper diversity of opinion between the Board and its counterparts within the offices of the Navy secretary and OPNAV. 
Pratt's vision of a less powerful CNO also clashed with that of Representative Carl Vinson of Georgia, chair of the House Naval Affairs Committee from 1931 to 1947 and a proponent of centralizing power within OPNAV. Vinson deliberately held back many of his planned reorganization proposals until Pratt's replacement by William H. Standley, judging that they would otherwise have been needlessly delayed under Pratt. Pratt also enjoyed a good working relationship with Army chief of staff Douglas MacArthur, and negotiated several key agreements with him covering the coordination of their services' radio communications networks, mutual interests in coastal defense, and authority over Army and Navy aviation. William H. Standley, who succeeded Pratt in 1933, had a weaker relationship with President Franklin D. Roosevelt than Pratt had enjoyed with Hoover. Standley was often in direct conflict with Navy secretary Claude A. Swanson and assistant secretary Henry L. Roosevelt; his hostility to the latter was described as "poisonous". Conversely, Standley successfully improved relations with Congress, streamlining communications between the Department of the Navy and the naval oversight committees by appointing the first naval legislative liaisons, the highest-ranked of whom reported to the judge advocate general. Standley also worked with Representative Vinson to pass the Vinson-Trammell Act, which Standley considered his most important achievement as CNO. The Act authorized the President: “to suspend” construction of the ships authorized by the law “as may be necessary to bring the naval armament of the United States within the limitation so agreed upon, except that such suspension shall not apply to vessels actually under construction on the date of the passage of this act.” This effectively provided security for all Navy vessels under construction; even if new shipbuilding projects could not be initiated, shipbuilders with new classes under construction could not legally be obliged to cease operations, allowing the Navy to prepare for World War II without breaking potential limits from future arms control conferences. The Act also granted the CNO "soft oversight power" over the naval bureaus (authority that nominally lay with the secretary of the Navy), as Standley gradually inserted OPNAV into the ship design process. Under Standley, the "treaty system" created by Pratt was abandoned. William D. Leahy, the outgoing Commander, Battle Force, succeeded Standley as CNO on 2 January 1937. Leahy's close personal friendship with President Roosevelt, dating from Roosevelt's days as assistant secretary of the Navy, as well as his good relationships with Representative Vinson and Secretary Swanson, brought him to the forefront of potential candidates for the post. Unlike Standley, who had tried to dominate the bureaus, Leahy preferred to let the bureau chiefs function autonomously as per convention, with the CNO acting as a primus inter pares. Leahy's views of the CNO's authority led to clashes with his predecessor; Standley even attempted to block Leahy from being assigned a fleet command in retaliation. Leahy, for his part, continued Standley's efforts to insert the CNO into the ship design process. Swanson's ill health and assistant secretary Henry Roosevelt's death on 22 February 1936 gave Leahy unprecedented influence. Leahy frequently had private lunches with the president; during Leahy's tenure as CNO, Roosevelt met with him 52 times, compared with 12 meetings with his Army counterpart, General Malin Craig, none of which were private lunches. 
Leahy retired from the Navy on 1 August 1939 to become Governor of Puerto Rico, a month before the invasion of Poland. Number One Observatory Circle, located on the northeast grounds of the United States Naval Observatory in Washington, DC, was built in 1893 for the observatory's superintendent. The chief of naval operations liked the house so much that in 1923 he took it over as his official residence. It remained the residence of the CNO until 1974, when Congress authorized its transformation into an official residence for the vice president. The chief of naval operations currently resides in Quarters A in the Washington Navy Yard. The chief of naval operations presides over the Navy Staff, formally known as the Office of the Chief of Naval Operations (OPNAV). OPNAV is a statutory organization within the executive part of the Department of the Navy, and its purpose is to furnish professional assistance to the secretary of the Navy (SECNAV) and the CNO in carrying out their responsibilities. Under the authority of the CNO, the director of the Navy Staff (DNS) is responsible for the day-to-day administration of the Navy Staff and the coordination of the activities of the deputy chiefs of naval operations, who report directly to the CNO. The office was known as the assistant vice chief of naval operations (AVCNO) until 1996, when CNO Jeremy Boorda ordered its redesignation to its current name. Previously held by a three-star vice admiral, the position became a civilian billet in 2018. The present DNS is Andrew S. Haueptle, a retired Marine Corps colonel.
[ { "paragraph_id": 0, "text": "The chief of naval operations (CNO) is the highest ranking officer of the United States Navy. The position is a statutory office (10 U.S.C. § 8033) held by an admiral who is a military adviser and deputy to the secretary of the Navy. In a separate capacity as a member of the Joint Chiefs of Staff (10 U.S.C. § 151), the CNO is a military adviser to the National Security Council, the Homeland Security Council, the secretary of defense, and the president.", "title": "" }, { "paragraph_id": 1, "text": "Despite the title, the CNO does not have operational command authority over naval forces. The CNO is an administrative position based in the Pentagon, and exercises supervision of Navy organizations as the designee of the secretary of the Navy. Operational command of naval forces falls within the purview of the combatant commanders who report to the secretary of defense.", "title": "" }, { "paragraph_id": 2, "text": "The current chief of naval operations is Lisa Franchetti, who was sworn in on November 2, 2023.", "title": "" }, { "paragraph_id": 3, "text": "The chief of naval operations (CNO) is typically the highest-ranking officer on active duty in the U.S. Navy unless the chairman and/or the vice chairman of the Joint Chiefs of Staff are naval officers. The CNO is nominated for appointment by the president, for a four-year term of office, and must be confirmed by the Senate. A requirement for being Chief of Naval Operations is having significant experience in joint duty assignments, which includes at least one full tour of duty in a joint duty assignment as a flag officer. However, the president may waive those requirements if he determines that appointing the officer is necessary for the national interest. The chief can be reappointed to serve one additional term, but only during times of war or national emergency declared by Congress. By statute, the CNO is appointed as a four-star admiral.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 4, "text": "As per 10 U.S.C. § 8035, whenever there is a vacancy for the chief of naval operations or during the absence or disability of the chief of naval operations, and unless the president directs otherwise, the vice chief of naval operations performs the duties of the chief of naval operations until a successor is appointed or the absence or disability ceases.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 5, "text": "The CNO also performs all other functions prescribed under 10 U.S.C. § 8033, such as presiding over the Office of the Chief of Naval Operations (OPNAV), exercising supervision of Navy organizations, and other duties assigned by the secretary or higher lawful authority, or the CNO delegates those duties and responsibilities to other officers in OPNAV or in organizations below.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 6, "text": "Acting for the secretary of the Navy, the CNO also designates naval personnel and naval forces available to the commanders of unified combatant commands, subject to the approval of the secretary of defense.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 7, "text": "The CNO is a member of the Joint Chiefs of Staff as prescribed by 10 U.S.C. § 151 and 10 U.S.C. § 8033. 
Like the other members of the Joint Chiefs of Staff, the CNO is an administrative position, with no operational command authority over the United States Navy forces.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 8, "text": "Members of the Joint Chiefs of Staff, individually or collectively, in their capacity as military advisers, shall provide advice to the president, the National Security Council (NSC), or the secretary of defense (SECDEF) on a particular matter when the president, the NSC, or SECDEF requests such advice. Members of the Joint Chiefs of Staff (other than the chairman of the Joint Chiefs of Staff) may submit to the chairman advice or an opinion in disagreement with, or advice or an opinion in addition to, the advice presented by the chairman to the president, NSC, or SECDEF.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 9, "text": "When performing her JCS duties, the CNO is responsible directly to the SECDEF, but keeps SECNAV fully informed of significant military operations affecting the duties and responsibilities of the SECNAV, unless SECDEF orders otherwise.", "title": "Appointment, rank, and responsibilities" }, { "paragraph_id": 10, "text": "In 1900, administrative and operational authority over the Navy was concentrated in the secretary of the Navy and bureau chiefs, with the General Board holding only advisory powers. Critics of the lack of military command authority included Charles J. Bonaparte, Navy secretary from 1905 to 1906, then-Captain Reginald R. Belknap and future admiral William Sims.", "title": "History" }, { "paragraph_id": 11, "text": "Rear Admiral George A. Converse, commander of the Bureau of Navigation (BuNav) from 1905 to 1906, reported:", "title": "History" }, { "paragraph_id": 12, "text": "[W]ith each year that passes the need is painfully apparent for a military administrative authority under the secretary, whose purpose would be to initiate and direct the steps necessary to carry out the Department’s policy, and to coordinate the work of the bureaus and direct their energies toward the effective preparation of the fleet for war.", "title": "History" }, { "paragraph_id": 13, "text": "However, reorganization attempts were opposed by Congress due to fears of a Prussian-style general staff and inadvertently increasing the powers of the Navy secretary, which risked infringing on legislative authority. Senator Eugene Hale, chairman of the Senate Committee on Naval Affairs, disliked reformers like Sims and persistently blocked attempts to bring such ideas to debate.", "title": "History" }, { "paragraph_id": 14, "text": "To circumvent the opposition, George von Lengerke Meyer, Secretary of the Navy under William Howard Taft implemented a system of \"aides\" on 18 November 1909. These aides lacked command authority and instead served as principal advisors to the Navy secretary. The aide for operations was deemed by Meyer to be the most important one, responsible for devoting \"his entire attention and study to the operations of the fleet,\" and drafting orders for the movement of ships on the advice of the General Board and approval of the secretary in times of war or emergency.", "title": "History" }, { "paragraph_id": 15, "text": "The successes of Meyer's first operations aide, Rear Admiral Richard Wainwright, factored into Meyer's decision to make his third operations aide, Rear Admiral Bradley A. Fiske his de facto principal advisor on 10 February 1913. 
Fiske retained his post under Meyer's successor, Josephus Daniels, becoming the most prominent advocate for what would become the office of CNO.", "title": "History" }, { "paragraph_id": 16, "text": "In 1914, Fiske, frustrated at Daniels' ambivalence towards his opinion that the Navy was unprepared for the possibility of entry into World War I, bypassed the secretary to collaborate with Representative Richmond P. Hobson, a retired Navy admiral, to draft legislation providing for the office of \"a chief of naval operations\". The preliminary proposal (passed off as Hobson's own to mask Fiske's involvement), in spite of Daniels' opposition, passed Hobson's subcommittee unanimously on 4 January 1915, and passed the full House Committee on Naval Affairs on 6 January.", "title": "History" }, { "paragraph_id": 17, "text": "Fiske's younger supporters expected him to be named the first chief of naval operations, and his versions of the bill provided for the minimum rank of the officeholder to be a two-star rear admiral.", "title": "History" }, { "paragraph_id": 18, "text": "There shall be a Chief of Naval Operations, who shall be an officer on the active list of the Navy not below the grade of Rear Admiral, appointed for a term of four years by the President, by and with the advice of the Senate, who, under the Secretary of the Navy, shall be responsible for the readiness of the Navy for war and be charged with its general direction.", "title": "History" }, { "paragraph_id": 19, "text": "In contrast, Daniels' version, included in the final bill, emphasized the office's subordination to the Navy secretary, allowed for the selection of the CNO from officers of the rank of captain, and denied it authority over the Navy's general direction:", "title": "History" }, { "paragraph_id": 20, "text": "There shall be a Chief of Naval Operations, who shall be an officer on the active list of the Navy appointed by the President, by and with the advice and consent of the Senate, from the officers of the line of the Navy not below the grade of Captain for a period of four years, who shall, under the direction of the Secretary, be charged with the operations of the fleet, and with the preparation and readiness of plans for its use in war.", "title": "History" }, { "paragraph_id": 21, "text": "Fiske's \"end-running\" of Daniels eliminated any possibility of him being named the first CNO. Nevertheless, satisfied with the change he had helped enact, Fiske made a final contribution: elevating the statutory rank of the CNO to admiral with commensurate pay. The Senate passed the appropriations bill creating the CNO position and its accompanying office on 3 March 1915, simultaneously abolishing the aides system promulgated under Meyer.", "title": "History" }, { "paragraph_id": 22, "text": "Captain William S. Benson was promoted to the temporary rank of rear admiral and became the first CNO on 11 May 1915. He further assumed the rank of admiral after the passage of the 1916 Naval Appropriations Bill with Fiske's amendments, second only to Admiral of the Navy George Dewey and explicitly senior to the commanders-in-chief of the Atlantic, Pacific and Asiatic Fleets.", "title": "History" }, { "paragraph_id": 23, "text": "Unlike Fiske, who had campaigned for a powerful, aggressive CNO sharing authority with the Navy secretary, Benson demonstrated personal loyalty to Secretary Daniels and subordinated himself to civilian control, yet maintained the CNO's autonomy where necessary. 
While alienating reformers like Sims and Fiske (who retired in 1916), Benson's conduct gave Daniels immense trust in his new CNO, and Benson was delegated greater resources and authority.", "title": "History" }, { "paragraph_id": 24, "text": "Among the organizational efforts initiated or recommended by Benson included an advisory council to coordinate high-level staff activities, composed of himself, the SECNAV and the bureau chiefs which \"worked out to the great satisfaction\" of Daniels and Benson; the reestablishment of the Joint Army and Navy Board in 1918 with Benson as its Navy member; and the consolidation of all matters of naval aviation under the authority of the CNO.", "title": "History" }, { "paragraph_id": 25, "text": "Benson also revamped the structure of the naval districts, transferring authority for them from SECNAV to the Office of the Chief of Naval Operations under the Operations, Plans, Naval Districts division. This enabled closer cooperation between naval district commanders and the uniformed leadership, who could more easily handle communications between the former and the Navy's fleet commanders.", "title": "History" }, { "paragraph_id": 26, "text": "In the waning years of his tenure, Benson set regulations for officers on shore duty to have temporary assignments with the Office of the Chief of Naval Operations to maintain cohesion between the higher-level staff and the fleet.", "title": "History" }, { "paragraph_id": 27, "text": "Until 1916, the CNO's office was chronically understaffed. The formal establishment of the CNO's \"general staff\", the Office of the Chief of Naval Operations (OPNAV), originally called the Office for Operations, was exacerbated by Eugene Hale's retirement from politics in 1911, and skepticism of whether the CNO's small staff could implement President Wilson's policy of \"preparedness\" without violating American neutrality in World War I.", "title": "History" }, { "paragraph_id": 28, "text": "By June 1916, OPNAV was organized into eight divisions: Operations, Plans, Naval Districts; Regulations; Ship Movements; Communications; Publicity; and Materiel. Operations provided a link between fleet commanders and the General Board, Ship Movements coordinated the movement of Navy vessels and oversaw navy yard overhauls, Communications accounted for the Navy's developing radio network, Publicity conducted the Navy's public affairs, and the Materiel section coordinated the work of the naval bureaus.", "title": "History" }, { "paragraph_id": 29, "text": "Numbering only 75 staffers in January 1917, OPNAV increased in size following the American entry into World War I, as it was deemed of great importance to manage the rapid mobilization of forces to fight in the war. By war's end, OPNAV employed over 1462 people. The CNO and OPNAV thus gained influence over Navy administration but at the expense of the Navy secretary and bureau chiefs.", "title": "History" }, { "paragraph_id": 30, "text": "In 1918, Benson became a military advisor to Edward M. House, an advisor and confidant of President Wilson, joining him on a trip to Europe as the 1918 armistice with Germany was signed. 
His stance that the United States remain equal to Great Britain in naval power was very useful to House and Wilson, enough for Wilson to insist Benson remain in Europe until after the Treaty of Versailles was signed in July 1919.", "title": "History" }, { "paragraph_id": 31, "text": "Benson's tenure as CNO was slated to end on 10 May 1919, but this was delayed by the president at Secretary Daniels' insistence; Benson instead retired on 25 September 1919. Admiral Robert Coontz replaced Benson as CNO on 1 November 1919.", "title": "History" }, { "paragraph_id": 32, "text": "The CNO's office faced no significant changes in authority during the interwar period, largely due to the Navy secretaries opting to keep executive authority within their own office. Innovations during this period included encouraging coordination in war planning process, and compliance with the Washington Naval Treaty while still keeping to the shipbuilding plan authorized by the Naval Act of 1916. and implementing the concept of naval aviation into naval doctrine.", "title": "History" }, { "paragraph_id": 33, "text": "William V. Pratt became the fifth Chief of Naval Operations on 17 September 1930, after the resignation of Charles F. Hughes. He had previously served as assistant chief of naval operations under CNO Benson. A premier naval policymaker and supporter of arms control under the Washington Naval Treaty, Pratt, despite otherwise good relations, clashed with President Herbert Hoover over building up naval force strength to treaty levels, with Hoover favoring restrictions in spending due to financial difficulties caused by the Great Depression. Under Pratt, such a \"treaty system\" was needed to maintain a compliant peacetime navy.", "title": "History" }, { "paragraph_id": 34, "text": "Pratt opposed centralized management of the Navy, and encouraged diversity of opinion between the offices of the Navy secretary, CNO and the Navy's General Board. To this effect, Pratt removed the CNO as an ex officio member of the General Board, concerned that the office's association with the Board could hamper diversities of opinion between the former and counterparts within the offices of the Navy secretary and OPNAV. Pratt's vision of a less powerful CNO also clashed with Representative Carl Vinson of Georgia, chair of the House Naval Affairs Committee from 1931 to 1947, a proponent of centralizing power within OPNAV. Vinson deliberately delayed many of his planned reorganization proposals until Pratt's replacement by William H. Standley to avoid the unnecessary delays that would otherwise have happened with Pratt.", "title": "History" }, { "paragraph_id": 35, "text": "Pratt also enjoyed a good working relationship with Army chief of staff Douglas MacArthur, and negotiated several key agreements with him over coordinating their services' radio communications networks, mutual interests in coastal defense, and authority over Army and Navy aviation.", "title": "History" }, { "paragraph_id": 36, "text": "William H. Standley, who succeeded Pratt in 1933, had a weaker relationship with President Franklin D. Roosevelt than Pratt enjoyed with Hoover. Often in direct conflict with Navy secretary Claude A. Swanson and assistant secretary Henry L. 
Roosevelt, Standley's hostility to the latter was described as \"poisonous\".", "title": "History" }, { "paragraph_id": 37, "text": "Conversely, Standley successfully improved relations with Congress, streamlining communications between the Department of the Navy and the naval oversight committees by appointing the first naval legislative liaisons, the highest-ranked of which reported to the judge advocate general. Standley also worked with Representative Vinson to pass the Vinson-Trammell Act, considered by Standley to be his most important achievement as CNO. The Act authorized the President:", "title": "History" }, { "paragraph_id": 38, "text": "“to suspend” construction of the ships authorized by the law “as may be necessary to bring the naval armament of the United States within the limitation so agreed upon, except that such suspension shall not apply to vessels actually under construction on the date of the passage of this act.”", "title": "History" }, { "paragraph_id": 39, "text": "This effectively provided security for all Navy vessels under construction; even if new shipbuilding projects could not be initiated, shipbuilders with new classes under construction could not legally be obliged to cease operations, allowing the Navy to prepare for World War II without breaking potential limits from future arms control conferences. The Act also granted the CNO \"soft oversight power\" of the naval bureaus which nominally lay with the secretary of the Navy, as Standley gradually inserted OPNAV into the ship design process. Under Standley, the \"treaty system\" created by Pratt was abandoned.", "title": "History" }, { "paragraph_id": 40, "text": "Outgoing commander, Battle Force William D. Leahy succeeded Standley as CNO on 2 January 1937. Leahy's close personal friendship with President Roosevelt since his days as Navy assistant secretary, as well as good relationships with Representative Vinson and Secretary Swanson brought him to the forefront of potential candidates for the post. Unlike Standley, who tried to dominate the bureaus, Leahy preferred to let the bureau chiefs function autonomously as per convention, with the CNO acting as a primus inter pares. Leahy's views of the CNO's authority led to clashes with his predecessor; Standley even attempted to block Leahy from being assigned a fleet command in retaliation. Leahy, on his part, continued Standley's efforts to insert the CNO into the ship design process.", "title": "History" }, { "paragraph_id": 41, "text": "Swanson's ill health and assistant secretary Henry Roosevelt's death on 22 February 1936 gave Leahy unprecedented influence. Leahy had private lunches with the President frequently; during his tenure as CNO, Roosevelt had 52 meetings with him, compared with 12 with his Army counterpart, General Malin Craig, none of which were private lunches.", "title": "History" }, { "paragraph_id": 42, "text": "Leahy retired from the Navy on 1 August 1939 to become Governor of Puerto Rico, a month before the invasion of Poland.", "title": "History" }, { "paragraph_id": 43, "text": "Number One Observatory Circle, located on the northeast grounds of the United States Naval Observatory in Washington, DC, was built in 1893 for its superintendent. The chief of naval operations liked the house so much that in 1923 he took over the house as his own official residence. It remained the residence of the CNO until 1974, when Congress authorized its transformation to an official residence for the vice president. 
The chief of naval operations currently resides in Quarters A in the Washington Naval Yard.", "title": "Official residence" }, { "paragraph_id": 44, "text": "The chief of naval operations presides over the Navy Staff, formally known as the Office of the Chief of Naval Operations (OPNAV). The Office of the Chief of Naval Operations is a statutory organization within the executive part of the Department of the Navy, and its purpose is to furnish professional assistance to the secretary of the Navy (SECNAV) and the CNO in carrying out their responsibilities.", "title": "Office of the Chief of Naval Operations" }, { "paragraph_id": 45, "text": "Under the authority of the CNO, the director of the Navy Staff (DNS) is responsible for day-to-day administration of the Navy Staff and coordination of the activities of the deputy chiefs of naval operations, who report directly to the CNO. The office was previously known as the assistant vice chief of naval operations (AVCNO) until 1996, when CNO Jeremy Boorda ordered its redesignation to its current name. Previously held by a three-star vice admiral, the position became a civilian's billet in 2018. The present DNS is Andrew S. Haueptle, a retired Marine Corps colonel.", "title": "Office of the Chief of Naval Operations" }, { "paragraph_id": 46, "text": "(† - died in office)", "title": "List of chiefs of naval operations" }, { "paragraph_id": 47, "text": "", "title": "External links" } ]
The chief of naval operations (CNO) is the highest ranking officer of the United States Navy. The position is a statutory office (10 U.S.C. § 8033) held by an admiral who is a military adviser and deputy to the secretary of the Navy. In a separate capacity as a member of the Joint Chiefs of Staff (10 U.S.C. § 151), the CNO is a military adviser to the National Security Council, the Homeland Security Council, the secretary of defense, and the president. Despite the title, the CNO does not have operational command authority over naval forces. The CNO is an administrative position based in the Pentagon, and exercises supervision of Navy organizations as the designee of the secretary of the Navy. Operational command of naval forces falls within the purview of the combatant commanders who report to the secretary of defense. The current chief of naval operations is Lisa Franchetti, who was sworn in on November 2, 2023.
https://en.wikipedia.org/wiki/Chief_of_Naval_Operations
Clara Petacci
Clara "Claretta" Petacci (Italian: [klaˈretta peˈtattʃi]; 28 February 1912 – 28 April 1945) was a mistress of the Italian dictator Benito Mussolini. She was killed by Italian partisans during Mussolini's execution. Daughter of Giuseppina Persichetti (1888–1962) and the physician Francesco Saverio Petacci (1883–1970), Clara Petacci was born into a privileged and religious family in Rome in 1912. Her father, a physician of the Holy Apostolic Palaces, became a supporter of fascism. A child when Mussolini rose to power in the 1920s, Clara Petacci idolised him from an early age. After Violet Gibson attempted to assassinate the dictator in April 1926, the 14-year-old Petacci wrote to him commenting "O, Duce, why was I not with you? ... Could I not have strangled that murderous woman?" Petacci had a long-standing relationship with Mussolini while he was married to Rachele Mussolini. Petacci was 28 years younger than Mussolini. They met for the first time in April 1932 when Mussolini, driving with an aide to Ostia, overtook a car occupied by the twenty-year-old Petacci and family members. She called out, "Duce! Duce!" and when he stopped, told him that she had been writing to him since her early teens. In 1934, Petacci married Italian Air Force officer Riccardo Federici, but she parted ways with her husband when he was sent to Tokyo as Air Attaché in 1936. Petacci then became the mistress of the fifty-three-year-old Mussolini, visiting the Palazzo Venezia, where a small apartment was reserved for her. Her infatuation with Mussolini appears to have been genuine and permanent. He, by contrast, welcomed a long-term and controlling relationship with an easily dominated and credulous young woman. The affair became widely known and members of the Petacci family, notably her brother, Marcello, were able to benefit financially and professionally by influence-selling. Part of Petacci and Mussolini's correspondence has not been released on the grounds of privacy. On 27 April 1945, Mussolini and Petacci were captured by partisans while traveling with a Luftwaffe convoy retreating to Germany. The German column included a number of Italian Social Republic members. On 28 April, she and Mussolini were taken to Mezzegra and executed. One source alleges Petacci's execution was not planned and that she died throwing herself on Mussolini in a vain attempt to protect him from the bullets. On the following day, the bodies of Mussolini and Petacci were taken to Piazzale Loreto in Milan and hung upside down in front of an Esso petrol station. The bodies were photographed as a crowd vented their rage upon them. On the same day, Clara's brother, Marcello Petacci, was also killed in Dongo by the partisans, along with fifteen other people complicit in Mussolini's escape. After the war, the family of Petacci began civil and criminal court cases against Walter Audisio for Petacci's unlawful killing. After a lengthy legal process, an investigating judge eventually closed the case in 1967. Audisio was acquitted of murder and embezzlement on the grounds that the actions complained of occurred as an act of war against the Germans and the fascists during a period of enemy occupation.
[ { "paragraph_id": 0, "text": "Clara \"Claretta\" Petacci (Italian: [klaˈretta peˈtattʃi]; 28 February 1912 – 28 April 1945) was a mistress of the Italian dictator Benito Mussolini. She was killed by Italian partisans during Mussolini's execution.", "title": "" }, { "paragraph_id": 1, "text": "Daughter of Giuseppina Persichetti (1888–1962) and the physician Francesco Saverio Petacci (1883–1970), Clara Petacci was born into a privileged and religious family in Rome in 1912. Her father, a physician of the Holy Apostolic Palaces, became a supporter of fascism. A child when Mussolini rose to power in the 1920s, Clara Petacci idolised him from an early age. After Violet Gibson attempted to assassinate the dictator in April 1926, the 14-year-old Petacci wrote to him commenting \"O, Duce, why was I not with you? ... Could I not have strangled that murderous woman?\"", "title": "Early life" }, { "paragraph_id": 2, "text": "Petacci had a long-standing relationship with Mussolini while he was married to Rachele Mussolini. Petacci was 28 years younger than Mussolini. They met for the first time in April 1932 when Mussolini, driving with an aide to Ostia, overtook a car occupied by the twenty-year-old Petacci and family members. She called out, \"Duce! Duce!\" and when he stopped, told him that she had been writing to him since her early teens.", "title": "Relationship with Mussolini" }, { "paragraph_id": 3, "text": "In 1934, Petacci married Italian Air Force officer Riccardo Federici, but she parted ways with her husband when he was sent to Tokyo as Air Attaché in 1936. Petacci then became the mistress of the fifty-three-year-old Mussolini, visiting the Palazzo Venezia, where a small apartment was reserved for her. Her infatuation with Mussolini appears to have been genuine and permanent. He, by contrast, welcomed a long-term and controlling relationship with an easily dominated and credulous young woman. The affair became widely known and members of the Petacci family, notably her brother, Marcello, were able to benefit financially and professionally by influence-selling.", "title": "Relationship with Mussolini" }, { "paragraph_id": 4, "text": "Part of Petacci and Mussolini's correspondence has not been released on the grounds of privacy.", "title": "Relationship with Mussolini" }, { "paragraph_id": 5, "text": "On 27 April 1945, Mussolini and Petacci were captured by partisans while traveling with a Luftwaffe convoy retreating to Germany. The German column included a number of Italian Social Republic members.", "title": "Death" }, { "paragraph_id": 6, "text": "On 28 April, she and Mussolini were taken to Mezzegra and executed. One source alleges Petacci's execution was not planned and that she died throwing herself on Mussolini in a vain attempt to protect him from the bullets. On the following day, the bodies of Mussolini and Petacci were taken to Piazzale Loreto in Milan and hung upside down in front of an Esso petrol station. The bodies were photographed as a crowd vented their rage upon them. On the same day, Clara's brother, Marcello Petacci, was also killed in Dongo by the partisans, along with fifteen other people complicit in Mussolini's escape.", "title": "Death" }, { "paragraph_id": 7, "text": "After the war, the family of Petacci began civil and criminal court cases against Walter Audisio for Petacci's unlawful killing. After a lengthy legal process, an investigating judge eventually closed the case in 1967. 
Audisio was acquitted of murder and embezzlement on the grounds that the actions complained of occurred as an act of war against the Germans and the fascists during a period of enemy occupation.", "title": "Death" }, { "paragraph_id": 8, "text": "", "title": "Further reading" } ]
Clara "Claretta" Petacci was a mistress of the Italian dictator Benito Mussolini. She was killed by Italian partisans during Mussolini's execution.
https://en.wikipedia.org/wiki/Clara_Petacci
Costa Smeralda
The Costa Smeralda (Italian: [ˈkɔsta zmeˈralda], lit. 'Emerald Coast'; Gallurese: Monti di Mola; Sardinian: Montes de Mola) is a coastal area and tourist destination in northern Sardinia, Italy, with a length of some 20 km, although the term originally designated only a small stretch in the commune of Arzachena. With white sand beaches, golf clubs, private jet and helicopter services, and exclusive hotels, the area has drawn celebrities, business and political leaders, and other affluent visitors.

Costa Smeralda is among the most expensive locations in Europe: house prices reach up to 300,000 euros ($392,200) per square meter.

The main towns and villages in the area, built according to a detailed urban plan, are Porto Cervo, Liscia di Vacca, Capriccioli, and Romazzino. Archaeological sites include the Li Muri Giants' graves.

Each September the Sardinia Cup sailing regatta is held off the coast. Polo matches are held between April and October at Gershan near Arzachena. Other attractions include a film festival in Tavolara and a vintage car rally.

Development of the area started in 1961 and was financed by a consortium of companies led by Prince Karim Aga Khan. Spiaggia del Principe, one of the beaches along the Costa Smeralda, was named after this Ismaili prince. Architects involved in the project included Michele Busiri Vici, Jacques Couëlle, Savin Couëlle, and Vietti.

41°06′N 9°30′E
[ { "paragraph_id": 0, "text": "The Costa Smeralda (Italian: [ˈkɔsta zmeˈralda], lit. 'Emerald Coast'; Gallurese: Monti di Mola; Sardinian: Montes de Mola) is a coastal area and tourist destination in northern Sardinia, Italy, with a length of some 20 km, although the term originally designated only a small stretch in the commune of Arzachena. With white sand beaches, golf clubs, private jet and helicopter services, and exclusive hotels, the area has drawn celebrities, business and political leaders, and other affluent visitors.", "title": "" }, { "paragraph_id": 1, "text": "Costa Smeralda is among the most expensive locations in Europe. House prices reach up to 300,000 euros ($392,200) per square meter.", "title": "" }, { "paragraph_id": 2, "text": "The main towns and villages in the area, built according to a detailed urban plan, are Porto Cervo, Liscia di Vacca, Capriccioli, and Romazzino. Archaeological sites include the Li Muri Giants' graves.", "title": "" }, { "paragraph_id": 3, "text": "Each September the Sardinia Cup sailing regatta is held off the coast. Polo matches are held between April and October at Gershan near Arzachena. Other attractions include a film festival in Tavolara and a vintage car rally.", "title": "" }, { "paragraph_id": 4, "text": "Development of the area started in 1961, and was financed by a consortium of companies led by Prince Karim Aga Khan. Spiaggia del Principe, one of the beaches along the Costa Smeralda, was named after this Ishmaelite prince. Architects involved in the project included Michele Busiri Vici, Jacques Couëlle, Savin Couëlle, and Vietti.", "title": "" }, { "paragraph_id": 5, "text": "41°06′N 9°30′E / 41.1°N 9.5°E / 41.1; 9.5", "title": "External links" } ]
The Costa Smeralda is a coastal area and tourist destination in northern Sardinia, Italy, with a length of some 20 km, although the term originally designated only a small stretch in the commune of Arzachena. With white sand beaches, golf clubs, private jet and helicopter services, and exclusive hotels, the area has drawn celebrities, business and political leaders, and other affluent visitors. Costa Smeralda is among the most expensive locations in Europe. House prices reach up to 300,000 euros ($392,200) per square meter. The main towns and villages in the area, built according to a detailed urban plan, are Porto Cervo, Liscia di Vacca, Capriccioli, and Romazzino. Archaeological sites include the Li Muri Giants' graves. Each September the Sardinia Cup sailing regatta is held off the coast. Polo matches are held between April and October at Gershan near Arzachena. Other attractions include a film festival in Tavolara and a vintage car rally. Development of the area started in 1961, and was financed by a consortium of companies led by Prince Karim Aga Khan. Spiaggia del Principe, one of the beaches along the Costa Smeralda, was named after this Ishmaelite prince. Architects involved in the project included Michele Busiri Vici, Jacques Couëlle, Savin Couëlle, and Vietti.
https://en.wikipedia.org/wiki/Costa_Smeralda
Chianti
Chianti is an Italian red wine produced in the Chianti region of central Tuscany, principally from the Sangiovese grape. It was historically associated with a squat bottle enclosed in a straw basket, called a fiasco ("flask"; pl.: fiaschi). However, the fiasco is now only used by a few makers of the wine; most Chianti is bottled in more standard-shaped wine bottles. In the late nineteenth century, Baron Bettino Ricasoli (later Prime Minister of the Kingdom of Italy) helped establish Sangiovese as the blend's dominant grape variety, creating the blueprint for today's Chianti wines.

The first definition of a wine area called Chianti was made in 1716. It described the area near the villages of Gaiole, Castellina and Radda: the so-called Lega del Chianti and later Provincia del Chianti (Chianti province). In 1932 the Chianti area was completely redrawn and divided into seven sub-areas: Classico, Colli Aretini, Colli Fiorentini, Colline Pisane, Colli Senesi, Montalbano and Rùfina. Most of the villages that were added to the newly defined Chianti Classico region in 1932 appended in Chianti to their names; for example, Greve amended its name to Greve in Chianti in 1972. Wines labelled Chianti Classico come from the largest sub-area of Chianti, which includes the original Chianti heartland. Only Chianti from this sub-zone may display the black rooster (gallo nero) seal on the neck of the bottle, which indicates that the producer of the wine is a member of the Chianti Classico Consortium, the local association of producers. Other variants, with the exception of Rufina north-east of Florence and Montalbano south of Pistoia, originate in the named provinces: Siena for the Colli Senesi, Florence for the Colli Fiorentini, Arezzo for the Colli Aretini and Pisa for the Colline Pisane. In 1996 part of the Colli Fiorentini sub-area was renamed Montespertoli.

During the 1970s producers started to reduce the quantity of white grapes in Chianti. In 1995 it became legal to produce a Chianti with 100% Sangiovese. For a wine to retain the name of Chianti, it must be produced with at least 80% Sangiovese grapes. Chianti aged for 38 months instead of the usual 4–7 may be labelled as Riserva. Chianti that meets more stringent requirements (lower yield, higher alcohol content and dry extract) may be labelled as Chianti Superiore, although Chianti from the Classico sub-area is not allowed in any event to be labelled as Superiore.

The earliest documentation of a "Chianti wine" dates back to the 14th century, when viticulture was known to flourish in the "Chianti Mountains" around Florence. A military league called Lega del Chianti (League of Chianti) was formed around 1250 between the townships of Castellina, Gaiole and Radda, which would lead to the wine from this area taking on a similar name. In 1398 the earliest-known record notes Chianti as a white wine, though the red wines of Chianti were also discussed around the same time in similar documents. The first attempt to classify Chianti wine in any way came in 1427, when Florence developed a tariff system for the wines of the surrounding countryside, including an area referred to as "Chianti and its entire province". In 1716 Cosimo III de' Medici, Grand Duke of Tuscany, issued an edict recognising the three villages of the Lega del Chianti (Castellina in Chianti, Gaiole in Chianti and Radda in Chianti), as well as the village of Greve and a 3.2-kilometre (2-mile) stretch of hillside north of Greve near Spedaluzzo, as the only officially recognised producers of Chianti.
This delineation existed until July 1932, when the Italian government expanded the Chianti zone to include the outlying areas of Barberino Val d'Elsa, Chiocchio, Robbiano, San Casciano in Val di Pesa and Strada. Subsequent expansions in 1967 would eventually result in the Chianti zone covering a very large area all over central Tuscany.

By the 18th century Chianti was widely recognised as a red wine, but the exact composition and grape varieties used to make Chianti at this point are unknown. Ampelographers find clues about which grape varieties were popular at the time in the writings of the Italian writer Cosimo Villifranchi, who noted that Canaiolo was a widely planted variety in the area along with Sangiovese, Mammolo and Marzemino. It was not until the work of the Italian statesman Bettino Ricasoli that the modern Chianti recipe as a Sangiovese-based wine would take shape. Prior to Ricasoli, Canaiolo was emerging as the dominant variety in the Chianti blend, with Sangiovese and Malvasia Bianca Lunga playing supporting roles. In the mid-19th century, Ricasoli developed a recipe for Chianti that was based primarily on Sangiovese. Though he is often credited with creating and disseminating a specific formula (typically reported as 70% Sangiovese, 20% Canaiolo, 10% Malvasia Bianca Lunga), a review of his correspondence of the time does not corroborate this. In addition, his efforts were quickly corrupted by other local winemakers (for example, replacing Malvasia with Trebbiano Toscano, or relying too heavily on the latter), leading to further misunderstanding of the "Ricasoli formula". In 1967, the Denominazione di origine controllata (DOC) regulation set by the Italian government was based on a loose interpretation of Ricasoli's "recipe", calling for a Sangiovese-based blend with 10–30% Malvasia and Trebbiano.

The late 19th century saw a period of economic and political upheaval. First oidium and then the phylloxera epidemic took their toll on the vineyards of Chianti, just as they had ravaged vineyards across the rest of Europe. The chaos and poverty following the Risorgimento heralded the beginning of the Italian diaspora that would take Italian vineyard workers and winemakers abroad as immigrants to new lands. Those that stayed behind and replanted chose high-yielding varieties like Trebbiano and Sangiovese clones such as the Sangiovese di Romagna from the nearby Romagna region. Following the Second World War, the general trend in the world wine market for cheap, easy-drinking wine saw a brief boom for the region. With over-cropping and an emphasis on quantity over quality, the reputation of Chianti among consumers eventually plummeted. By the 1950s, Trebbiano (which is known for its neutral flavours) made up to 30% of many mass-market Chiantis. By the late 20th century, Chianti was often associated with basic Chianti sold in a squat bottle enclosed in a straw basket, the fiasco.
However, during the same period, a group of ambitious producers began working outside the boundaries of DOC regulations to make what they believed would be a higher-quality wine. These wines eventually became known as the "Super Tuscans".

Many of the producers behind the Super Tuscan movement were originally Chianti producers who were rebelling against what they felt were antiquated DOC regulations. Some of these producers wanted to make Chiantis that were 100% varietal Sangiovese. Others wanted the flexibility to experiment with blending French grape varieties such as Cabernet Sauvignon and Merlot, or not to be required to blend in any white grape varieties. The late 20th century saw a flurry of creativity and innovation in the Chianti zones as producers experimented with new grape varieties and introduced modern wine-making techniques such as the use of new oak barrels. The prices and wine ratings of some Super Tuscans would regularly eclipse those of DOC-sanctioned Chiantis. The success of the Super Tuscans encouraged government officials to reconsider the DOC regulations in order to bring some of these wines back into the fold labelled as Chianti.

The Chianti region covers a vast area of Tuscany and includes within its boundaries several overlapping Denominazione di origine controllata (DOC) and Denominazione di Origine Controllata e Garantita (DOCG) regions. Other well-known Sangiovese-based Tuscan wines, such as Brunello di Montalcino and Vino Nobile di Montepulciano, could be bottled and labelled under the most basic designation of "Chianti" if their producers chose to do so. Within the collective Chianti region more than 8 million cases of wines classified as DOC-level or above are produced each year. Today, most Chianti falls under two major designations: Chianti DOCG, which includes basic-level Chianti as well as wine from seven designated sub-zones, and Chianti Classico DOCG. Together, these two Chianti zones produce the largest volume of DOC/G wines in Italy.

The Chianti DOCG covers all the Chianti wine and includes a large stretch of land encompassing the western reaches of the province of Pisa near the coast of the Tyrrhenian Sea, the Florentine hills in the province of Florence to the north, the province of Arezzo in the east and the Siena hills to the south. Within this region are vineyards that overlap the DOCG regions of Brunello di Montalcino, Vino Nobile di Montepulciano and Vernaccia di San Gimignano. Any Sangiovese-based wine made according to the Chianti guidelines from these vineyards can be labelled and marked under the basic Chianti DOCG should the producer wish to use the designation.

Within the Chianti DOCG there are eight defined sub-zones that are permitted to affix their name to the wine label. Wines that are labelled simply as Chianti are made either from a blend from these sub-zones or include grapes from peripheral areas not within the boundaries of a sub-zone. The sub-zones are (clockwise from the north): the Colli Fiorentini, located south of the city of Florence; Chianti Rufina, in the northeastern part of the zone around the commune of Rufina; Classico, in the centre of Chianti, across the provinces of Florence and Siena; Colli Aretini, in the Arezzo province to the east; Colli Senesi, south of Chianti Classico in the Siena hills, the largest of the sub-zones, which includes the Brunello di Montalcino and Vino Nobile di Montepulciano areas; Colline Pisane, the westernmost sub-zone, in the province of Pisa; Montespertoli, located within the Colli Fiorentini around the commune of Montespertoli; and Montalbano, in the north-west part of the zone, which includes the Carmignano DOCG.
As of 2006, there were 318 hectares (786 acres) under production in Montalbano, 905 ha (2,236 acres) in the Colli Fiorentini, 57 ha (140 acres) in Montespertoli, 740 ha (1,840 acres) in Rufina, 3,550 ha (8,780 acres) in the Colli Senesi, 150 ha (380 acres) in Colline Pisane, 649 ha (1,603 acres) in the Colli Aretini, and an additional 10,324 ha (25,511 acres) in the peripheral areas that do not fall within one of the sub-zone classifications. Wines produced from these vineyards are labelled simply "Chianti".

The original area dictated by the edict of Cosimo III de' Medici would eventually be considered the heart of the modern "Chianti Classico" subregion. As of 2006, there were 7,140 ha (17,640 acres) of vineyards in the Chianti Classico subregion. The Chianti Classico subregion covers an area of approximately 260 km² (100 sq mi) between the city of Florence to the north and Siena to the south. The four communes of Castellina in Chianti, Gaiole in Chianti, Greve in Chianti and Radda in Chianti are located entirely within the boundaries of the Classico area, with parts of Barberino Val d'Elsa, San Casciano in Val di Pesa and Tavarnelle Val di Pesa in the province of Florence, as well as Castelnuovo Berardenga and Poggibonsi in the province of Siena, also included within the permitted boundaries of Chianti Classico.

The soil and geography of this subregion can be quite varied, with altitudes ranging from 250 to 610 m (820 to 2,000 feet) and rolling hills producing differing macroclimates. There are two main soil types in the area: a weathered sandstone known as alberese and a bluish-grey chalky marlstone known as galestro. The soil in the north is richer and more fertile, with more galestro, the soil gradually becoming harder and stonier, with more alberese, further south. In the north, the Arno River can have an influence on the climate, keeping the temperatures slightly cooler, an influence that diminishes further south in the warmer Classico territory towards Castelnuovo Berardenga.

Chianti Classico wines are premium Chiantis that tend to be medium-bodied with firm tannins and medium-high to high acidity. Floral, cherry and light nutty notes are characteristic aromas, with the wines expressing more notes on the mid-palate and finish than at the front of the mouth. As with Bordeaux, the different zones of Chianti Classico have unique characteristics that can be exemplified and perceived in some wines from those areas. According to Master of Wine Mary Ewing-Mulligan, Chianti Classico wines from the Castellina area tend to have a very delicate aroma and flavour, Castelnuovo Berardenga wines tend to be the ripest and richest tasting, wines from Gaiole tend to be characterised by their structure and firm tannins, and wines from the Greve area tend to have very concentrated flavours.

The production of Chianti Classico is carried out under the supervision of the Consorzio del Vino Chianti Classico, a union of producers in the Chianti Classico subregion. The Consorzio was founded with the aim of promoting the wines of the subregion, improving quality and preventing wine fraud. Since the 1980s, the foundation has sponsored extensive research into the viticultural and winemaking practices of the Chianti Classico area, particularly in the area of clonal research.
In the last three decades, more than 50% of the vineyards in the Chianti Classico subregion have been replanted with improved Sangiovese clones and modern vineyard techniques as part of the Consorzio Chianti Classico's project "Chianti 2000".

In 2014, a new category of Chianti Classico was introduced: Chianti Classico Gran Selezione. Gran Selezione is made exclusively from a winery's own grapes, grown according to stricter regulations than those for regular Chianti Classico. Gran Selezione is granted to a Chianti Classico after it passes a suitability test conducted by authorised laboratories and after it is approved by a special tasting committee. The creation of the Chianti Classico Gran Selezione DOCG has been criticised, with some describing it as "needless; an extra layer of confusion created by marketing people hoping to help Chianti Classico out of a sales crisis."

Outside of the Chianti Classico area, the wines of the Chianti sub-zone of Rufina are among the most widely recognised and exported from the Chianti region. Located in the Arno valley near the town of Pontassieve, the Rufina region includes much of the Pomino region, which has a long history of wine production. The area is noted for the cool climate of its elevated vineyards, located up to 900 m (2,950 feet). The vineyard soils of the area are predominantly marl and chalk. The Florentine merchant families of the Antinori and Frescobaldi own the majority of the vineyards in Rufina. Chianti from the Rufina area is characterised by its multi-layered complexity and elegance.

The Colli Fiorentini subregion has seen an influx of activity and new vineyard development in recent years as wealthy Florentine business people move to the country to plant vineyards and open wineries. Many foreign "flying winemakers" have had a hand in this development, bringing global viticulture and wine-making techniques to the Colli Fiorentini. Located in the hills between the Chianti Classico area and the Arno valley, the wines of the Colli Fiorentini vary widely depending on the producer, but tend to have a simple structure with strong character and fruit notes. The Montespertoli sub-zone was part of the Colli Fiorentini sub-zone until 2002, when it became its own tiny enclave.

The Montalbano subregion is located in the shadow of the Carmignano DOCG, with much of the best Sangiovese going to that wine. A similar situation exists in the Colli Senesi, which includes the well-known DOCG region of Vino Nobile di Montepulciano. Both regions rarely appear on wine labels that are exported out of Tuscany. The Colli Pisane area produces typical Chiantis with the lightest body and colour. The Colli Aretini is a relatively new and emerging area that has seen an influx of investment and new winemaking in recent years.

Since 1996 the blend for Chianti and Chianti Classico has been 75–100% Sangiovese, up to 10% Canaiolo and up to 20% of any other approved red grape variety such as Cabernet Sauvignon, Merlot or Syrah. Since 2006, the use of white grape varieties such as Malvasia and Trebbiano has been prohibited in Chianti Classico. Chianti Classico must have a minimum alcohol level of at least 12% with a minimum of 7 months aging in oak, while Chianti Classicos labelled riserva must be aged at least 24 months at the winery and have a minimum alcohol level of at least 12.5%. The harvest yields for Chianti Classico are restricted to no more than 7.5 t/ha (3 tonnes per acre).
For basic Chianti, the minimum alcohol level is 11.5%, with yields restricted to 9 t/ha (4 tonnes per acre). The aging requirements for basic Chianti DOCG are much less stringent, with most varieties allowed to be released to the market on 1 March following the vintage year. Wines from the sub-zones of Colli Fiorentini, Montespertoli and Rufina must be aged for a further three months and not released until 1 June. All Chianti Classicos must be held back until 1 October in the year following the vintage.

Jancis Robinson notes that Chianti is sometimes called the "Bordeaux of Italy", but the structure of the wines is very different from any French wine. The flexibility in the blending recipe for Chianti accounts for some of the variability in styles among Chiantis. Lighter-bodied styles will generally have a higher proportion of white grape varieties blended in, while Chiantis that contain only red grape varieties will be fuller and richer. Although only 15% Cabernet Sauvignon is permitted in the blend, the nature of the grape variety can give it a dominant personality in the Chianti blend and a strong influence on the wine.

Chianti Classico wines are characterised in their youth by their predominantly floral and cinnamon-spicy bouquet. As the wine ages, aromas of tobacco and leather can emerge. Chiantis tend to have medium-high acidity and medium tannins. Basic-level Chianti is often characterised by its juicy fruit notes of cherry, plum and raspberry and can range from simple quaffing wines to those approaching the level of Chianti Classico. Wine expert Tom Stevenson notes that these basic everyday-drinking Chiantis are often at their peak between three and five years after the vintage, with premium examples having the potential to age for four to eight years. Well-made examples of Chianti Classico often have the potential to age and improve in the bottle for six to twenty years.

Chianti Superiore is an Italian DOCG wine produced in the provinces of Arezzo, Florence, Pisa, Pistoia, Prato and Siena, in Tuscany. Superiore is a designation for wines produced under stricter rules of production than other Chianti wines. Chianti Superiore has been authorised since 1996. Chianti Superiore wines can be produced only from grapes cultivated in the Chianti wine areas, except from those vineyards that are registered in the Chianti Classico sub-zone. Vineyards registered in Chianti sub-zones other than Classico can produce Chianti Superiore wines but must omit the sub-zone name on the label. Aging is calculated from 1 January after the picking. Chianti Superiore cannot be sold to the consumer before nine months of aging, of which three must be in the bottle. Therefore, it cannot be bottled before the June after picking or sold to consumers before the following September.

Chianti Classico was promoted as the "Official wine of the 2013 UCI Road World Championships", and bottles dedicated to the Championships were sold with special labels.
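The production rules quoted above amount to a small decision table of blend percentages, alcohol minimums and aging periods. As a rough illustration only, the following Python sketch encodes the Chianti Classico requirements stated in this article; the function name and structure are hypothetical (this is not an official Consorzio tool), and the thresholds are simply transcribed from the text.

```python
# A minimal sketch encoding the Chianti Classico DOCG rules quoted above.
# All thresholds come from this article; the function is an illustrative
# assumption, and it presumes the blend contains only approved red
# varieties, since white grapes have been prohibited since 2006.

def is_valid_chianti_classico(blend, alcohol_pct, oak_months,
                              total_months, riserva=False):
    """blend maps grape variety -> percentage of the blend.

    Rules encoded: 75-100% Sangiovese, up to 10% Canaiolo, up to 20%
    other approved red varieties, minimum 12% alcohol and 7 months in
    oak, or 12.5% alcohol and 24 months at the winery for riserva.
    """
    sangiovese = blend.get("Sangiovese", 0)
    canaiolo = blend.get("Canaiolo", 0)
    other = sum(pct for grape, pct in blend.items()
                if grape not in ("Sangiovese", "Canaiolo"))

    if not 75 <= sangiovese <= 100:
        return False
    if canaiolo > 10 or other > 20:
        return False
    if abs(sangiovese + canaiolo + other - 100) > 1e-9:
        return False  # percentages must account for the whole blend

    if riserva:
        return alcohol_pct >= 12.5 and total_months >= 24
    return alcohol_pct >= 12.0 and oak_months >= 7


# Example: a 90/5/5 Sangiovese/Canaiolo/Merlot blend at 13% alcohol,
# aged 8 months in oak, qualifies as (non-riserva) Chianti Classico.
print(is_valid_chianti_classico(
    {"Sangiovese": 90, "Canaiolo": 5, "Merlot": 5}, 13.0, 8, 8))  # True
```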
[ { "paragraph_id": 0, "text": "Chianti is an Italian red wine produced in the Chianti region of central Tuscany, principally from the Sangiovese grape. It was historically associated with a squat bottle enclosed in a straw basket, called a fiasco (\"flask\"; pl.: fiaschi). However, the fiasco is now only used by a few makers of the wine; most Chianti is bottled in more standard-shaped wine bottles. In the latter nineteenth century, Baron Bettino Ricasoli (later Prime Minister of the Kingdom of Italy) helped establish Sangiovese as the blend's dominant grape variety, creating the blueprint for today's Chianti wines.", "title": "" }, { "paragraph_id": 1, "text": "The first definition of a wine area called Chianti was made in 1716. It described the area near the villages of Gaiole, Castellina and Radda; the so-called Lega del Chianti and later Provincia del Chianti (Chianti province). In 1932 the Chianti area was completely redrawn and divided into seven sub-areas: Classico, Colli Aretini, Colli Fiorentini, Colline Pisane, Colli Senesi, Montalbano and Rùfina. Most of the villages that in 1932 were added to the newly defined Chianti Classico region added in Chianti to their names, for example Greve in Chianti, which amended its name in 1972. Wines labelled Chianti Classico come from the largest sub-area of Chianti, which includes the original Chianti heartland. Only Chianti from this sub-zone may display the black rooster (gallo nero) seal on the neck of the bottle, which indicates that the producer of the wine is a member of the Chianti Classico Consortium, the local association of producers. Other variants, with the exception of Rufina north-east of Florence and Montalbano south of Pistoia, originate in the named provinces: Siena for the Colli Senesi, Florence for the Colli Fiorentini, Arezzo for the Colli Aretini and Pisa for the Colline Pisane. In 1996 part of the Colli Fiorentini sub-area was renamed Montespertoli.", "title": "" }, { "paragraph_id": 2, "text": "During the 1970s producers started to reduce the quantity of white grapes in Chianti. In 1995 it became legal to produce a Chianti with 100% Sangiovese. For a wine to retain the name of Chianti it must be produced with at least 80% Sangiovese grapes. Aged Chianti (38 months instead of 4–7) may be labelled as Riserva. Chianti that meets more stringent requirements (lower yield, higher alcohol content and dry extract) may be labelled as Chianti Superiore, although Chianti from the Classico sub-area is not allowed in any event to be labelled as Superiore.", "title": "" }, { "paragraph_id": 3, "text": "The earliest documentation of a \"Chianti wine\" dates back to the 14th century, when viticulture was known to flourish in the \"Chianti Mountains\" around Florence. A military league called Lega del Chianti (League of Chianti) was formed around 1250 between the townships of Castellina, Gaiole and Radda, which would lead to the wine from this area taking on a similar name. In 1398 the earliest-known record notes Chianti as a white wine, though the red wines of Chianti were also discussed around the same time in similar documents. The first attempt to classify Chianti wine in any way came in 1427, when Florence developed a tariff system for the wines of the surrounding countryside, including an area referred to as \"Chianti and its entire province\". 
In 1716 Cosimo III de' Medici, Grand Duke of Tuscany, issued an edict legislating that the three villages of the Lega del Chianti (Castellina in Chianti, Gaiole in Chianti and Radda in Chianti) as well as the village of Greve and a 3.2-kilometre-long stretch (2-mile) of hillside north of Greve near Spedaluzzo as the only officially recognised producers of Chianti. This delineation existed until July 1932, when the Italian government expanded the Chianti zone to include the outlying areas of Barberino Val d'Elsa, Chiocchio, Robbiano, San Casciano in Val di Pesa and Strada. Subsequent expansions in 1967 would eventually result in the Chianti zone covering a very large area all over central Tuscany.", "title": "History" }, { "paragraph_id": 4, "text": "By the 18th century Chianti was widely recognised as a red wine, but the exact composition and grape varieties used to make Chianti at this point is unknown. Ampelographers find clues about which grape varieties were popular at the time in the writings of Italian writer Cosimo Villifranchi, who noted that Canaiolo was a widely planted variety in the area along with Sangiovese, Mammolo and Marzemino. It was not until the work of the Italian statesman Bettino Ricasoli that the modern Chianti recipe as a Sangiovese-based wine would take shape. Prior to Ricasoli, Canaiolo was emerging as the dominant variety in the Chianti blend with Sangiovese and Malvasia Bianca Lunga playing supporting roles. In the mid-19th century, Ricasoli developed a recipe for Chianti that was based primarily on Sangiovese. Though he is often credited with creating and disseminating a specific formula (typically reported as 70% Sangiovese, 20% Canaiolo, 10% Malvasia Bianca Lunga), a review of his correspondence of the time does not corroborate this. In addition, his efforts were quickly corrupted by other local winemakers (for example, replacing Malvasia with Trebbiano Toscano, or relying too heavily on the latter), leading to further misunderstanding of the \"Ricasoli formula\". In 1967, the Denominazione di origine controllata (DOC) regulation set by the Italian government was based on a loose interpretation of Ricasoli's \"recipe\", calling for a Sangiovese-based blend with 10–30% Malvasia and Trebbiano.", "title": "History" }, { "paragraph_id": 5, "text": "The late 19th century saw a period of economic and political upheaval. First came oidium and then the phylloxera epidemic would take its toll on the vineyards of Chianti just as they had ravaged vineyards across the rest of Europe. The chaos and poverty following the Risorgimento heralded the beginning of the Italian diaspora that would take Italian vineyard workers and winemakers abroad as immigrants to new lands. Those that stayed behind and replanted choose high-yielding varieties like Trebbiano and Sangiovese clones such as the Sangiovese di Romagna from the nearby Romagna region. Following the Second World War, the general trend in the world wine market for cheap, easy-drinking wine saw a brief boom for the region. With over-cropping and an emphasis on quantity over quality, the reputation of Chianti among consumers eventually plummeted. By the 1950s, Trebbiano (which is known for its neutral flavours) made up to 30% of many mass-market Chiantis. By the late 20th century, Chianti was often associated with basic Chianti sold in a squat bottle enclosed in a straw basket, called a fiasco. 
However, during the same period, a group of ambitious producers began working outside the boundaries of DOC regulations to make what they believed would be a higher-quality wine. These wines eventually became known as the \"Super Tuscans\".", "title": "History" }, { "paragraph_id": 6, "text": "Many of the producers behind the Super Tuscan movement were originally Chianti producers who were rebelling against what they felt were antiquated DOC regulations. Some of these producers wanted to make Chiantis that were 100% varietal Sangiovese. Others wanted the flexibility to experiment with blending French grape varieties such as Cabernet Sauvignon and Merlot or to not be required to blend in any white grape varieties. The late 20th century saw a flurry of creativity and innovation in the Chianti zones as producers experimented with new grape varieties and introduced modern wine-making techniques such as the use of new oak barrels. The prices and wine ratings of some Super Tuscans would regularly eclipse those of DOC-sanctioned Chiantis. The success of the Super Tuscans encouraged government officials to reconsider the DOC regulations in order to bring some of these wines back into the fold labelled as Chianti.", "title": "History" }, { "paragraph_id": 7, "text": "The Chianti region covers a vast area of Tuscany and includes within its boundaries several overlapping Denominazione di origine controllata (DOC) and Denominazione di Origine Controllata e Garantita (DOCG) regions. Other well known Sangiovese-based Tuscan wines such as Brunello di Montalcino and Vino Nobile di Montepulciano could be bottled and labelled under the most basic designation of \"Chianti\" if their producers chose to do so. Within the collective Chianti region more than 8 million cases of wines classified as DOC-level or above are produced each year. Today, most Chianti falls under two major designations of Chianti DOCG, which includes basic level Chianti, as well as that from seven designated sub-zones, and Chianti Classico DOCG. Together, these two Chianti zones produce the largest volume of DOC/G wines in Italy.", "title": "Chianti subregions" }, { "paragraph_id": 8, "text": "The Chianti DOCG covers all the Chianti wine and includes a large stretch of land encompassing the western reaches of the province of Pisa near the coast of the Tyrrhenian Sea, the Florentine hills in the province of Florence to the north, to the province of Arezzo in the east and the Siena hills to the south. Within this regions are vineyards that overlap the DOCG regions of Brunello di Montalcino, Vino Nobile di Montepulciano and Vernaccia di San Gimignano. Any Sangiovese-based wine made according to the Chianti guidelines from these vineyards can be labelled and marked under the basic Chianti DOCG should the producer wish to use the designation.", "title": "Chianti subregions" }, { "paragraph_id": 9, "text": "Within the Chianti DOCG there are eight defined sub-zones that are permitted to affix their name to the wine label. Wines that are labelled as simply Chianti are made either from a blend from these sub-zones or include grapes from peripheral areas not within the boundaries of a sub-zone. 
The sub-zones are (clockwise from the north): the Colli Fiorentini which is located south of the city of Florence; Chianti Rufina in the northeastern part of the zone located around the commune of Rufina; Classico in the centre of Chianti, across the provinces of Florence and Siena; Colli Aretini in the Arezzo province to the east; Colli Senesi south of Chianti Classico in the Siena hills, which is the largest of the sub-zones and includes the Brunello di Montalcino and Vino Nobile di Montepulciano areas; Colline Pisane, the westernmost sub-zone in the province of Pisa; Montespertoli located within the Colli Fiorentini around the commune of Montespertoli; Montalbano in the north-west part of the zone which includes the Carmignano DOCG. As of 2006, there were 318 hectares (786 acres) under production in Montalbano, 905 ha (2,236 acres) in the Colli Fiorentini, 57 ha (140 acres) in Montespertoli, 740 ha (1,840 acres) in Rufina, 3,550 ha (8,780 acres) in the Colli Senesi, 150 ha (380 acres) in Colline Pisane, 649 ha (1,603 acres) in the Colli Aretini, and an additional 10,324 ha (25,511 acres) in the peripheral areas that do not fall within one of the sub-zone classifications. Wines produced from these vineyards are labelled simply \"Chianti\".", "title": "Chianti subregions" }, { "paragraph_id": 10, "text": "The original area dictated by the edict of Cosimo III de' Medici would eventually be considered the heart of the modern \"Chianti Classico\" subregion. As of 2006, there were 7,140 ha (17,640 acres) of vineyards in the Chianti Classico subregion. The Chianti Classico subregion covers an area of approximate 260 km (100 square miles) between the city of Florence to the north and Siena to the south. The four communes of Castellina in Chianti, Gaiole in Chianti, Greve in Chianti and Radda in Chianti are located entirely within the boundaries of the Classico area with parts of Barberino Val d'Elsa, San Casciano in Val di Pesa and Tavarnelle Val di Pesa in the province of Florence as well as Castelnuovo Berardenga and Poggibonsi in the province of Siena included within the permitted boundaries of Chianti Classico.", "title": "Chianti subregions" }, { "paragraph_id": 11, "text": "The soil and geography of this subregion can be quite varied, with altitudes ranging from 250 to 610 m (820 to 2,000 feet), and rolling hills producing differing macroclimates. There are two main soil types in the area: a weathered sandstone known as alberese and a bluish-gray chalky marlstone known as galestro. The soil in the north is richer and more fertile with more galestro, with the soil gradually becoming harder and stonier with more albarese in the south. In the north, the Arno River can have an influence on the climate, keeping the temperatures slightly cooler, an influence that diminishes further south in the warmer Classico territory towards Castelnuovo Berardenga.", "title": "Chianti subregions" }, { "paragraph_id": 12, "text": "Chianti Classico are premium Chianti wines that tend to be medium-bodied with firm tannins and medium-high to high acidity. Floral, cherry and light nutty notes are characteristic aromas with the wines expressing more notes on the mid-palate and finish than at the front of the mouth. As with Bordeaux, the different zones of Chianti Classico have unique characteristics that can be exemplified and perceived in some wines from those areas. 
According to Master of Wine Mary Ewing-Mulligan, Chianti Classico wines from the Castellina area tend to have a very delicate aroma and flavour, Castelnuovo Berardegna wines tend to be the most ripe and richest tasting, wines from Gaiole tend to have been characterised by their structure and firm tannins while wines from the Greve area tend to have very concentrated flavours.", "title": "Chianti subregions" }, { "paragraph_id": 13, "text": "The production of Chianti Classico is realised under the supervision of Consorzio del Vino Chianti Classico, a union of producers in the Chianti Classico subregion. The Consorzio was founded with the aim of promoting the wines of the subregion, improving quality and preventing wine fraud. Since the 1980s, the foundation has sponsored extensive research into the viticultural and winemaking practice of the Chianti Classico area, particularly in the area of clonal research. In the last three decades, more than 50% of the vineyards in the Chianti Classico subregion have been replanted with improved Sangiovese clones and modern vineyard techniques as part of the Consorzio Chianti Classico's project \"Chianti 2000\".", "title": "Chianti subregions" }, { "paragraph_id": 14, "text": "In 2014, a new category of Chianti Classico was introduced: Chianti Classico Gran Selezione. Gran Selezione is made exclusively from a winery's own grapes grown according to stricter regulations compared to regular Chianti Classico. Gran Selezione is granted to a Chianti Classico after it passes a suitability test conducted by authorised laboratories, and after it is approved by a special tasting committee. The creation of the Chianti Classico Gran Selezione DOCG has been criticized, with some describing it as being \"Needless; an extra layer of confusion created by marketing people hoping to help Chianti Classico out of a sales crisis.\"", "title": "Chianti subregions" }, { "paragraph_id": 15, "text": "Outside of the Chianti Classico area, the wines of the Chianti sub-zone of Rufina are among the most widely recognised and exported from the Chianti region. Located in the Arno valley near the town of Pontassieve, the Rufina region includes much area in the Pomino region, an area that has a long history of wine production. The area is noted for the cool climate of its elevated vineyards located up to 900 m (2,950 feet). The vineyard soils of the area are predominantly marl and chalk. The Florentine merchant families of the Antinori and Frescobaldi own the majority of the vineyards in Rufina. Chianti from the Rufina area is characterised by its multi-layered complexity and elegance.", "title": "Chianti subregions" }, { "paragraph_id": 16, "text": "The Colli Fiorentini subregion has seen an influx of activity and new vineyard development in recent years as wealthy Florentine business people move to the country to plant vineyards and open wineries. Many foreign \"flying winemakers\" have had a hand in this development, bringing global viticulture and wine-making techniques to the Colli Fiorentini. Located in the hills between the Chianti Classico area and Arno valley, the wines of the Colli Fiorentini vary widely depending on producer, but tend to have a simple structure with strong character and fruit notes. 
The Montespertoli sub-zone was part of the Colli Fiorentini sub-zone until 2002 when it became its own tiny enclave.", "title": "Chianti subregions" }, { "paragraph_id": 17, "text": "The Montalbano subregion is located in the shadow of the Carmignano DOCG, with much of the best Sangiovese going to that wine. A similar situation exists in the Colli Senesi which includes the well known DOCG region of Vino Nobile di Montepulciano. Both regions rarely appear on wine labels that are exported out of Tuscany. The Colli Pisane area produces typical Chiantis with the lightest body and color. The Colli Aretini is a relatively new and emerging area that has seen an influx of investment and new winemaking in recent years.", "title": "Chianti subregions" }, { "paragraph_id": 18, "text": "Since 1996 the blend for Chianti and Chianti Classico has been 75–100% Sangiovese, up to 10% Canaiolo and up to 20% of any other approved red grape variety such as Cabernet Sauvignon, Merlot or Syrah. Since 2006, the use of white grape varieties such as Malvasia and Trebbiano have been prohibited in Chianti Classico. Chianti Classico must have a minimum alcohol level of at least 12% with a minimum of 7 months aging in oak, while Chianti Classicos labeled riserva must be aged at least 24 months at the winery, with a minimum alcohol level of at least 12.5%. The harvest yields for Chianti Classico are restricted to no more than 7.5 t/ha (3 tonnes per acre). For basic Chianti, the minimum alcohol level is 11.5% with yields restricted to 9 t/ha (4 tonnes per acre).", "title": "Grapes and classification" }, { "paragraph_id": 19, "text": "The aging for basic Chianti DOCG is much less stringent with most varieties allowed to be released to the market on 1 March following the vintage year. The sub-zones of Colli Fiorentini, Montespertoli and Rufina must be aged for a further three months and not released until 1 June. All Chianti Classicos must be held back until 1 October in the year following the vintage.", "title": "Grapes and classification" }, { "paragraph_id": 20, "text": "Jancis Robinson notes that Chianti is sometimes called the \"Bordeaux of Italy\" but the structure of the wines is very different from any French wine. The flexibility in the blending recipe for Chianti accounts for some of the variability in styles among Chiantis. Lighter-bodied styles will generally have a higher proportion of white grape varieties blended in, while Chiantis that have only red grape varieties will be fuller and richer. While only 15% of Cabernet Sauvignon is permitted in the blend, the nature of the grape variety can have a dominant personality in the Chianti blend and be a strong influence in the wine.", "title": "Grapes and classification" }, { "paragraph_id": 21, "text": "Chianti Classico wines are characterised in their youth by their predominantly floral and cinnamon spicy bouquet. As the wine ages, aromas of tobacco and leather can emerge. Chiantis tend to have medium-high acidity and medium tannins. Basic level Chianti is often characterised by its juicy fruit notes of cherry, plum and raspberry and can range from simple quaffing wines to those approaching the level of Chianti Classico. Wine expert Tom Stevenson notes that these basic everyday-drinking Chiantis are at their peak drinking qualities often between three and five years after vintage, with premium examples having the potential to age for four to eight years. 
Well-made examples of Chianti Classico often have the potential to age and improve in the bottle for six to twenty years.", "title": "Grapes and classification" }, { "paragraph_id": 22, "text": "Chianti Superiore is an Italian DOCG wine produced in the provinces of Arezzo, Florence, Pisa, Pistoia, Prato and Siena, in Tuscany. Superiore is a specification for wines produced with a stricter rule of production than other Chianti wines. Chianti Superiore has been authorised since 1996. Chianti Superiore wines can be produced only from grapes cultivated in the Chianti wine areas except from those vineyards that are registered in the Chianti Classico sub-zone. Vineyards registered in Chianti sub-zones other than Classico can produce Chianti Superiore wines but must omit the sub-zone name on the label. Aging is calculated from 1 January after the picking. Chianti Superiore cannot be sold to the consumer before nine months of aging, of which three must be in the bottle. Therefore, it cannot be bottled before the June after picking or sold to consumers before the next September.", "title": "Grapes and classification" }, { "paragraph_id": 23, "text": "Chianti Classico was promoted as the \"Official wine of the 2013 UCI Road World Championships” and sold bottles dedicated to the Championships with special labels.", "title": "Special editions" } ]
Chianti is an Italian red wine produced in the Chianti region of central Tuscany, principally from the Sangiovese grape. It was historically associated with a squat bottle enclosed in a straw basket, called a fiasco. However, the fiasco is now only used by a few makers of the wine; most Chianti is bottled in more standard-shaped wine bottles. In the latter nineteenth century, Baron Bettino Ricasoli helped establish Sangiovese as the blend's dominant grape variety, creating the blueprint for today's Chianti wines. The first definition of a wine area called Chianti was made in 1716. It described the area near the villages of Gaiole, Castellina and Radda; the so-called Lega del Chianti and later Provincia del Chianti. In 1932 the Chianti area was completely redrawn and divided into seven sub-areas: Classico, Colli Aretini, Colli Fiorentini, Colline Pisane, Colli Senesi, Montalbano and Rùfina. Most of the villages that in 1932 were added to the newly defined Chianti Classico region added in Chianti to their names, for example Greve in Chianti, which amended its name in 1972. Wines labelled Chianti Classico come from the largest sub-area of Chianti, which includes the original Chianti heartland. Only Chianti from this sub-zone may display the black rooster seal on the neck of the bottle, which indicates that the producer of the wine is a member of the Chianti Classico Consortium, the local association of producers. Other variants, with the exception of Rufina north-east of Florence and Montalbano south of Pistoia, originate in the named provinces: Siena for the Colli Senesi, Florence for the Colli Fiorentini, Arezzo for the Colli Aretini and Pisa for the Colline Pisane. In 1996 part of the Colli Fiorentini sub-area was renamed Montespertoli. During the 1970s producers started to reduce the quantity of white grapes in Chianti. In 1995 it became legal to produce a Chianti with 100% Sangiovese. For a wine to retain the name of Chianti it must be produced with at least 80% Sangiovese grapes. Aged Chianti may be labelled as Riserva. Chianti that meets more stringent requirements may be labelled as Chianti Superiore, although Chianti from the Classico sub-area is not allowed in any event to be labelled as Superiore.
2002-02-25T15:43:11Z
2023-11-29T18:17:27Z
[ "Template:Webarchive", "Template:Short description", "Template:Efn", "Template:Reflist", "Template:Cite news", "Template:Dead link", "Template:Authority control", "Template:Plural form", "Template:Notelist", "Template:As of", "Template:Portal", "Template:ISBN", "Template:Cite book", "Template:Wikivoyage", "Template:About", "Template:Main", "Template:Cite web", "Template:Commons category", "Template:Wines", "Template:Use dmy dates", "Template:Convert" ]
https://en.wikipedia.org/wiki/Chianti
7,783
Coriolis force
In physics, the Coriolis force is an inertial (or fictitious) force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though the effect had been recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by the French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system. In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each day/night cycle, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean, or where high precision is important, such as long-range artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator (anticlockwise) and to the left of this direction south of it (clockwise). This effect is responsible for the rotation and thus formation of cyclones (see Coriolis effects in meteorology).
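This deflection rule follows directly from the Coriolis acceleration a_C = −2 Ω × v (derived below). As a minimal numerical check, not taken from any source, the Python sketch below evaluates the cross product for a frame rotating anticlockwise about +z (the Northern Hemisphere viewed from above) and a body moving along +x; the resulting force points along −y, to the right of the motion.

    # Minimal check that the Coriolis force -2m*(Omega x v) deflects
    # motion to the right when the frame rotates anticlockwise (+z).

    def cross(a, b):
        """Cross product of two 3-vectors given as tuples."""
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def coriolis(omega, v, m=1.0):
        """Coriolis force -2m*(omega x v) on a body of mass m."""
        return tuple(-2.0 * m * c for c in cross(omega, v))

    omega = (0.0, 0.0, 1.0)    # anticlockwise rotation about +z
    v = (1.0, 0.0, 0.0)        # motion along +x
    print(coriolis(omega, v))  # (0.0, -2.0, 0.0): toward -y, i.e. to the
                               # right of the direction of travel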
Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749, and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one. The effect was known in the early 20th century as the "acceleration of Coriolis", and by 1920 as "Coriolis force". In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds. The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood. In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is F = m a, where F is the vector sum of the physical forces acting on the object, m is the mass of the object, and a is the acceleration of the object relative to the inertial reference frame. Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity ω having variable rotation rate, the equation takes the form F − m dω/dt × r′ − 2m ω × v′ − m ω × (ω × r′) = m a′, where r′, v′ and a′ are the position, velocity and acceleration of the object as measured in the rotating frame. The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces.
The fictitious force terms of the equation are, reading from left to right: the Euler force, −m dω/dt × r′; the Coriolis force, −2m ω × v′; and the centrifugal force, −m ω × (ω × r′). As seen in these formulas, the Euler and centrifugal forces depend on the position vector r′ of the object, while the Coriolis force depends on the object's velocity v′ as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference (ω = 0) the Coriolis force and all other fictitious forces disappear. As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that: if the velocity is parallel to the rotation axis, the Coriolis force is zero; if the velocity is straight inward toward the axis, the acceleration is in the direction of local rotation; if the velocity is straight outward from the axis, the acceleration is against the direction of local rotation; if the velocity is in the direction of rotation, the acceleration is outward from the axis; and if the velocity is against the direction of rotation, the acceleration is inward toward the axis. For an intuitive explanation of the origin of the Coriolis force, consider an object, constrained to follow the Earth's surface and moving northward in the Northern Hemisphere. Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the "radius of its parallel (latitude)" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north, to higher latitudes, it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion). Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). However, the popular notion that the effect determines the rotation of draining water in a typical household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small there compared to the many other influences on the rotation. The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number, which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, f = 2ω sin φ, and the length scale, L, of the motion: Ro = U / (fL). Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes the Rossby number is large, so the Coriolis force is negligible in them, and balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; there, the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable. An atmospheric system moving at U = 10 m/s (22 mph) and occupying a spatial distance of L = 1,000 km (621 mi) has a Rossby number of approximately 0.1. A baseball pitcher may throw the ball at U = 45 m/s (100 mph) for a distance of L = 18.3 m (60 ft); the Rossby number in this case would be 32,000 (at latitude 31°47'46.382"), so baseball players need not care which hemisphere they are playing in.
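The Rossby numbers quoted above can be reproduced in a few lines. The following Python sketch is purely illustrative; the latitudes used are the ones given in the text.

    import math

    OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

    def rossby(U, L, lat_deg):
        """Rossby number Ro = U / (f L), with f = 2*Omega*sin(latitude)."""
        f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
        return U / (f * L)

    # Mid-latitude weather system: U = 10 m/s, L = 1000 km
    print(round(rossby(10.0, 1.0e6, 45.0), 2))  # ~0.1: Coriolis matters

    # Baseball pitch: U = 45 m/s, L = 18.3 m, latitude ~31.8 degrees
    print(round(rossby(45.0, 18.3, 31.7962)))   # ~32000: Coriolis negligible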
An unguided missile, by contrast, obeys exactly the same physics as a baseball but can travel far enough and stay aloft long enough to experience the effect of the Coriolis force. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) The figure illustrates a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counter-clockwise with the carousel. On the right, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed. On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball. (This arrow gets shorter as the ball approaches the center.) A shifted version of the two arrows is shown dotted. On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel. The ball travels in the air, and there is no net force upon it. To the stationary observer, the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counter-clockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory. The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of the Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).
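The graphical construction just described for the first figure amounts to rotating the inertial-frame positions backward by the carousel's rotation angle. The Python sketch below illustrates this under assumed parameters (an 8-second rotation period and a 4-second flight time, both invented for the example):

    import math

    # A ball moves in a straight line in the inertial frame; rotating each
    # position back by the carousel angle gives the curved rotating-frame path.

    omega = 2.0 * math.pi / 8.0  # one counter-clockwise turn per 8 s (assumed)
    R = 1.0                      # thrower stands at radius 1, at 12:00

    for t in [0.0, 1.0, 2.0, 3.0, 4.0]:
        # Inertial frame: straight line from (0, R) toward the centre in 4 s
        x, y = 0.0, R * (1.0 - t / 4.0)
        # Rotating frame: undo the carousel's rotation by the angle -omega*t
        a = -omega * t
        xr = x * math.cos(a) - y * math.sin(a)
        yr = x * math.sin(a) + y * math.cos(a)
        print(f"t={t:.0f}s  inertial=({x:+.2f},{y:+.2f})"
              f"  rotating=({xr:+.2f},{yr:+.2f})")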
On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target, and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from the direction of travel on both the inward and return trajectories. The curved path demands that this observer recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and the Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection. The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1. From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied. The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term −2Ω × v. This component is orthogonal to the velocity over the Earth's surface and is given by the expression 2ω v sin φ, where ω is the spin rate of the Earth, v is the speed over the surface, and φ is the latitude. In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere. Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order east (e), north (n) and upward (u)) are Ω = ω (0, cos φ, sin φ), v = (v_e, v_n, v_u), and a_C = −2Ω × v = 2ω (v_n sin φ − v_u cos φ, −v_e sin φ, v_e cos φ). When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration (2ω v_e cos φ) is small compared with the acceleration due to gravity (g, approximately 9.81 m/s² (32.2 ft/s²) near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting v_u = 0): a_e = 2ω v_n sin φ = f v_n and a_n = −2ω v_e sin φ = −f v_e, where f = 2ω sin φ is called the Coriolis parameter. By setting v_n = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting v_e = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation. In the case of equatorial motion, setting φ = 0° yields a_C = 2ω (−v_u, 0, v_e); Ω in this case is parallel to the north axis. Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.
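The component formulas above translate directly into code. The following Python sketch is illustrative only; the function name and test values are arbitrary choices.

    import math

    OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

    def coriolis_enu(ve, vn, vu, lat_deg):
        """Coriolis acceleration (east, north, up) at a given latitude:
        a_C = 2*omega*(vn*sin(lat) - vu*cos(lat), -ve*sin(lat), ve*cos(lat))."""
        lat = math.radians(lat_deg)
        return (2*OMEGA*(vn*math.sin(lat) - vu*math.cos(lat)),
                -2*OMEGA*ve*math.sin(lat),
                2*OMEGA*ve*math.cos(lat))

    # Northward motion at 45 N is accelerated east (deflection to the right):
    print(coriolis_enu(0.0, 10.0, 0.0, 45.0))
    # Eastward motion on the equator is accelerated upward (Eotvos effect):
    print(coriolis_enu(10.0, 0.0, 0.0, 0.0))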
Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary; to accommodate that provisional postulate, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible. Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force affects the movement of ocean currents and cyclones as well. Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. The spiralling wind pattern helps the hurricane form. The stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane. Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around low-pressure systems rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient. If a low-pressure area forms in the atmosphere, air tends to flow in towards it but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself, creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure. Instead of flowing down the gradient, large-scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of "inertial motions" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be. This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.
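For the geostrophic balance described above, the Coriolis force f·V_g per unit mass equals the pressure-gradient force |∇p|/ρ, so the wind speed along the isobars is V_g = |∇p| / (ρ f). A sketch with assumed, merely typical values (1 hPa per 100 km at latitude 45°), not figures from the article:

    import math

    # Geostrophic wind speed V_g = |grad p| / (rho * f), blowing along
    # the isobars rather than down the pressure gradient.

    OMEGA = 7.2921e-5           # Earth's rotation rate, rad/s
    rho = 1.2                   # assumed near-surface air density, kg/m^3
    dpdx = 100.0 / 100_000.0    # assumed gradient: 1 hPa per 100 km, in Pa/m
    f = 2.0 * OMEGA * math.sin(math.radians(45.0))

    V_g = dpdx / (rho * f)
    print(f"geostrophic wind ~ {V_g:.1f} m/s")  # ~8 m/s for these values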
An air or water mass moving with speed v subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius R is given by R = v/f, where f is the Coriolis parameter 2Ω sin φ introduced above (φ being the latitude). The time taken for the mass to complete a full circle is therefore 2π/f. The Coriolis parameter typically has a mid-latitude value of about 10⁻⁴ s⁻¹; hence for a typical atmospheric speed of 10 m/s (22 mph), the radius is 100 km (62 mi), with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s (0.22 mph), the radius of an inertial circle is 1 km (0.6 mi). These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anticlockwise in the southern hemisphere. If the rotating system is a parabolic turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the paths of particles do not form exact circles. Since the parameter f varies as the sine of the latitude, the radius of the oscillations associated with a given speed is smallest at the poles (latitude of ±90°) and increases toward the equator. The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance. The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion. There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable. In addition, objects traveling upwards (i.e. out) or downwards (i.e. in) are deflected to the west or east respectively. This effect is also greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect the tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading.
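The inertial-circle numbers quoted above follow from R = v/f and T = 2π/f. A brief Python check (the latitude of about 43° is chosen only so that f ≈ 10⁻⁴ s⁻¹, matching the mid-latitude value in the text):

    import math

    OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

    def inertial_circle(speed, lat_deg):
        """Radius (km) and period (h) of an inertial circle:
        R = v/f and T = 2*pi/f, with f = 2*Omega*sin(latitude)."""
        f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
        return speed / f / 1000.0, 2.0 * math.pi / f / 3600.0

    print(inertial_circle(10.0, 43.3))  # air at 10 m/s: ~100 km, ~17 h
    print(inertial_circle(0.10, 43.3))  # water at 10 cm/s: ~1 km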
Imagine a train that travels through a frictionless railway line along the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others (in the animation, one day is compressed into eight seconds). Travelling west at 465 m/s, the train stands still in the inertial frame, so no centripetal force is required and it presses on the track with its full weight; at rest on the track, it circles with the Earth at 465 m/s, and part of gravity supplies the needed centripetal force, slightly reducing its apparent weight; travelling east at 465 m/s, it circles at 930 m/s in the inertial frame, requiring four times the centripetal force and making it lighter still. This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect. The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). If the westward train in the above example increases speed, the part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion in the inertial frame. Once the train doubles its westward speed to 930 m/s (2,100 mph), that centripetal force becomes equal to the force the train experiences when it is at rest, since in the inertial frame it rotates at the same speed in both cases, just in opposite directions. Thus the force is the same, cancelling the Eötvös effect completely. Any object that moves westward at a speed above 930 m/s (2,100 mph) experiences an upward force instead. In the figure, the Eötvös effect is illustrated for a 10-kilogram (22 lb) object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. In the inertial frame, the bottom of the parabola is centered at the origin; the offset in the figure arises because the argument uses the Earth's rotating frame of reference. The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed.
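The parabola described above can be reproduced by working in the inertial frame, where an object moving east over the ground at u m/s circles the Earth's axis at (465 + u) m/s. A sketch with rounded, assumed constants:

    # Apparent weight of the 10 kg object on the equatorial train, computed
    # in the inertial frame: gravity must supply the centripetal
    # acceleration (465 + u)^2 / R, and the scale reads the remainder.

    g, R, m, v_earth = 9.81, 6.378e6, 10.0, 465.0  # rounded assumptions

    def apparent_weight(u):
        """Weight (N) read by a scale for eastward ground speed u (m/s)."""
        return m * (g - (v_earth + u) ** 2 / R)

    for u in (-930.0, -465.0, 0.0, 465.0):
        print(f"u = {u:+5.0f} m/s  ->  {apparent_weight(u):.2f} N")
    # -930 m/s matches 0 m/s (same inertial speed, opposite direction),
    # and the maximum weight occurs at -465 m/s, where the train is
    # inertially at rest: the bottom of the parabola described in the text.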
For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl. Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, the Coriolis effect may indeed determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction such as any residual rotation of the water and the geometry of the container. In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, 2 meters (6 ft 7 in) across, with a small wooden cross above the plug hole to display the direction of rotation, covering it and waiting for at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to Coriolis effect is only 3 × 10 − 7 {\displaystyle 3\times 10^{-7}} that of gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. It takes 20 minutes to drain, and the cross starts turning only around 15 minutes. At the end it is turning at 1 rotation every 3 to 4 seconds. He reported that, Both schools of thought are in some sense correct. For the everyday observations of the kitchen sink and bath-tub variety, the direction of the vortex seems to vary in an unpredictable manner with the date, the time of day, and the particular household of the experimenter. But under well-controlled conditions of experimentation, the observer looking downward at a drain in the northern hemisphere will always see a counter-clockwise vortex, while one in the southern hemisphere will always see a clockwise vortex. In a properly designed experiment, the vortex is produced by Coriolis forces, which are counter-clockwise in the northern hemisphere. Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 h or more. The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km (75 mi). The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a 1,000 yd (910 m) northward shot would be deflected 2.8 in (71 mm) to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low, and eastward shots to hit high. The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. The projections of the three-dimensional curved surface of the Earth to a two-dimensional surface (the map) necessarily results in distorted features. 
The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction the gun is initially pointing), vertical, and cross-range; the corresponding acceleration components follow from resolving a_C = −2Ω × v in this local frame. To demonstrate the Coriolis effect, a parabolic turntable can be used. On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation. Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with it, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame. Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth. In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.) The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.
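A surface of height h(r) = ω²r²/(2g) has exactly this property for gentle slopes: the tangential component of gravity, approximately g·dh/dr = ω²r, equals the centripetal acceleration of co-rotation. A sketch of the dish profile under an assumed rotation rate:

    import math

    # Height profile of the parabolic turntable: h(r) = omega^2 r^2 / (2 g),
    # so that g * dh/dr = omega^2 * r, the centripetal acceleration needed
    # for co-rotation (small-slope approximation).

    g = 9.81
    omega = 2.0 * math.pi / 6.0  # assumed: one rotation every 6 seconds

    def dish_height(r):
        """Surface height (m) at radius r (m) for co-rotating equilibrium."""
        return omega ** 2 * r ** 2 / (2.0 * g)

    for r in (0.0, 0.25, 0.5):
        print(f"r = {r:.2f} m  ->  h = {dish_height(r)*100:.2f} cm")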
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid. In polyatomic molecules, the molecular motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium positions. As a result of the vibrations, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined. When an external torque is applied to a spinning gyroscope along an axis that is at right angles to the spin axis, the rim velocity that is associated with the spin becomes radially directed in relation to the external torque axis. This causes a torque-induced force to act on the rim in such a way as to tilt the gyroscope at right angles to the direction that the external torque would have tilted it. This tendency has the effect of keeping spinning bodies stably oriented in their rotational frame. Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell-shaped organs located just behind their wings called "halteres". The fly's halteres oscillate in a plane at the same beat frequency as the main wings, so that any body rotation results in lateral deviation of the halteres from their plane of motion. In moths, the antennae are known to be responsible for sensing Coriolis forces in a similar manner to the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane. In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits.
[ { "paragraph_id": 0, "text": "In physics, the Coriolis force is an inertial (or fictitious) force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology.", "title": "" }, { "paragraph_id": 1, "text": "Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system.", "title": "" }, { "paragraph_id": 2, "text": "In popular (non-technical) usage of the term \"Coriolis effect\", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each day/night cycle, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean; or where high precision is important, such as long-range artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. 
Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator (anticlockwise) and to the left of this direction south of it (clockwise). This effect is responsible for the rotation and thus formation of cyclones (see Coriolis effects in meteorology).", "title": "" }, { "paragraph_id": 3, "text": "Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749, and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778.", "title": "History" }, { "paragraph_id": 4, "text": "Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the \"compound centrifugal force\" due to its analogies with the centrifugal force already considered in category one. The effect was known in the early 20th century as the \"acceleration of Coriolis\", and by 1920 as \"Coriolis force\".", "title": "History" }, { "paragraph_id": 5, "text": "In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds.", "title": "History" }, { "paragraph_id": 6, "text": "The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. 
Late in the 19th century, the full extent of the large scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood.", "title": "History" }, { "paragraph_id": 7, "text": "In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is:", "title": "Formula" }, { "paragraph_id": 8, "text": "where F {\\displaystyle {\\boldsymbol {F}}} is the vector sum of the physical forces acting on the object, m {\\displaystyle m} is the mass of the object, and a {\\displaystyle {\\boldsymbol {a}}} is the acceleration of the object relative to the inertial reference frame.", "title": "Formula" }, { "paragraph_id": 9, "text": "Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity ω {\\displaystyle {\\boldsymbol {\\omega }}} having variable rotation rate, the equation takes the form:", "title": "Formula" }, { "paragraph_id": 10, "text": "where", "title": "Formula" }, { "paragraph_id": 11, "text": "The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces. The fictitious force terms of the equation are, reading from left to right:", "title": "Formula" }, { "paragraph_id": 12, "text": "As seen in these formulas the Euler and centrifugal forces depend on the position vector r ′ {\\displaystyle {\\boldsymbol {r'}}} of the object, while the Coriolis force depends on the object's velocity v ′ {\\displaystyle {\\boldsymbol {v'}}} as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference ( ω = 0 ) {\\displaystyle ({\\boldsymbol {\\omega }}=0)} the Coriolis force and all other fictitious forces disappear.", "title": "Formula" }, { "paragraph_id": 13, "text": "As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that:", "title": "Formula" }, { "paragraph_id": 14, "text": "For an intuitive explanation of the origin of the Coriolis force, consider an object, constrained to follow the Earth's surface and moving northward in the Northern Hemisphere. Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the \"radius of its parallel (latitude)\" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north, to higher latitudes, it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion).", "title": "Intuitive explanation" }, { "paragraph_id": 15, "text": "Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). 
However, the theory that the effect determines the rotation of draining water in a typical size household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small compared to the many other influences on the rotation.", "title": "Intuitive explanation" }, { "paragraph_id": 16, "text": "The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number, which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, f = 2 ω sin φ {\\displaystyle f=2\\omega \\sin \\varphi \\,} , and the length scale, L, of the motion:", "title": "Length scales and the Rossby number" }, { "paragraph_id": 17, "text": "Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system is strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, so in them the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; there, the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable.", "title": "Length scales and the Rossby number" }, { "paragraph_id": 18, "text": "An atmospheric system moving at U = 10 m/s (22 mph) occupying a spatial distance of L = 1,000 km (621 mi), has a Rossby number of approximately 0.1.", "title": "Length scales and the Rossby number" }, { "paragraph_id": 19, "text": "A baseball pitcher may throw the ball at U = 45 m/s (100 mph) for a distance of L = 18.3 m (60 ft). The Rossby number in this case would be 32,000 (at latitude 31°47'46.382\").", "title": "Length scales and the Rossby number" }, { "paragraph_id": 20, "text": "Baseball players don't care about which hemisphere they're playing in. However, an unguided missile obeys exactly the same physics as a baseball, but can travel far enough and be in the air long enough to experience the effect of Coriolis force. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) In fact, it was this effect that first got the attention of Coriolis himself.", "title": "Length scales and the Rossby number" }, { "paragraph_id": 21, "text": "The figure illustrates a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counter-clockwise with the carousel. On the right, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed.", "title": "Simple cases" }, { "paragraph_id": 22, "text": "On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball. (This arrow gets shorter as the ball approaches the center.) 
A shifted version of the two arrows is shown dotted.", "title": "Simple cases" }, { "paragraph_id": 23, "text": "On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel.", "title": "Simple cases" }, { "paragraph_id": 24, "text": "The ball travels in the air, and there is no net force upon it. To the stationary observer, the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counter-clockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory.", "title": "Simple cases" }, { "paragraph_id": 25, "text": "The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the \"camera\") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).", "title": "Simple cases" }, { "paragraph_id": 26, "text": "On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from direction of travel on both inward and return trajectories. The curved path demands this observer to recognize a leftward net force on the ball. (This force is \"fictitious\" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection.", "title": "Simple cases" }, { "paragraph_id": 27, "text": "The ball's path through the air is straight when viewed by observers standing on the ground (right panel). 
In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1. From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied.", "title": "Simple cases" }, { "paragraph_id": 28, "text": "The acceleration affecting the motion of air \"sliding\" over the Earth's surface is the horizontal component of the Coriolis term", "title": "Applied to the Earth" }, { "paragraph_id": 29, "text": "This component is orthogonal to the velocity over the Earth surface and is given by the expression", "title": "Applied to the Earth" }, { "paragraph_id": 30, "text": "where", "title": "Applied to the Earth" }, { "paragraph_id": 31, "text": "In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere.", "title": "Applied to the Earth" }, { "paragraph_id": 32, "text": "Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order east (e), north (n) and upward (u)) are:", "title": "Applied to the Earth" }, { "paragraph_id": 33, "text": "When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration ( v e cos φ {\\displaystyle v_{e}\\cos \\varphi } ) is small compared with the acceleration due to gravity (g, approximately 9.81 m/s (32.2 ft/s) near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting vu = 0):", "title": "Applied to the Earth" }, { "paragraph_id": 34, "text": "where f = 2 ω sin φ {\\displaystyle f=2\\omega \\sin \\varphi \\,} is called the Coriolis parameter.", "title": "Applied to the Earth" }, { "paragraph_id": 35, "text": "By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. 
In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation.", "title": "Applied to the Earth" }, { "paragraph_id": 36, "text": "In the case of equatorial motion, setting φ = 0° yields:", "title": "Applied to the Earth" }, { "paragraph_id": 37, "text": "Ω in this case is parallel to the north axis.", "title": "Applied to the Earth" }, { "paragraph_id": 38, "text": "Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.", "title": "Applied to the Earth" }, { "paragraph_id": 39, "text": "Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible.", "title": "Applied to the Earth" }, { "paragraph_id": 40, "text": "Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force also affects the movement of ocean currents and cyclones as well. Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. The spiralling wind pattern helps the hurricane form. The stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane.", "title": "Applied to the Earth" }, { "paragraph_id": 41, "text": "Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around low-pressure rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient.", "title": "Applied to the Earth" }, { "paragraph_id": 42, "text": "If a low-pressure area forms in the atmosphere, air tends to flow in towards it, but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure.", "title": "Applied to the Earth" }, { "paragraph_id": 43, "text": "Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. 
On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of \"inertial motions\" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.", "title": "Applied to the Earth" }, { "paragraph_id": 44, "text": "This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.", "title": "Applied to the Earth" }, { "paragraph_id": 45, "text": "An air or water mass moving with speed v {\\displaystyle v\\,} subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius R {\\displaystyle R} is given by:", "title": "Applied to the Earth" }, { "paragraph_id": 46, "text": "where f {\\displaystyle f} is the Coriolis parameter 2 Ω sin φ {\\displaystyle 2\\Omega \\sin \\varphi } , introduced above (where φ {\\displaystyle \\varphi } is the latitude). The time taken for the mass to complete a full circle is therefore 2 π / f {\\displaystyle 2\\pi /f} . The Coriolis parameter typically has a mid-latitude value of about 10 s; hence for a typical atmospheric speed of 10 m/s (22 mph), the radius is 100 km (62 mi) with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s (0.22 mph), the radius of an inertial circle is 1 km (0.6 mi). These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anticlockwise in the southern hemisphere.", "title": "Applied to the Earth" }, { "paragraph_id": 47, "text": "If the rotating system is a parabolic turntable, then f {\\displaystyle f} is constant and the trajectories are exact circles. On a rotating planet, f {\\displaystyle f} varies with latitude and the paths of particles do not form exact circles. Since the parameter f {\\displaystyle f} varies as the sine of the latitude, the radius of the oscillations associated with a given speed are smallest at the poles (latitude of ±90°), and increase toward the equator.", "title": "Applied to the Earth" }, { "paragraph_id": 48, "text": "The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. 
It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance.", "title": "Applied to the Earth" }, { "paragraph_id": 49, "text": "The practical impact of the \"Coriolis effect\" is mostly caused by the horizontal acceleration component produced by horizontal motion.", "title": "Applied to the Earth" }, { "paragraph_id": 50, "text": "There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable.", "title": "Applied to the Earth" }, { "paragraph_id": 51, "text": "In addition, objects traveling upwards (i.e. out) or downwards (i.e. in) are deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading.", "title": "Applied to the Earth" }, { "paragraph_id": 52, "text": "Imagine a train that travels through a frictionless railway line along the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others ( 1 day = ∧ 8 s ) : {\\displaystyle \\left(1{\\text{ day}}\\mathrel {\\overset {\\land }{=}} 8{\\text{ s}}\\right):}", "title": "Applied to the Earth" }, { "paragraph_id": 53, "text": "This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect.", "title": "Applied to the Earth" }, { "paragraph_id": 54, "text": "The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). 
If the westward train in the above example increases speed, part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion on the inertial frame. Once the train doubles its westward speed at 930 m/s (2,100 mph) that centripetal force becomes equal to the force the train experiences when it stops. From the inertial frame, in both cases it rotates at the same speed but in the opposite directions. Thus, the force is the same cancelling completely the Eötvös effect. Any object that moves westward at a speed above 930 m/s (2,100 mph) experiences an upward force instead. In the figure, the Eötvös effect is illustrated for a 10-kilogram (22 lb) object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. On the inertial frame, the bottom of the parabola is centered at the origin. The offset is because this argument uses the Earth's rotating frame of reference. The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed.", "title": "Applied to the Earth" }, { "paragraph_id": 55, "text": "Contrary to popular misconception, bathtubs, toilets, and other water receptacles do not drain in opposite directions in the Northern and Southern Hemispheres. This is because the magnitude of the Coriolis force is negligible at this scale. Forces determined by the initial conditions of the water (e.g. the geometry of the drain, the geometry of the receptacle, preexisting momentum of the water, etc.) are likely to be orders of magnitude greater than the Coriolis force and hence will determine the direction of water rotation, if any. For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl.", "title": "Applied to the Earth" }, { "paragraph_id": 56, "text": "Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, the Coriolis effect may indeed determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction such as any residual rotation of the water and the geometry of the container.", "title": "Applied to the Earth" }, { "paragraph_id": 57, "text": "In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, 2 meters (6 ft 7 in) across, with a small wooden cross above the plug hole to display the direction of rotation, covering it and waiting for at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to Coriolis effect is only 3 × 10 − 7 {\\displaystyle 3\\times 10^{-7}} that of gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. It takes 20 minutes to drain, and the cross starts turning only around 15 minutes. 
At the end it is turning at 1 rotation every 3 to 4 seconds.", "title": "Applied to the Earth" }, { "paragraph_id": 58, "text": "He reported that,", "title": "Applied to the Earth" }, { "paragraph_id": 59, "text": "Both schools of thought are in some sense correct. For the everyday observations of the kitchen sink and bath-tub variety, the direction of the vortex seems to vary in an unpredictable manner with the date, the time of day, and the particular household of the experimenter. But under well-controlled conditions of experimentation, the observer looking downward at a drain in the northern hemisphere will always see a counter-clockwise vortex, while one in the southern hemisphere will always see a clockwise vortex. In a properly designed experiment, the vortex is produced by Coriolis forces, which are counter-clockwise in the northern hemisphere.", "title": "Applied to the Earth" }, { "paragraph_id": 60, "text": "Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 h or more.", "title": "Applied to the Earth" }, { "paragraph_id": 61, "text": "The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km (75 mi). The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a 1,000 yd (910 m) northward shot would be deflected 2.8 in (71 mm) to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low, and eastward shots to hit high.", "title": "Applied to the Earth" }, { "paragraph_id": 62, "text": "The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. The projections of the three-dimensional curved surface of the Earth to a two-dimensional surface (the map) necessarily results in distorted features. The apparent curvature of the path is a consequence of the sphericity of the Earth and would occur even in a non-rotating frame.", "title": "Applied to the Earth" }, { "paragraph_id": 63, "text": "The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction that the gun is initially pointing), vertical, and cross-range.", "title": "Applied to the Earth" }, { "paragraph_id": 64, "text": "where", "title": "Applied to the Earth" }, { "paragraph_id": 65, "text": "To demonstrate the Coriolis effect, a parabolic turntable can be used. On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) 
This carefully contoured surface allows the Coriolis force to be displayed in isolation.", "title": "Visualization of the Coriolis effect" }, { "paragraph_id": 66, "text": "Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom ) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame.", "title": "Visualization of the Coriolis effect" }, { "paragraph_id": 67, "text": "Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth.", "title": "Visualization of the Coriolis effect" }, { "paragraph_id": 68, "text": "In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a \"horizontal\" surface. (See equatorial bulge.)", "title": "Visualization of the Coriolis effect" }, { "paragraph_id": 69, "text": "The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.", "title": "Visualization of the Coriolis effect" }, { "paragraph_id": 70, "text": "A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid.", "title": "Coriolis effects in other areas" }, { "paragraph_id": 71, "text": "In polyatomic molecules, the molecule motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium position. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. 
Coriolis effects are therefore present, and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined.", "title": "Coriolis effects in other areas" }, { "paragraph_id": 72, "text": "When an external torque is applied to a spinning gyroscope along an axis that is at right angles to the spin axis, the rim velocity that is associated with the spin becomes radially directed in relation to the external torque axis. This causes a torque-induced force to act on the rim in such a way as to tilt the gyroscope at right angles to the direction that the external torque would have tilted it. This tendency has the effect of keeping spinning bodies in their rotational frame.", "title": "Coriolis effects in other areas" }, { "paragraph_id": 73, "text": "Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell shaped organs located just behind their wings called \"halteres\".", "title": "Coriolis effects in other areas" }, { "paragraph_id": 74, "text": "The fly's halteres oscillate in a plane at the same beat frequency as the main wings so that any body rotation results in lateral deviation of the halteres from their plane of motion.", "title": "Coriolis effects in other areas" }, { "paragraph_id": 75, "text": "In moths, their antennae are known to be responsible for the sensing of Coriolis forces in the similar manner as with the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane.", "title": "Coriolis effects in other areas" }, { "paragraph_id": 76, "text": "In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits.", "title": "Coriolis effects in other areas" } ]
In physics, the Coriolis force is an inertial force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame. The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system. In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each day/night cycle, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean; or where high precision is important, such as long-range artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator (anticlockwise) and to the left of this direction south of it (clockwise). This effect is responsible for the rotation and thus formation of cyclones (see Coriolis effects in meteorology).
Challenger Deep
The Challenger Deep is the deepest known point of the seabed of Earth, located in the western Pacific Ocean at the southern end of the Mariana Trench, in the ocean territory of the Federated States of Micronesia. According to the GEBCO Gazetteer of Undersea Feature Names, the depression's depth is 10,920 ± 10 m (35,827 ± 33 ft) at 11°22.4′N 142°35.5′E, although its exact geodetic location remains inconclusive, and its depth has been measured at 10,902–10,929 m (35,768–35,856 ft) by deep-diving submersibles, remotely operated underwater vehicles, and benthic landers, and (sometimes) slightly more by sonar bathymetry. The differences in depth estimates and their geodetic positions are explained by the difficulty of surveying such deep locations.

The depression is named after the British Royal Navy survey ships HMS Challenger, whose expedition of 1872–1876 first located it, and HMS Challenger II, whose expedition of 1950–1952 established its record-setting depth. The first descent by any vehicle was by the bathyscaphe Trieste in January 1960. In March 2012, a solo descent was made by film director James Cameron in the deep-submergence vehicle Deepsea Challenger. As of July 2022, 27 people have descended to Challenger Deep.

The Challenger Deep is a relatively small slot-shaped depression in the bottom of a considerably larger crescent-shaped oceanic trench, which itself is an unusually deep feature in the ocean floor. The Challenger Deep consists of three basins, each 6 to 10 km (3.7 to 6.2 mi) long, 2 km (1.2 mi) wide, and over 10,850 m (35,597 ft) deep, oriented en echelon from west to east and separated by mounds that rise 200 to 300 m (660 to 980 ft) above the basin floors. The three-basin feature extends about 48 km (30 mi) west to east if measured at the 10,650 m (34,941 ft) isobath. Both the western and eastern basins have recorded depths (by sonar bathymetry) in excess of 10,920 m (35,827 ft), while the central basin is slightly shallower. The closest land to the Challenger Deep is Fais Island (one of the outer islands of Yap), 287 km (178 mi) to the southwest, and Guam, 304 km (189 mi) to the northeast. Detailed sonar mapping of the western, central, and eastern basins in June 2020 by the DSSV Pressure Drop, combined with crewed descents, revealed that the basins undulate, with slopes and piles of rocks above a bed of pelagic ooze. This conforms to the description of the Challenger Deep as an elongated seabed section with distinct sub-basins or sediment-filled pools.

Over many years, the search for, and investigation of, the location of the maximum depth of the world's oceans has involved many different vessels, and it continues into the twenty-first century. The accuracy of determining geographical location, and the beamwidth of (multibeam) echosounder systems, limit the horizontal and vertical bathymetric resolution that hydrographers can obtain from onsite data. This is especially important when sounding in deep water, as the footprint of an acoustic pulse becomes large by the time it reaches a distant sea floor. Further, sonar operation is affected by variations in sound speed, particularly in the vertical plane. The speed of sound is determined by the water's bulk modulus and density; the bulk modulus is in turn affected by temperature, pressure, and dissolved impurities (usually salinity).
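To make the beamwidth and sound-speed points concrete, the short Python sketch below computes the sea-floor footprint of a conical sounding beam and the Newton–Laplace sound speed. The beamwidths, depth, bulk modulus, and density are assumed round-number values, not figures from any particular survey.

```python
import math

def beam_footprint(depth_m: float, beamwidth_deg: float) -> float:
    """Diameter of the ensonified patch for a conical beam of the given
    full beamwidth, assuming a flat bottom and straight-line rays."""
    return 2 * depth_m * math.tan(math.radians(beamwidth_deg) / 2)

def sound_speed(bulk_modulus_pa: float, density_kg_m3: float) -> float:
    """Newton-Laplace relation: c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# A 60-degree single-beam sounder over ~10,900 m of water averages over a
# patch more than 12 km across; a 1-degree multibeam strip is ~190 m.
print(f"60 deg beam: {beam_footprint(10900, 60):,.0f} m")
print(f" 1 deg beam: {beam_footprint(10900, 1):,.0f} m")

# Representative (assumed) seawater values: K ~ 2.34e9 Pa, rho ~ 1040 kg/m3.
print(f"sound speed: {sound_speed(2.34e9, 1040):.0f} m/s")
```

The enormous footprint of a wide single beam at full-ocean depth is one reason early surveys could place a depth but not resolve the individual basins.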
In 1875, during her transit from the Admiralty Islands in the Bismarck Archipelago to Yokohama in Japan, the three-masted sailing corvette HMS Challenger attempted to make landfall at the Spanish Marianas (now Guam), but was set to the west by "baffling winds", preventing her crew from "visiting either the Carolines or the Ladrones". Their altered path took them over the undersea canyon which later became known as the Challenger Deep. Depth soundings were taken with a Baillie-weighted marked rope, and geographical locations were determined by celestial navigation, to an estimated accuracy of two nautical miles. One of their samples was taken within fifteen miles of the deepest spot in all of Earth's oceans. On 23 March 1875, at sample station number 225, HMS Challenger recorded the bottom at 4,475 fathoms (26,850 ft; 8,184 m) deep – the deepest sounding of her three-plus-year eastward circumnavigation of the Earth – at 11°24′N 143°16′E, and confirmed it with a second sounding at the same location. The serendipitous discovery of Earth's deepest depression by history's first major scientific expedition devoted entirely to the emerging science of oceanography was especially notable when compared with Earth's third-deepest site, the Sirena Deep, which lies only 150 nautical miles east of the Challenger Deep yet remained undiscovered for another 122 years.

Seventy-five years later, the 1,140-ton British survey vessel HMS Challenger II, on her three-year westward circumnavigation of Earth, investigated the extreme depths southwest of Guam reported in 1875 by her predecessor, HMS Challenger. On her southbound track from Japan to New Zealand (May–July 1951), Challenger II conducted a survey of the Mariana Trench between Guam and Ulithi Atoll using seismic-sized bomb-soundings, and recorded a maximum depth of 5,663 fathoms (33,978 ft; 10,356 m). The depth was beyond Challenger II's echo sounder capability to verify, so they resorted to using a taut wire with "140 lbs of scrap iron", and documented a depth of 5,899 fathoms (35,394 ft; 10,788 m). The senior scientist aboard Challenger II, Thomas Gaskell, recalled:

[I]t took from ten past five in the evening until twenty to seven, that is an hour and a half, for the iron weight to fall to the sea-bottom. It was almost dark by the time the weight struck, but great excitement greeted the reading...

In New Zealand, the Challenger II team gained the assistance of the Royal New Zealand Dockyard, "who managed to boost the echo sounder to record at the greatest depths". They returned to the "Marianas Deep" (sic) in October 1951. Using their newly improved echo sounder, they ran survey lines at right angles to the axis of the trench and discovered "a considerable area of a depth greater than 5,900 fathoms (35,400 ft; 10,790 m)" – later identified as the Challenger Deep's western basin. The greatest depth recorded was 5,940 fathoms (35,640 ft; 10,863 m), at 11°19′N 142°15′E. Navigational accuracy of several hundred metres was attained by celestial navigation and LORAN-A. As Gaskell explained, the measurement was not more than 50 miles from the spot where the nineteenth-century Challenger found her deepest depth [...] and it may be thought fitting that a ship with the name Challenger should put the seal on the work of that great pioneering expedition of oceanography.
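A back-of-the-envelope check of those 1951 numbers, assuming a uniform mean sound speed of 1,500 m/s (an illustrative round value; the expedition's actual correction tables are not reproduced here), shows why the echo sounder and the taut wire operate on such different timescales:

```python
# Two-way acoustic travel time for an echo sounding, assuming a uniform
# mean sound speed; real surveys must correct for the sound-speed profile.
MEAN_SOUND_SPEED = 1500.0          # m/s, assumed round value
depth_m = 10863.0                  # Challenger II's 1951 echo-sounder depth

round_trip_s = 2 * depth_m / MEAN_SOUND_SPEED
print(f"echo round trip: {round_trip_s:.1f} s")   # ~14.5 s

# The taut-wire check was far slower: ~10,788 m of wire paid out over
# about 90 minutes implies a mean descent rate of roughly 2 m/s.
print(f"weight descent rate: {10788 / (90 * 60):.1f} m/s")
```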
The term "Challenger Deep" came into use after this 1951–52 Challenger circumnavigation, and commemorates both British ships of that name involved with the discovery of the deepest basin of the world's oceans. In August 1957, the Soviet 3,248-ton Vernadsky Institute of Geochemistry research vessel Vityaz recorded a maximum depth of 11,034 ± 50 m (36,201 ± 164 ft) at 11°20.9′N 142°11.5′E / 11.3483°N 142.1917°E / 11.3483; 142.1917 in the western basin of the Challenger Deep during a brief transit of the area on Cruise #25. She returned in 1958, Cruise #27, to conduct a detailed single beam bathymetry survey involving over a dozen transects of the Deep, with an extensive examination of the western basin and a quick peek into the eastern basin. Fisher records a total of three Vityaz sounding locations on Fig.2 "Trenches" (1963), one within yards of the 142°11.5' E location, and a third at 11°20.0′N 142°07′E / 11.3333°N 142.117°E / 11.3333; 142.117, all with 11,034 ± 50 m (36,201 ± 164 ft) depth. The depths were considered statistical outliers, and a depth greater than 11,000 m has never been proven. Taira reports that if Vityaz's depth was corrected with the same methodology used by the Japanese RV Hakuho Maru expedition of December 1992, it would be presented as 10,983 ± 50 m (36,033 ± 164 ft), as opposed to modern depths from multibeam echosounder systems greater than 10,900 metres (35,800 ft) with the NOAA accepted maximum of 10,995 ± 10 m (36,073 ± 33 ft) in the western basin. The first definitive verification of both the depth and location of the Challenger Deep (western basin) was determined by Dr. R. L. Fisher from the Scripps Institution of Oceanography, aboard the 325-ton research vessel Stranger. Using explosive soundings, they recorded 10,850 ± 20 m (35,597 ± 66 ft) at/near 11°18′N 142°14′E / 11.300°N 142.233°E / 11.300; 142.233 in July 1959. Stranger used celestial and LORAN-C for navigation. LORAN-C navigation provided geographical accuracy of 460 m (1,509 ft) or better. According to another source RV Stranger using bomb-sounding surveyed a maximum depth of 10,915 ± 10 m (35,810 ± 33 ft) at 11°20.0′N 142°11.8′E / 11.3333°N 142.1967°E / 11.3333; 142.1967. Discrepancies between the geographical location (lat/long) of Stranger's deepest depths and those from earlier expeditions (Challenger II 1951; Vityaz 1957 and 1958) "are probably due to uncertainties in fixing the ships' positions". Stranger's north-south zig-zag survey passed well to the east of the eastern basin southbound, and well to the west of the eastern basin northbound, thus failed to discover the eastern basin of the Challenger Deep. The maximum depth measured near longitude 142°30'E was 10,760 ± 20 m (35,302 ± 66 ft), about 10 km west of the eastern basin's deepest point. This was an important gap in information, as the eastern basin was later reported as deeper than the other two basins. Stranger crossed the center basin twice, measuring a maximum depth of 10,830 ± 20 m (35,531 ± 66 ft) in the vicinity of 142°22'E. At the western end of the central basin (approximately 142°18'E), they recorded a depth of 10,805 ± 20 m (35,449 ± 66 ft). The western basin received four transects by Stranger, recording depths of 10,830 ± 20 m (35,531 ± 66 ft) toward the central basin, near where Trieste dived in 1960 (vicinity 11°18.5′N 142°15.5′E / 11.3083°N 142.2583°E / 11.3083; 142.2583, and where Challenger II, in 1950, recorded 10,863 ± 35 m (35,640 ± 115 ft). 
At the far western end of the western basin (about 142°11′E), the Stranger recorded 10,850 ± 20 m (35,597 ± 66 ft), some 6 km south of the location where Vityaz recorded 11,034 ± 50 m (36,201 ± 164 ft) in 1957–1958. Fisher stated: "differences in the Vitiaz [sic] and Stranger–Challenger II depths can be attributed to the [sound] velocity correction function used". After investigating the Challenger Deep, Stranger proceeded to the Philippine Trench and transected the trench over twenty times in August 1959, finding a maximum depth of 10,030 ± 10 m (32,907 ± 33 ft), and thus established that the Challenger Deep was about 800 metres (2,600 ft) deeper than the Philippine Trench. The 1959 Stranger surveys of the Challenger Deep and of the Philippine Trench informed the U.S. Navy as to the appropriate site for Trieste's record dive in 1960.

The Proa Expedition, Leg 2, returned Fisher to the Challenger Deep on 12–13 April 1962 aboard the Scripps research vessel Spencer F. Baird (formerly the steel-hulled US Army large tug LT-581) and employed a Precision Depth Recorder (PDR) to verify the extreme depths previously reported. They recorded a maximum depth of 10,915 metres (35,810 ft) (location not available). Additionally, at location "H-4" in the Challenger Deep, the expedition cast three taut-wire soundings (up until 1965, US research vessels recorded soundings in fathoms). On 12 April, the first cast was to 5,078 fathoms (9,287 metres; 30,469 ft), corrected for wire angle, at 11°23′N 142°19.5′E in the central basin. The second cast, also on 12 April, was to 5,000 fathoms (9,144 metres; 30,000 ft) at 11°20.5′N 142°22.5′E in the central basin. On 13 April, the final cast recorded 5,297 fathoms (9,687 metres; 31,781 ft), corrected for wire angle, at 11°17.5′N 142°11′E in the western basin. The expedition was chased off site by a hurricane after only two days. Once again, Fisher entirely missed the eastern basin of the Challenger Deep, which later proved to contain the deepest depths.

The Scripps Institution of Oceanography deployed the 1,490-ton Navy-owned, civilian-crewed research vessel Thomas Washington (AGOR-10) to the Mariana Trench on several expeditions from 1975 to 1986. The first of these was the Eurydice Expedition, Leg 8, which brought Fisher back to the Challenger Deep's western basin from 28–31 March 1975. Thomas Washington established geodetic positioning by satellite navigation (SATNAV) with Autolog Gyro and EM Log. Bathymetry was by a 12 kHz Precision Depth Recorder (PDR) with a single 60° beam. They mapped one, "possibly two", axial basins with a depth of 10,915 ± 20 m (35,810 ± 66 ft). Five dredges were hauled 27–31 March, all into or slightly north of the deepest depths of the western basin. Fisher noted that this survey of the Challenger Deep (western basin) had "provided nothing to support and much to refute recent claims of depths there greater than 10,915 ± 20 m (35,810 ± 66 ft)." While Fisher missed the eastern basin of the Challenger Deep (for the third time), he did report a deep depression about 150 nautical miles east of the western basin. The 25 March dredge haul at 12°03.72′N 142°33.42′E encountered 10,015 metres (32,858 ft), foreshadowing by 22 years the discovery of the HMRG Deep/Sirena Deep in 1997.
The deepest waters of the HMRG Deep/Sirena Deep, at 10,714 ± 20 m (35,151 ± 66 ft), are centered at/near 12°03.94′N 142°34.866′E, approximately 2.65 km from Fisher's 25 March 1975 dredge haul at 10,015 metres (32,858 ft).

On Scripps Institution of Oceanography's INDOPAC Expedition Leg 3, the chief scientist, Dr. Joseph L. Reid, and oceanographer Arnold W. Mantyla made a hydrocast of a free vehicle (a special-purpose benthic lander, or "baited camera", for measurements of water temperature and salinity) on 27 May 1976 into the western basin of the Challenger Deep, "Station 21", at 11°19.9′N 142°10.8′E, at about 10,840 metres (35,560 ft) depth. On INDOPAC Expedition Leg 9, under chief scientist A. Aristides Yayanos, Thomas Washington spent nine days, from 13–21 January 1977, conducting an extensive and detailed investigation of the Challenger Deep, mainly with biological objectives. "Echo soundings were carried out primarily with a 3.5 kHz single-beam system, with a 12 kHz echosounder operated in addition some of the time" (the 12 kHz system was activated for testing on 16 January). A benthic lander was put into the western basin (11°19.7′N 142°09.3′E) on 13 January, bottoming at 10,663 metres (34,984 ft), and was recovered 50 hours later in damaged condition. Quickly repaired, it was again put down on the 15th, to 10,559 metres (34,642 ft) depth, at 11°23.3′N 142°13.8′E. It was recovered on the 17th with excellent photography of amphipods (shrimp-like crustaceans) from the Challenger Deep's western basin. The benthic lander was put down for the third and last time on the 17th, at 11°20.1′N 142°25.2′E, in the central basin, at a depth of 10,285 metres (33,743 ft). The benthic lander was not recovered and may remain on the bottom in that vicinity. Free traps and pressure-retaining traps were put down at eight locations from 13 to 19 January into the western basin, at depths ranging from 7,353 to 10,715 metres (24,124–35,154 ft). Both the free traps and the pressure-retaining traps brought up good sample amphipods for study. While the ship briefly visited the area of the eastern basin, the expedition did not recognize it as potentially the deepest of the three Challenger Deep basins.

Thomas Washington returned briefly to the Challenger Deep on 17–19 October 1978 during Mariana Expedition Leg 5 under chief scientist James W. Hawkins. The ship tracked to the south and west of the eastern basin, and recorded depths between 5,093 and 7,182 metres (16,709–23,563 ft) – another miss of the eastern basin. On Mariana Expedition Leg 8, under chief scientist Yayanos, Thomas Washington was again involved, from 12–21 December 1978, with an intensive biological study of the western and central basins of the Challenger Deep. Fourteen traps and pressure-retaining traps were put down to depths ranging from 10,455 to 10,927 metres (34,301–35,850 ft); the greatest depth was at 11°20.0′N 142°11.8′E. All of the 10,900-plus-metre recordings were in the western basin. The 10,455 metres (34,301 ft) depth was furthest east, at 142°26.4′E (in the central basin), about 17 km west of the eastern basin.
Again, efforts were focused so tightly on the known areas of extreme depths (the western and central basins) that the eastern basin was missed by this expedition as well. From 20 to 30 November 1980, Thomas Washington was on site at the western basin of the Challenger Deep as part of Rama Expedition Leg 7, again with chief scientist Dr. A. A. Yayanos. Yayanos directed Thomas Washington in arguably the most extensive and wide-ranging of all single-beam bathymetric examinations of the Challenger Deep ever undertaken, with dozens of transits of the western basin, ranging far into the backarc of the Challenger Deep (northward), with significant excursions into the Pacific Plate (southward) and along the trench axis to the east. They hauled eight dredges in the western basin to depths ranging from 10,015 to 10,900 metres (32,858–35,761 ft), and between hauls cast thirteen free vertical traps. The dredging and traps were for biological investigation of the bottom. In the first successful retrieval of a live animal from the Challenger Deep, on 21 November 1980, in the western basin at 11°18.7′N 142°11.6′E, Yayanos recovered a live amphipod from about 10,900 meters depth with a pressurized trap. Once again, other than a brief look into the eastern basin, all bathymetric and biological investigations were into the western basin.

On Leg 3 of the Hawaii Institute of Geophysics' (HIG) expedition 76010303, the 156-foot (48 m) research vessel Kana Keoki departed Guam primarily for a seismic investigation of the Challenger Deep area, under chief scientist Donald M. Hussong. The ship was equipped with air guns (for seismic reflection soundings deep into the Earth's mantle), magnetometer, gravimeter, 3.5 kHz and 12 kHz sonar transducers, and precision depth recorders. They ran the Deep from east to west, collecting single-beam bathymetry and magnetic and gravity measurements, and employed the air guns along the trench axis, and well into the backarc and forearc, from 13 to 15 March 1976. Thence they proceeded south to the Ontong Java Plateau. All three deep basins of the Challenger Deep were covered, but Kana Keoki recorded a maximum depth of only 7,800 m (25,591 ft). Seismic information developed from this survey was instrumental in gaining an understanding of the subduction of the Pacific Plate under the Philippine Sea Plate. In 1977, Kana Keoki returned to the Challenger Deep area for wider coverage of the forearc and backarc.

The Hydrographic Department, Maritime Safety Agency, Japan (JHOD) deployed the newly commissioned 2,600-ton survey vessel Takuyo (HL 02) to the Challenger Deep 17–19 February 1984. Takuyo was the first Japanese ship to be equipped with the new narrow-beam SeaBeam multi-beam sonar echosounder, and was the first survey ship with multi-beam capability to survey the Challenger Deep. The system was so new that JHOD had to develop their own software for drawing bathymetric charts based on the SeaBeam digital data. In just three days, they tracked 500 miles of sounding lines, and covered about 140 km of the Challenger Deep with multibeam ensonification. Under chief scientist Hideo Nishida, they used CTD temperature and salinity data from the top 4,500 metres (14,764 ft) of the water column to correct depth measurements, and later conferred with the Scripps Institution of Oceanography (including Fisher) and other GEBCO experts to confirm their depth correction methodology.
They employed a combination of NAVSAT, LORAN-C and OMEGA systems for geodetic positioning with accuracy better than 400 metres (1,300 ft). The deepest location recorded was 10,920 ± 10 m (35,827 ± 33 ft) at 11°22.4′N 142°35.5′E – for the first time documenting the eastern basin as the deepest of the three en echelon pools. In 1993, GEBCO recognized the 10,920 ± 10 m (35,827 ± 33 ft) report as the deepest depth of the world's oceans. Technological advances such as improved multi-beam sonar would be the driving force in further uncovering the mysteries of the Challenger Deep.

The Scripps research vessel Thomas Washington returned to the Challenger Deep in 1986 during the Papatua Expedition, Leg 8, mounting one of the first commercial multi-beam echosounders capable of reaching into the deepest trenches, the 16-beam SeaBeam "Classic". This allowed chief scientist Yayanos an opportunity to transit the Challenger Deep with the most modern depth-sounding equipment available. During the pre-midnight hours of 21 April 1986, the multibeam echosounder produced a map of the Challenger Deep bottom with a swath about 5–7 miles wide. The maximum depth recorded was 10,804 metres (35,446 ft) (location of depth is not available). Yayanos noted: "The lasting impression from this cruise comes from the thoughts of the revolutionary things that Seabeam data can do for deep biology."

On 22 August 1988, the U.S. Navy-owned 1,000-ton research vessel Moana Wave (AGOR-22), operated by the Hawaii Institute of Geophysics (HIG), University of Hawaii, under the direction of chief scientist Robert C. Thunell from the University of South Carolina, transited northwesterly across the central basin of the Challenger Deep, conducting a single-beam bathymetry track with a 3.5 kHz narrow (30°) beam echosounder and a Precision Depth Recorder. In addition to sonar bathymetry, they took 44 gravity cores and 21 box cores of bottom sediments. The deepest echosoundings recorded were 10,656 to 10,916 metres (34,961–35,814 ft), with the greatest depth at 11°22′N 142°25′E in the central basin. This was the first indication that all three basins contained depths in excess of 10,900 metres (35,800 ft).

The 3,987-ton Japanese research vessel Hakuhō Maru, an Ocean Research Institute – University of Tokyo sponsored ship, on cruise KH-92-5, cast three Sea-Bird SBE-9 ultra-deep CTD (conductivity-temperature-depth) profilers in a transverse line across the Challenger Deep on 1 December 1992. The center CTD was located at 11°22.78′N 142°34.95′E, in the eastern basin, at 10,989 metres (36,053 ft) by the SeaBeam depth recorder and 10,884 metres (35,709 ft) by the CTD. The other two CTDs were cast 19.9 km to the north and 16.1 km to the south. Hakuhō Maru was equipped with a narrow-beam SeaBeam 500 multi-beam echosounder for depth determination, and had an Auto-Nav system with inputs from NAVSAT/NNSS, GPS, Doppler Log, EM log and track display, with a geodetic positioning accuracy approaching 100 metres (330 ft). When conducting CTD operations in the Challenger Deep, they used the SeaBeam as a single-beam depth recorder. At 11°22.6′N 142°35.0′E the corrected depth was 10,989 metres (36,053 ft), and at 11°22.0′N 142°34.0′E the depth was 10,927 metres (35,850 ft); both in the eastern basin.
This may demonstrate that the basins are not flat sedimentary pools but rather undulate, with a difference of 50 metres (160 ft) or more. Taira revealed: "We considered that a trough deeper than Vitiaz's record by 5 metres (16 ft) was detected. There is a possibility that a depth exceeding 11,000 metres (36,089 ft) with a horizontal scale less than the beam width of measurements exists in the Challenger Deep." Since each SeaBeam 2.7-degree beam-width sonar ping expands to cover a circular area about 500 metres (1,640 ft) in diameter at 11,000 metres (36,089 ft) depth, dips in the bottom smaller than that would be difficult to detect from a sonar-emitting platform seven miles above.

For most of 1995 and into 1996, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) employed the 4,439-ton research vessel Yokosuka to conduct the testing and workup of the 11,000-meter remotely operated vehicle (ROV) Kaikō and the 6,500-meter crewed submersible Shinkai 6500. It was not until February 1996, during Yokosuka's cruise Y96-06, that Kaikō was ready for its first full-depth dives. On this cruise, JAMSTEC established an area of the Challenger Deep (11°10′N to 11°30′N, by 141°50′E to 143°00′E) – later recognized as containing three separate pools/basins en echelon, each with depths in excess of 10,900 m (35,761 ft) – toward which JAMSTEC expeditions would concentrate their investigations for the next two decades. The Yokosuka employed a 151-beam SeaBeam 2112 12 kHz multibeam echosounder, allowing search swaths 12–15 km in width at 11,000 metres (36,089 ft) depth. The depth accuracy of Yokosuka's SeaBeam was about 0.1% of water depth (i.e. ±11 metres (36 ft) for 11,000 metres (36,089 ft) depth). The ship's dual GPS systems attained geodetic positioning within tens of metres (100 metres (328 ft) or better).

Cruise KR98-01 sent JAMSTEC's two-year-old 4,517-ton deep sea research vessel RV Kairei south for a quick but thorough depth survey of the Challenger Deep, 11–13 January 1998, under chief scientist Kantaro Fujioka. Tracking largely along the trench axis of 070–250°, they made five 80-km bathymetric survey tracks, spaced about 15 km apart, with overlapping swaths from their SeaBeam 2112-004 (which now allowed sub-bottom profiling penetrating as much as 75 m below the bottom), while gaining gravity and magnetic data covering the entire Challenger Deep: western, central, and eastern basins. Kairei returned in May 1998, cruise KR98-05, with ROV Kaikō, under the direction of chief scientist Jun Hashimoto, with both geophysical and biological goals. Their bathymetric survey from 14–26 May was the most intensive and thorough depth and seismic survey of the Challenger Deep performed to date. Each evening, Kaikō deployed for about four hours of bottom time for biological-related sampling, plus about seven hours of vertical transit time. When Kaikō was onboard for servicing, Kairei conducted bathymetric surveys and observations. Kairei gridded a survey area about 130 km N–S by 110 km E–W. Kaikō made dives #71–75, all to the same location (11°20.8′N 142°12.35′E), near the 10,900 metres (35,800 ft) bottom contour line in the western basin. The regional bathymetric map made from the data obtained in 1998 shows that the greatest depths in the eastern, central, and western depressions are 10,922 ± 74 m (35,833 ± 243 ft), 10,898 ± 62 m (35,755 ± 203 ft), and 10,908 ± 36 m (35,787 ± 118 ft), respectively, making the eastern depression the deepest of the three.
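The beam-footprint figures quoted in this section (roughly 500 m across for a 2.7° beam at 11,000 m depth, and, for systems described below, about 192 m for a 1° beam and 96 m for a 0.5° beam) follow from simple cone geometry; a minimal sketch:

```python
import math

def footprint_diameter(depth_m: float, beam_deg: float) -> float:
    """Diameter of the circle a conical sonar beam ensonifies at a given depth."""
    return 2.0 * depth_m * math.tan(math.radians(beam_deg) / 2.0)

for beam_deg in (2.7, 1.0, 0.5):
    d = footprint_diameter(11_000, beam_deg)
    print(f"{beam_deg}° beam at 11,000 m -> ~{d:.0f} m footprint")
# 2.7° -> ~519 m; 1.0° -> ~192 m; 0.5° -> ~96 m
```

Any depression narrower than the footprint is averaged into a single return, which is why dips smaller than the beam width cannot be resolved from the surface.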
In 1999, Kairei revisited the Challenger Deep during cruise KR99-06. The results of the 1998–1999 surveys include the first recognition that the Challenger Deep consists of three "right-stepping en echelon individual basins bounded by the 10,500 metres (34,400 ft) depth contour line. The size of [each of] the deeps are almost identical, 14–20 km long, 4 km wide". They concluded with the proposal "that these three individual elongated deeps constitute the 'Challenger Deep', and [we] identify them as the East, Central and West Deep. The deepest depth we obtained during the swath mapping is 10,938 metres (35,886 ft) in the West Deep (11°20.34′N 142°13.20′E)." The depth was "obtained during swath mapping ... confirmed in both N–S and E–W swaths." Speed of sound corrections were from XBT to 1,800 metres (5,900 ft), and CTD below 1,800 metres (5,900 ft). The cross-track survey of the 1999 Kairei cruise shows that the greatest depths in the eastern, central, and western depressions are 10,920 ± 10 m (35,827 ± 33 ft), 10,894 ± 14 m (35,741 ± 46 ft), and 10,907 ± 13 m (35,784 ± 43 ft), respectively, which supports the results of the previous survey.

Kairei revisited the Challenger Deep 16–25 October 2002 as cruise KR02-13 (a cooperative Japan–US–South Korea research program), with chief scientist Jun Hashimoto in charge, and again with Kazuyoshi Hirata managing the ROV Kaikō team. On this survey, the size of each of the three basins was refined to 6–10 km long by about 2 km wide and in excess of 10,850 m (35,597 ft) deep. In marked contrast to the Kairei surveys of 1998 and 1999, the detailed survey in 2002 determined that the deepest point in the Challenger Deep is located in the eastern basin around 11°22.260′N 142°35.589′E, with a depth of 10,920 ± 5 m (35,827 ± 16 ft), located about 290 m (950 ft) southeast of the deepest site determined by the survey vessel Takuyo in 1984. The 2002 surveys of both the western and eastern basins were tight, with especially meticulous cross-gridding of the eastern basin, with ten parallel tracks N–S and E–W less than 250 meters apart. On the morning of 17 October, ROV Kaikō dive #272 began, and the ROV was recovered over 33 hours later, having worked at the bottom of the western basin for 26 hours (vicinity of 11°20.148′N 142°11.774′E, at 10,893 m (35,738 ft)). Five Kaikō dives followed on a daily basis into the same area to service benthic landers and other scientific equipment, with dive #277 recovered on 25 October. Traps brought up large numbers of amphipods (sea fleas), and cameras recorded holothurians (sea cucumbers), white polychaetes (bristle worms), tube worms, and other biological species.

During its 1998 and 1999 surveys, Kairei was equipped with a GPS satellite-based radionavigation system. The United States government lifted GPS selective availability in 2000, so during its 2002 survey Kairei had access to non-degraded GPS positional services and achieved single-digit-meter accuracy in geodetic positioning.

The 2,516-ton research vessel Melville, at the time operated by the Scripps Institution of Oceanography, took the Cook Expedition, Leg 6, with chief scientist Patricia Fryer of the University of Hawaii, from Guam on 10 February 2001 to the Challenger Deep for a survey titled "Subduction Factory Studies in the Southern Mariana", including HMR-1 sonar mapping, magnetics, gravity measurements, and dredging in the Mariana arc region.
They covered all three basins, then tracked 120-nautical-mile-long (222.2 km) lines of bathymetry east–west, stepping northward from the Challenger Deep in 12 km (7.5 mi) sidesteps, covering more than 90 nmi (166.7 km) north into the backarc with overlapping swaths from their SeaBeam 2000 12 kHz multi-beam echosounder and MR1 towed system. They also gathered magnetic and gravity information, but no seismic data. Their primary survey instrument was the MR1 towed sonar, a shallow-towed 11/12 kHz bathymetric sidescan sonar developed and operated by the Hawaii Mapping Research Group (HMRG), a research and operational group within the University of Hawaii's School of Ocean and Earth Science and Technology (SOEST) and the Hawaii Institute of Geophysics and Planetology (HIGP). The MR1 is full-ocean-depth capable, providing both bathymetry and sidescan data. Leg 7 of the Cook Expedition continued the MR1 survey of the Mariana Trench backarc from 4 March to 12 April 2001 under chief scientist Sherman Bloomer of Oregon State University.

In May/June 2009, the US Navy-owned 3,064-ton twin-hulled research vessel Kilo Moana (T-AGOR 26) was sent to the Challenger Deep area to conduct research. Kilo Moana is civilian-crewed and operated by SOEST. It is equipped with two multibeam echosounders with sub-bottom profiler add-ons (the 191-beam 12 kHz Kongsberg Simrad EM120 with SBP-1200, capable of accuracies of 0.2–0.5% of water depth across the entire swath), gravimeter, and magnetometer. The EM120 uses 1-by-1-degree sonar emissions at the sea surface; each 1-degree beam-width sonar ping expands to cover a circular area about 192 metres (630 ft) in diameter at 11,000 metres (36,089 ft) depth. Whilst mapping the Challenger Deep, the sonar equipment indicated a maximum depth of 10,971 m (35,994 ft) at an undisclosed position. Navigation equipment includes the Applanix POS MV320 V4, rated at accuracies of 0.5–2 m. RV Kilo Moana also served as the support ship for the hybrid remotely operated underwater vehicle (HROV) Nereus, which dived three times to the Challenger Deep bottom during the May/June 2009 cruise; the dives did not confirm the maximum depth established by the support ship's sonar.

Cruise YK09-08 brought the JAMSTEC 4,439-ton research vessel Yokosuka back to the Mariana Trough and to the Challenger Deep in June–July 2009. Their mission was a two-part program. The first part was surveying three hydrothermal vent sites in the southern Mariana Trough backarc basin near 12°57′N 143°37′E, about 130 nmi northeast of the central basin of the Challenger Deep, using the autonomous underwater vehicle Urashima. AUV Urashima dives #90–94 were to a maximum depth of 3,500 meters, and were successful in surveying all three sites with a Reson SEABAT7125AUV multibeam echosounder for bathymetry, and multiple water testers to detect and map trace elements spewed into the water from hydrothermal vents, white smokers, and hot spots. Kyoko Okino from the Ocean Research Institute, University of Tokyo, was principal investigator for this aspect of the cruise. The second goal of the cruise was to deploy a new "10K free fall camera system" called Ashura, to sample sediments and biologics at the bottom of the Challenger Deep. The principal investigator at the Challenger Deep was Taishi Tsubouchi of JAMSTEC. The lander Ashura made two descents: on the first, 6 July 2009, Ashura bottomed at 11°22.3130′N 142°25.9412′E at 10,867 metres (35,653 ft).
The second descent (on 10 July 2009) was to 11°22.1136′N 142°25.8547′E at 10,897 metres (35,751 ft). The 270 kg Ashura was equipped with multiple baited traps, an HDTV video camera, and devices to recover sediment, water, and biological samples (mostly amphipods at the bait, and bacteria and fungi from the sediment and water samples).

On 7 October 2010, further sonar mapping of the Challenger Deep area was conducted by the US Center for Coastal & Ocean Mapping/Joint Hydrographic Center (CCOM/JHC) aboard the 4,762-ton USNS Sumner. The results were reported in December 2011 at the annual American Geophysical Union fall meeting. Using a Kongsberg Maritime EM 122 multi-beam echosounder system coupled to positioning equipment that can determine latitude and longitude to 50 cm (20 in) accuracy, from thousands of individual soundings around the deepest part, the CCOM/JHC team preliminarily determined that the Challenger Deep has a maximum depth of 10,994 m (36,070 ft) at 11°19′35″N 142°11′14″E, with an estimated vertical uncertainty of ±40 m (131 ft) at the two-standard-deviation (≈ 95.4%) confidence level. A secondary deep, with a depth of 10,951 m (35,928 ft), was located approximately 23.75 nmi (44.0 km) to the east, at 11°22′11″N 142°35′19″E, in the eastern basin of the Challenger Deep.

JAMSTEC returned Yokosuka to the Challenger Deep with cruise YK10-16, 21–28 November 2010. The chief scientist of this joint Japanese–Danish expedition was Hiroshi Kitazato of the Institute of Biogeosciences, JAMSTEC. The cruise was titled "Biogeosciences at the Challenger Deep: relict organisms and their relations to biogeochemical cycles". The Japanese teams made five deployments of their 11,000-meter camera system (three to 6,000 meters, two into the central basin of the Challenger Deep), which returned with 15 sediment cores, video records, and 140 scavenging amphipod specimens. The Danish Ultra Deep Lander System was employed by Ronnie Glud et al. on four casts, two into the central basin of the Challenger Deep and two to 6,000 m, some 34 nmi west of the central basin. The deepest depth recorded was on 28 November 2010, camera cast CS5, at 11°21.9810′N 142°25.8680′E, at a corrected depth of 10,889.6 metres (35,727 ft) (the central basin).

With JAMSTEC cruises YK13-09 and YK13-12, Yokosuka hosted chief scientist Hidetaka Nomaki for a trip to New Zealand waters (YK13-09), with the return cruise identified as YK13-12. The project name was QUELLE2013, and the cruise title was "In situ experimental & sampling study to understand abyssal biodiversity and biogeochemical cycles". They spent one day of the return trip at the Challenger Deep to obtain DNA/RNA from the large amphipods inhabiting the Deep (Hirondellea gigas). Hideki Kobayashi (Biogeos, JAMSTEC) and the team deployed a benthic lander on 23 November 2013 with eleven baited traps (three bare, five covered by insulating materials, and three automatically sealed after nine hours) into the central basin of the Challenger Deep at 11°21.9082′N 142°25.7606′E, depth 10,896 metres (35,748 ft). After an eight-hour, 46-minute stay at the bottom, they recovered some 90 individual Hirondellea gigas.
JAMSTEC deployed Kairei to the Challenger Deep again 11–17 January 2014, under the leadership of chief scientist Takuro Nunoura. The cruise identifier was KR14-01, titled "Trench biosphere expedition for the Challenger Deep, Mariana Trench". The expedition sampled at six stations transecting the central basin, with only two deployments of the "11-K camera system" lander for sediment cores and water samples to "Station C" at the deepest depth, i.e. 11°22.19429′N 142°25.7574′E, at 10,903 metres (35,771 ft). The other stations were investigated with the "Multi-core" lander, both to the backarc northward and to the Pacific Plate southward. The 11,000-meter-capable crawler-driven ROV ABISMO was sent to 7,646 m depth about 20 nmi due north of the central basin (ABISMO dive #21), specifically to identify possible hydrothermal activity on the north slope of the Challenger Deep, as suggested by findings from Kairei cruise KR08-05 in 2008. ABISMO's dives #20 and #22 were to 7,900 meters, about 15 nmi north of the deepest waters of the central basin. Italian researchers under the leadership of Laura Carugati from the Polytechnic University of Marche, Italy (UNIVPM) investigated the dynamics of virus/prokaryote interactions in the Mariana Trench.

From 16–19 December 2014, the Schmidt Ocean Institute's 2,024-ton research vessel Falkor, under chief scientist Douglas Bartlett from the Scripps Institution of Oceanography, deployed four different untethered instruments into the Challenger Deep for seven total releases. Four landers were deployed on 16 December into the central basin: the baited, video-equipped lander Leggo, for biologics; the lander ARI, to 11°21.5809′N 142°27.2969′E, for water chemistry; and the probes Deep Sound 3 and Deep Sound 2. The Deep Sound probes were to record acoustics while floating at 9,000 metres (29,528 ft) depth; during its descent, Deep Sound 3 imploded at a depth of 8,620 metres (28,281 ft) (about 2,200 metres (7,218 ft) above the bottom) at 11°21.99′N 142°27.2484′E. Deep Sound 2 recorded the implosion of Deep Sound 3, providing a unique recording of an implosion within the Challenger Deep depression. In addition to the loss of Deep Sound 3 by implosion, the lander ARI failed to respond upon receiving its instruction to drop weights, and was never recovered. On 16/17 December, Leggo was returned to the central basin, baited for amphipods. On the 17th, RV Falkor relocated 17 nmi eastward to the eastern basin, where they again deployed both the Leggo (baited and with its full camera load) and Deep Sound 2. Deep Sound 2 was programmed to drop to 9,000 metres (29,528 ft) and remain at that depth during its recording of sounds within the trench. On 19 December, Leggo landed at 11°22.11216′N 142°35.250996′E at an uncorrected depth of 11,168 metres (36,640 ft), according to its pressure sensor readings. This reading was corrected to 10,929 metres (35,856 ft) depth. Leggo returned with good photography of amphipods feeding on the lander's mackerel bait, and with sample amphipods. Falkor departed the Challenger Deep on 19 December, en route to the Sirena Deep in the Marianas Trench Marine National Monument.
RV Falkor had both a Kongsberg EM302 and an EM710 multibeam echosounder for bathymetry, and an Oceaneering C-Nav 3050 global navigation satellite system receiver, capable of calculating geodetic positioning with an accuracy better than 5 cm (2.0 in) horizontally and 15 cm (5.9 in) vertically.

From 10 to 13 July 2015, the Guam-based 1,930-ton US Coast Guard Cutter Sequoia (WLB 215) hosted a team of researchers, under chief scientist Robert P. Dziak, from the NOAA Pacific Marine Environmental Laboratory (PMEL), the University of Washington, and Oregon State University, in deploying PMEL's "Full-Ocean Depth Mooring", a 45-meter-long moored deep-ocean hydrophone and pressure sensor array, into the western basin of the Challenger Deep. A 6-hour descent into the western basin anchored the array in 10,854.7 ± 8.9 m (35,613 ± 29 ft) of water depth, at 11°20.127′N 142°12.0233′E, about 1 km northeast of Sumner's deepest depth, recorded in 2010. After 16 weeks, the moored array was recovered on 2–4 November 2015. "Observed sound sources included earthquake signals (T phases), baleen and odontocete cetacean vocalizations, ship propeller sounds, airguns, active sonar and the passing of a Category 4 typhoon." The science team described their results as "the first multiday, broadband record of ambient sound at Challenger Deep, as well as only the fifth direct depth measurement".

The 3,536-ton research vessel Xiangyanghong 09 deployed on Leg II of the 37th China Cruise Dayang (DY37II), sponsored by the National Deep Sea Center, Qingdao and the Institute of Deep-Sea Science and Engineering, Chinese Academy of Sciences (Sanya, Hainan), to the Challenger Deep western basin area (11°22′N 142°25′E) 4 June – 12 July 2016. As the mother ship for China's crewed deep submersible Jiaolong, the expedition carried out an exploration of the Challenger Deep to investigate the geological, biological, and chemical characteristics of the hadal zone. The diving area for this leg was on the southern slope of the Challenger Deep, at depths from about 6,300 to 8,300 metres (20,669 to 27,231 ft). The submersible completed nine piloted dives on the northern backarc and south area (Pacific Plate) of the Challenger Deep to depths from 5,500 to 6,700 metres (18,045 to 21,982 ft). During the cruise, Jiaolong regularly deployed gas-tight samplers to collect water near the sea bottom. In a test of navigational proficiency, Jiaolong used an ultra-short baseline (USBL) positioning system at a depth of more than 6,600 metres (21,654 ft) to retrieve sampling bottles.

From 22 June to 12 August 2016 (cruises 2016S1 and 2016S2), the Chinese Academy of Sciences' 6,250-ton submersible support ship Tansuo 1 (meaning: to explore), on her maiden voyage, deployed to the Challenger Deep from her home port of Sanya, Hainan Island. On 12 July 2016, the ROV Haidou-1 dived to a depth of 10,767 metres (35,325 ft) in the Challenger Deep area. They also cast a free-drop lander and 9,000-metre-rated (29,528 ft) free-drop ocean-floor seismic instruments (deployed to 7,731 metres (25,364 ft)), obtained sediment core samples, and collected over 2,000 biological samples from depths ranging from 5,000 to 10,000 metres (16,404–32,808 ft). Tansuo 1 operated along the 142°30.00′E meridian, about 30 nmi east of the earlier DY37II cruise survey (see Xiangyanghong 09 above).
In November 2016, sonar mapping of the Challenger Deep area was conducted by the Royal Netherlands Institute for Sea Research (NIOZ) and the GEOMAR Helmholtz Centre for Ocean Research Kiel aboard the 8,554-ton deep-ocean research vessel Sonne. The results were reported in 2017. Using a Kongsberg Maritime EM 122 multi-beam echosounder system coupled to high-accuracy positioning equipment, the team determined that the Challenger Deep has a maximum depth of 10,925 m (35,843 ft) at 11°19.945′N 142°12.123′E (11°19′57″N 142°12′07″E), with an estimated vertical uncertainty of ±12 m (39 ft) at the one-standard-deviation (≈ 68.3%) confidence level. The analysis of the sonar survey offered a 100 by 100 metres (328 ft × 328 ft) grid resolution at bottom depth, so small dips in the bottom less than that size would be difficult to detect from the 0.5-by-1-degree sonar emissions at the sea surface. Each 0.5-degree beam-width sonar ping expands to cover a circular area about 96 metres (315 ft) in diameter at 11,000 metres (36,089 ft) depth. The horizontal position of the grid point has an uncertainty of ±50 to 100 m (164 to 328 ft), depending on along-track or across-track direction. This depth (59 m (194 ft) shallower) and position (about 410 m (1,345 ft) to the northeast) differ significantly from the deepest point determined by the Gardner et al. (2014) study. The depth discrepancy with the 2010 sonar mapping and the Gardner et al. (2014) study is related to the application of differing sound velocity profiles, which are essential for accurate depth determination. Sonne conducted CTD casts, about 1.6 km west of its deepest sounding, to near the bottom of the Challenger Deep; these casts were used for sound velocity profile calibration and optimization. Likewise, the use of different projections, datums and ellipsoids during data acquisition can cause positional discrepancies between surveys.

In December 2016, the CAS 3,300-ton research vessel Shiyan 3 deployed 33 broadband seismometers onto both the backarc northwest of the Challenger Deep and the near southern Pacific Plate to the southeast, at depths of up to 8,137 m (26,696 ft). This cruise was part of a $12 million Chinese–U.S. initiative, led by co-leader Jian Lin of the Woods Hole Oceanographic Institution: a 5-year effort (2017–2021) to image in fine detail the rock layers in and around the Challenger Deep.

The newly launched 4,800-ton research vessel (and mothership for the Rainbow Fish series of deep submersibles) Zhang Jian departed Shanghai on 3 December. Their cruise was to test three new deep-sea landers, one uncrewed search submersible, and the new Rainbow Fish 11,000-meter crewed deep submersible, all capable of diving to 10,000 meters. From 25 to 27 December, three deep-sea landing devices descended into the trench. The first Rainbow Fish lander took photographs, the second took sediment samples, and the third took biological samples. All three landers reached over 10,000 meters, and the third device brought back 103 amphipods. Cui Weicheng, director of the Hadal Life Science Research Center at Shanghai Ocean University, led the team of scientists carrying out research at the Challenger Deep in the Mariana Trench. The ship is part of China's national marine research fleet but is owned by a Shanghai marine technology company.
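The positional offsets quoted above, such as the roughly 410 m between the Sonne and Gardner et al. deepest points, can be checked directly from the published coordinates; a small sketch using an equirectangular approximation, which is adequate at these short ranges:

```python
import math

def separation_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance between two nearby lat/lon points, in metres."""
    R = 6_371_000.0  # mean Earth radius, m
    mean_lat = math.radians((lat1 + lat2) / 2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(mean_lat)
    return R * math.hypot(dlat, dlon)

# Gardner et al. (2014) deepest point vs. the 2016 Sonne deepest point:
d = separation_m(11.329903, 142.199305, 11.332417, 142.202050)
print(f"{d:.0f} m")  # ~410 m, with the Sonne point to the northeast
```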
CAS' Institute of Deep-sea Science and Engineering sponsored Tansuo 1's return to the Challenger Deep 20 January – 5 February 2017 (cruise TS03), with baited traps for the capture of fish and other macrobiology near the Challenger and Sirena Deeps. They also placed four or more CTD casts into the central and eastern basins of the Challenger Deep, as part of the World Ocean Circulation Experiment (WOCE). Over the same period, 20 January to 5 February 2017, Tokyo University of Marine Science and Technology dispatched the research vessel Shinyo Maru to the Mariana Trench, also with baited traps for the capture of fish and other macrobiology near the Challenger and Sirena Deeps. On 29 January, the Shinyo Maru team recovered photography and samples of a new species of snailfish from the northern slope of the Challenger Deep at 7,581 metres (24,872 ft), newly designated Pseudoliparis swirei.

Water samples were collected at the Challenger Deep from 11 layers of the Mariana Trench in March 2017. Seawater samples from 4 to 4,000 m were collected by Niskin bottles mounted to a Sea-Bird SBE25 CTD, whereas water samples at depths from 6,050 m to 8,320 m were collected by self-designed, acoustic-controlled, full-ocean-depth water samplers. In this study, scientists examined the RNA of pico- and nano-plankton from the surface to the hadal zone.

JAMSTEC deployed Kairei to the Challenger Deep in May 2017, as cruise KR17-08C under chief scientist Takashi Murashima, for the express purpose of testing the new full-ocean-depth ROV UROV11K (Underwater ROV 11,000-meter-capable). The cruise title was "Sea trial of a full depth ROV UROV11K system in the Mariana Trench". UROV11K carried a new 4K high-definition video camera system, and new sensors to monitor the hydrogen-sulfide, methane, oxygen, and hydrogen content of the water. On UROV11K's ascent from 10,899 metres (35,758 ft) (at about 11°22.30′N 142°35.8′E, in the eastern basin) on 14 May 2017, the ROV's buoyancy failed at 5,320 metres (17,454 ft) depth, and all efforts to retrieve it were unsuccessful. The rate of descent and drift is not available, but the ROV bottomed to the east of the deepest waters of the eastern basin, as revealed by the ship's maneuvering on 14 May. Murashima then directed the Kairei to a location about 35 nmi east of the eastern basin of the Challenger Deep to test a new "Compact Hadal Lander", which made three descents to depths from 7,498 to 8,178 m for testing the Sony 4K camera and for photography of fish and other macro-biologics.

On its maiden voyage, the 2,150-ton twin-hulled scientific research vessel Shen Kuo (also Shengkuo, Shen Ko, or Shen Quo) departed Shanghai on 25 November 2018 and returned on 8 January 2019. They operated in the Mariana Trench area, and on 13 December tested a system of underwater navigation at a depth exceeding 10,000 metres, during a field trial of the Tsaihungyuy ultra-short baseline (USBL) system. Project leader Cui Weicheng stated that, with the Tsaihungyuy equipment at depth, it was possible to obtain a signal and determine exact geolocations. The research team from Shanghai Ocean University and Westlake University was led by Cui Weicheng, director of Shanghai Ocean University's Hadal Science and Technology Research Center (HSRC).
The equipment to be tested included a piloted submersible (not full ocean depth; depth achieved not available) and two deep-sea landers, all capable of diving to depths of 10,000 meters, as well as an ROV that can go to 4,500 meters. They took photographs and obtained samples from the trench, including water, sediment, macro-organisms and micro-organisms. Cui said: "If we can take photos of fish more than 8,145 meters under water, ... we will break the current world record. We will test our new equipment including the landing devices. They are second generation. The first generation could only take samples in one spot per dive, but this new second generation can take samples at different depths in one dive. We also tested the ultra short baseline acoustic positioning system on the manned submersible, the future of underwater navigation."

In November 2019, as cruise SR1916, a NIOZ team led by chief scientist Hans van Haren, with Scripps technicians, deployed to the Challenger Deep aboard the 2,641-ton research vessel Sally Ride to recover a mooring line from the western basin of the Challenger Deep. The 7 km (4.3 mi) long mooring line in the Challenger Deep consisted of top flotation positioned around 4 km (2.5 mi) depth, two sections of neutrally buoyant 6 mm (0.2 in) Dyneema line, two Benthos acoustic releases, and two sections of self-contained instrumentation to measure and store current, salinity and temperature. Around the 6 km (3.7 mi) depth position, two current meters were mounted below a 200 m (656 ft) long array of 100 high-resolution temperature sensors. In the lower position, starting 600 m (1,969 ft) above the sea floor, 295 specially designed high-resolution temperature sensors were mounted, the lowest of which was 8 m (26 ft) above the trench floor. The mooring line had been deployed and left by the NIOZ team during the November 2016 RV Sonne expedition, with the intention that it be recovered in late 2018 by Sonne; however, the acoustically commanded release mechanism near the bottom of the Challenger Deep failed at the 2018 attempt. RV Sally Ride was made available exclusively for a final attempt to retrieve the mooring line before the release mechanism batteries expired. Sally Ride arrived at the Challenger Deep on 2 November. This time a "deep release unit", lowered by one of Sally Ride's winch cables to around 1,000 m depth, pinged release commands and managed to contact the near-bottom releases. After nearly three years submerged, 15 of the 395 temperature sensors had developed mechanical problems. The first results indicate the occurrence of internal waves in the Challenger Deep.

Since May 2000, with the help of non-degraded satellite navigation signals, civilian surface vessels equipped with professional dual-frequency satellite navigation equipment can measure and establish their geodetic position with an accuracy on the order of meters to tens of meters, while the western, central and eastern basins are kilometers apart. In 2014, a study was conducted regarding the determination of the depth and location of the Challenger Deep, based on data collected prior to and during the 2010 sonar mapping of the Mariana Trench with a Kongsberg Maritime EM 122 multibeam echosounder system aboard USNS Sumner. This study, by James V. Gardner et al.
of the Center for Coastal & Ocean Mapping – Joint Hydrographic Center (CCOM/JHC), Chase Ocean Engineering Laboratory of the University of New Hampshire, splits the measurement-attempt history into three main groups: early single-beam echo sounders (1950s–1970s), early multibeam echo sounders (1980s – 21st century), and modern (i.e., post-GPS, high-resolution) multibeam echo sounders. Taking uncertainties in depth measurement and position estimation into account, the raw data of the 2010 bathymetry of the Challenger Deep vicinity, consisting of 2,051,371 soundings from eight survey lines, was analyzed. The study concludes that, with the best of 2010 multibeam echosounder technologies, a depth uncertainty of ±25 m (82 ft) (95% confidence level, on 9 degrees of freedom) and a positional uncertainty of ±20 to 25 m (66 to 82 ft) (2drms) remain after analysis, and that the location of the deepest depth recorded in the 2010 mapping is 10,984 m (36,037 ft) at 11°19′48″N 142°11′57″E. The depth measurement uncertainty is a composite of measured uncertainties in the spatial variations in sound speed through the water volume, the ray-tracing and bottom-detection algorithms of the multibeam system, the accuracies and calibration of the motion sensor and navigation systems, estimates of spherical spreading, attenuation throughout the water volume, and so forth.

Both the RV Sonne expedition in 2016 and the RV Sally Ride expedition in 2019 expressed strong reservations concerning the depth corrections applied by the Gardner et al. study of 2014, and serious doubt concerning the accuracy of the deepest depth calculated by Gardner (in the western basin), of 10,984 m (36,037 ft), after analysis of their multibeam data on a 100 m (328 ft) grid. Dr. Hans van Haren, chief scientist on the RV Sally Ride cruise SR1916, indicated that Gardner's calculations were 69 m (226 ft) too deep, due to the sound velocity profiling used by Gardner et al. (2014).

In 2018–2019, the deepest points of each ocean were mapped using a full-ocean-depth Kongsberg EM 124 multibeam echosounder aboard DSSV Pressure Drop. In 2021, a data paper was published by Cassandra Bongiovanni, Heather A. Stewart and Alan J. Jamieson regarding the gathered data, which was donated to GEBCO. The deepest depth recorded in the 2019 Challenger Deep sonar mapping was 10,924 m (35,840 ft) ±15 m (49 ft) at 11°22′08″N 142°35′13″E in the eastern basin. This depth closely agrees with the deepest point (10,925 m (35,843 ft) ±12 m (39 ft)) determined by the Van Haren et al. sonar bathymetry, although the geodetic position of the Van Haren et al. deepest depth differs significantly, lying about 42 km (26 mi) to the west of that of the 2021 paper. After post-processing the initial depth estimates by application of a full-ocean-depth sound velocity profile, Bongiovanni et al. report an (almost) as deep point at 11°19′52″N 142°12′18″E in the western basin, which geodetically differs by about 350 m (1,150 ft) from the deepest-point position determined by Van Haren et al. (11°19′57″N 142°12′07″E, in the western basin). After analysis of their multibeam data on a 75 m (246 ft) grid, the Bongiovanni et al. 2021 paper states that neither low-frequency ship-mounted sonars nor deep-sea pressure sensors currently possess the technological accuracy required to determine which location is truly the deepest.
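Echosounder depths scale almost linearly with the mean sound speed assumed for the water column, which is why reprocessing with a different sound velocity profile shifts a reported depth; a minimal sketch of van Haren's suggested 69 m correction to the Gardner et al. figure (the 1,500 m/s starting value is an illustrative round number, not either survey's actual profile):

```python
# Depth ~= (two-way travel time) * (mean sound speed) / 2, so a fractional
# change in the assumed mean sound speed rescales the depth by the same factor.
c_assumed = 1500.0                      # m/s, illustrative mean sound speed
d_reported = 10_984.0                   # m, Gardner et al. (2014) deepest point
t_two_way = 2 * d_reported / c_assumed  # s, implied two-way travel time (~14.6 s)

correction_m = 69.0                     # van Haren's suggested overestimate
c_revised = c_assumed * (d_reported - correction_m) / d_reported
d_revised = t_two_way * c_revised / 2
print(f"{c_revised:.1f} m/s -> {d_revised:.0f} m")  # ~1490.6 m/s -> 10,915 m
```

A mean sound-speed shift of only about 0.6% is thus enough to move a full-ocean-depth sounding by roughly 69 m, which is larger than the quoted ±12 to ±25 m uncertainties of the individual surveys.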
In 2021, a study by Samuel F. Greenaway, Kathryn D. Sullivan, Samuel H. Umfress, Alice B. Beittel and Karl D. Wagner was published, presenting a revised estimate of the maximum depth of the Challenger Deep based on a series of submersible dives conducted in June 2020. These depth estimates are derived from acoustic echo-sounding profiles referenced to in-situ direct pressure measurements and corrected for observed oceanographic properties of the water column, atmospheric pressure, gravity and gravity-gradient anomalies, and water-level effects. The study concludes that, according to their calculations, the deepest observed seafloor depth was 10,935 m (35,876 ft) ±6 m (20 ft) below mean sea level, at a 95% confidence level, at 11°22.3′N 142°35.3′E in the eastern basin. For this estimate, the error term is dominated by the uncertainty of the employed pressure sensor, but Greenaway et al. show that the gravity correction is also substantial. The Greenaway et al. study compares its results with other recent acoustic and pressure-based measurements for the Challenger Deep and concludes that the deepest depth in the western basin is very nearly as deep as that in the eastern basin. The disagreements among post-2000 published maximum depth estimates and their geodetic positions, however, exceed the accompanying margins of uncertainty, raising questions regarding the measurements or the reported uncertainties. Another 2021 paper, by Scott Loranger, David Barclay and Michael Buckingham, besides presenting a depth estimate of 10,983 m (36,033 ft) based on the shock wave of the December 2014 implosion (among the deepest estimated depths), also treats the differences between various maximum depth estimates and their geodetic positions.

The maximal depths reported by the 2010 sonar mapping (Gardner et al., 2014) and by the Greenaway et al. study in 2021 have not been confirmed by direct descent (pressure gauge/manometer) measurements at full-ocean depth. Expeditions have reported directly measured maximal depths in a narrow range. For the western basin, the deepest depths were reported as 10,913 m (35,804 ft) by Trieste in 1960 and 10,923 m (35,837 ft) ±4 m (13 ft) by DSV Limiting Factor in June 2020. For the central basin, the greatest reported depth is 10,915 m (35,810 ft) ±4 m (13 ft) by DSV Limiting Factor in June 2020. For the eastern basin, the deepest depths were reported as 10,911 m (35,797 ft) by ROV Kaikō in 1995, 10,902 m (35,768 ft) by ROV Nereus in 2009, 10,908 m (35,787 ft) by Deepsea Challenger in 2012, 10,929 m (35,856 ft) by the benthic lander "Leggo" in May 2019, and 10,925 m (35,843 ft) ±4 m (13 ft) by DSV Limiting Factor in May 2019.

On 23 January 1960, the Swiss-designed Trieste, originally built in Italy and acquired by the U.S. Navy, supported by the USS Wandank (ATF 204) and escorted by the USS Lewis (DE 535), descended to the ocean floor in the trench, piloted by Jacques Piccard (who co-designed the submersible along with his father, Auguste Piccard) and USN Lieutenant Don Walsh. Their crew compartment was inside a spherical pressure vessel, measuring 2.16 metres in diameter, suspended beneath a buoyancy tank 18.4 metres in length; the sphere was a heavy-duty replacement (of the Italian original) built by Krupp Steel Works of Essen, Germany. The steel walls were 12.7 cm (5.0 in) thick and designed to withstand pressure of up to 1,250 kilograms-force per square centimetre (17,800 psi; 1,210 atm; 123 MPa).
Their descent took almost five hours, and the two men spent barely twenty minutes on the ocean floor before undertaking the three-hour-and-fifteen-minute ascent. Their early departure from the ocean floor was due to their concern over a crack in the outer window caused by the temperature differences during their descent. Trieste dived at/near 11°18.5′N 142°15.5′E, bottoming at 10,911 metres (35,797 ft) ±7 m (23 ft) in the Challenger Deep's western basin, as measured by an onboard manometer. Another source states the manometer-measured depth at the bottom was 10,913 m (35,804 ft) ±5 m (16 ft). Navigation of the support ships was by celestial observation and LORAN-C, with an accuracy of 460 metres (1,509 ft) or less. Fisher noted that the Trieste's reported depth "agrees well with the sonic sounding."

On 26 March 2012 (local time), Canadian film director James Cameron made a solo descent in the DSV Deepsea Challenger to the bottom of the Challenger Deep. At approximately 05:15 ChST on 26 March (19:15 UTC on 25 March), the descent began. At 07:52 ChST (21:52 UTC), Deepsea Challenger arrived at the bottom. The descent lasted 2 hours and 36 minutes, and the recorded depth was 10,908 metres (35,787 ft) when Deepsea Challenger touched down. Cameron had planned to spend about six hours near the ocean floor exploring, but decided to start the ascent to the surface after only 2 hours and 34 minutes. The time on the bottom was shortened because a hydraulic fluid leak in the lines controlling the manipulator arm obscured the visibility out the only viewing port; it also caused the loss of the submersible's starboard thrusters. At around 12:00 ChST (02:00 UTC on 26 March), the Deepsea Challenger website says the sub resurfaced after a 90-minute ascent, although Paul Allen's tweets indicate the ascent took only about 67 minutes. During a post-dive press conference, Cameron said: "I landed on a very soft, almost gelatinous flat plain. Once I got my bearings, I drove across it for quite a distance ... and finally worked my way up the slope." The whole time, Cameron said, he did not see any fish, or any living creatures more than an inch (2.54 cm) long: "The only free swimmers I saw were small amphipods" – shrimp-like bottom-feeders.

The Five Deeps Expedition's objective was to thoroughly map and visit the deepest points of all five of the world's oceans by the end of September 2019. On 28 April 2019, explorer Victor Vescovo descended to the "Eastern Pool" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor (a Triton 36000/2 model submersible). Between 28 April and 4 May 2019, the Limiting Factor completed four dives to the bottom of the Challenger Deep. The fourth dive descended to the slightly less deep "Central Pool" (crew: Patrick Lahey, pilot; John Ramsay, sub designer). The Five Deeps Expedition estimated maximum depths of 10,927 m (35,850 ft) ±8 m (26 ft) and 10,928 m (35,853 ft) ±10.5 m (34 ft) at 11°22′09″N 142°35′20″E, by direct CTD pressure measurements and a survey of the operating area by the support ship, the Deep Submersible Support Vessel DSSV Pressure Drop, with a Kongsberg SIMRAD EM124 multibeam echosounder system. The CTD-measured pressure at 10,928 m (35,853 ft) of seawater depth was 1,126.79 bar (112.679 MPa; 16,342.7 psi).
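As a rough cross-check of that CTD figure, a naive hydrostatic conversion assuming surface seawater density overshoots the depth by roughly 280 m; the implied mean in-situ density of about 1,051 kg/m³ reflects the compression of the water column, which is why real conversions use a seawater equation of state plus latitude-dependent gravity corrections. The densities and gravity below are illustrative round numbers, not the expedition's actual processing:

```python
P = 1126.79e5  # Pa, reported CTD pressure at the bottom
g = 9.81       # m/s^2, nominal gravity (real work uses local, depth-varying gravity)

naive_depth = P / (1025.0 * g)        # assumes surface seawater density throughout
implied_density = P / (10_928.0 * g)  # mean density implied by the reported depth

print(f"naive depth: {naive_depth:.0f} m")                    # ~11,206 m
print(f"implied mean density: {implied_density:.0f} kg/m^3")  # ~1,051 kg/m^3
```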
Due to a technical problem, the (uncrewed) ultra-deep-sea lander Skaff used by the Five Deeps Expedition stayed on the bottom for two and a half days before it was salvaged by the Limiting Factor (crew: Patrick Lahey, pilot; Jonathan Struwe, DNV GL specialist) from an estimated depth of 10,927 m (35,850 ft). The gathered data was published with the caveat that it was subject to further analysis and could possibly be revised in the future. The data will be donated to the GEBCO Seabed 2030 initiative. Later in 2019, following a review of bathymetric data and multiple sensor recordings taken by the DSV Limiting Factor and the ultra-deep-sea landers Closp, Flere and Skaff, the Five Deeps Expedition revised the maximum depth to 10,925 m (35,843 ft) ±4 m (13 ft).

Caladan Oceanic's "Ring of Fire" expedition in the Pacific included six crewed descents and twenty-five lander deployments into all three basins of the Challenger Deep, all piloted by Victor Vescovo, and a further topographical and marine-life survey of the entire Challenger Deep. The expedition craft used were the Deep Submersible Support Vessel DSSV Pressure Drop, the Deep-Submergence Vehicle DSV Limiting Factor, and the ultra-deep-sea landers Closp, Flere and Skaff. During the first crewed dive, on 7 June 2020, Victor Vescovo and former US astronaut (and former NOAA Administrator) Kathryn D. Sullivan descended to the "Eastern Pool" of the Challenger Deep in the Limiting Factor. On 12 June 2020, Victor Vescovo and mountaineer and explorer Vanessa O'Brien descended to the "Eastern Pool" of the Challenger Deep, spending three hours mapping the bottom. O'Brien said her dive scanned about a mile of desolate bottom terrain, finding that the surface is not flat, as once was thought, but sloping by about 18 ft (5.5 m) per mile, subject to verification. On 14 June 2020, Victor Vescovo and John Rost descended to the "Eastern Pool" of the Challenger Deep in the Limiting Factor, spending four hours at depth and transiting the bottom for nearly 2 miles. On 20 June 2020, Victor Vescovo and Kelly Walsh descended to the "Western Pool" of the Challenger Deep in the Limiting Factor, spending four hours at the bottom. They reached a maximum depth of 10,923 m (35,837 ft). Kelly Walsh is the son of the Trieste's captain Don Walsh, who descended there in 1960 with Jacques Piccard. On 21 June 2020, Victor Vescovo and Woods Hole Oceanographic Institution researcher Ying-Tsong Lin descended to the "Central Pool" of the Challenger Deep in the Limiting Factor. They reached a maximum depth of 10,915 m (35,810 ft) ±4 m (13 ft). On 26 June 2020, Victor Vescovo and Jim Wigginton descended to the "Eastern Pool" of the Challenger Deep in the Limiting Factor.

Fendouzhe (奋斗者, Striver) is a crewed Chinese deep-sea submersible developed by the China Ship Scientific Research Center (CSSRC). Between 10 October and 28 November 2020, it carried out thirteen dives in the Mariana Trench as part of a test programme. Of these, eight led to depths of more than 10,000 m (32,808 ft). On 10 November 2020, the bottom of the Challenger Deep was reached by Fendouzhe with three Chinese scientists (Zhāng Wěi 张伟 [pilot], Zhào Yáng 赵洋, and Wáng Zhìqiáng 王治强) onboard, whilst live-streaming the descent, to a reported depth of 10,909 m (35,791 ft). This made Fendouzhe the fourth crewed submersible vehicle to achieve a successful descent.
Fendouzhe (奋斗者, Striver) is a crewed Chinese deep-sea submersible developed by the China Ship Scientific Research Center (CSSRC). Between 10 October and 28 November 2020, it carried out thirteen dives in the Mariana Trench as part of a test programme, eight of which reached depths of more than 10,000 m (32,808 ft). On 10 November 2020, Fendouzhe reached the bottom of the Challenger Deep with three Chinese scientists (Zhāng Wěi 张伟 [pilot], Zhào Yáng 赵洋, and Wáng Zhìqiáng 王治强) onboard, live-streaming the descent to a reported depth of 10,909 m (35,791 ft). This made Fendouzhe the fourth crewed submersible to complete a successful descent. The pressure hull of Fendouzhe, made from a newly developed titanium alloy, offers space for three people in addition to technical equipment. Fendouzhe is equipped with cameras made by the Norwegian manufacturer Imenco. According to Ye Cong 叶聪, the chief designer of the submersible, China's goals for the dive are not just scientific investigation but also the future use of deep-sea seabed resources.

On 28 February 2021, Caladan Oceanic's "Ring of Fire 2" expedition arrived over the Challenger Deep and conducted crewed descents and lander deployments. At the start, the (uncrewed) ultra-deep-sea lander Skaff was deployed to collect water-column data by CTD for the expedition. The effects of the Pacific Plate subducting beneath the Philippine Sea Plate were among the subjects researched on site. On 1 March 2021, the first crewed descent, to the eastern pool, was made by Victor Vescovo and Richard Garriott; Garriott became the 17th person to descend to the bottom. On 2 March 2021, a descent to the eastern pool was made by Vescovo and Michael Dubno. On 5 March, a descent to the eastern pool was made by Vescovo and Hamish Harding, who traversed the bottom of the Challenger Deep. On 11 March 2021, a descent to the western pool was made by Vescovo and marine botanist Nicole Yamase. On 13 April 2021, a descent was made by deep-water submersible operations expert Rob McCallum and Tim Macdonald, who piloted the dive. A further 2021 descent with a Japanese citizen was planned. All crewed descents were conducted in the Deep-Submergence Vehicle DSV Limiting Factor.

In July 2022, for the fourth consecutive year, Caladan Oceanic's deep-submergence system, consisting of the deep submersible DSV Limiting Factor supported by the mother ship DSSV Pressure Drop, returned to the Challenger Deep. In early July 2022, Victor Vescovo was joined by Aaron Newman as mission specialist for a dive into the central pool. On 5 July 2022, Tim Macdonald piloted a dive into the eastern pool with Jim Kitchen as mission specialist. On 8 July 2022, Vescovo was joined by Dylan Taylor as mission specialist for a dive into the eastern pool. On 12 July 2022, Vescovo (on his 15th dive into the Challenger Deep) was joined by geographer and oceanographer Dawn Wright as mission specialist for a dive to 10,919 m (35,823 ft) in the western pool. Wright operated the first sidescan sonar ever to function at full-ocean depth, capturing detailed imagery along short transects of the southern wall of the western pool.

The remotely operated vehicle (ROV) Kaikō made many uncrewed descents to the Mariana Trench from its support ship RV Yokosuka during two expeditions, in 1996 and 1998. From 29 February to 4 March 1996, the ROV Kaikō made three dives into the central basin, Kaikō #21–#23. Depths ranged from 10,896 metres (35,748 ft) at 11°22.59′N 142°25.848′E to 10,898 metres (35,755 ft) at 11°22.536′N 142°26.418′E; dives #22 and #23 were to the north, and dive #21 northeast, of the deepest waters of the central basin. During the 1996 measurements, the temperature (water temperature increases at great depth due to adiabatic compression), salinity and water pressure at the sampling station were 2.6 °C (36.7 °F), 34.7‰ and 1,113 bar (111.3 MPa; 16,140 psi), respectively, at 10,897 m (35,751 ft) depth.
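The parenthetical note above, that in-situ temperature readings at great depth are inflated by adiabatic compression, can be made concrete by converting Kaikō's 1996 reading to potential temperature. The sketch below uses the community Gibbs SeaWater (GSW/TEOS-10) Python package rather than JAMSTEC's own processing, and the site coordinates are approximate assumptions used only for the salinity conversion.

```python
# Illustrative conversion of Kaiko's 1996 bottom-water reading
# (2.6 degC, 34.7 permil, 1,113 bar at 10,897 m) to potential temperature,
# using the community Gibbs SeaWater (GSW / TEOS-10) package
# (pip install gsw) -- not JAMSTEC's own processing.
import gsw

p_dbar = 11130.0          # quoted pressure of 1,113 bar, expressed in dbar
t_insitu = 2.6            # degC, measured in situ at the bottom
SP = 34.7                 # practical salinity (the quoted 34.7 permil)
lon, lat = 142.4, 11.4    # approximate Challenger Deep position (assumed)

SA = gsw.SA_from_SP(SP, p_dbar, lon, lat)         # absolute salinity, g/kg
theta = gsw.pt_from_t(SA, t_insitu, p_dbar, 0.0)  # potential temperature
print(f"potential temperature: {theta:.1f} degC")  # roughly 1 degC

# Most of the 2.6 degC in-situ reading thus reflects adiabatic compression
# of the water column, not ambient heat at the bottom.
```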
The Japanese robotic deep-sea probe Kaikō broke the depth record for uncrewed probes when it reached close to the surveyed bottom of the Challenger Deep. Created by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), it was one of the few uncrewed deep-sea probes in operation that could dive deeper than 6,000 metres (20,000 ft). Its manometer-measured depth of 10,911.4 m (35,799 ft) ±3 m (10 ft) at 11°22.39′N 142°35.54′E is believed to have been the most accurate measurement of the Challenger Deep taken up to that time. Another source states the greatest depth measured by Kaikō was 10,898 m (35,755 ft) at 11°22.10′N 142°25.85′E in 1996, and 10,907 m (35,784 ft) at 11°22.95′N 142°12.42′E in 1998. The ROV Kaikō was the first vehicle to visit the bottom of the Challenger Deep since the bathyscaphe Trieste's dive in 1960, and the first to successfully sample the trench-bottom sediment, obtaining over 360 mud samples in which approximately 3,000 different microbes were identified. Kaikō was lost at sea off Shikoku Island during Typhoon Chan-Hom on 29 May 2003.

From 2 May to 5 June 2009, the RV Kilo Moana hosted the Woods Hole Oceanographic Institution (WHOI) hybrid remotely operated vehicle (HROV) Nereus team for the first operational test of the Nereus in its 3-ton tethered ROV mode. The Nereus team was headed by expedition leader Andy Bowen of WHOI, Louis Whitcomb of Johns Hopkins University, and Dana Yoerger, also of WHOI. The expedition had two co-chief scientists, biologist Tim Shank of WHOI and geologist Patricia Fryer of the University of Hawaii, heading the science team that exploited the ship's bathymetry and organized the experiments deployed by the Nereus. From dive #007ROV, to 880 m (2,887 ft) just south of Guam, to dive #010ROV, into the Nero Deep at 9,050 m (29,692 ft), the testing gradually increased the depths and the complexity of activities at the bottom. Dive #011ROV, on 31 May 2009, saw the Nereus piloted on a 27.8-hour underwater mission, with about ten hours traversing the eastern basin of the Challenger Deep – from the south wall, northwest to the north wall – streaming live video and data back to its mothership. A maximum depth of 10,902 m (35,768 ft) was registered at 11°22.10′N 142°35.48′E. The RV Kilo Moana then relocated to the western basin, where a 19.3-hour underwater dive found a maximum depth of 10,899 m (35,758 ft) on dive #012ROV, and dive #014ROV in the same area (11°19.59′N, 142°12.99′E) encountered a maximum depth of 10,176 m (33,386 ft). The Nereus successfully recovered both sediment and rock samples from the eastern and western basins with its manipulator arm for further scientific analysis. The HROV's final dive was about 80 nmi (148.2 km) north of the Challenger Deep, in the backarc, where it dived to 2,963 m (9,721 ft) at the TOTO Caldera (12°42.00′N, 143°31.5′E). Nereus thus became the first vehicle to reach the Mariana Trench since 1998, and the deepest-diving vehicle then in operation. Project manager and developer Andy Bowen heralded the achievement as "the start of a new era in ocean exploration". Nereus, unlike Kaikō, did not need to be powered or controlled by a cable connected to a ship on the ocean surface.
The HROV Nereus was lost on 10 May 2014 while conducting a dive at 9,900 metres (32,500 ft) depth in the Kermadec Trench.

In June 2008, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) deployed the research vessel Kairei to the area of Guam for cruise KR08-05, Leg 1 and Leg 2. On 1–3 June 2008, during Leg 1, the Japanese robotic deep-sea probe ABISMO (Automatic Bottom Inspection and Sampling Mobile) on dives 11–13 almost reached the bottom about 150 km (93 mi) east of the Challenger Deep: "Unfortunately, we were unable to dive to the sea floor because the legacy primary cable of the Kaiko system was a little bit short. The 2-m long gravity core sampler was dropped in free fall, and sediment samples of 1.6m length were obtained. Twelve bottles of water samples were also obtained at various depths..." ABISMO's dive #14 was into the TOTO caldera (12°42.7777′N, 143°32.4055′E), about 60 nmi northeast of the deepest waters of the central basin of the Challenger Deep, where it obtained videos of the hydrothermal plume. Upon successful testing to 10,000 m (32,808 ft), JAMSTEC's ROV ABISMO became, briefly, the only full-ocean-depth-rated ROV in existence. On 31 May 2009, it was joined by the Woods Hole Oceanographic Institution's HROV Nereus as the only two operational full-ocean-depth-capable remotely operated vehicles in existence. During ABISMO's deepest sea-trials dive, its manometer measured a depth of 10,257 m (33,652 ft) ±3 m (10 ft) in "Area 1" (vicinity of 12°43′N, 143°33′E). Leg 2, under chief scientist Takashi Murashima, operated at the Challenger Deep 8–9 June 2008, testing JAMSTEC's new full-ocean-depth "Free Fall Mooring System", i.e. a lander. The lander was successfully tested twice to 10,895 m (35,745 ft) depth, taking video images and sediment samples at 11°22.14′N 142°25.76′E, in the central basin of the Challenger Deep.

On 23 May 2016, the uncrewed Chinese vehicle Haidou-1 dived to a depth of 10,767 m (35,325 ft) at an undisclosed position in the Mariana Trench, making China the third country, after Japan (ROV Kaikō) and the US (HROV Nereus), to deploy a full-ocean-depth ROV. This autonomous and remotely operated vehicle has a design depth of 11,000 m (36,089 ft). On 8 May 2020, the Russian uncrewed submersible Vityaz-D dived to a depth of 10,028 m (32,900 ft) at an undisclosed position in the Mariana Trench.

The summary report of the HMS Challenger expedition lists radiolaria from the two dredged samples taken when the Challenger Deep was first discovered. These (Nassellaria and Spumellaria) were reported in the Report on Radiolaria (1887) written by Ernst Haeckel. On their 1960 descent, the crew of the Trieste noted that the floor consisted of diatomaceous ooze and reported observing "some type of flatfish" lying on the seabed.

And as we were settling this final fathom, I saw a wonderful thing. Lying on the bottom just beneath us was some type of flatfish, resembling a sole, about 1 foot [30 cm] long and 6 inches [15 cm] across. Even as I saw him, his two round eyes on top of his head spied us – a monster of steel – invading his silent realm. Eyes? Why should he have eyes? Merely to see phosphorescence? The floodlight that bathed him was the first real light ever to enter this hadal realm. Here, in an instant, was the answer that biologists had asked for decades. Could life exist in the greatest depths of the ocean? It could!
And not only that, here apparently, was a true, bony teleost fish, not a primitive ray or elasmobranch. Yes, a highly evolved vertebrate, in time's arrow very close to man himself. Slowly, extremely slowly, this flatfish swam away. Moving along the bottom, partly in the ooze and partly in the water, he disappeared into his night. Slowly too – perhaps everything is slow at the bottom of the sea – Walsh and I shook hands.

Many marine biologists are now skeptical of this supposed sighting, and it is suggested that the creature may instead have been a sea cucumber. The video camera on board the Kaikō probe spotted a sea cucumber, a scale worm and a shrimp at the bottom. At the bottom of the Challenger Deep, the Nereus probe spotted one polychaete worm (a multi-legged predator) about an inch long.

An analysis of the sediment samples collected by Kaikō found large numbers of simple organisms at 10,900 m (35,800 ft). While similar lifeforms have been known to exist in shallower ocean trenches (deeper than 7,000 m) and on the abyssal plain, the lifeforms discovered in the Challenger Deep possibly represent taxa distinct from those in shallower ecosystems. Most of the organisms collected were simple, soft-shelled foraminifera (432 species according to National Geographic), with four of the others representing species of the complex, multi-chambered genera Leptohalysis and Reophax. Eighty-five per cent of the specimens were organic, soft-shelled allogromiids, which is unusual compared with samples of sediment-dwelling organisms from other deep-sea environments, where the percentage of organic-walled foraminifera ranges from 5% to 20%. As small organisms with hard, calcareous shells have trouble growing at extreme depths because of the high solubility of calcium carbonate in the pressurized water, scientists theorize that the preponderance of soft-shelled organisms in the Challenger Deep may have resulted from the typical biosphere present when the Challenger Deep was shallower than it is now. Over the course of six to nine million years, as the Challenger Deep grew to its present depth, many of the species present in the sediment died out or were unable to adapt to the increasing water pressure and changing environment.

On 17 March 2013, researchers reported data suggesting that piezophilic microorganisms thrive in the Challenger Deep. Other researchers reported related studies showing that microbes thrive inside rocks up to 579 m (1,900 ft) below the sea floor, under 2,591 m (8,500 ft) of ocean, off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere – they're extremely adaptable to conditions, and survive wherever they are."
[ { "paragraph_id": 0, "text": "The Challenger Deep is the deepest known point of the seabed of Earth, located in the western Pacific Ocean at the southern end of the Mariana Trench, in the ocean territory of the Federated States of Micronesia. According to the GEBCO Gazetteer of Undersea Feature Names the depression's depth is 10,920 ± 10 m (35,827 ± 33 ft) at 11°22.4′N 142°35.5′E / 11.3733°N 142.5917°E / 11.3733; 142.5917, although its exact geodetic location remains inconclusive and its depth has been measured at 10,902–10,929 m (35,768–35,856 ft) by deep-diving submersibles, remotely operated underwater vehicles and benthic landers, and (sometimes) slightly more by sonar bathymetry. The differences in depth estimates and their geodetic positions are scientifically explainable by the difficulty of researching such deep locations.", "title": "" }, { "paragraph_id": 1, "text": "The depression is named after the British Royal Navy survey ships HMS Challenger, whose expedition of 1872–1876 first located it, and HMS Challenger II, whose expedition of 1950-1952 established its record-setting depth. The first descent by any vehicle was by the bathyscaphe Trieste in January 1960. In March 2012, a solo descent was made by film director James Cameron in the deep-submergence vehicle Deepsea Challenger. As of July 2022, 27 people have descended to Challenger Deep.", "title": "" }, { "paragraph_id": 2, "text": "The Challenger Deep is a relatively small slot-shaped depression in the bottom of a considerably larger crescent-shaped oceanic trench, which itself is an unusually deep feature in the ocean floor. The Challenger Deep consists of three basins, each 6 to 10 km (3.7 to 6.2 mi) long, 2 km (1.2 mi) wide, and over 10,850 m (35,597 ft) in depth, oriented in echelon from west to east, separated by mounds between the basins 200 to 300 m (660 to 980 ft) higher. The three basins feature extends about 48 km (30 mi) west to east if measured at the 10,650 m (34,941 ft) isobath. Both the western and eastern basins have recorded depths (by sonar bathymetry) in excess of 10,920 m (35,827 ft), while the center basin is slightly less deep. The closest land to the Challenger Deep is Fais Island (one of the outer islands of Yap), 287 km (178 mi) southwest, and Guam, 304 km (189 mi) to the northeast. Detailed sonar mapping of the western, center and eastern basins in June 2020 by the DSSV Pressure Drop combined with crewed descents revealed that they undulate with slopes and piles of rocks above a bed of pelagic ooze. This conforms with the description of Challenger Deep as consisting of an elongated seabed section with distinct sub-basins or sediment-filled pools.", "title": "Topography" }, { "paragraph_id": 3, "text": "Over many years, the search for, and investigation of, the location of the maximum depth of the world's oceans has involved many different vessels, and continues into the twenty-first century.", "title": "Surveys and bathymetry" }, { "paragraph_id": 4, "text": "The accuracy of determining geographical location, and the beamwidth of (multibeam) echosounder systems, limits the horizontal and vertical bathymetric sensor resolution that hydrographers can obtain from onsite data. This is especially important when sounding in deep water, as the resulting footprint of an acoustic pulse gets large once it reaches a distant sea floor. Further, sonar operation is affected by variations in sound speed, particularly in the vertical plane. 
The speed is determined by the water's bulk modulus, mass, and density. The bulk modulus is affected by temperature, pressure, and dissolved impurities (usually salinity).", "title": "Surveys and bathymetry" }, { "paragraph_id": 5, "text": "In 1875, during her transit from the Admiralty Islands in the Bismarck Archipelago to Yokohama in Japan, the three-masted sailing corvette HMS Challenger attempted to make landfall at Spanish Marianas (now Guam), but was set to the west by \"baffling winds\" preventing her crew from \"visiting either the Carolines or the Ladrones.\" Their altered path took them over the undersea canyon which later became known as the Challenger Deep. Depth soundings were taken by Baillie-weighted marked rope, and geographical locations were determined by celestial navigation (to an estimated accuracy of two nautical miles). One of their samples was taken within fifteen miles of the deepest spot in all of Earth's oceans. On 23 March 1875, at sample station number #225, HMS Challenger recorded the bottom at 4,475 fathoms (26,850 ft; 8,184 m) deep, (the deepest sounding of her three-plus-year eastward circumnavigation of the Earth) at 11°24′N 143°16′E / 11.400°N 143.267°E / 11.400; 143.267 – and confirmed it with a second sounding at the same location. The serendipitous discovery of Earth's deepest depression by history's first major scientific expedition devoted entirely to the emerging science of oceanography, was incredibly good fortune, and especially notable when compared to the Earth's third deepest site (the Sirena Deep only 150 nautical miles east of the Challenger Deep), which would remain undiscovered for another 122 years.", "title": "Surveys and bathymetry" }, { "paragraph_id": 6, "text": "Seventy-five years later, the 1,140-ton British survey vessel HMS Challenger II, on her three-year westward circumnavigation of Earth, investigated the extreme depths southwest of Guam reported in 1875 by her predecessor, HMS Challenger. On her southbound track from Japan to New Zealand (May–July 1951), Challenger II conducted a survey of the Marianas Trench between Guam and Ulithi atoll, using seismic-sized bomb-soundings and recorded a maximum depth of 5,663 fathoms (33,978 ft; 10,356 m). The depth was beyond Challenger II's echo sounder capability to verify, so they resorted to using a taut wire with \"140 lbs of scrap iron\", and documented a depth of 5,899 fathoms (35,394 ft; 10,788 m). The Senior Scientist aboard Challenger II, Thomas Gaskell, recalled:", "title": "Surveys and bathymetry" }, { "paragraph_id": 7, "text": "[I]t took from ten past five in the evening until twenty to seven, that is an hour and a half, for the iron weight to fall to the sea-bottom. It was almost dark by the time the weight struck, but great excitement greeted the reading...", "title": "Surveys and bathymetry" }, { "paragraph_id": 8, "text": "In New Zealand, the Challenger II team gained the assistance of the Royal New Zealand Dockyard, \"who managed to boost the echo sounder to record at the greatest depths\". They returned to the \"Marianas Deep\" (sic) in October 1951. Using their newly improved echo sounder, they ran survey lines at right angles to the axis of the trench and discovered \"a considerable area of a depth greater than 5,900 fathoms (35,400 ft; 10,790 m)\" – later identified as the Challenger Deep's western basin. The greatest depth recorded was 5,940 fathoms (35,640 ft; 10,863 m), at 11°19′N 142°15′E / 11.317°N 142.250°E / 11.317; 142.250. 
Navigational accuracy of several hundred meters was attained by celestial navigation and LORAN-A. As Gaskell explained, the measurement", "title": "Surveys and bathymetry" }, { "paragraph_id": 9, "text": "was not more than 50 miles from the spot where the nineteenth-century Challenger found her deepest depth [...] and it may be thought fitting that a ship with the name Challenger should put the seal on the work of that great pioneering expedition of oceanography.", "title": "Surveys and bathymetry" }, { "paragraph_id": 10, "text": "The term \"Challenger Deep\" came into use after this 1951–52 Challenger circumnavigation, and commemorates both British ships of that name involved with the discovery of the deepest basin of the world's oceans.", "title": "Surveys and bathymetry" }, { "paragraph_id": 11, "text": "In August 1957, the Soviet 3,248-ton Vernadsky Institute of Geochemistry research vessel Vityaz recorded a maximum depth of 11,034 ± 50 m (36,201 ± 164 ft) at 11°20.9′N 142°11.5′E / 11.3483°N 142.1917°E / 11.3483; 142.1917 in the western basin of the Challenger Deep during a brief transit of the area on Cruise #25. She returned in 1958, Cruise #27, to conduct a detailed single beam bathymetry survey involving over a dozen transects of the Deep, with an extensive examination of the western basin and a quick peek into the eastern basin. Fisher records a total of three Vityaz sounding locations on Fig.2 \"Trenches\" (1963), one within yards of the 142°11.5' E location, and a third at 11°20.0′N 142°07′E / 11.3333°N 142.117°E / 11.3333; 142.117, all with 11,034 ± 50 m (36,201 ± 164 ft) depth. The depths were considered statistical outliers, and a depth greater than 11,000 m has never been proven. Taira reports that if Vityaz's depth was corrected with the same methodology used by the Japanese RV Hakuho Maru expedition of December 1992, it would be presented as 10,983 ± 50 m (36,033 ± 164 ft), as opposed to modern depths from multibeam echosounder systems greater than 10,900 metres (35,800 ft) with the NOAA accepted maximum of 10,995 ± 10 m (36,073 ± 33 ft) in the western basin.", "title": "Surveys and bathymetry" }, { "paragraph_id": 12, "text": "The first definitive verification of both the depth and location of the Challenger Deep (western basin) was determined by Dr. R. L. Fisher from the Scripps Institution of Oceanography, aboard the 325-ton research vessel Stranger. Using explosive soundings, they recorded 10,850 ± 20 m (35,597 ± 66 ft) at/near 11°18′N 142°14′E / 11.300°N 142.233°E / 11.300; 142.233 in July 1959. Stranger used celestial and LORAN-C for navigation. LORAN-C navigation provided geographical accuracy of 460 m (1,509 ft) or better. According to another source RV Stranger using bomb-sounding surveyed a maximum depth of 10,915 ± 10 m (35,810 ± 33 ft) at 11°20.0′N 142°11.8′E / 11.3333°N 142.1967°E / 11.3333; 142.1967. Discrepancies between the geographical location (lat/long) of Stranger's deepest depths and those from earlier expeditions (Challenger II 1951; Vityaz 1957 and 1958) \"are probably due to uncertainties in fixing the ships' positions\". Stranger's north-south zig-zag survey passed well to the east of the eastern basin southbound, and well to the west of the eastern basin northbound, thus failed to discover the eastern basin of the Challenger Deep. The maximum depth measured near longitude 142°30'E was 10,760 ± 20 m (35,302 ± 66 ft), about 10 km west of the eastern basin's deepest point. 
This was an important gap in information, as the eastern basin was later reported as deeper than the other two basins. Stranger crossed the center basin twice, measuring a maximum depth of 10,830 ± 20 m (35,531 ± 66 ft) in the vicinity of 142°22'E. At the western end of the central basin (approximately 142°18'E), they recorded a depth of 10,805 ± 20 m (35,449 ± 66 ft). The western basin received four transects by Stranger, recording depths of 10,830 ± 20 m (35,531 ± 66 ft) toward the central basin, near where Trieste dived in 1960 (vicinity 11°18.5′N 142°15.5′E / 11.3083°N 142.2583°E / 11.3083; 142.2583, and where Challenger II, in 1950, recorded 10,863 ± 35 m (35,640 ± 115 ft). At the far western end of the western basin (about 142°11'E), the Stranger recorded 10,850 ± 20 m (35,597 ± 66 ft), some 6 km south of the location where Vityaz recorded 11,034 ± 50 m (36,201 ± 164 ft) in 1957–1958. Fisher stated: \"differences in the Vitiaz [sic] and Stranger–Challenger II depths can be attributed to the [sound] velocity correction function used\". After investigating the Challenger Deep, Stranger proceeded to the Philippine Trench and transected the trench over twenty times in August 1959, finding a maximum depth of 10,030 ± 10 m (32,907 ± 33 ft), and thus established that the Challenger Deep was about 800 metres (2,600 ft) deeper than the Philippine Trench. The 1959 Stranger surveys of the Challenger Deep and of the Philippine Trench informed the U.S. Navy as to the appropriate site for Trieste's record dive in 1960.", "title": "Surveys and bathymetry" }, { "paragraph_id": 13, "text": "The Proa Expedition, Leg 2, returned Fisher to the Challenger Deep on 12–13 April 1962 aboard the Scripps research vessel Spencer F. Baird (formerly the steel-hulled US Army large tug LT-581) and employed a Precision Depth Recorder (PDR) to verify the extreme depths previously reported. They recorded a maximum depth of 10,915 metres (35,810 ft) (location not available). Additionally, at location \"H-4\" in the Challenger Deep, the expedition cast three taut-wire soundings: on 12 April, the first cast was to 5,078 fathoms (corrected for wire angle) 9,287 metres (30,469 ft) at 11°23′N 142°19.5′E / 11.383°N 142.3250°E / 11.383; 142.3250 in the central basin (Up until 1965, US research vessels recorded soundings in fathoms). The second cast, also on 12 April, was to 5,000 fathoms at 11°20.5′N 142°22.5′E / 11.3417°N 142.3750°E / 11.3417; 142.3750 in the central basin. On 13 April, the final cast recorded 5,297 fathoms (corrected for wire angle) 9,687 metres (31,781 ft) at 11°17.5′N 142°11′E / 11.2917°N 142.183°E / 11.2917; 142.183 (the western basin). They were chased off by a hurricane after only two days on-site. Once again, Fisher entirely missed the eastern basin of the Challenger Deep, which later proved to contain the deepest depths.", "title": "Surveys and bathymetry" }, { "paragraph_id": 14, "text": "The Scripps Institution of Oceanography deployed the 1,490-ton Navy-owned, civilian-crewed research vessel Thomas Washington (AGOR-10) to the Mariana Trench on several expeditions from 1975 to 1986. The first of these was the Eurydice Expedition, Leg 8 which brought Fisher back to the Challenger Deep's western basin from 28–31 March 1975. Thomas Washington established geodetic positioning by (SATNAV) with Autolog Gyro and EM Log. Bathymetrics were by a 12 kHz Precision Depth Recorder (PDR) with a single 60° beam. They mapped one, \"possibly two\", axial basins with a depth of 10,915 ± 20 m (35,810 ± 66 ft). 
Five dredges were hauled 27–31 March, all into or slightly north of the deepest depths of the western basin. Fisher noted that this survey of the Challenger Deep (western basin) had \"provided nothing to support and much to refute recent claims of depths there greater than 10,915 ± 20 m (35,810 ± 66 ft).\" While Fisher missed the eastern basin of the Challenger Deep (for the third time), he did report a deep depression about 150 nautical miles east of the western basin. The 25 March dredge haul at 12°03.72′N 142°33.42′E / 12.06200°N 142.55700°E / 12.06200; 142.55700 encountered 10,015 metres (32,858 ft), which pre-shadowed by 22 years the discovery of HMRG Deep/Sirena Deep in 1997. The deepest waters of the HMRG Deep/Serina Deep at 10,714 ± 20 m (35,151 ± 66 ft) are centered at/near 12°03.94′N 142°34.866′E / 12.06567°N 142.581100°E / 12.06567; 142.581100, approximately 2.65 km from Fisher's 25 March 1975 10,015 metres (32,858 ft) dredge haul.", "title": "Surveys and bathymetry" }, { "paragraph_id": 15, "text": "On Scripps Institution of Oceanography's INDOPAC Expedition Leg 3, the chief scientist, Dr. Joseph L. Reid, and oceanographer Arnold W. Mantyla made a hydrocast of a free vehicle (a special-purpose benthic lander (or \"baited camera\") for measurements of water temperature and salinity) on 27 May 1976 into the western basin of the Challenger Deep, \"Station 21\", at 11°19.9′N 142°10.8′E / 11.3317°N 142.1800°E / 11.3317; 142.1800 at about 10,840 metres (35,560 ft) depth. On INDOPAC Expedition Leg 9, under chief scientist A. Aristides Yayanos, Thomas Washington spent nine days from 13–21 January 1977 conducting an extensive and detailed investigation of the Challenger Deep, mainly with biological objectives. \"Echo soundings were carried out primarily with a 3.5 kHz single-beam system, with a 12 kHz echosounder operated in addition some of the time\" (the 12 kHz system was activated for testing on 16 January). A benthic lander was put into the western basin (11°19.7′N 142°09.3′E / 11.3283°N 142.1550°E / 11.3283; 142.1550) on 13 January, bottoming at 10,663 metres (34,984 ft) and recovered 50 hours later in damaged condition. Quickly repaired, it was again put down on the 15th to 10,559 metres (34,642 ft) depth at 11°23.3′N 142°13.8′E / 11.3883°N 142.2300°E / 11.3883; 142.2300. It was recovered on the 17th with excellent photography of amphipods (shrimp) from the Challenger Deep's western basin. The benthic lander was put down for the third and last time on the 17th, at 11°20.1′N 142°25.2′E / 11.3350°N 142.4200°E / 11.3350; 142.4200, in the central basin at a depth of 10,285 metres (33,743 ft). The benthic lander was not recovered and may remain on the bottom in the vicinity of 11°20.1′N 142°25.2′E / 11.3350°N 142.4200°E / 11.3350; 142.4200. Free traps and pressure-retaining traps were put down at eight locations from 13 to 19 January into the western basin, at depths ranging from 7,353 to 10,715 metres (24,124–35,154 ft). Both the free traps and the pressure-retaining traps brought up good sample amphipods for study. While the ship briefly visited the area of the eastern basin, the expedition did not recognize it as potentially the deepest of the three Challenger Deep basins.", "title": "Surveys and bathymetry" }, { "paragraph_id": 16, "text": "Thomas Washington returned briefly to the Challenger Deep on 17–19 October 1978 during Mariana Expedition Leg 5 under chief scientist James W. Hawkins. 
The ship tracked to the south and west of the eastern basin, and recorded depths between 5,093 and 7,182 metres (16,709–23,563 ft). Another miss. On Mariana Expedition Leg 8, under chief scientist Yayanos, Thomas Washington was again involved, from 12–21 December 1978, with an intensive biological study of the western and central basins of the Challenger Deep. Fourteen traps and pressure-retaining traps were put down to depths ranging from 10,455 to 10,927 metres (34,301–35,850 ft); the greatest depth was at 11°20.0′N 142°11.8′E / 11.3333°N 142.1967°E / 11.3333; 142.1967. All of the 10,900-plus m recordings were in the western basin. The 10,455 metres (34,301 ft) depth was furthest east at 142°26.4' E (in the central basin), about 17 km west of the eastern basin. Again, focused efforts on the known areas of extreme depths (the western and central basins) were so tight that the eastern basin again was missed by this expedition.", "title": "Surveys and bathymetry" }, { "paragraph_id": 17, "text": "From 20 to 30 November 1980, Thomas Washington was on site at the western basin of the Challenger Deep, as part of Rama Expedition Leg 7, again with chief-scientist Dr. A. A. Yayanos. Yayanos directed Thomas Washington in arguably the most extensive and wide-ranging of all single-beam bathymetric examinations of the Challenger Deep ever undertaken, with dozens of transits of the western basin, and ranging far into the backarc of the Challenger Deep (northward), with significant excursions into the Pacific Plate (southward) and along the trench axis to the east. They hauled eight dredges in the western basin to depths ranging from 10,015 to 10,900 metres (32,858–35,761 ft), and between hauls, cast thirteen free vertical traps. The dredging and traps were for biological investigation of the bottom. In the first successful retrieval of a live animal from the Challenger Deep, on 21 November 1980 in the western basin at 11°18.7′N 142°11.6′E / 11.3117°N 142.1933°E / 11.3117; 142.1933, Yayanos recovered a live amphipod from about 10,900 meters depth with a pressurized trap. Once again, other than a brief look into the eastern basin, all bathymetric and biological investigations were into the western basin.", "title": "Surveys and bathymetry" }, { "paragraph_id": 18, "text": "On Leg 3 of the Hawaii Institute of Geophysics' (HIG) expedition 76010303, the 156-foot (48 m) research vessel Kana Keoki departed Guam primarily for a seismic investigation of the Challenger Deep area, under chief scientist Donald M. Hussong. The ship was equipped with air guns (for seismic reflection soundings deep into the Earth's mantle), magnetometer, gravimeter, 3.5 kHz and 12 kHz sonar transducers, and precision depth recorders. They ran the Deep from east to west, collecting single beam bathymetry, magnetic and gravity measurements, and employed the air guns along the trench axis, and well into the backarc and forearc, from 13 to 15 March 1976. Thence they proceeded south to the Ontong Java Plateau. All three deep basins of the Challenger Deep were covered, but Kana Keoki recorded a maximum depth of 7,800 m (25,591 ft). Seismic information developed from this survey was instrumental in gaining an understanding of the subduction of the Pacific Plate under the Philippine Sea Plate. 
In 1977, Kana Keoki returned to the Challenger Deep area for wider coverage of the forearc and backarc.", "title": "Surveys and bathymetry" }, { "paragraph_id": 19, "text": "The Hydrographic Department, Maritime Safety Agency, Japan (JHOD) deployed the newly commissioned 2,600-ton survey vessel Takuyo (HL 02) to the Challenger Deep 17–19 February 1984. Takuyo was the first Japanese ship to be equipped with the new narrowbeam SeaBeam multi-beam sonar echosounder, and was the first survey ship with multi-beam capability to survey the Challenger Deep. The system was so new that JHOD had to develop their own software for drawing bathymetric charts based on the SeaBeam digital data. In just three days, they tracked 500 miles of sounding lines, and covered about 140 km of the Challenger Deep with multibeam ensonification. Under chief scientist Hideo Nishida, they used CTD temperature and salinity data from the top 4,500 metres (14,764 ft) of the water column to correct depth measurements, and later conferred with Scripps Institution of Oceanography (including Fisher), and other GEBCO experts to confirm their depth correction methodology. They employed a combination of NAVSAT, LORAN-C and OMEGA systems for geodetic positioning with accuracy better than 400 metres (1,300 ft). The deepest location recorded was 10,920 ± 10 m (35,827 ± 33 ft) at 11°22.4′N 142°35.5′E / 11.3733°N 142.5917°E / 11.3733; 142.5917; for the first time documenting the eastern basin as the deepest of the three en echelon pools. In 1993, GEBCO recognized the 10,920 ± 10 m (35,827 ± 33 ft) report as the deepest depth of the world's oceans. Technological advances such as improved multi-beam sonar would be the driving force in uncovering the mysteries of the Challenger Deep into the future.", "title": "Surveys and bathymetry" }, { "paragraph_id": 20, "text": "The Scripps research vessel Thomas Washington's returned to the Challenger Deep in 1986 during the Papatua Expedition, Leg 8, mounting one of the first commercial multi-beam echosounders capable of reaching into the deepest trenches, i.e. the 16-beam Seabeam \"Classic\". This allowed chief scientist Yayanos an opportunity to transit the Challenger Deep with the most modern depth-sounding equipment available. During the pre-midnight hours of 21 April 1986, the multibeam echosounder produced a map of the Challenger Deep bottom with a swath of about 5–7 miles wide. The maximum depth recorded was 10,804 metres (35,446 ft) (location of depth is not available). Yayanos noted: \"The lasting impression from this cruise comes from the thoughts of the revolutionary things that Seabeam data can do for deep biology.\"", "title": "Surveys and bathymetry" }, { "paragraph_id": 21, "text": "On 22 August 1988, the U.S. Navy-owned 1,000-ton research vessel Moana Wave (AGOR-22), operated by the Hawaii Institute of Geophysics (HIG), University of Hawaii, under the direction of chief scientist Robert C. Thunell from the University of South Carolina, transited northwesterly across the central basin of the Challenger Deep, conducting a single-beam bathymetry track by their 3.5 kHz narrow (30-degs) beam echosounder with a Precision Depth Recorder. In addition to sonar bathymetry, they took 44 gravity cores and 21 box cores of bottom sediments. The deepest echosoundings recorded were 10,656 to 10,916 metres (34,961–35,814 ft), with the greatest depth at 11°22′N 142°25′E in the central basin. 
This was the first indication that all three basins contained depths in excess of 10,900 metres (35,800 ft).", "title": "Surveys and bathymetry" }, { "paragraph_id": 22, "text": "The 3,987-ton Japanese research vessel Hakuhō Maru, an Ocean Research Institute – University of Tokyo sponsored ship, on cruise KH-92-5 cast three Sea-Bird SBE-9 ultra-deep CTD (conductivity-temperature-depth) profilers in a transverse line across the Challenger Deep on 1 December 1992. The center CTD was located at 11°22.78′N 142°34.95′E / 11.37967°N 142.58250°E / 11.37967; 142.58250, in the eastern basin, at 10,989 metres (36,053 ft) by the SeaBeam depth recorder and 10,884 metres (35,709 ft) by the CTD. The other two CTDs were cast 19.9 km to the north and 16.1 km to the south. Hakuhō Maru was equipped with a narrow beam SeaBeam 500 multi-beam echosounder for depth determination, and had an Auto-Nav system with inputs from NAVSAT/NNSS, GPS, Doppler Log, EM log and track display, with a geodetic positioning accuracy approaching 100 metres (330 ft). When conducting CTD operations in the Challenger deep, they used the SeaBeam as a single beam depth recorder. At 11°22.6′N 142°35.0′E / 11.3767°N 142.5833°E / 11.3767; 142.5833 the corrected depth was 10,989 metres (36,053 ft), and at 11°22.0′N 142°34.0′E / 11.3667°N 142.5667°E / 11.3667; 142.5667 the depth was 10,927 metres (35,850 ft); both in the eastern basin. This may demonstrate that the basins might not be flat sedimentary pools but rather undulate with a difference of 50 metres (160 ft) or more. Taira revealed, \"We considered that a trough deeper that Vitiaz's record by 5 metres (16 ft) was detected. There is a possibility that a depth exceeding 11,000 metres (36,089 ft) with a horizontal scale less than the beam width of measurements exists in the Challenger Deep. Since each SeaBeam 2.7-degree beam width sonar ping expands to cover a circular area about 500 metres (1,640 ft) in diameter at 11,000 metres (36,089 ft) depth, dips in the bottom that are less than that size would be difficult to detect from a sonar-emitting platform seven miles above.", "title": "Surveys and bathymetry" }, { "paragraph_id": 23, "text": "For most of 1995 and into 1996, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) employed the 4,439-ton Research Vessel Yokosuka to conduct the testing and workup of the 11,000-meter remotely-operated vehicle (ROV) Kaikō, and the 6,500 meter ROV Shinkai. It was not until February 1996, during Yokosuka's cruise Y96-06, that Kaikō was ready for its first full depth dives. On this cruise, JAMSTEC established an area of the Challenger Deep (11°10'N to 11°30'N, by 141°50'E to 143°00'E – which later was recognized as containing three separate pools/basins en echelon, each with depths in excess of 10,900 m (35,761 ft)) toward which JAMSTEC expeditions would concentrate their investigations for the next two decades. The Yokosuka employed a 151-beam SeaBeam 2112 12 kHz multibeam echosounder, allowing search swaths 12–15 km in width at 11,000 metres (36,089 ft) depth. The depth accuracy of Yokosuka's Seabeam was about 0.1% of water depth (i.e. ± 110 metres (361 ft) for 11,000 metres (36,089 ft) depth). 
The ship's dual GPS systems attained geodetic positioning within double digit meter (100 metres (328 ft) or better) accuracy.", "title": "Surveys and bathymetry" }, { "paragraph_id": 24, "text": "Cruise KR98-01 sent JAMSTEC's two-year-old 4,517-ton Deep Sea Research Vessel RV Kairei south for a quick but thorough depth survey of the Challenger Deep, 11–13 January 1998, under chief scientist Kantaro Fujioka. Tracking largely along the trench axis of 070–250° they made five 80-km bathymetric survey tracks, spaced about 15 km apart, overlapping their SeaBeam 2112-004 (which now allowed sub-bottom profiling penetrating as much as 75 m below the bottom) while gaining gravity and magnetic data covering the entire Challenger Deep: western, central, and eastern basins.", "title": "Surveys and bathymetry" }, { "paragraph_id": 25, "text": "Kairei returned in May 1998, cruise KR98-05, with ROV Kaikō, under the direction of chief scientist Jun Hashimoto with both geophysical and biological goals. Their bathymetric survey from 14–26 May was the most intensive and thorough depth and seismic survey of the Challenger Deep performed to date. Each evening, Kaikō deployed for about four hours of bottom time for biological-related sampling, plus about seven hours of vertical transit time. When Kaikō was onboard for servicing, Kairei conducted bathymetric surveys and observations. Kairei gridded a survey area about 130 km N–S by 110 km E–W. Kaikō made six dives (#71–75) all to the same location, (11°20.8' N, 142°12.35' E), near the 10,900 metres (35,800 ft) bottom contour line in the western basin.", "title": "Surveys and bathymetry" }, { "paragraph_id": 26, "text": "The regional bathymetric map made from the data obtained in 1998 shows that the greatest depths in the eastern, central, and western depressions are 10,922 ± 74 m (35,833 ± 243 ft), 10,898 ± 62 m (35,755 ± 203 ft), and 10,908 ± 36 m (35,787 ± 118 ft), respectively, making the eastern depression the deepest of the three.", "title": "Surveys and bathymetry" }, { "paragraph_id": 27, "text": "In 1999, Kairei revisited the Challenger Deep during cruise KR99-06. The results of the 1998–1999 surveys include the first recognition that the Challenger Deep consists of three \"right-stepping en echelon individual basins bounded by the 10,500 metres (34,400 ft) depth contour line. The size of [each of] the deeps are almost identical, 14–20 km long, 4 km wide\". They concluded with the proposal \"that these three individual elongated deeps constitute the 'Challenger Deep', and [we] identify them as the East, Central and West Deep. The deepest depth we obtained during the swath mapping is 10,938 metres (35,886 ft) in the West Deep (11°20.34' N, 142°13.20 E).\" The depth was \"obtained during swath mapping ... 
confirmed in both N–S and E-W swaths.\" Speed of sound corrections were from XBT to 1,800 metres (5,900 ft), and CTD below 1,800 metres (5,900 ft).", "title": "Surveys and bathymetry" }, { "paragraph_id": 28, "text": "The cross track survey of the 1999 Kairei cruise shows that the greatest depths in the eastern, central, and western depressions are 10,920 ± 10 m (35,827 ± 33 ft), 10,894 ± 14 m (35,741 ± 46 ft), and 10,907 ± 13 m (35,784 ± 43 ft), respectively, which supports the results of the previous survey.", "title": "Surveys and bathymetry" }, { "paragraph_id": 29, "text": "In 2002 Kairei revisited the Challenger Deep 16–25 October 2002, as cruise KR02-13 (a cooperative Japan-US-South Korea research program) with chief scientist Jun Hashimoto in charge; again with Kazuyoshi Hirata managing the ROV Kaikō team. On this survey, the size of each of the three basins was refined to 6–10 km long by about 2 km wide and in excess of 10,850 m (35,597 ft) deep. In marked contrast to the Kairei surveys of 1998 and 1999, the detailed survey in 2002 determined that the deepest point in the Challenger Deep is located in the eastern basin around 11°22.260′N 142°35.589′E / 11.371000°N 142.593150°E / 11.371000; 142.593150, with a depth of 10,920 ± 5 m (35,827 ± 16 ft), located about 290 m (950 ft) southeast of the deepest site determined by the survey vessel Takuyo in 1984. The 2002 surveys of both the western and eastern basins were tight, with especially meticulous cross-gridding of the eastern basin with ten parallel tracks N–S and E–W less than 250 meters apart. On the morning of 17 October, ROV Kaikō dive #272 began and recovered over 33 hours later, with the ROV working at the bottom of the western basin for 26 hours (vicinity of 11°20.148' N, 142°11.774 E at 10,893 m (35,738 ft)). Five Kaikō dives followed on a daily basis into the same area to service benthic landers and other scientific equipment, with dive #277 recovered on 25 October. Traps brought up large numbers of amphipods (sea fleas), and cameras recorded holothurians (sea cucumbers), White polychaetes (bristle worms), tube worms, and other biological species. During its 1998, 1999 surveys, Kairei was equipped with a GPS satellite-based radionavigation system. The United States government lifted the GPS selective availability in 2000, so during its 2002 survey, Kairei had access to non-degraded GPS positional services and achieved single-digit meter accuracy in geodetic positioning.", "title": "Surveys and bathymetry" }, { "paragraph_id": 30, "text": "The 2.516-ton research vessel Melville, at the time operated by the Scripps Institution of Oceanography, took the Cook Expedition, Leg 6 with chief scientist Patricia Fryer of the University of Hawaii from Guam on 10 February 2001 to the Challenger Deep for a survey titled \"Subduction Factory Studies in the Southern Mariana\", including HMR-1 sonar mapping, magnetics, gravity measurements, and dredging in the Mariana arc region. They covered all three basins, then tracked 120-nautical-mile-long (222.2 km) lines of bathymetry East-West, stepping northward from the Challenger Deep in 12 km (7.5 mi) sidesteps, covering more than 90 nmi (166.7 km) north into the backarc with overlapping swaths from their SeaBeam 2000 12 kHz multi-beam echosounder and MR1 towed system. They also gathered magnetic and gravity information, but no seismic data. 
Their primary survey instrument was the MR1 towed sonar, a shallow-towed 11/12 kHz bathymetric sidescan sonar developed and operated by the Hawaii Mapping Research Group (HMRG), a research and operational group within University of Hawaii's School of Ocean and Earth Science and Technology (SOEST) and the Hawaii Institute of Geophysics and Planetology (HIGP). The MR1 is full-ocean-depth capable, providing both bathymetry and sidescan data.", "title": "Surveys and bathymetry" }, { "paragraph_id": 31, "text": "Leg 7 of the Cook Expedition continued the MR-1 survey of the Mariana Trench backarc from 4 March to 12 April 2001 under chief scientist Sherman Bloomer of Oregon State University.", "title": "Surveys and bathymetry" }, { "paragraph_id": 32, "text": "In May/June 2009, the US Navy-owned 3,064-ton twin-hulled research vessel Kilo Moana (T-AGOR 26) was sent to the Challenger Deep area to conduct research. Kilo Moana is civilian-crewed and operated by SOEST. It is equipped with two multibeam echosounders with sub-bottom profiler add-ons (the 191-beam 12 kHz Kongsberg Simrad EM120 with SBP-1200, capable of accuracies of 0.2–0.5% of water depth across the entire swath), gravimeter, and magnetometer. The EM-120 uses 1 by 1 degree sonar-emissions at the sea surface. Each 1 degree beam width sonar ping expands to cover a circular area about 192 metres (630 ft) in diameter at 11,000 metres (36,089 ft) depth. Whilst mapping the Challenger Deep the sonar equipment indicated a maximum depth of 10,971 m (35,994 ft) at an undisclosed position. Navigation equipment includes the Applanix POS MV320 V4, rated at accuracies of 0.5–2 m. RV Kilo Moana was also used as the support ship of the hybrid remotely operated underwater vehicle (HROV) Nereus that dived three times to the Challenger Deep bottom during the May/June 2009 cruise and did not confirm the sonar established maximum depth by its support ship.", "title": "Surveys and bathymetry" }, { "paragraph_id": 33, "text": "Cruise YK09-08 brought the JAMSTEC 4,429-ton research vessel Yokosuka back to the Mariana Trough and to the Challenger Deep June–July 2009. Their mission was a two-part program: surveying three hydrothermal vent sites in the southern Mariana Trough backarc basin near 12°57'N, 143°37'E about 130 nmi northeast of the central basin of the Challenger Deep, using the autonomous underwater vehicle Urashima. AUV Urashima dives #90–94, were to a maximum depth of 3500 meters, and were successful in surveying all three sites with a Reson SEABAT7125AUV multibeam echosounder for bathymetry, and multiple water testers to detect and map trace elements spewed into the water from hydrothermal vents, white smokers, and hot spots. Kyoko OKINO from the Ocean Research Institute, University of Tokyo, was principal investigator for this aspect of the cruise. The second goal of the cruise was to deploy a new \"10K free fall camera system\" called Ashura, to sample sediments and biologics at the bottom of the Challenger Deep. The principal investigator at the Challenger Deep was Taishi Tsubouchi of JAMSTEC. The lander Ashura made two descents: on the first, 6 July 2009, Ashura bottomed at 11°22.3130′N 142°25.9412′E / 11.3718833°N 142.4323533°E / 11.3718833; 142.4323533 at 10,867 metres (35,653 ft). The second descent (on 10 July 2009) was to 11°22.1136′N 142°25.8547′E / 11.3685600°N 142.4309117°E / 11.3685600; 142.4309117 at 10,897 metres (35,751 ft). 
The 270 kg Ashura was equipped with multiple baited traps, a HTDV video camera, and devices to recover sediment, water, and biological samples (mostly amphipods at the bait, and bacteria and fungus from the sediment and water samples).", "title": "Surveys and bathymetry" }, { "paragraph_id": 34, "text": "On 7 October 2010, further sonar mapping of the Challenger Deep area was conducted by the US Center for Coastal & Ocean Mapping/Joint Hydrographic Center (CCOM/JHC) aboard the 4.762-ton Sumner. The results were reported in December 2011 at the annual American Geophysical Union fall meeting. Using a Kongsberg Maritime EM 122 multi-beam echosounder system coupled to positioning equipment that can determine latitude and longitude up to 50 cm (20 in) accuracy, from thousands of individual soundings around the deepest part the CCOM/JHC team preliminary determined that the Challenger Deep has a maximum depth of 10,994 m (36,070 ft) at 11°19′35″N 142°11′14″E / 11.326344°N 142.187248°E / 11.326344; 142.187248, with an estimated vertical uncertainty of ±40 m (131 ft) at two standard deviations (i.e. ≈ 95.4%) confidence level. A secondary deep with a depth of 10,951 m (35,928 ft) was located at approximately 23.75 nmi (44.0 km) to the east at 11°22′11″N 142°35′19″E / 11.369639°N 142.588582°E / 11.369639; 142.588582 in the eastern basin of the Challenger Deep.", "title": "Surveys and bathymetry" }, { "paragraph_id": 35, "text": "JAMSTEC returned Yokosuka to the Challenger Deep with cruise YK10-16, 21–28 November 2010. The chief scientist of this joint Japanese-Danish expedition was Hiroshi Kitazato of the Institute of Biogeosciences, JAMSTEC. The cruise was titled \"Biogeosciences at the Challenger Deep: relict organisms and their relations to biogeochemical cycles\". The Japanese teams made five deployments of their 11,000-meter camera system (three to 6,000 meters – two into the central basin of the Challenger Deep) which returned with 15 sediment cores, video records and 140 scavenging amphipod specimens. The Danish Ultra Deep Lander System was employed by Ronnie Glud et al on four casts, two into the central basin of the Challenger Deep and two to 6,000 m some 34 nmi west of the central basin. The deepest depth recorded was on 28 November 2010 – camera cast CS5 – 11°21.9810′N 142°25.8680′E / 11.3663500°N 142.4311333°E / 11.3663500; 142.4311333}, at a corrected depth of 10,889.6 metres (35,727 ft) (the central basin).", "title": "Surveys and bathymetry" }, { "paragraph_id": 36, "text": "With JAMSTEC Cruises YK13-09 and YK13-12, Yokosuka hosted chief scientist Hidetaka Nomaki for a trip to New Zealand waters (YK13-09), with the return cruise identified as YK13-12. The project name was QUELLE2013; and the cruise title was: \"In situ experimental & sampling study to understand abyssal biodiversity and biogeochemical cycles\". They spent one day on the return trip at the Challenger Deep to obtain DNA/RNA on the large amphipods inhabiting the Deep (Hirondellea gigas). Hideki Kobayashi (Biogeos, JAMSTEC) and the team deployed a benthic lander on 23 November 2013 with eleven baited traps (three bald, five covered by insulating materials, and three automatically sealed after nine hours) into the central basin of the Challenger Deep at 11°21.9082′N 142°25.7606′E / 11.3651367°N 142.4293433°E / 11.3651367; 142.4293433, depth 10,896 metres (35,748 ft). 
After an eight-hour, 46-minute stay at the bottom, they recovered some 90 individual Hirondellea gigas.", "title": "Surveys and bathymetry" }, { "paragraph_id": 37, "text": "JAMSTEC deployed Kairei to the Challenger Deep again 11–17 January 2014, under the leadership of chief scientist Takuro Nunora. The cruise identifier was KR14-01, titled: \"Trench biosphere expedition for the Challenger Deep, Mariana Trench\". The expedition sampled at six stations transecting the central basin, with only two deployments of the \"11-K camera system\" lander for sediment cores and water samples to \"Station C\" at the deepest depth, i.e. 11°22.19429′N 142°25.7574′E / 11.36990483°N 142.4292900°E / 11.36990483; 142.4292900, at 10,903 metres (35,771 ft). The other stations were investigated with the \"Multi-core\" lander, both to the backarc northward, and to the Pacific Plate southward. The 11,000-meter capable crawler-driven ROV ABIMSO was sent to 7,646 m depth about 20 nmi due north of the central basin (ABISMO dive #21) specifically to identify possible hydrothermal activity on the north slope of the Challenger Deep, as suggested by findings from Kairei cruise KR08-05 in 2008. AMISMO's dives #20 and #22 were to 7,900 meters about 15 nmi north of the deepest waters of the central basin. Italian researchers under the leadership of Laura Carugati from the Polytechnic University of Marche, Italy (UNIVPM) were investigating the dynamics in virus/prokaryotes interactions in the Mariana Trench.", "title": "Surveys and bathymetry" }, { "paragraph_id": 38, "text": "From 16–19 December 2014, the Schmidt Ocean Institute's 2,024-ton research vessel Falkor, under chief scientist Douglas Bartlett from the Scripps Institution of Oceanography, deployed four different untethered instruments into the Challenger Deep for seven total releases. Four landers were deployed on 16 December into the central basin: the baited video-equipped lander Leggo for biologics; the lander ARI to 11°21.5809′N 142°27.2969′E / 11.3596817°N 142.4549483°E / 11.3596817; 142.4549483 for water chemistry; and the probes Deep Sound 3 and Deep Sound 2. Both Deep Sound probes recorded acoustics floating at 9,000 metres (29,528 ft) depth, until Deep Sound 3 imploded at the depth of 8,620 metres (28,281 ft) (about 2,200 metres (7,218 ft) above the bottom) at 11°21.99′N 142°27.2484′E / 11.36650°N 142.4541400°E / 11.36650; 142.4541400. The Deep Sound 2 recorded the implosion of Deep Sound 3, providing a unique recording of an implosion within the Challenger Deep depression. In addition to the loss of the Deep Sound 3 by implosion, the lander ARI failed to respond upon receiving its instruction to drop weights, and was never recovered. On 16/17 December, Leggo was returned to the central basin baited for amphipods. On the 17th, RV Falkor relocated 17 nms eastward to the eastern basin, where they again deployed both the Leggo (baited and with its full camera load), and the Deep Sound 2. Deep Sound 2 was programmed to drop to 9,000 metres (29,528 ft) and remain at that depth during its recording of sounds within the trench. On 19 December Leggo landed at 11°22.11216′N 142°35.250996′E / 11.36853600°N 142.587516600°E / 11.36853600; 142.587516600 at a uncorrected depth of 11,168 metres (36,640 ft) according to its pressure sensor readings. This reading was corrected to 10,929 metres (35,856 ft) depth. Leggo returned with good photography of amphipods feeding on the lander's mackerel bait and with sample amphipods. 
Falkor departed the Challenger Deep on 19 December en route to the Sirena Deep in the Marianas Trench Marine National Monument. RV Falkor had both a Kongsberg EM302 and EM710 multibeam echosounder for bathymetry, and an Oceaneering C-Nav 3050 global navigation satellite system receiver, capable of calculating geodetic positioning with an accuracy better than 5 cm (2.0 in) horizontally and 15 cm (5.9 in) vertically.", "title": "Surveys and bathymetry" }, { "paragraph_id": 39, "text": "From 10 to 13 July 2015, the Guam-based 1,930-ton US Coast Guard Cutter Sequoia (WLB 215) hosted a team of researchers, under chief scientist Robert P. Dziak, from the NOAA Pacific Marine Environmental Laboratory (PMEL), the University of Washington, and Oregon State University, in deploying PMEL's \"Full-Ocean Depth Mooring\", a 45-meter-long moored deep-ocean hydrophone and pressure sensor array into the western basin of the Challenger Deep. A 6-hour descent into the western basin anchored the array at 10,854.7 ± 8.9 m (35,613 ± 29 ft) of water depth, at 11°20.127′N 142°12.0233′E / 11.335450°N 142.2003883°E / 11.335450; 142.2003883, about 1 km northeast of Sumner's deepest depth, recorded in 2010. After 16 weeks, the moored array was recovered on 2–4 November 2015. \"Observed sound sources included earthquake signals (T phases), baleen and odontocete cetacean vocalizations, ship propeller sounds, airguns, active sonar and the passing of a Category 4 typhoon.\" The science team described their results as \"the first multiday, broadband record of ambient sound at Challenger Deep, as well as only the fifth direct depth measurement\".", "title": "Surveys and bathymetry" }, { "paragraph_id": 40, "text": "The 3,536-ton research vessel Xiangyanghong 09 deployed on Leg II of the 37th China Cruise Dayang (DY37II) sponsored by the National Deep Sea Center, Qingdao and the Institute of Deep-Sea Science and Engineering, Chinese Academy of Sciences (Sanya, Hainan), to the Challenger Deep western basin area (11°22' N, 142°25' E) 4 June – 12 July 2016. As the mother ship for China's crewed deep submersible Jiaolong, the expedition carried out an exploration of the Challenger Deep to investigate the geological, biological, and chemical characteristics of the hadal zone. The diving area for this leg was on the southern slope of the Challenger Deep, at depths from about 6,300 to 8,300 metres (20,669 to 27,231 ft). The submersible completed nine piloted dives on the northern backarc and south area (Pacific plate) of the Challenger Deep to depths from 5,500 to 6,700 metres (18,045 to 21,982 ft). During the cruise, Jiaolong regularly deployed gas-tight samplers to collect water near the sea bottom. In a test of navigational proficiency, Jiaolong used an Ultra-Short Base Line (USBL) positioning system at a depth of more than 6,600 metres (21,654 ft) to retrieve sampling bottles.", "title": "Surveys and bathymetry" }, { "paragraph_id": 41, "text": "From 22 June to 12 August 2016 (cruises 2016S1 and 2016S2), the Chinese Academy of Sciences' 6,250-ton submersible support ship Tansuo 1 (meaning: to explore) on her maiden voyage deployed to the Challenger Deep from her home port of Sanya, Hainan Island. On 12 July 2016, the ROV Haidou-1 dived to a depth of 10,767 metres (35,325 ft) in the Challenger Deep area. 
They also cast a free-drop lander and 9,000-metre (29,528 ft) rated free-drop ocean-floor seismic instruments (deployed to 7,731 metres (25,364 ft)), obtained sediment core samples, and collected over 2,000 biological samples from depths ranging from 5,000 to 10,000 metres (16,404–32,808 ft). Tansuo 1 operated along the 142°30.00' longitude line, about 30 nmi east of the earlier DY37II cruise survey (see Xiangyanghong 09 above).", "title": "Surveys and bathymetry" }, { "paragraph_id": 42, "text": "In November 2016, sonar mapping of the Challenger Deep area was conducted by the Royal Netherlands Institute for Sea Research (NIOZ)/GEOMAR Helmholtz Centre for Ocean Research Kiel aboard the 8,554-ton Deep Ocean Research Vessel Sonne. The results were reported in 2017. Using a Kongsberg Maritime EM 122 multi-beam echosounder system coupled to positioning equipment that can determine latitude and longitude, the team determined that the Challenger Deep has a maximum depth of 10,925 m (35,843 ft) at 11°19.945′N 142°12.123′E / 11.332417°N 142.202050°E / 11.332417; 142.202050 (11°19′57″N 142°12′07″E / 11.332417°N 142.20205°E / 11.332417; 142.20205), with an estimated vertical uncertainty of ±12 m (39 ft) at one standard deviation (≈ 68.3%) confidence level. The analysis of the sonar survey offered a 100 by 100 metres (328 ft × 328 ft) grid resolution at bottom depth, so small dips in the bottom that are less than that size would be difficult to detect from the 0.5 by 1 degree sonar-emissions at the sea surface. Each 0.5-degree beam width sonar ping expands to cover a circular area about 96 metres (315 ft) in diameter at 11,000 metres (36,089 ft) depth. The horizontal position of the grid point has an uncertainty of ±50 to 100 m (164 to 328 ft), depending on along-track or across-track direction. These depth (59 m (194 ft) shallower) and position (about 410 m (1,345 ft) to the northeast) measurements differ significantly from the deepest point determined by the Gardner et al. (2014) study. The observed depth discrepancy with the 2010 sonar mapping and the Gardner et al. (2014) study is related to the application of differing sound velocity profiles, which are essential for accurate depth determination. Sonne performed CTD casts to near the bottom of the Challenger Deep, about 1.6 km west of the deepest sounding, which were used for sound velocity profile calibration and optimization. Likewise, the impact of using different projections, datums and ellipsoids during data acquisition can cause positional discrepancies between surveys.", "title": "Surveys and bathymetry" }, { "paragraph_id": 43, "text": "In December 2016, the CAS 3,300-ton research vessel Shiyan 3 deployed 33 broadband seismometers onto both the backarc northwest of the Challenger Deep, and onto the near southern Pacific Plate to the southeast, at depths of up to 8,137 m (26,696 ft). This cruise was part of a $12 million Chinese-U.S. initiative, co-led by Jian Lin of the Woods Hole Oceanographic Institution; a 5-year effort (2017–2021) to image in fine detail the rock layers in and around the Challenger Deep.", "title": "Surveys and bathymetry" }, { "paragraph_id": 44, "text": "The newly launched 4,800-ton research vessel (and mothership for the Rainbow Fish series of deep submersibles), the Zhang Jian, departed Shanghai on 3 December 2016. Their cruise was to test three new deep-sea landers, one uncrewed search submersible and the new Rainbow Fish 11,000-meter crewed deep submersible, all capable of diving to 10,000 meters. 
From 25 to 27 December 2016, three deep-sea landing devices descended into the trench. The first Rainbow Fish lander took photographs, the second took sediment samples, and the third took biological samples. All three landers reached over 10,000 meters, and the third device brought back 103 amphipods. Cui Weicheng, director of Hadal Life Science Research Center at Shanghai Ocean University, led the team of scientists to carry out research at the Challenger Deep in the Mariana Trench. The ship is part of China's national marine research fleet but is owned by a Shanghai marine technology company.", "title": "Surveys and bathymetry" }, { "paragraph_id": 45, "text": "CAS' Institute of Deep-sea Science and Engineering sponsored Tansuo-1's return to the Challenger Deep 20 January – 5 February 2017 (cruise TS03). They placed four or more CTD casts into the central and eastern basins of the Challenger Deep, as part of the World Ocean Circulation Experiment (WOCE).", "title": "Surveys and bathymetry" }, { "paragraph_id": 46, "text": "Tokyo University of Marine Science and Technology dispatched the research vessel Shinyo Maru to the Mariana Trench from 20 January to 5 February 2017 with baited traps for the capture of fish and other macrobiology near the Challenger and Sirena Deeps. On 29 January they recovered photography and samples of a new species of snailfish from the Northern slope of the Challenger Deep at 7,581 metres (24,872 ft), which has been newly designated Pseudoliparis swirei.", "title": "Surveys and bathymetry" }, { "paragraph_id": 47, "text": "Water samples were collected at Challenger Deep from 11 layers of the Mariana Trench in March 2017. Seawater samples from 4 to 4,000 m were collected by Niskin bottles mounted on a Seabird SBE25 CTD, whereas water samples at depths from 6,050 m to 8,320 m were collected by self-designed acoustic-controlled full-ocean-depth water samplers. In this study, scientists examined the RNA of pico- and nano-plankton from the surface to the hadal zone.", "title": "Surveys and bathymetry" }, { "paragraph_id": 48, "text": "JAMSTEC deployed Kairei to the Challenger Deep in May 2017 for the express purpose of testing the new full-ocean depth ROV UROV11K (Underwater ROV 11,000-meter-capable), as cruise KR 17-08C, under chief scientist Takashi Murashima. The cruise title was: \"Sea trial of a full depth ROV UROV11K system in the Mariana Trench\". UROV11K carried a new 4K High Definition video camera system, and new sensors to monitor the hydrogen-sulfide, methane, oxygen, and hydrogen content of the water. Unfortunately, on UROV11K's ascent from 10,899 metres (35,758 ft) (at about 11°22.30′N 142°35.8′E, in the eastern basin) on 14 May 2017, the ROV's buoyancy failed at 5,320 metres (17,454 ft) depth, and all efforts to retrieve the ROV were unsuccessful. The rate of descent and drift is not available, but the ROV bottomed to the east of the deepest waters of the eastern basin as revealed by the ship's maneuvering on 14 May. 
Murashima then directed the Kairei to a location about 35 nmi east of the eastern basin of the Challenger Deep to test a new \"Compact Hadal Lander\" which made three descents to depths from 7,498 to 8,178 m for testing the Sony 4K camera and for photography of fish and other macro-biologics.", "title": "Surveys and bathymetry" }, { "paragraph_id": 49, "text": "On its maiden voyage, the 2,150-ton twin-hulled scientific research vessel Shen Kuo (also Shengkuo, Shen Ko, or Shen Quo) departed Shanghai on 25 November 2018 and returned on 8 January 2019. They operated in the Mariana Trench area, and on 13 December tested a system of underwater navigation at a depth exceeding 10,000 metres, during a field trial of the Tsaihungyuy (ultra-short baseline) system. Project leader Cui Weicheng stated that, with the Tsaihungyuy equipment at depth, it was possible to obtain a signal and determine exact geolocations. The research team from Shanghai Ocean University and Westlake University was led by Cui Weicheng, director of Shanghai Ocean University's Hadal Science and Technology Research Center (HSRC). The equipment to be tested included a piloted submersible (not full ocean depth – depth achieved not available) and two deep-sea landers, all capable of diving to depths of 10,000 meters, as well as an ROV that can go to 4,500 meters. They took photographs and obtained samples from the trench, including water, sediment, macro-organisms and micro-organisms. Cui says, \"If we can take photos of fish more than 8,145 meters under water, ... we will break the current world record. We will test our new equipment including the landing devices. They are second generation. The first generation could only take samples in one spot per dive, but this new second generation can take samples at different depths in one dive. We also tested the ultra short baseline acoustic positioning system on the manned submersible, the future of underwater navigation.\"", "title": "Surveys and bathymetry" }, { "paragraph_id": 50, "text": "In November 2019, as cruise SR1916, a NIOZ team led by chief scientist Hans van Haren, with Scripps technicians, deployed to the Challenger Deep aboard the 2,641-ton research vessel Sally Ride, to recover a mooring line from the western basin of the Challenger Deep. The 7 km (4.3 mi) long mooring line in the Challenger Deep consisted of top flotation positioned at around 4 km (2.5 mi) depth, two sections of Dyneema neutrally buoyant 6 mm (0.2 in) line, two Benthos acoustic releases and two sections of self-contained instrumentation to measure and store current, salinity and temperature. Around the 6 km (3.7 mi) depth position, two current meters were mounted below a 200 m (656 ft) long array of 100 high-resolution temperature sensors. In the lower position, starting 600 m (1,969 ft) above the sea floor, 295 specially designed high-resolution temperature sensors were mounted, the lowest of which was 8 m (26 ft) above the trench floor. The mooring line was deployed and left by the NIOZ team during the November 2016 RV Sonne expedition with the intention to be recovered in late 2018 by Sonne. The acoustically commanded release mechanism near the bottom of the Challenger Deep failed at the 2018 attempt. RV Sally Ride was made available exclusively for a final attempt to retrieve the mooring line before the release mechanism batteries expired. Sally Ride arrived at the Challenger Deep on 2 November. 
This time a 'deep release unit', lowered by one of Sally Ride's winch-cables to around 1,000 m depth, pinged release commands and managed to contact the near-bottom releases. After nearly three years submerged, 15 of the 395 temperature sensors had developed mechanical problems. The first results indicate the occurrence of internal waves in the Challenger Deep.", "title": "Surveys and bathymetry" }, { "paragraph_id": 51, "text": "Since May 2000, when the deliberate degradation of civilian satellite navigation signals was discontinued, civilian surface vessels equipped with professional dual-frequency satellite navigation equipment have been able to measure and establish their geodetic position with an accuracy on the order of meters to tens of meters, while the western, central and eastern basins are kilometers apart.", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 52, "text": "In 2014, a study was conducted regarding the determination of the depth and location of the Challenger Deep based on data collected prior to and during the 2010 sonar mapping of the Mariana Trench with a Kongsberg Maritime EM 122 multibeam echosounder system aboard USNS Sumner. This study by James V. Gardner et al. of the Center for Coastal & Ocean Mapping-Joint Hydrographic Center (CCOM/JHC), Chase Ocean Engineering Laboratory of the University of New Hampshire splits the measurement attempt history into three main groups: early single-beam echo sounders (1950s–1970s), early multibeam echo sounders (1980s – 21st century), and modern (i.e., post-GPS, high-resolution) multibeam echo sounders. Taking uncertainties in depth measurements and position estimation into account, the raw data of the 2010 bathymetry of the Challenger Deep vicinity, consisting of 2,051,371 soundings from eight survey lines, was analyzed. The study concludes that, even with the best 2010 multibeam echosounder technologies, a depth uncertainty of ±25 m (82 ft) (95% confidence level, on 9 degrees of freedom) and a positional uncertainty of ±20 to 25 m (66 to 82 ft) (2drms) remain after the analysis, and that the location of the deepest depth recorded in the 2010 mapping is 10,984 m (36,037 ft) at 11°19′48″N 142°11′57″E / 11.329903°N 142.199305°E / 11.329903; 142.199305. The depth measurement uncertainty is a composite of measured uncertainties in the spatial variations in sound-speed through the water volume, the ray-tracing and bottom-detection algorithms of the multibeam system, the accuracies and calibration of the motion sensor and navigation systems, estimates of spherical spreading, attenuation throughout the water volume, and so forth.", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 53, "text": "Both the RV Sonne expedition in 2016, and the RV Sally Ride expedition in 2019 expressed strong reservations concerning the depth corrections applied by the Gardner et al. study of 2014, and serious doubt concerning the accuracy of the deepest depth calculated by Gardner (in the western basin), of 10,984 m (36,037 ft) after analysis of their multibeam data on a 100 m (328 ft) grid. Dr. Hans van Haren, chief scientist on the RV Sally Ride cruise SR1916, indicated that Gardner's calculations were 69 m (226 ft) too deep due to the \"sound velocity profiling by Gardner et al. 
(2014).\"", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 54, "text": "In 2018–2019, the deepest points of each ocean were mapped using a full-ocean-depth Kongsberg EM 124 multibeam echosounder aboard DSSV Pressure Drop. In 2021, a data paper was published by Cassandra Bongiovanni, Heather A. Stewart and Alan J. Jamieson regarding the gathered data donated to GEBCO. The deepest depth recorded in the 2019 Challenger Deep sonar mapping was 10,924 m (35,840 ft) ±15 m (49 ft) at 11°22′08″N 142°35′13″E / 11.369°N 142.587°E / 11.369; 142.587 in the eastern basin. This depth closely agrees with the deepest point (10,925 m (35,843 ft) ±12 m (39 ft)) determined by the Van Haren et al. sonar bathymetry. The geodetic position of the deepest depth according to Van Haren et al., however, differs significantly from that of the 2021 paper (it lies about 42 km (26 mi) to the west). After post-processing the initial depth estimates by applying a full-ocean-depth sound velocity profile, Bongiovanni et al. report an (almost) equally deep point at 11°19′52″N 142°12′18″E / 11.331°N 142.205°E / 11.331; 142.205 in the western basin, which differs geodetically by about 350 m (1,150 ft) from the deepest point position determined by Van Haren et al. (11°19′57″N 142°12′07″E / 11.332417°N 142.20205°E / 11.332417; 142.20205 in the western basin). After analysis of their multibeam data on a 75 m (246 ft) grid, the Bongiovanni et al. 2021 paper states that low-frequency ship-mounted sonars do not currently offer the technological accuracy required to determine which location is truly the deepest, nor do deep-sea pressure sensors.", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 55, "text": "In 2021, a study by Samuel F. Greenaway, Kathryn D. Sullivan, Samuel H. Umfress, Alice B. Beittel and Karl D. Wagner was published presenting a revised estimate of the maximum depth of the Challenger Deep based on a series of submersible dives conducted in June 2020. These depth estimates are derived from acoustic echo sounding profiles referenced to in-situ direct pressure measurements and corrected for observed oceanographic properties of the water-column, atmospheric pressure, gravity and gravity-gradient anomalies, and water-level effects. The study concludes that, according to their calculations, the deepest observed seafloor depth was 10,935 m (35,876 ft) ±6 m (20 ft) below mean sea level at a 95% confidence level at 11°22.3′N 142°35.3′E / 11.3717°N 142.5883°E / 11.3717; 142.5883 in the eastern basin. For this estimate, the error term is dominated by the uncertainty of the employed pressure sensor, but Greenaway et al. show that the gravity correction is also substantial. The Greenaway et al. study compares its results with other recent acoustic and pressure-based measurements for the Challenger Deep and concludes that the deepest depth in the western basin is very nearly as deep as that in the eastern basin. 
The disagreements between the post-2000 published maximum depth estimates and their geodetic positions, however, exceed the accompanying margins of uncertainty, raising questions regarding the measurements or the reported uncertainties.", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 56, "text": "Another 2021 paper by Scott Loranger, David Barclay and Michael Buckingham, besides presenting a December 2014 implosion-shock-wave-based depth estimate of 10,983 m (36,033 ft), which is among the deepest estimated depths, also discusses the differences between the various maximum depth estimates and their geodetic positions.", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 57, "text": "The 2010 maximal sonar mapping depths reported by Gardner et al. in 2014 and by the Greenaway et al. study in 2021 have not been confirmed by direct descent (pressure gauge/manometer) measurements at full-ocean depth. Expeditions have reported directly measured maximal depths in a narrow range. For the western basin, deepest depths were reported as 10,913 m (35,804 ft) by Trieste in 1960 and 10,923 m (35,837 ft) ±4 m (13 ft) by DSV Limiting Factor in June 2020. For the central basin, the greatest reported depth is 10,915 m (35,810 ft) ±4 m (13 ft) by DSV Limiting Factor in June 2020. For the eastern basin, deepest depths were reported as 10,911 m (35,797 ft) by ROV Kaikō in 1995, 10,902 m (35,768 ft) by ROV Nereus in 2009, 10,908 m (35,787 ft) by Deepsea Challenger in 2012, 10,929 m (35,856 ft) by benthic lander \"Leggo\" in May 2019, and 10,925 m (35,843 ft) ±4 m (13 ft) by DSV Limiting Factor in May 2019.", "title": "Study of the depth and location of the Challenger Deep" }, { "paragraph_id": 58, "text": "On 23 January 1960, the Swiss-designed Trieste, originally built in Italy and acquired by the U.S. Navy, supported by the USS Wandank (ATF 204) and escorted by the USS Lewis (DE 535), descended to the ocean floor in the trench piloted by Jacques Piccard (who co-designed the submersible along with his father, Auguste Piccard) and USN Lieutenant Don Walsh. Their crew compartment was inside a spherical pressure vessel – measuring 2.16 metres in diameter and suspended beneath a buoyancy tank 18.4 metres in length – which was a heavy-duty replacement (of the Italian original) built by Krupp Steel Works of Essen, Germany. The steel walls were 12.7 cm (5.0 in) thick and designed to withstand pressure of up to 1,250 kilograms per square centimetre (17,800 psi; 1,210 atm; 123 MPa). Their descent took almost five hours and the two men spent barely twenty minutes on the ocean floor before undertaking the three-hour-and-fifteen-minute ascent. Their early departure from the ocean floor was due to their concern over a crack in the outer window caused by the temperature differences during their descent.", "title": "Descents" }, { "paragraph_id": 59, "text": "Trieste dived at/near 11°18.5′N 142°15.5′E / 11.3083°N 142.2583°E / 11.3083; 142.2583, bottoming at 10,911 metres (35,797 ft) ±7 m (23 ft) into the Challenger Deep's western basin, as measured by an onboard manometer. Another source states the depth at the bottom was measured with a manometer at 10,913 m (35,804 ft) ±5 m (16 ft). Navigation of the support ships was by celestial navigation and LORAN-C, with an accuracy of 460 metres (1,510 ft) or less. 
Fisher noted that the Trieste's reported depth \"agrees well with the sonic sounding.\"", "title": "Descents" }, { "paragraph_id": 60, "text": "On 26 March 2012 (local time), Canadian film director James Cameron made a solo descent in the DSV Deepsea Challenger to the bottom of the Challenger Deep. At approximately 05:15 ChST on 26 March (19:15 UTC on 25 March), the descent began. At 07:52 ChST (21:52 UTC), Deepsea Challenger arrived at the bottom. The descent lasted 2 hours and 36 minutes and the recorded depth was 10,908 metres (35,787 ft) when Deepsea Challenger touched down. Cameron had planned to spend about six hours near the ocean floor exploring but decided to start the ascent to the surface after only 2 hours and 34 minutes. The time on the bottom was shortened because a hydraulic fluid leak in the lines controlling the manipulator arm obscured the visibility out the only viewing port. It also caused the loss of the submersible's starboard thrusters. At around 12:00 ChST (02:00 UTC on 26 March), the Deepsea Challenger website says the sub resurfaced after a 90-minute ascent, although Paul Allen's tweets indicate the ascent took only about 67 minutes. During a post-dive press conference Cameron said: \"I landed on a very soft, almost gelatinous flat plain. Once I got my bearings, I drove across it for quite a distance ... and finally worked my way up the slope.\" The whole time, Cameron said, he didn't see any fish, or any living creatures more than an inch (2.54 cm) long: \"The only free swimmers I saw were small amphipods\" – shrimplike bottom-feeders.", "title": "Descents" }, { "paragraph_id": 61, "text": "The Five Deeps Expedition's objective was to thoroughly map and visit the deepest points of all five of the world's oceans by the end of September 2019. On 28 April 2019, explorer Victor Vescovo descended to the \"Eastern Pool\" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor (a Triton 36000/2 model submersible). Between 28 April and 4 May 2019, the Limiting Factor completed four dives to the bottom of Challenger Deep. The fourth dive descended to the slightly less deep \"Central Pool\" of the Challenger Deep (crew: Patrick Lahey, Pilot; John Ramsay, Sub Designer). The Five Deeps Expedition estimated maximum depths of 10,927 m (35,850 ft) ±8 m (26 ft) and 10,928 m (35,853 ft) ±10.5 m (34 ft) at (11°22′09″N 142°35′20″E / 11.3693°N 142.5889°E / 11.3693; 142.5889) by direct CTD pressure measurements and a survey of the operating area by the support ship, the Deep Submersible Support Vessel DSSV Pressure Drop, with a Kongsberg SIMRAD EM124 multibeam echosounder system. The CTD-measured pressure at 10,928 m (35,853 ft) of seawater depth was 1,126.79 bar (112.679 MPa; 16,342.7 psi). Due to a technical problem the (uncrewed) ultra-deep-sea lander Skaff used by the Five Deeps Expedition stayed on the bottom for two and a half days before it was salvaged by the Limiting Factor (crew: Patrick Lahey, Pilot; Jonathan Struwe, DNV GL Specialist) from an estimated depth of 10,927 m (35,850 ft). The gathered data was published with the caveat that it was subject to further analysis and could possibly be revised in the future. The data will be donated to the GEBCO Seabed 2030 initiative. 
Later in 2019, following a review of bathymetric data, and multiple sensor recordings taken by the DSV Limiting Factor and the ultra-deep-sea landers Closp, Flere and Skaff, the Five Deeps Expedition revised the maximum depth to 10,925 m (35,843 ft) ±4 m (13 ft).", "title": "Descents" }, { "paragraph_id": 62, "text": "Caladan Oceanic's \"Ring of Fire\" expedition in the Pacific included six crewed descents, all piloted by Victor Vescovo, and twenty-five lander deployments into all three basins of the Challenger Deep, as well as a further topographical and marine-life survey of the entire Challenger Deep. The expedition craft used were the Deep Submersible Support Vessel DSSV Pressure Drop, Deep-Submergence Vehicle DSV Limiting Factor and the ultra-deep-sea landers Closp, Flere and Skaff. During the first crewed dive on 7 June 2020, Victor Vescovo and former US astronaut (and former NOAA Administrator) Kathryn D. Sullivan descended to the \"Eastern Pool\" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor.", "title": "Descents" }, { "paragraph_id": 63, "text": "On 12 June 2020, Victor Vescovo and mountaineer and explorer Vanessa O'Brien descended to the \"Eastern Pool\" of the Challenger Deep spending three hours mapping the bottom. O'Brien said her dive scanned about a mile of desolate bottom terrain, finding that the surface is not flat, as once was thought, but sloping by about 18 ft (5.5 m) per mile, subject to verification. On 14 June 2020, Victor Vescovo and John Rost descended to the \"Eastern Pool\" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor spending four hours at depth and transiting the bottom for nearly 2 miles. On 20 June 2020, Victor Vescovo and Kelly Walsh descended to the \"Western Pool\" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor spending four hours at the bottom. They reached a maximum depth of 10,923 m (35,837 ft). Kelly Walsh is the son of the Trieste's captain Don Walsh who descended there in 1960 with Jacques Piccard. On 21 June 2020, Victor Vescovo and Woods Hole Oceanographic Institution researcher Ying-Tsong Lin descended to the \"Central Pool\" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor. They reached a maximum depth of 10,915 m (35,810 ft) ±4 m (13 ft). On 26 June 2020, Victor Vescovo and Jim Wigginton descended to the \"Eastern Pool\" of the Challenger Deep in the Deep-Submergence Vehicle Limiting Factor.", "title": "Descents" }, { "paragraph_id": 64, "text": "Fendouzhe (奋斗者, Striver) is a crewed Chinese deep-sea submersible developed by the China Ship Scientific Research Center (CSSRC). Between 10 October and 28 November 2020, it carried out thirteen dives in the Mariana Trench as part of a test programme. Of these, eight led to depths of more than 10,000 m (32,808 ft). On 10 November 2020, the bottom of the Challenger Deep was reached by Fendouzhe with three Chinese scientists (Zhāng Wěi 张伟 [pilot], Zhào Yáng 赵洋, and Wáng Zhìqiáng 王治强) onboard whilst live-streaming the descent to a reported depth of 10,909 m (35,791 ft). This makes Fendouzhe the fourth crewed submersible vehicle to achieve a successful descent. The pressure hull of Fendouzhe, made from a newly developed titanium alloy, offers space for three people in addition to technical equipment. Fendouzhe is equipped with cameras made by the Norwegian manufacturer Imenco. 
According to Ye Cong 叶聪, the chief designer of the submersible, China's goals for the dive are not just scientific investigation but also the future use of deep-sea seabed resources.", "title": "Descents" }, { "paragraph_id": 65, "text": "On 28 February 2021, Caladan Oceanic's \"Ring of Fire 2\" expedition arrived over the Challenger Deep and conducted crewed descents and lander deployments into the Challenger Deep. At the start, the (uncrewed) ultra-deep-sea lander Skaff was deployed to collect water column data by CTD for the expedition. The effects of the subducting Pacific Plate colliding with the Philippine Sea Plate were among the subjects researched onsite. On 1 March 2021, the first crewed descent to the eastern pool was made by Victor Vescovo and Richard Garriott. Garriott became the 17th person to descend to the bottom. On 2 March 2021, a descent to the eastern pool was made by Victor Vescovo and Michael Dubno. On 5 March, a descent to the eastern pool was made by Victor Vescovo and Hamish Harding. They traversed the bottom of Challenger Deep. On 11 March 2021, a descent to the Western Pool was made by Victor Vescovo and marine botanist Nicole Yamase. On 13 April 2021, a descent was made by deep water submersible operations expert Rob McCallum and Tim Macdonald, who piloted the dive. A 2021 descent with a Japanese citizen is planned. All crewed descents were conducted in the Deep-Submergence Vehicle DSV Limiting Factor.", "title": "Descents" }, { "paragraph_id": 66, "text": "In July 2022, for the fourth consecutive year, Caladan Oceanic's deep submergence system, consisting of the deep submersible DSV Limiting Factor supported by the mother ship DSSV Pressure Drop, returned for further dives into the Challenger Deep. In early July 2022, Victor Vescovo was joined by Aaron Newman as a mission specialist for a dive into the Central pool. On 5 July 2022, Tim Macdonald, as pilot, and Jim Kitchen, as mission specialist, made a dive into the Eastern pool. On 8 July 2022, Victor Vescovo was joined by Dylan Taylor as mission specialist for a dive into the Eastern pool. Victor Vescovo (for his 15th dive into the Challenger Deep) was joined by geographer and oceanographer Dawn Wright as mission specialist on the 12 July 2022 dive to 10,919 m (35,823 ft) in the Western Pool. Wright operated the first sidescan sonar ever to function at full-ocean depth, capturing detailed imagery along short transects of the southern wall of the Western Pool.", "title": "Descents" }, { "paragraph_id": 67, "text": "The remotely operated vehicle (ROV) Kaikō made many uncrewed descents to the Mariana Trench from its support ship RV Yokosuka during two expeditions in 1996 and 1998. From 29 February to 4 March 1996, the ROV Kaiko made three dives into the central basin, Kaiko #21 – Kaiko #23. Depths ranged from 10,898 metres (35,755 ft) at 11°22.536′N 142°26.418′E / 11.375600°N 142.440300°E / 11.375600; 142.440300, to 10,896 metres (35,748 ft) at 11°22.59′N 142°25.848′E / 11.37650°N 142.430800°E / 11.37650; 142.430800; dives #22 & #23 to the north, and dive #21 northeast of the deepest waters of the central basin. During the 1996 measurements, the temperature (water temperature increases at great depth due to adiabatic compression), salinity and water pressure at the sampling station were 2.6 °C (36.7 °F), 34.7‰ and 1,113 bar (111.3 MPa; 16,140 psi), respectively, at 10,897 m (35,751 ft) depth. 
The Japanese robotic deep-sea probe Kaikō broke the depth record for uncrewed probes when it reached close to the surveyed bottom of the Challenger Deep. Created by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), it was one of the few uncrewed deep-sea probes in operation that could dive deeper than 6,000 metres (20,000 ft). The manometer-measured depth of 10,911.4 m (35,799 ft) ±3 m (10 ft) at 11°22.39′N 142°35.54′E / 11.37317°N 142.59233°E / 11.37317; 142.59233 for the Challenger Deep is believed to be the most accurate measurement taken up to that time. Another source states the greatest depth measured by Kaikō in 1996 was 10,898 m (35,755 ft) at 11°22.10′N 142°25.85′E / 11.36833°N 142.43083°E / 11.36833; 142.43083 and 10,907 m (35,784 ft) at 11°22.95′N 142°12.42′E / 11.38250°N 142.20700°E / 11.38250; 142.20700 in 1998. The ROV Kaiko was the first vehicle to visit the bottom of the Challenger Deep since the bathyscaphe Trieste's dive in 1960, and the first to successfully sample the trench-bottom sediment and mud, from which Kaiko obtained over 360 samples. Approximately 3,000 different microbes were identified in the samples. Kaikō was lost at sea off Shikoku Island during Typhoon Chan-Hom on 29 May 2003.", "title": "Descents" }, { "paragraph_id": 68, "text": "From 2 May to 5 June 2009, the RV Kilo Moana hosted the Woods Hole Oceanographic Institution (WHOI) hybrid remotely operated vehicle (HROV) Nereus team for the first operational test of the Nereus in its 3-ton tethered ROV mode. The Nereus team was headed by the Expedition Leader Andy Bowen of WHOI, Louis Whitcomb of Johns Hopkins University, and Dana Yoerger, also of WHOI. The expedition's co-chief scientists, biologist Tim Shank of WHOI and geologist Patricia Fryer of the University of Hawaii, headed the science team exploiting the ship's bathymetry and organizing the science experiments deployed by the Nereus. From Nereus dive #007ROV to 880 m (2,887 ft) just south of Guam to dive #010ROV into the Nero Deep at 9,050 m (29,692 ft), the testing gradually increased in depth and in the complexity of activities at the bottom.", "title": "Descents" }, { "paragraph_id": 69, "text": "Dive #011ROV, on 31 May 2009, saw the Nereus piloted on a 27.8-hour underwater mission, with about ten hours traversing the eastern basin of the Challenger Deep – from the south wall, northwest to the north wall – streaming live video and data back to its mothership. A maximum depth of 10,902 m (35,768 ft) was registered at 11°22.10′N 142°35.48′E / 11.36833°N 142.59133°E / 11.36833; 142.59133. The RV Kilo Moana then relocated to the western basin, where a 19.3-hour underwater dive found a maximum depth of 10,899 m (35,758 ft) on dive #012ROV, and on dive #014ROV in the same area (11°19.59 N, 142°12.99 E) encountered a maximum depth of 10,176 m (33,386 ft). The Nereus was successful in recovering both sediment and rock samples from the eastern and the western basins with its manipulator arm for further scientific analysis. The HROV's final dive was about 80 nmi (148.2 km) to the north of the Challenger Deep, in the backarc, where they dived 2,963 m (9,721 ft) at the TOTO Caldera (12°42.00 N, 143°31.5 E). Nereus thus became the first vehicle to reach the Mariana Trench since 1998 and the deepest-diving vehicle then in operation. Project manager and developer Andy Bowen heralded the achievement as \"the start of a new era in ocean exploration\". 
Nereus, unlike Kaikō, did not need to be powered or controlled by a cable connected to a ship on the ocean surface. The HROV Nereus was lost on 10 May 2014 while conducting a dive at 9,900 metres (32,500 ft) in depth in the Kermadec Trench.", "title": "Descents" }, { "paragraph_id": 70, "text": "In June 2008, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) deployed the research vessel Kairei to the area of Guam for cruise KR08-05 Leg 1 and Leg 2. On 1–3 June 2008, during Leg 1, the Japanese robotic deep-sea probe ABISMO (Automatic Bottom Inspection and Sampling Mobile) on dives 11–13 almost reached the bottom about 150 km (93 mi) east of the Challenger Deep: \"Unfortunately, we were unable to dive to the sea floor because the legacy primary cable of the Kaiko system was a little bit short. The 2-m long gravity core sampler was dropped in free fall, and sediment samples of 1.6m length were obtained. Twelve bottles of water samples were also obtained at various depths...\" ABISMO's dive #14 was into the TOTO caldera (12°42.7777 N, 143°32.4055 E), about 60 nmi northeast of the deepest waters of the central basin of the Challenger Deep, where they obtained videos of the hydrothermal plume. Upon successful testing to 10,000 m (32,808 ft), JAMSTEC's ROV ABISMO became, briefly, the only full-ocean-depth rated ROV in existence. On 31 May 2009, ABISMO was joined by the Woods Hole Oceanographic Institution's HROV Nereus, making them the only two operational full-ocean-depth-capable remotely operated vehicles in existence. During ROV ABISMO's deepest sea trials dive, its manometer measured a depth of 10,257 m (33,652 ft) ±3 m (10 ft) in \"Area 1\" (vicinity of 12°43' N, 143°33' E).", "title": "Descents" }, { "paragraph_id": 71, "text": "Leg 2, under chief scientist Takashi Murashima, operated at the Challenger Deep 8–9 June 2008, testing JAMSTEC's new full ocean depth \"Free Fall Mooring System,\" i.e. a lander. The lander was successfully tested twice to 10,895 m (35,745 ft) depth, taking video images and sediment samplings at 11°22.14′N 142°25.76′E / 11.36900°N 142.42933°E / 11.36900; 142.42933, in the central basin of the Challenger Deep.", "title": "Descents" }, { "paragraph_id": 72, "text": "On 23 May 2016, the Chinese submersible Haidou-1 dived to a depth of 10,767 m (35,325 ft) at an undisclosed position in the Mariana Trench, making China the third country, after Japan (ROV Kaikō) and the US (HROV Nereus), to deploy a full-ocean-depth ROV. This autonomous and remotely operated vehicle has a design depth of 11,000 m (36,089 ft).", "title": "Descents" }, { "paragraph_id": 73, "text": "On 8 May 2020, the Russian submersible Vityaz-D dived to a depth of 10,028 m (32,900 ft) at an undisclosed position in the Mariana Trench.", "title": "Descents" }, { "paragraph_id": 74, "text": "The summary report of the HMS Challenger expedition lists radiolaria from the two dredged samples taken when the Challenger Deep was first discovered. These (Nassellaria and Spumellaria) were reported in the Report on Radiolaria (1887) written by Ernst Haeckel.", "title": "Lifeforms" }, { "paragraph_id": 75, "text": "On their 1960 descent, the crew of the Trieste noted that the floor consisted of diatomaceous ooze and reported observing \"some type of flatfish\" lying on the seabed.", "title": "Lifeforms" }, { "paragraph_id": 76, "text": "And as we were settling this final fathom, I saw a wonderful thing. 
Lying on the bottom just beneath us was some type of flatfish, resembling a sole, about 1 foot [30 cm] long and 6 inches [15 cm] across. Even as I saw him, his two round eyes on top of his head spied us – a monster of steel – invading his silent realm. Eyes? Why should he have eyes? Merely to see phosphorescence? The floodlight that bathed him was the first real light ever to enter this hadal realm. Here, in an instant, was the answer that biologists had asked for decades. Could life exist in the greatest depths of the ocean? It could! And not only that, here apparently, was a true, bony teleost fish, not a primitive ray or elasmobranch. Yes, a highly evolved vertebrate, in time's arrow very close to man himself. Slowly, extremely slowly, this flatfish swam away. Moving along the bottom, partly in the ooze and partly in the water, he disappeared into his night. Slowly too – perhaps everything is slow at the bottom of the sea – Walsh and I shook hands.", "title": "Lifeforms" }, { "paragraph_id": 77, "text": "Many marine biologists are now skeptical of this supposed sighting, and it is suggested that the creature may instead have been a sea cucumber. The video camera on board the Kaiko probe spotted a sea cucumber, a scale worm and a shrimp at the bottom. At the bottom of the Challenger Deep, the Nereus probe spotted one polychaete worm (a multi-legged predator) about an inch long.", "title": "Lifeforms" }, { "paragraph_id": 78, "text": "An analysis of the sediment samples collected by Kaiko found large numbers of simple organisms at 10,900 m (35,800 ft). While similar lifeforms have been known to exist in shallower ocean trenches (> 7,000 m) and on the abyssal plain, the lifeforms discovered in the Challenger Deep possibly represent taxa distinct from those in shallower ecosystems.", "title": "Lifeforms" }, { "paragraph_id": 79, "text": "Most of the organisms collected were simple, soft-shelled foraminifera (432 species according to National Geographic), with four of the others representing species of the complex, multi-chambered genera Leptohalysis and Reophax. Eighty-five per cent of the specimens were organic, soft-shelled allogromiids, which is unusual compared to samples of sediment-dwelling organisms from other deep-sea environments, where the percentage of organic-walled foraminifera ranges from 5% to 20%. As small organisms with hard, calcareous shells have trouble growing at extreme depths because of the high solubility of calcium carbonate in the pressurized water, scientists theorize that the preponderance of soft-shelled organisms in the Challenger Deep may have resulted from the typical biosphere present when the Challenger Deep was shallower than it is now. Over the course of six to nine million years, as the Challenger Deep grew to its present depth, many of the species present in the sediment died out or were unable to adapt to the increasing water pressure and changing environment.", "title": "Lifeforms" }, { "paragraph_id": 80, "text": "On 17 March 2013, researchers reported data that suggested piezophilic microorganisms thrive in the Challenger Deep. Other researchers reported related studies that microbes thrive inside rocks up to 579 m (1,900 ft) below the sea floor under 2,591 m (8,500 ft) of ocean off the coast of the northwestern United States. 
According to one of the researchers, \"You can find microbes everywhere – they're extremely adaptable to conditions, and survive wherever they are.\"", "title": "Lifeforms" }, { "paragraph_id": 81, "text": "11°22.4′N 142°35.5′E / 11.3733°N 142.5917°E / 11.3733; 142.5917", "title": "External links" } ]
The Challenger Deep is the deepest known point of the seabed of Earth, located in the western Pacific Ocean at the southern end of the Mariana Trench, in the ocean territory of the Federated States of Micronesia. According to the GEBCO Gazetteer of Undersea Feature Names, the depression's depth is 10,920 ± 10 m (35,827 ± 33 ft) at 11°22.4′N 142°35.5′E, although its exact geodetic location remains inconclusive and its depth has been measured at 10,902–10,929 m (35,768–35,856 ft) by deep-diving submersibles, remotely operated underwater vehicles and benthic landers, and (sometimes) slightly more by sonar bathymetry. The differences in depth estimates and their geodetic positions are scientifically explainable by the difficulty of researching such deep locations. The depression is named after the British Royal Navy survey ships HMS Challenger, whose expedition of 1872–1876 first located it, and HMS Challenger II, whose expedition of 1950–1952 established its record-setting depth. The first descent by any vehicle was by the bathyscaphe Trieste in January 1960. In March 2012, a solo descent was made by film director James Cameron in the deep-submergence vehicle Deepsea Challenger. As of July 2022, 27 people have descended to Challenger Deep.
2002-01-16T16:11:29Z
2023-12-17T19:47:33Z
[ "Template:Citation needed", "Template:Cite book", "Template:Cbignore", "Template:Cite press release", "Template:Short description", "Template:Use dmy dates", "Template:Convert", "Template:Sup", "Template:Blockquote", "Template:Webarchive", "Template:Cite AV media", "Template:Coord", "Template:Duplication", "Template:HMS", "Template:Ship", "Template:Main", "Template:'s", "Template:Failed verification", "Template:Snd", "Template:Multiple image", "Template:Reflist", "Template:Cite web", "Template:Cite journal", "Template:Full citation needed", "Template:Cite report", "Template:Spnd", "Template:Cite tweet", "Template:Cvt", "Template:Portal", "Template:Cite news", "Template:Commons" ]
https://en.wikipedia.org/wiki/Challenger_Deep
7,787
Claude Louis Berthollet
Claude Louis Berthollet (French pronunciation: [klod lwi bɛʁtɔlɛ], 9 December 1748 – 6 November 1822) was a Savoyard-French chemist who became vice president of the French Senate in 1804. He is known for his scientific contributions to the theory of chemical equilibria via the mechanism of reverse chemical reactions, and for his contribution to modern chemical nomenclature. On a practical basis, Berthollet was the first to demonstrate the bleaching action of chlorine gas, and was first to develop a solution of sodium hypochlorite as a modern bleaching agent. Claude Louis Berthollet was born in Talloires, near Annecy, then part of the Duchy of Savoy, in 1748. He started his studies at Chambéry and then in Turin, where he graduated in medicine. Berthollet's significant new contributions to chemistry quickly made him an active participant in the Academy of Sciences, to which he was admitted in 1780. Berthollet, along with Antoine Lavoisier and others, devised a chemical nomenclature, or a system of names, which serves as the basis of the modern system of naming chemical compounds. He also carried out research into dyes and bleaches, being first to introduce the use of chlorine gas as a commercial bleach in 1785. He first produced a modern bleaching liquid in 1789 in his laboratory on the Quai de Javel in Paris, France, by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of sodium hypochlorite. Another strong chlorine oxidant and bleach which he investigated and was the first to produce, potassium chlorate (KClO3), is known as Berthollet's Salt. Berthollet first determined the elemental composition of the gas ammonia in 1785. Berthollet was one of the first chemists to recognize the characteristics of a reverse reaction, and hence, chemical equilibrium. Berthollet was engaged in a long-term battle with another French chemist, Joseph Proust, on the validity of the law of definite proportions. While Proust believed that chemical compounds are composed of a fixed ratio of their constituent elements irrespective of the methods of production, Berthollet believed that this ratio can change according to the ratio of the reactants initially taken. Although Proust supported his theory with accurate measurements, it was not immediately accepted, partly because of Berthollet's authority. His law was finally accepted when Berzelius confirmed it in 1811, but it was later found that Berthollet was not completely wrong, because there exists a class of compounds that do not obey the law of definite proportions. These non-stoichiometric compounds are also named berthollides in his honor. Berthollet was one of several scientists who went with Napoleon to Egypt and was a member of the physics and natural history section of the Institut d'Égypte. In April 1789, Berthollet was elected a Fellow of the Royal Society of London. In 1801, he was elected a foreign member of the Royal Swedish Academy of Sciences. In 1809, Berthollet was elected an associate member first class of the Royal Institute of the Netherlands, predecessor of the Royal Netherlands Academy of Arts and Sciences. He was elected an Honorary Fellow of the Royal Society of Edinburgh in 1820 and a Foreign Honorary Member of the American Academy of Arts and Sciences in 1822. 
Claude-Louis Berthollet's 1788 publication entitled Méthode de Nomenclature Chimique, published with colleagues Antoine Lavoisier, Louis Bernard Guyton de Morveau, and Antoine François, comte de Fourcroy, was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society, presented at the Académie des Sciences (Paris) in 2015. A French high school located in Annecy is named after him (Lycée Claude Louis Berthollet). Berthollet married Marie Marguerite Baur in 1788. Their son, Amédée-Barthélémy Berthollet, died in 1811 of carbon monoxide poisoning in a charcoal-burning suicide, during which he recorded his physiological and psychological experiences as a final scientific contribution before losing consciousness and succumbing to the fumes. Berthollet was accused of being an atheist. He died in Arcueil, France in 1822.
[ { "paragraph_id": 0, "text": "Claude Louis Berthollet (French pronunciation: [klod lwi bɛʁtɔlɛ], 9 December 1748 – 6 November 1822) was a Savoyard-French chemist who became vice president of the French Senate in 1804. He is known for his scientific contributions to theory of chemical equilibria via the mechanism of reverse chemical reactions, and for his contribution to modern chemical nomenclature. On a practical basis, Berthollet was the first to demonstrate the bleaching action of chlorine gas, and was first to develop a solution of sodium hypochlorite as a modern bleaching agent.", "title": "" }, { "paragraph_id": 1, "text": "Claude Louis Berthollet was born in Talloires, near Annecy, then part of the Duchy of Savoy, in 1749.", "title": "Biography" }, { "paragraph_id": 2, "text": "He started his studies at Chambéry and then in Turin where he graduated in medicine. Berthollet's great new developments in works regarding chemistry made him, in a short period of time, an active participant of the Academy of Science in 1780.", "title": "Biography" }, { "paragraph_id": 3, "text": "Berthollet, along with Antoine Lavoisier and others, devised a chemical nomenclature, or a system of names, which serves as the basis of the modern system of naming chemical compounds.", "title": "Biography" }, { "paragraph_id": 4, "text": "He also carried out research into dyes and bleaches, being first to introduce the use of chlorine gas as a commercial bleach in 1785. He first produced a modern bleaching liquid in 1789 in his laboratory on the quay Javel in Paris, France, by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as \"Eau de Javel\" (\"Javel water\"), was a weak solution of sodium hypochlorite. Another strong chlorine oxidant and bleach which he investigated and was the first to produce, potassium chlorate (KClO3), is known as Berthollet's Salt.", "title": "Biography" }, { "paragraph_id": 5, "text": "Berthollet first determined the elemental composition of the gas ammonia, in 1785.", "title": "Biography" }, { "paragraph_id": 6, "text": "Berthollet was one of the first chemists to recognize the characteristics of a reverse reaction, and hence, chemical equilibrium.", "title": "Biography" }, { "paragraph_id": 7, "text": "Berthollet was engaged in a long-term battle with another French chemist, Joseph Proust, on the validity of the law of definite proportions. While Proust believed that chemical compounds are composed of a fixed ratio of their constituent elements irrespective of the methods of production, Berthollet believed that this ratio can change according to the ratio of the reactants initially taken. Although Proust proved his theory by accurate measurements, his theory was not immediately accepted partially due to Berthollet's authority. His law was finally accepted when Berzelius confirmed it in 1811, but it was found later that Berthollet was not completely wrong because there exists a class of compounds that do not obey the law of definite proportions. These non-stoichiometric compounds are also named berthollides in his honor.", "title": "Biography" }, { "paragraph_id": 8, "text": "Berthollet was one of several scientists who went with Napoleon to Egypt and was a member of the physics and natural history section of the Institut d'Égypte.", "title": "Biography" }, { "paragraph_id": 9, "text": "In April, 1789 Berthollet was elected a Fellow of the Royal Society of London. 
In 1801, he was elected a foreign member of the Royal Swedish Academy of Sciences. In 1809, Berthollet was elected an associate member first class of the Royal Institute of the Netherlands, predecessor of the Royal Netherlands Academy of Arts and Sciences. He was elected an Honorary Fellow of the Royal Society of Edinburgh in 1820 and a Foreign Honorary Member of the American Academy of Arts and Sciences in 1822.", "title": "Awards and honours" }, { "paragraph_id": 10, "text": "Claude-Louis Berthollet's 1788 publication entitled Méthode de Nomenclature Chimique, published with colleagues Antoine Lavoisier, Louis Bernard Guyton de Morveau, and Antoine François, comte de Fourcroy, was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society, presented at the Académie des Sciences (Paris) in 2015.", "title": "Awards and honours" }, { "paragraph_id": 11, "text": "A French high school located in Annecy is named after him (Lycée Claude Louis Berthollet).", "title": "Awards and honours" }, { "paragraph_id": 12, "text": "Berthollet married Marie Marguerite Baur in 1788. Their son, Amédée-Barthélémy Berthollet, died in 1811 of carbon monoxide poisoning in a charcoal-burning suicide, during which he recorded his physiological and psychological experiences as a final scientific contribution before losing consciousness and succumbing to the fumes.", "title": "Personal life" }, { "paragraph_id": 13, "text": "Berthollet was accused of being an atheist.", "title": "Personal life" }, { "paragraph_id": 14, "text": "He died in Arcueil, France in 1822.", "title": "Personal life" } ]
Claude Louis Berthollet was a Savoyard-French chemist who became vice president of the French Senate in 1804. He is known for his scientific contributions to the theory of chemical equilibria via the mechanism of reverse chemical reactions, and for his contribution to modern chemical nomenclature. On a practical basis, Berthollet was the first to demonstrate the bleaching action of chlorine gas, and was first to develop a solution of sodium hypochlorite as a modern bleaching agent.
2002-02-25T15:51:15Z
2024-01-01T00:16:04Z
[ "Template:IPA-fr", "Template:Cite journal", "Template:Doi", "Template:Authority control", "Template:Short description", "Template:More citations needed", "Template:Reflist", "Template:Infobox scientist", "Template:Cite book", "Template:Gutenberg author", "Template:Internet Archive author", "Template:Use dmy dates", "Template:Cite web", "Template:DSB" ]
https://en.wikipedia.org/wiki/Claude_Louis_Berthollet
7,791
Chilean Constitution of 1980
The Political Constitution of the Republic of Chile of 1980 (Spanish: Constitución Política de la República de Chile) is the fundamental law in force in Chile. It was approved and promulgated under the military dictatorship headed by Augusto Pinochet, and was ratified by the Chilean citizenry through a referendum on September 11, 1980, although the vote was held under restrictions and without electoral registers. While 69% of the population was reported to have voted yes, the vote was questioned by hundreds of denunciations of irregularities and fraud. The constitutional text took effect, in a transitory regime, on March 11, 1981, and then entered into full force on March 11, 1990, with the return to electoral democracy. It was amended for the first time in 1989 (through a referendum), and afterward in 1991, 1994, 1997, each year from 1999 to 2001, 2003, each year from 2007 to 2015, and each year from 2017 to 2021, with the last three amendments concerning the constituent process of 2020–2022. In September 2005, under Ricardo Lagos's presidency, a major amendment of the Constitution was approved by parliamentarians, removing from the text some of the less democratic provisions dating from Pinochet's regime, such as senators-for-life and appointed senators, as well as the armed forces' role as guarantors of the democratic regime. On November 15, 2019, following a series of popular protests in October 2019, a political agreement between parties with parliamentary representation called for a national referendum on the proposal of writing a new Constitution and on the mechanism to draft it. A plebiscite held on October 25, 2020, approved drafting a new fundamental charter, as well as choosing by popular vote delegates to a Constitutional Convention which was to fulfill this objective. The members of the convention were elected in May 2021, and first convened on July 4, 2021. However, on September 4, 2022, voters rejected the new constitution in the constitutional referendum. According to the law professor Camel Cazor Aliste, the Constitution of 1980 has problems of legitimacy stemming from two facts. First, the constitutional commission was not representative of the political spectrum of Chile: its members had been handpicked by the Pinochet dictatorship, and opponents of the regime had been deliberately excluded. Second, the constitution's approval was achieved by the government in a controversial and tightly controlled referendum in 1980. Campaigning for the referendum was irregular: the government called on people to vote in favor of the reform and made use of radio and television commercial spots, while the opposition, which urged people to vote against it, was only able to hold small public demonstrations, without access to television time and with only limited radio access. There was no electoral roll for this vote, as the register had been burned during the dictatorship. There were multiple cases of double voting, with at least 3,000 CNI agents doing so. Since the return to democracy, the constitution has been amended nearly 60 times. A document from September 13, 1973, shows that Jaime Guzmán had by then already been tasked by the Junta with studying the creation of a new constitution. It has been argued that the 1980 Constitution was designed to favor the election of right-wing legislative majorities. Several rounds of constitutional amendments have been enacted since 1989 to address this concern. 
In July 2022, a proposed replacement constitution was submitted for national debate and general referendum, but it was rejected on September 4 despite having had the support of left-leaning President Gabriel Boric. The document had faced intense criticism that it was "too long, too left-wing and too radical", and was rejected by a margin of 62% to 38%. On March 6, 2023, a group of experts appointed by Congress began a second attempt to prepare a preliminary draft of a new constitution. The group, with lawyer Veronica Undurraga serving as its president, was scheduled to work for three months on 12 institutional bases agreed to by lawmakers, after which the draft would be given to an elected Constitutional Council, whose members would be voted upon on May 7, 2023. At the same time, a 14-member Technical Admissibility Committee began serving as arbitrator. On December 17, 2023, Chileans voted 55.8% to 44.2% against the second proposed constitution. President Boric stated that he would not seek a third referendum; this outcome effectively guaranteed that the 1980 charter would remain in effect.
[ { "paragraph_id": 0, "text": "The Political Constitution of the Republic of Chile of 1980 (Spanish: Constitución Política de la República de Chile) is the fundamental law in force in Chile. It was approved and promulgated under the military dictatorship headed by Augusto Pinochet, being ratified by the Chilean citizenry through a referendum on September 11, 1980, although being held under restrictions and without electoral registers. While 69% of the population was reported to have voted yes, the vote was questioned by hundreds of denunciations of irregularities and fraud. The constitutional text took effect, in a transitory regime, on March 11, 1981, and then entered into full force on March 11, 1990, with the return to electoral democracy. It was amended for the first time in 1989 (through a referendum), and afterward in 1991, 1994, 1997, each year from 1999 to 2001, 2003, each year from 2007 to 2015, and each year from 2017 to 2021, with the last three amendments concerning the constituent process of 2020–2022. In September 2005, under Ricardo Lagos's presidency, a large amendment of the Constitution was approved by parliamentarians, removing from the text some of the less democratic dispositions coming from Pinochet's regime, such as senators-for-life and appointed senators, as well as the armed forces' warranty of the democratic regime.", "title": "" }, { "paragraph_id": 1, "text": "On November 15, 2019, following a series of popular protests in October 2019, a political agreement between parties with parliamentary representation called for a national referendum on the proposal of writing a new Constitution and on the mechanism to draft it. A plebiscite held on October 25, 2020, approved drafting a new fundamental charter, as well as choosing by popular vote delegates to a Constitutional Convention which was to fulfill this objective. The members of the convention were elected in May, 2021, and first convened on July 4, 2021. However, on 4 September 2022, voters rejected the new constitution in the constitutional referendum.", "title": "" }, { "paragraph_id": 2, "text": "According to the law professor Camel Cazor Aliste, the Constitution of 1980 has problems of legitimacy stemming from two facts. First, the constitutional commission was not representative of the political spectrum of Chile: its members had been handpicked by the Pinochet dictatorship, and opponents of the regime had been deliberately excluded. Secondly, the constitution's approval was achieved by the government in a controversial and tightly controlled referendum in 1980. Campaigning for the referendum was irregular, with the government calling people to vote positively on the reform, and also using radio and television commercial spots, while the opposition urging people to vote negatively were only able of doing small public demonstrations, without access to television time and limited radio access. There was no electoral roll for this vote, as the register had been burned during the dictatorship. 
There were multiple cases of double voting, with at least 3000 CNI agents doing so.", "title": "Legitimacy" }, { "paragraph_id": 3, "text": "Since the return to democracy, the constitution has been amended nearly 60 times.", "title": "Legitimacy" }, { "paragraph_id": 4, "text": "A document from September 13, 1973, shows that Jaime Guzmán had by then already been tasked by the Junta with studying the creation of a new constitution.", "title": "Legitimacy" }, { "paragraph_id": 5, "text": "It has been argued that the 1980 Constitution was designed to favor the election of right-wing legislative majorities. Several rounds of constitutional amendments have been enacted since 1989 to address this concern.", "title": "Legitimacy" }, { "paragraph_id": 6, "text": "In July 2022, a proposed replacement constitution was submitted for national debate and general referendum, but it was rejected on September 4 despite having had the support of left-leaning President Gabriel Boric. The document had faced intense criticism that it was \"too long, too left-wing and too radical\", and was rejected by a margin of 62% to 38%.", "title": "Replacement" }, { "paragraph_id": 7, "text": "On March 6, 2023, a group of experts appointed by Congress began a second attempt to prepare a preliminary draft of a new constitution. The group, with lawyer Veronica Undurraga serving as its president, was scheduled to work for three months on 12 institutional bases agreed to by lawmakers, after which the draft would be given to an elected Constitutional Council, whose members would be voted upon on May 7, 2023. At the same time, a 14-member Technical Admissibility Committee began serving as arbitrator.", "title": "Replacement" }, { "paragraph_id": 8, "text": "On December 17, 2023, Chileans voted 55.8% to 44.2% against the second proposed constitution. President Boric stated that he would not seek a third referendum; this outcome effectively guaranteed that the 1980 charter would remain in effect.", "title": "Replacement" } ]
The Political Constitution of the Republic of Chile of 1980 is the fundamental law in force in Chile. It was approved and promulgated under the military dictatorship headed by Augusto Pinochet, being ratified by the Chilean citizenry through a referendum on September 11, 1980, although the vote was held under restrictions and without electoral registers. While 69% of the population was reported to have voted yes, the vote was questioned by hundreds of denunciations of irregularities and fraud. The constitutional text took effect, in a transitory regime, on March 11, 1981, and then entered into full force on March 11, 1990, with the return to electoral democracy. It was amended for the first time in 1989, and afterward in 1991, 1994, 1997, each year from 1999 to 2001, 2003, each year from 2007 to 2015, and each year from 2017 to 2021, with the last three amendments concerning the constituent process of 2020–2022. In September 2005, under Ricardo Lagos's presidency, a large amendment of the Constitution was approved by parliamentarians, removing from the text some of the less democratic dispositions coming from Pinochet's regime, such as senators-for-life and appointed senators, as well as the armed forces' warranty of the democratic regime. On November 15, 2019, following a series of popular protests in October 2019, a political agreement between parties with parliamentary representation called for a national referendum on the proposal of writing a new Constitution and on the mechanism to draft it. A plebiscite held on October 25, 2020, approved drafting a new fundamental charter, as well as choosing by popular vote delegates to a Constitutional Convention which was to fulfill this objective. The members of the convention were elected in May 2021, and first convened on July 4, 2021. However, on 4 September 2022, voters rejected the new constitution in the constitutional referendum.
2002-02-25T15:51:15Z
2023-12-31T23:22:29Z
[ "Template:Update", "Template:Politics of Chile", "Template:Lang-es", "Template:Reflist", "Template:Use American English", "Template:Wikisourcelang", "Template:Expand section", "Template:See also", "Template:Cite web", "Template:Cite news", "Template:Chile topics", "Template:Short description", "Template:Use mdy dates", "Template:Cite journal", "Template:Webarchive", "Template:Wikisource", "Template:Americas topic" ]
https://en.wikipedia.org/wiki/Chilean_Constitution_of_1980
7,794
Crystallography
Crystallography is the experimental science of determining the arrangement of atoms in crystalline solids. Crystallography is a fundamental subject in the fields of materials science and solid-state physics (condensed matter physics). The word crystallography is derived from the Ancient Greek word κρύσταλλος (krústallos; "clear ice, rock-crystal"), with its meaning extending to all solids with some degree of transparency, and γράφειν (gráphein; "to write"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming that 2014 would be the International Year of Crystallography. Before the development of X-ray diffraction crystallography (see below), the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established. Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray crystallography, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. It is hard to focus X-rays or neutrons, but since electrons are charged they can be focused and are used in the electron microscope to produce magnified images. There are many ways that transmission electron microscopy and related techniques such as scanning transmission electron microscopy and high-resolution electron microscopy can be used to obtain images, in many cases with atomic resolution, from which crystallographic information can be obtained. There are also other methods such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction which can be used to obtain crystallographic information about surfaces. Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are poly-crystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which take diffraction patterns of polycrystalline samples with a large number of crystals, play an important role in structural determination. Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. 
Such mechanisms can be studied by crystallographic texture measurements. In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs. Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory. X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly proteins and nucleic acids such as DNA and RNA. The double-helical structure of DNA was deduced from crystallographic data. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. The Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins and other biological macromolecules. Computer programs such as RasMol, Pymol or VMD can be used to visualize biological molecular structures. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium. Electron crystallography has been used to determine some protein structures, most notably membrane proteins and viral capsids. The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that cover analysis methods and the mathematical procedures for determining organic structure through X-ray crystallography, electron diffraction, and neutron diffraction. The International Tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are:
[ { "paragraph_id": 0, "text": "Crystallography is the experimental science of determining the arrangement of atoms in crystalline solids. Crystallography is a fundamental subject in the fields of materials science and solid-state physics (condensed matter physics). The word crystallography is derived from the Ancient Greek word κρύσταλλος (krústallos; \"clear ice, rock-crystal\"), with its meaning extending to all solids with some degree of transparency, and γράφειν (gráphein; \"to write\"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming that 2014 would be the International Year of Crystallography.", "title": "" }, { "paragraph_id": 1, "text": "Before the development of X-ray diffraction crystallography (see below), the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established.", "title": "" }, { "paragraph_id": 2, "text": "Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray crystallography, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways.", "title": "" }, { "paragraph_id": 3, "text": "It is hard to focus x-rays or neutrons, but since electrons are charged they can be focused and are used in electron microscope to produce magnified images. There are many ways that transmission electron microscopy and related techniques such as scanning transmission electron microscopy, high-resolution electron microscopy can be used to obtain images with in many cases atomic resolution from which crystallographic information can be obtained. There are also other methods such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction which can be used to obtain crystallographic information about surfaces.", "title": "" }, { "paragraph_id": 4, "text": "Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms is often easy to see macroscopically because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are poly-crystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which takes diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination.", "title": "Applications in various areas" }, { "paragraph_id": 5, "text": "Other physical properties are also linked to crystallography. 
For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms can be studied by crystallographic texture measurements.", "title": "Applications in various areas" }, { "paragraph_id": 6, "text": "In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs.", "title": "Applications in various areas" }, { "paragraph_id": 7, "text": "Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory.", "title": "Applications in various areas" }, { "paragraph_id": 8, "text": "X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly proteins and nucleic acids such as DNA and RNA. The double-helical structure of DNA was deduced from crystallographic data. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. The Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins and other biological macromolecules. Computer programs such as RasMol, Pymol or VMD can be used to visualize biological molecular structures. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium. Electron crystallography has been used to determine some protein structures, most notably membrane proteins and viral capsids.", "title": "Applications in various areas" }, { "paragraph_id": 9, "text": "The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that cover analysis methods and the mathematical procedures for determining organic structure through X-ray crystallography, electron diffraction, and neutron diffraction. The International Tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are:", "title": "Reference literature" } ]
Crystallography is the experimental science of determining the arrangement of atoms in crystalline solids. Crystallography is a fundamental subject in the fields of materials science and solid-state physics. The word crystallography is derived from the Ancient Greek word κρύσταλλος, with its meaning extending to all solids with some degree of transparency, and γράφειν. In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming that 2014 would be the International Year of Crystallography. Before the development of X-ray diffraction crystallography, the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes, and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established. Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray crystallography, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. X-rays interact with the spatial distribution of electrons in the sample. Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition, the magnetic moment of neutrons is non-zero. They are therefore also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels. However, the material can sometimes be treated to substitute deuterium for hydrogen. Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample. Because of these different forms of interaction, the three types of radiation are suitable for different crystallographic studies. It is hard to focus X-rays or neutrons, but since electrons are charged they can be focused and are used in the electron microscope to produce magnified images. There are many ways that transmission electron microscopy and related techniques such as scanning transmission electron microscopy and high-resolution electron microscopy can be used to obtain images, in many cases with atomic resolution, from which crystallographic information can be obtained. There are also other methods such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction which can be used to obtain crystallographic information about surfaces.
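All of these diffraction measurements rest on the same geometric condition, conventionally written as Bragg's law; it is stated here purely as a standard illustration, with λ the wavelength of the beam, d the spacing between lattice planes, θ the glancing angle, and the integer n the order of the reflection:

n\lambda = 2d\sin\theta

A diffraction peak appears only at angles satisfying this condition, so the measured peak positions can be inverted to recover the lattice spacings; for a cubic crystal with lattice parameter a, for example, the spacing of the (hkl) planes is d = a/\sqrt{h^2 + k^2 + l^2}, which ties the peaks directly to Miller indices.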
2002-01-17T11:52:38Z
2023-12-15T04:30:25Z
[ "Template:Cite web", "Template:Geology", "Template:Authority control", "Template:Short description", "Template:Main", "Template:Div col end", "Template:Cite book", "Template:Branches of chemistry", "Template:Grc-transl", "Template:Wikt-lang", "Template:Reflist", "Template:Crystallography", "Template:Branches of materials science", "Template:For", "Template:Columns-list", "Template:Div col", "Template:Cite journal", "Template:Further" ]
https://en.wikipedia.org/wiki/Crystallography
7,796
Claude Auchinleck
Field Marshal Sir Claude John Eyre Auchinleck (/ˌɒxɪnˈlɛk/ OKH-in-LEK), GCB, GCIE, CSI, DSO, OBE (21 June 1884 – 23 March 1981), was a British Indian Army commander who saw active service during the world wars. A career soldier who spent much of his military career in India, he rose to become commander-in-chief of the Indian Army by early 1941 during the Second World War. In July 1941 he was appointed commander-in-chief of the Middle East Theatre, but after initial successes, the war in North Africa turned against the British-led forces under his command, and he was relieved of the post in August 1942 during the North African campaign. In June 1943, he was once again appointed Commander-in-Chief, India, where his support through the organisation of supply, maintenance and training for General William Slim's Fourteenth Army played an important role in its success. He served as Commander-in-Chief, India, until the Partition in 1947, when he assumed the role of Supreme Commander of all British forces in India and Pakistan until late 1948. Auchinleck was born at 89 Victoria Road in Aldershot, Hampshire, the son of John Claud Alexander Auchinleck and Mary Eleanor (Eyre) Auchinleck. His father, a colonel in the Royal Horse Artillery of the British Army, was posted to Bangalore in British India while Claude was very young, and his family accompanied him. It was from here that he developed a love for the country that would last for most of his life. Returning to England after the death of his father in 1892, Auchinleck attended Eagle House School at Crowthorne and then Wellington College on scholarships. From there he went on to the Royal Military College, Sandhurst, and was commissioned as an unattached second lieutenant in the Indian Army on 21 January 1903, and joined the 62nd Punjabis in April 1904. He soon learned several Indian languages, and, able to speak fluently with his soldiers, he absorbed a knowledge of local dialects and customs: this familiarity engendered a lasting mutual respect, enhanced by his own personality. He was promoted to lieutenant on 21 April 1905, and then spent the next two years in Tibet and Sikkim before moving to Benares in 1907 where he caught diphtheria. After briefly serving with the Royal Inniskilling Fusiliers at Aldershot he returned to Benares in 1909 and became adjutant of the 62nd Punjabis with promotion to captain on 21 January 1912. Auchinleck was an active freemason. Auchinleck saw active service in the First World War and was deployed with his regiment to defend the Suez Canal: in February 1915 he was in action against the Turks at Ismaïlia. His regiment moved into Aden to counter the Turkish threat there in July 1915. The 6th Indian Division, of which the 62nd Punjabis were a part, was landed at Basra on 31 December 1915 for the Mesopotamian campaign. In July 1916 Auchinleck was promoted acting major and made second in command of his battalion. He took part in a series of fruitless attacks on the Turks at the Battle of Hanna in January 1916 and was one of the few British officers in his regiment to survive these actions. He became acting commanding officer of his battalion in February 1917 and led his regiment at the Second Battle of Kut in February 1917 and the Fall of Baghdad in March 1917. 
Having been mentioned in despatches and having received the Distinguished Service Order in 1917 for his service in Mesopotamia, he was promoted to the substantive rank of major on 21 January 1918, to temporary lieutenant-colonel on 23 May 1919 and to brevet lieutenant-colonel on 15 November 1919 for his "distinguished service in Southern and Central Kurdistan" on the recommendation of the Commander-in-Chief of the Mesopotamia Expeditionary Force. Auchinleck attended the Staff College, Quetta, between 1920 and 1921. As a lieutenant colonel, he outranked most of his fellow students and even some members of the staff. Despite performing well there – passing the course and being among the top ten students – he was critical of many aspects of the college, which he believed to be too theoretical and with little emphasis being placed on matters such as supply and administration, both of which he thought had been mishandled in the campaign in Mesopotamia. He married Jessie Stewart in 1921. Jessie had been born in 1900 in Tacoma, Washington, to Alexander Stewart, head of the Blue Funnel Line that plied the west coast of the United States. When he died in about 1919, their mother took Jessie, her twin brother Alan and her younger brother Hepburne back to Bun Rannoch, the family estate at Innerhadden in Perthshire. Holidaying at Grasse on the French Riviera, Auchinleck, who was on leave from India at the time, met Jessie on the tennis courts. She was a high-spirited, blue-eyed beauty. Things moved quickly, and they were married within five months. Sixteen years younger than Auchinleck, Jessie became known as 'the little American girl' in India, but adapted readily to life there. They had no children. Auchinleck became temporary Deputy Assistant Quartermaster-General at Army Headquarters in February 1923 and then second-in-command of his regiment, which in the 1923 reorganisation of the Indian Army had become the 1st Punjab Regiment, in September 1925. He attended the Imperial Defence College in 1927 and, having been promoted to the permanent rank of lieutenant-colonel on 21 January 1929, he was appointed to command his regiment. Promoted to full colonel on 1 February 1930 with seniority from 15 November 1923, he became an instructor at the Staff College, Quetta in February 1930 where he remained until April 1933. He was promoted to temporary brigadier on 1 July 1933 and given command of the Peshawar Brigade, which was active in the pacification of the adjacent tribal areas during the Mohmand and Bajaur Operations between July and October 1933: during his period of command he was mentioned in despatches. 
On the outbreak of war, Auchinleck was appointed to command the Indian 3rd Infantry Division, but in January 1940 was summoned to the United Kingdom to command IV Corps, the only time in the war that a wholly British corps was commanded by an Indian Army officer. He received promotion to acting lieutenant general on 1 February 1940 and to the substantive rank of lieutenant general on 16 March 1940. In May 1940 Auchinleck took over command of the Anglo-French ground forces during the Norwegian campaign, a military operation that was doomed to fail. Auchinleck arrived in Greenock, after the fall of Norway, on 12 June, by which time the Battle of France was nearing its end, with the majority of the BEF in France having been evacuated from the port of Dunkirk, with the French surrender only a few days away. For these reasons, all attention was now given to the defence of the UK, which many believed would soon be invaded by the Germans (see Operation Sea Lion). In mid-June he was given command of the recently established V Corps, then serving in Southern Command under Lieutenant General Sir Alan Brooke. His stay was not to be for very long, however, as, just a few weeks later, Brooke succeeded General Sir Edmund Ironside as Commander-in-Chief, Home Forces, with Auchinleck succeeding Brooke as GOC-in-C of Southern Command, responsible for the defence of Southern England, where any invasion was expected to land. The recently vacated V Corps was taken over by Lieutenant General Bernard Montgomery, who disliked Auchinleck intensely, possibly due to Montgomery's disdain for the Indian Army and its officers. The relationship between the two future field marshals was not easy, with Montgomery later writing: In the 5th Corps I first served under Auchinleck, who had the Southern Command; I cannot recall that we ever agreed on anything. Many of Montgomery's actions in the next few weeks and months could be considered insubordination, with one incident in particular standing out, when Montgomery went over Auchinleck's head directly to the Adjutant-General on issues related to officers and men being transferred to and from Montgomery's V Corps. Auchinleck was not to deal with this behaviour for long, as in December he was ordered to succeed his friend, General Sir Robert Cassels, as Commander-in-Chief, India. By now known throughout the army as "the Auk", he was destined to encounter Montgomery again, although the circumstances there would not be at all pleasant. Promoted to full general on 26 December, Auchinleck returned to India in January 1941 to assume his new appointment, in which position he was also appointed to the Executive Council of the Viceroy of India and appointed ADC General to the King, a ceremonial position he was to hold until after the end of the war. In April 1941, RAF Habbaniya was threatened by the new pro-Axis regime of Rashid Ali. This large Royal Air Force station was west of Baghdad in Iraq and General Archibald Wavell, Commander-in-Chief Middle East Command, was reluctant to intervene, despite the urgings of Winston Churchill, because of his pressing commitments in the Western Desert and Greece. Auchinleck, however, acted decisively, sending the 1st Battalion of the King's Own Royal Regiment (Lancaster) by air to Habbaniya and shipping the 10th Indian Infantry Division by sea to Basra. 
Wavell was prevailed upon by London to send Habforce, a relief column, from the British Mandate of Palestine, but by the time it arrived in Habbaniya on 18 May the Anglo-Iraqi War was virtually over. Following the see-saw of Allied and Axis successes and reverses in North Africa, Auchinleck was appointed to succeed General Sir Archibald Wavell as Commander-in-Chief Middle East Command in July 1941; Wavell took up Auchinleck's post as Commander-in-Chief of the Indian Army, swapping jobs with him. As Commander-in-Chief Middle East, Auchinleck, based in Cairo, held responsibility not just for North Africa but also for Persia and the Middle East. He launched an offensive in the Western Desert, Operation Crusader, in November 1941: despite some tactical reverses during the fighting which resulted in Auchinleck replacing the Eighth Army commander Alan Cunningham with Neil Ritchie, by the end of December the besieged garrison of Tobruk had been relieved and Rommel obliged to withdraw to El Agheila. Auchinleck appears to have believed that the enemy had been defeated, writing on 12 January 1942 that the Axis forces were "beginning to feel the strain" and were "hard pressed". In fact the Axis forces had managed to withdraw in good order and a few days after Auchinleck's optimistic appreciation, having reorganised and been reinforced, struck at the dispersed and weakened British forces, driving them back to the Gazala positions near Tobruk. The British Chief of the Imperial General Staff (CIGS), General Sir Alan Brooke, wrote in his diary that it was "nothing less than bad generalship on the part of Auchinleck. He has been overconfident and has believed everything his overoptimistic [DMI] Shearer has told him". Brooke commented that Auchinleck "could have been one of the finest of commanders" but lacked the ability to select the men to serve him. Brooke sent him one of his best armoured division commanders, Richard McCreery, whose advice was ignored in favour of that of Auchinleck's controversial chief of operations, Major-General Dorman-Smith. Rommel's attack at the Battle of Gazala of 26 May 1942 resulted in a significant defeat for the British. Auchinleck's appreciation of the situation written to Ritchie on 20 May had suggested that the armoured reserves be concentrated in a position suitable to meet either a flanking attack around the south of the front or a direct attack through the centre (which was the likelihood more favoured by Auchinleck). In the event, Ritchie chose a more dispersed and rearward positioning of his two armoured divisions and when the attack in the centre came, it proved to be a diversion and the main attack, by Rommel's armoured formations, came round the southern flank. Poor initial positioning and subsequent handling and coordination of Allied formations by Ritchie and his corps commanders resulted in their heavy defeat and the Eighth Army retreating into Egypt; Tobruk fell to the Axis on 21 June 1942. On 24 June Auchinleck stepped in to take direct command of the Eighth Army, having lost confidence in Neil Ritchie's ability to control and direct his forces. Auchinleck discarded Ritchie's plan to stand at Mersa Matruh, deciding to fight only a delaying action there, while withdrawing to the more easily defendable position at El Alamein. Here Auchinleck tailored a defence that took advantage of the terrain and the fresh troops at his disposal, stopping the exhausted German/Italian advance in the First Battle of El Alamein. 
Enjoying a considerable superiority of material and men over the weak German/Italian forces, Auchinleck organised a series of counter-attacks. Poorly conceived and badly coordinated, these attacks achieved little. "The Auk", as he was known, appointed a number of senior commanders who proved to be unsuitable for their positions, and command arrangements were often characterised by bitter personality clashes. Auchinleck was an Indian Army officer and was criticised for apparently having little direct experience or understanding of British and Dominion troops. Dorman-Smith was regarded with considerable distrust by many of the senior commanders in Eighth Army. By July 1942 Auchinleck had lost the confidence of Dominion commanders and relations with his British commanders had become strained. Like his foe Rommel (and his predecessor Wavell and successor Montgomery), Auchinleck was subjected to constant political interference, having to weather a barrage of hectoring telegrams and instructions from Prime Minister Churchill throughout late 1941 and the spring and summer of 1942. Churchill constantly sought an offensive from Auchinleck, and was downcast at the military reverses in Egypt and Cyrenaica. Churchill was desperate for some sort of British victory before the planned Allied landings in North Africa, Operation Torch, scheduled for November 1942. He badgered Auchinleck immediately after the Eighth Army had all but exhausted itself in the First Battle of El Alamein. Churchill and the Chief of the Imperial General Staff, Sir Alan Brooke, flew to Cairo in early August 1942 to meet Auchinleck, where it emerged he had lost the confidence of both men. He was replaced as Commander-in-Chief Middle East Command by General Sir Harold Alexander (later Field Marshal The Earl Alexander of Tunis). Joseph M. Horodyski and Maurice Remy both praise Auchinleck as an underrated military leader who contributed the most to the successful defence of El Alamein and consequently the final defeat of Rommel in Africa. The two historians also criticise Churchill for the unreasonable decision to put the blame on Auchinleck and to relieve him. Churchill offered Auchinleck command of the newly created Persia and Iraq Command (this having been separated from Alexander's command), but Auchinleck declined this post, as he believed that separating the area from the Middle East Command was not good policy and the new arrangements would not be workable. He set his reasons out in his letter to the Chief of the Imperial General Staff dated 14 August 1942. Instead he returned to India, where he spent almost a year "unemployed" before in June 1943 being again appointed Commander-in-Chief of the Indian Army, General Wavell meanwhile having been appointed Viceroy. On this appointment it was announced that responsibility for the prosecution of the war with Japan would move from the Commander-in-Chief India to a newly created South East Asia Command. However, the appointment of the new command's Supreme Commander, Acting Vice Admiral Lord Louis Mountbatten, was not announced until August 1943 and until Mountbatten could set up his headquarters and assume control (in November), Auchinleck retained responsibility for operations in India and Burma while conducting a review and revision of Allied plans based on the decisions taken by the Allied Combined Chiefs of Staff at the Quadrant Conference, which ended in August. 
Following Mountbatten's arrival, Auchinleck's India Command (which had equal status with South East Asia Command in the military hierarchy) was responsible for the internal security of India, the defence of the North West Frontier and the buildup of India as a base, including most importantly the reorganisation of the Indian Army, the training of forces destined for SEAC and the lines of communication carrying men and material to the forward areas and to China. Auchinleck made the supply of Fourteenth Army, with probably the worst lines of communication of the war, his immediate priority; as Sir William Slim, commander of the Fourteenth Army, was later to write: It was a good day for us when he [Auchinleck] took command of India, our main base, recruiting area and training ground. The Fourteenth Army, from its birth to its final victory, owed much to his unselfish support and never-failing understanding. Without him and what he and the Army of India did for us we could not have existed, let alone conquered. Auchinleck suffered a personal disappointment when his wife Jessie left him for his friend, Air Chief Marshal Sir Richard Peirse. Peirse and Auchinleck had been students together at the Imperial Defence College, but that was long before. Peirse was now Allied Air Commander-in-Chief, South-East Asia, and also based in India. The affair became known to Mountbatten in early 1944, and he passed the information to the Chief of the RAF, Sir Charles Portal, hoping that Peirse would be recalled. The affair was common knowledge by September 1944, and Peirse was neglecting his duties. Mountbatten sent Peirse and Lady Auchinleck back to England on 28 November 1944, where they lived together at a Brighton hotel. Peirse had his marriage dissolved, and Auchinleck obtained a divorce in 1946. Auchinleck was reportedly very badly affected. According to his sister, he was never the same after the break-up. He always carried a photograph of Jessie in his wallet even after the divorce. There is scholarly dispute over whether Auchinleck was homosexual. His biographer, Philip Warner, addressed the rumours but dismissed them; however, historian Ronald Hyam has alleged that "sexually based moral-revulsion" was the reason for Montgomery's inability to get on with Auchinleck, and further, that Auchinleck was "let off with a high-level warning" over his relationships with Indian boys. Auchinleck continued as Commander-in-Chief of the Indian Army after the end of the war helping, though much against his own convictions, to prepare the future Indian and Pakistani armies for the Partition of India: in November 1945 he was forced to commute the more serious judicial sentences awarded against officers of the Indian National Army in the face of growing unease and unrest within both the Indian population and the British Indian Army. On 1 June 1946 he was promoted to field marshal, but he refused to accept a peerage, lest he be thought associated with a policy (i.e. Partition) that he thought fundamentally dishonourable. Sending a report to the British Government on 28 September 1947, Field Marshal Auchinleck wrote: "I have no hesitation, whatever, in affirming that the present Indian Cabinet are implacably determined to do all in their power to prevent the establishment of the Dominion of Pakistan on firm basis." 
He stated in the second, political part of his assessment, "Since 15th August, the situation has steadily deteriorated and the Indian leaders, cabinet ministers, civil officials and others have persistently tried to obstruct the work of partition of the armed forces." When partition was effected in August 1947, Auchinleck was appointed Supreme Commander of all British forces remaining in India and Pakistan and remained in this role until the winding up and closure of the Supreme H.Q. at the end of November 1947. This marked his effective retirement from the army (although technically field marshals in the British Army never retire, remaining on the active list on half pay). He left India on 1 December. After a brief period in Italy in connection with an unsuccessful business project, Auchinleck retired to London, where he occupied himself with a number of charitable and business interests and became a respectably skilled watercolour painter. In 1960 he settled in Beccles in the county of Suffolk, remaining there for seven years until, at the age of eighty-four, he decided to emigrate and set up home in Marrakesh, where he died on 23 March 1981. Auchinleck is buried in Ben M'Sik European Cemetery, Casablanca, in the Commonwealth War Graves Commission plot, next to the grave of Raymond Steed, who was the second youngest non-civilian Commonwealth casualty of the Second World War. A memorial plaque was erected in the crypt of St Paul's Cathedral. A bronze statue of Auchinleck can be seen on Broad Street adjacent to Auchinleck House, Five Ways, Birmingham.
[ { "paragraph_id": 0, "text": "Field Marshal Sir Claude John Eyre Auchinleck, (/ˌɒxɪnˈlɛk/ OKH-in-LEK), GCB, GCIE, CSI, DSO, OBE (21 June 1884 – 23 March 1981), was a British Indian Army commander who saw active service during the world wars. A career soldier who spent much of his military career in India, he rose to become commander-in-chief of the Indian Army by early 1941 during the Second World War. In July 1941 he was appointed commander-in-chief of the Middle East Theatre, but after initial successes, the war in North Africa turned against the British-led forces under his command, and he was relieved of the post in August 1942 during the North African campaign.", "title": "" }, { "paragraph_id": 1, "text": "In June 1943, he was once again appointed Commander-in-Chief, India, where his support through the organisation of supply, maintenance and training for General William Slim's Fourteenth Army played an important role in its success. He served as Commander-in-Chief, India, until the Partition in 1947, when he assumed the role of Supreme Commander of all British forces in India and Pakistan until late 1948.", "title": "" }, { "paragraph_id": 2, "text": "Born at 89 Victoria Road in Aldershot, Hampshire, the son of John Claud Alexander Auchinleck and Mary Eleanor (Eyre) Auchinleck. His father, a colonel in the Royal Horse Artillery of the British Army, was posted to Bangalore in British India, with his family accompanying him, while Claude was very young. It was from here that he developed a love for the country that would last for most of his life. Returning to England after the death of his father in 1892, Auchinleck attended Eagle House School at Crowthorne and then Wellington College on scholarships. From there he went on to the Royal Military College, Sandhurst and was commissioned as an unattached second lieutenant in the Indian Army on 21 January 1903, and joined the 62nd Punjabis in April 1904. He soon learned several Indian languages, and, able to speak fluently with his soldiers, he absorbed a knowledge of local dialects and customs: this familiarity engendered a lasting mutual respect, enhanced by his own personality.", "title": "Early life and career" }, { "paragraph_id": 3, "text": "He was promoted to lieutenant on 21 April 1905, and then spent the next two years in Tibet and Sikkim before moving to Benares in 1907 where he caught diphtheria. After briefly serving with the Royal Inniskilling Fusiliers at Aldershot he returned to Benares in 1909 and became adjutant of the 62nd Punjabis with promotion to captain on 21 January 1912. Auchinleck was an active freemason.", "title": "Early life and career" }, { "paragraph_id": 4, "text": "Auchinleck saw active service in the First World War and was deployed with his regiment to defend the Suez Canal: in February 1915 he was in action against the Turks at Ismaïlia. His regiment moved into Aden to counter the Turkish threat there in July 1915. The 6th Indian Division, of which the 62nd Punjabis were a part, was landed at Basra on 31 December 1915 for the Mesopotamian campaign. In July 1916 Auchinleck was promoted acting major and made second in command of his battalion. 
He took part in a series of fruitless attacks on the Turks at the Battle of Hanna in January 1916 and was one of the few British officers in his regiment to survive these actions.", "title": "First World War" }, { "paragraph_id": 5, "text": "He became acting commanding officer of his battalion in February 1917 and led his regiment at the Second Battle of Kut in February 1917 and the Fall of Baghdad in March 1917. Having been mentioned in despatches and having received the Distinguished Service Order in 1917 for his service in Mesopotamia, he was promoted to the substantive rank of major on 21 January 1918, to temporary lieutenant-colonel on 23 May 1919 and to brevet lieutenant-colonel on 15 November 1919 for his \"distinguished service in Southern and Central Kurdistan\" on the recommendation of the Commander-in-Chief of the Mesopotamia Expeditionary Force.", "title": "First World War" }, { "paragraph_id": 6, "text": "Auchinleck attended the Staff College, Quetta, between 1920 and 1921. As a lieutenant colonel, he outranked most of his fellow students and even some members of the staff. Despite performing well there – passing the course and being among the top ten students – he was critical of many aspects of the college, which he believed to be too theoretical and with little emphasis being placed on matters such as supply and administration, both of which he thought had been mishandled in the campaign in Mesopotamia. He married Jessie Stewart in 1921. Jessie had been born in 1900 in Tacoma, Washington, to Alexander Stewart, head of the Blue Funnel Line that plied the west coast of the United States. When he died in about 1919, their mother took Jessie, her twin brother Alan and her younger brother Hepburne back to Bun Rannoch, the family estate at Innerhadden in Perthshire. Holidaying at Grasse on the French Riviera, Auchinleck, who was on leave from India at the time, met Jessie on the tennis courts. She was a high-spirited, blue-eyed beauty. Things moved quickly, and they were married within five months. Sixteen years younger than Auchinleck, Jessie became known as 'the little American girl' in India, but adapted readily to life there. They had no children.", "title": "Between the world wars" }, { "paragraph_id": 7, "text": "Auchinleck became temporary Deputy Assistant Quartermaster-General at Army Headquarters in February 1923 and then second-in-command of his regiment, which in the 1923 reorganisation of the Indian Army had become the 1st Punjab Regiment, in September 1925. He attended the Imperial Defence College in 1927 and, having been promoted to the permanent rank of lieutenant-colonel on 21 January 1929, he was appointed to command his regiment. Promoted to full colonel on 1 February 1930 with seniority from 15 November 1923, he became an instructor at the Staff College, Quetta in February 1930 where he remained until April 1933.", "title": "Between the world wars" }, { "paragraph_id": 8, "text": "He was promoted to temporary brigadier on 1 July 1933 and given command of the Peshawar Brigade, which was active in the pacification of the adjacent tribal areas during the Mohmand and Bajaur Operations between July and October 1933: during his period of command he was mentioned in despatches. 
He led a second punitive expedition during the Second Mohmand Campaign in August 1935 for which he was again mentioned in despatches, promoted to major-general on 30 November 1935 and appointed a Companion of the Order of the Star of India on 8 May 1936.", "title": "Between the world wars" }, { "paragraph_id": 9, "text": "On leaving his brigade command in April 1936, Auchinleck was on the unemployed list (on half pay) until September 1936 when he was appointed Deputy Chief of the General Staff and Director of Staff Duties in Delhi. He was then appointed to command the Meerut District in India in July 1938. In 1938 Auchinleck was appointed to chair a committee to consider the modernisation, composition and re-equipment of the British Indian Army: the committee's recommendations formed the basis of the 1939 Chatfield Report which outlined the transformation of the Indian Army – it grew from 183,000 in 1939 to over 2,250,000 men by the end of the war.", "title": "Between the world wars" }, { "paragraph_id": 10, "text": "On the outbreak of war, Auchinleck was appointed to command the Indian 3rd Infantry Division, but in January 1940 was summoned to the United Kingdom to command IV Corps, the only time in the war that a wholly British corps was commanded by an Indian Army officer. He received promotion to acting lieutenant general on 1 February 1940 and to the substantive rank of lieutenant general on 16 March 1940. In May 1940 Auchinleck took over command of the Anglo-French ground forces during the Norwegian campaign, a military operation that was doomed to fail.", "title": "Second World War" }, { "paragraph_id": 11, "text": "Auchinleck arrived in Greenock, after the fall of Norway, on 12 June, by which time the Battle of France was nearing its end, with the majority of the BEF in France having been evacuated from the port of Dunkirk, with the French surrender only a few days away. For these reasons, all attention was now given to the defence of the UK, which many believed would soon be invaded by the Germans (see Operation Sea Lion). In mid-June he was given command of the recently established V Corps, then serving in Southern Command under Lieutenant General Sir Alan Brooke. His stay was not to be for very long, however, as, just a few weeks later, Brooke succeeded General Sir Edmund Ironside as Commander-in-Chief, Home Forces, with Auchinleck succeeding Brooke as GOC-in-C of Southern Command, responsible for the defence of Southern England, where any invasion was expected to land. The recently vacated V Corps was taken over by Lieutenant General Bernard Montgomery, who disliked Auchinleck intensely, possibly due to Montgomery's disdain for the Indian Army and its officers. The relationship between the two future field marshals was not easy, with Montgomery later writing:", "title": "Second World War" }, { "paragraph_id": 12, "text": "In the 5th Corps I first served under Auchinleck, who had the Southern Command; I cannot recall that we ever agreed on anything.", "title": "Second World War" }, { "paragraph_id": 13, "text": "Many of Montgomery's actions in the next few weeks and months could be considered insubordination, with one incident in particular standing out, when Montgomery went over Auchinleck's head directly to the Adjutant-General on issues related to officers and men being transferred to and from Montgomery's V Corps. 
Auchinleck did not have to deal with this behaviour for long, as in December he was ordered to succeed his friend, General Sir Robert Cassels, as Commander-in-Chief, India. By now known throughout the army as \"the Auk\", he was destined to encounter Montgomery again, in circumstances that would not be at all pleasant.", "title": "Second World War" }, { "paragraph_id": 14, "text": "Promoted to full general on 26 December, Auchinleck returned to India in January 1941 to assume his new appointment, in which position he was also appointed to the Executive Council of the Viceroy of India and appointed ADC General to the King, a ceremonial position he was to hold until after the end of the war.", "title": "Second World War" }, { "paragraph_id": 15, "text": "In April 1941, RAF Habbaniya was threatened by the new pro-Axis regime of Rashid Ali. This large Royal Air Force station was west of Baghdad in Iraq, and General Archibald Wavell, Commander-in-Chief Middle East Command, was reluctant to intervene, despite the urgings of Winston Churchill, because of his pressing commitments in the Western Desert and Greece. Auchinleck, however, acted decisively, sending the 1st Battalion of the King's Own Royal Regiment (Lancaster) by air to Habbaniya and shipping the 10th Indian Infantry Division by sea to Basra. Wavell was prevailed upon by London to send Habforce, a relief column, from the British Mandate of Palestine, but by the time it arrived in Habbaniya on 18 May the Anglo-Iraqi War was virtually over.", "title": "Second World War" }, { "paragraph_id": 16, "text": "Following the see-saw of Allied and Axis successes and reverses in North Africa, Auchinleck was appointed to succeed General Sir Archibald Wavell as Commander-in-Chief Middle East Command in July 1941; Wavell took up Auchinleck's post as Commander-in-Chief of the Indian Army, swapping jobs with him.", "title": "Second World War" }, { "paragraph_id": 17, "text": "As Commander-in-Chief Middle East, Auchinleck, based in Cairo, held responsibility not just for North Africa but also for Persia and the Middle East. He launched an offensive in the Western Desert, Operation Crusader, in November 1941: despite some tactical reverses during the fighting, which resulted in Auchinleck replacing the Eighth Army commander Alan Cunningham with Neil Ritchie, by the end of December the besieged garrison of Tobruk had been relieved and Rommel obliged to withdraw to El Agheila. Auchinleck appears to have believed that the enemy had been defeated, writing on 12 January 1942 that the Axis forces were \"beginning to feel the strain\" and were \"hard pressed\".", "title": "Second World War" }, { "paragraph_id": 18, "text": "In fact the Axis forces had managed to withdraw in good order and, a few days after Auchinleck's optimistic appreciation, having reorganised and been reinforced, they struck at the dispersed and weakened British forces, driving them back to the Gazala positions near Tobruk. The British Chief of the Imperial General Staff (CIGS), General Sir Alan Brooke, wrote in his diary that it was \"nothing less than bad generalship on the part of Auchinleck. He has been overconfident and has believed everything his overoptimistic [DMI] Shearer has told him\". Brooke commented that Auchinleck \"could have been one of the finest of commanders\" but lacked the ability to select the men to serve him.
Brooke sent him one of his best armoured division commanders, Richard McCreery, whose advice was ignored in favour of that of Auchinleck's controversial chief of operations, Major-General Dorman-Smith.", "title": "Second World War" }, { "paragraph_id": 19, "text": "Rommel's attack at the Battle of Gazala of 26 May 1942 resulted in a significant defeat for the British. Auchinleck's appreciation of the situation written to Ritchie on 20 May had suggested that the armoured reserves be concentrated in a position suitable to meet either a flanking attack around the south of the front or a direct attack through the centre (the possibility Auchinleck considered more likely). In the event, Ritchie chose a more dispersed and rearward positioning of his two armoured divisions and when the attack in the centre came, it proved to be a diversion and the main attack, by Rommel's armoured formations, came round the southern flank. Poor initial positioning and subsequent handling and coordination of Allied formations by Ritchie and his corps commanders resulted in their heavy defeat and the Eighth Army retreating into Egypt; Tobruk fell to the Axis on 21 June 1942.", "title": "Second World War" }, { "paragraph_id": 20, "text": "On 24 June Auchinleck stepped in to take direct command of the Eighth Army, having lost confidence in Neil Ritchie's ability to control and direct his forces. Auchinleck discarded Ritchie's plan to stand at Mersa Matruh, deciding to fight only a delaying action there, while withdrawing to the more easily defendable position at El Alamein. Here Auchinleck tailored a defence that took advantage of the terrain and the fresh troops at his disposal, stopping the exhausted German/Italian advance in the First Battle of El Alamein. Enjoying a considerable superiority of material and men over the weak German/Italian forces, Auchinleck organised a series of counter-attacks. Poorly conceived and badly coordinated, these attacks achieved little.", "title": "Second World War" }, { "paragraph_id": 21, "text": "\"The Auk\", as he was known, appointed a number of senior commanders who proved to be unsuitable for their positions, and command arrangements were often characterised by bitter personality clashes. Auchinleck was an Indian Army officer and was criticised for apparently having little direct experience or understanding of British and Dominion troops. Dorman-Smith was regarded with considerable distrust by many of the senior commanders in Eighth Army. By July 1942 Auchinleck had lost the confidence of Dominion commanders and relations with his British commanders had become strained.", "title": "Second World War" }, { "paragraph_id": 22, "text": "Like his foe Rommel (and his predecessor Wavell and successor Montgomery), Auchinleck was subjected to constant political interference, having to weather a barrage of hectoring telegrams and instructions from Prime Minister Churchill throughout late 1941 and the spring and summer of 1942. Churchill constantly sought an offensive from Auchinleck, and was downcast at the military reverses in Egypt and Cyrenaica. Churchill was desperate for some sort of British victory before the planned Allied landings in North Africa, Operation Torch, scheduled for November 1942. He badgered Auchinleck immediately after the Eighth Army had all but exhausted itself in the First Battle of El Alamein.
Churchill and the Chief of the Imperial General Staff, Sir Alan Brooke, flew to Cairo in early August 1942 to meet Auchinleck, where it emerged he had lost the confidence of both men. He was replaced as Commander-in-Chief Middle East Command by General Sir Harold Alexander (later Field Marshal The Earl Alexander of Tunis).", "title": "Second World War" }, { "paragraph_id": 23, "text": "Joseph M. Horodyski and Maurice Remy both praise Auchinleck as an underrated military leader who contributed the most to the successful defence of El Alamein and consequently the final defeat of Rommel in Africa. The two historians also criticize Churchill for the unreasonable decision to put the blame on Auchinleck and to relieve him.", "title": "Second World War" }, { "paragraph_id": 24, "text": "Churchill offered Auchinleck command of the newly created Persia and Iraq Command (this having been separated from Alexander's command), but Auchinleck declined this post, as he believed that separating the area from the Middle East Command was not good policy and the new arrangements would not be workable. He set his reasons out in his letter to the Chief of the Imperial General Staff dated 14 August 1942. Instead he returned to India, where he spent almost a year \"unemployed\" before being again appointed Commander-in-Chief of the Indian Army in June 1943.", "title": "Second World War" }, { "paragraph_id": 25, "text": "General Wavell had meanwhile been appointed Viceroy, and on his appointment it was announced that responsibility for the prosecution of the war with Japan would move from the Commander-in-Chief India to a newly created South East Asia Command. However, the appointment of the new command's Supreme Commander, Acting Vice Admiral Lord Louis Mountbatten, was not announced until August 1943, and until Mountbatten could set up his headquarters and assume control (in November), Auchinleck retained responsibility for operations in India and Burma while conducting a review and revision of Allied plans based on the decisions taken by the Allied Combined Chiefs of Staff at the Quadrant Conference, which ended in August.", "title": "Second World War" }, { "paragraph_id": 26, "text": "Following Mountbatten's arrival, Auchinleck's India Command (which had equal status with South East Asia Command in the military hierarchy) was responsible for the internal security of India, the defence of the North West Frontier and the buildup of India as a base, including most importantly the reorganisation of the Indian Army, the training of forces destined for SEAC and the lines of communication carrying men and material to the forward areas and to China. Auchinleck made the supply of Fourteenth Army, with probably the worst lines of communication of the war, his immediate priority; as Sir William Slim, commander of the Fourteenth Army, was later to write:", "title": "Second World War" }, { "paragraph_id": 27, "text": "It was a good day for us when he [Auchinleck] took command of India, our main base, recruiting area and training ground. The Fourteenth Army, from its birth to its final victory, owed much to his unselfish support and never-failing understanding. Without him and what he and the Army of India did for us we could not have existed, let alone conquered.", "title": "Second World War" }, { "paragraph_id": 28, "text": "Auchinleck suffered a personal disappointment when his wife Jessie left him for his friend, Air Chief Marshal Sir Richard Peirse.
Peirse and Auchinleck had been students together at the Imperial Defence College many years earlier. Peirse was now Allied Air Commander-in-Chief, South-East Asia, and also based in India. The affair became known to Mountbatten in early 1944, and he passed the information to the Chief of the Air Staff, Sir Charles Portal, hoping that Peirse would be recalled. The affair was common knowledge by September 1944, and Peirse was neglecting his duties. Mountbatten sent Peirse and Lady Auchinleck back to England on 28 November 1944, where they lived together at a Brighton hotel. Peirse had his marriage dissolved, and Auchinleck obtained a divorce in 1946. Auchinleck was reportedly very badly affected. According to his sister, he was never the same after the break-up. He always carried a photograph of Jessie in his wallet even after the divorce.", "title": "Second World War" }, { "paragraph_id": 29, "text": "There is scholarly dispute whether Auchinleck was homosexual. His biographer, Philip Warner, addressed the rumours but dismissed them; however, historian Ronald Hyam has alleged that \"sexually based moral-revulsion\" was the reason for Montgomery's inability to get on with Auchinleck, and further, that Auchinleck was \"let off with a high-level warning\" over his relationships with Indian boys.", "title": "Second World War" }, { "paragraph_id": 30, "text": "Auchinleck continued as Commander-in-Chief of the Indian Army after the end of the war helping, though much against his own convictions, to prepare the future Indian and Pakistani armies for the Partition of India: in November 1945 he was forced to commute the more serious judicial sentences awarded against officers of the Indian National Army in the face of growing unease and unrest both within the Indian population and the British Indian Army. On 1 June 1946 he was promoted to field marshal, but he refused to accept a peerage, lest he be thought associated with a policy (i.e. Partition) that he thought fundamentally dishonourable.", "title": "Partition of India and later years" }, { "paragraph_id": 31, "text": "Sending a report to the British Government on 28 September 1947, Field Marshal Auchinleck wrote: \"I have no hesitation, whatever, in affirming that the present Indian Cabinet are implacably determined to do all in their power to prevent the establishment of the Dominion of Pakistan on firm basis.\" He stated in the second, political part of his assessment, \"Since 15th August, the situation has steadily deteriorated and the Indian leaders, cabinet ministers, civil officials and others have persistently tried to obstruct the work of partition of the armed forces.\"", "title": "Partition of India and later years" }, { "paragraph_id": 32, "text": "When partition was effected in August 1947, Auchinleck was appointed Supreme Commander of all British forces remaining in India and Pakistan and remained in this role until the winding up and closure of the Supreme H.Q. at the end of November 1947. This marked his effective retirement from the army (although technically field marshals in the British Army never retire, remaining on the active list on half pay). He left India on 1 December.", "title": "Partition of India and later years" }, { "paragraph_id": 33, "text": "After a brief period in Italy in connection with an unsuccessful business project, Auchinleck retired to London, where he occupied himself with a number of charitable and business interests and became a respectably skilled watercolour painter.
In 1960 he settled in Beccles in the county of Suffolk, remaining there for seven years until, at the age of eighty-four, he decided to emigrate and set up home in Marrakesh, where he died on 23 March 1981.", "title": "Partition of India and later years" }, { "paragraph_id": 34, "text": "Auchinleck is buried in the Commonwealth War Graves Commission plot of Ben M'Sik European Cemetery, Casablanca, next to the grave of Raymond Steed, who was the second-youngest non-civilian Commonwealth casualty of the Second World War.", "title": "Memorials" }, { "paragraph_id": 35, "text": "A memorial plaque was erected in the crypt of St Paul's Cathedral. A bronze statue of Auchinleck can be seen on Broad Street adjacent to Auchinleck House, Five Ways, Birmingham.", "title": "Memorials" } ]
Field Marshal Sir Claude John Eyre Auchinleck was a British Indian Army commander who saw active service during the world wars. A career soldier who spent much of his military career in India, he rose to become commander-in-chief of the Indian Army by early 1941, during the Second World War. In July 1941 he was appointed commander-in-chief of the Middle East Theatre, but after initial successes, the war in North Africa turned against the British-led forces under his command, and he was relieved of the post in August 1942 during the North African campaign. In June 1943, he was once again appointed Commander-in-Chief, India, where his support through the organisation of supply, maintenance and training for General William Slim's Fourteenth Army played an important role in its success. He served as Commander-in-Chief, India, until the Partition in 1947, when he assumed the role of Supreme Commander of all British forces in India and Pakistan until late 1947.
2002-01-17T14:44:39Z
2023-12-22T00:06:57Z
[ "Template:Infobox military person", "Template:S-ttl", "Template:S-bef", "Template:UK National Archives ID", "Template:Short description", "Template:IPAc-en", "Template:Respell", "Template:Blockquote", "Template:Reflist", "Template:S-aft", "Template:Commander-in-Chief, India", "Template:Use dmy dates", "Template:London Gazette", "Template:Cite journal", "Template:S-start", "Template:S-hon", "Template:Cite book", "Template:ISBN", "Template:S-mil", "Template:S-non", "Template:Quote", "Template:Cite news", "Template:Commons", "Template:S-break", "Template:Sfn", "Template:Cite web", "Template:PM20", "Template:Authority control", "Template:Use British English", "Template:Post-nominals", "Template:S-new", "Template:S-end" ]
https://en.wikipedia.org/wiki/Claude_Auchinleck
7,797
Camilla Hall
Camilla Christine Hall (March 24, 1945 – May 17, 1974) was a member of the Symbionese Liberation Army (SLA) and a social worker. She is best known for her membership in the SLA, a small, far-left militant group that committed violent acts between 1973 and 1975. They assassinated Marcus Foster, Superintendent of the Oakland Public Schools and the first black superintendent of any major school system, kidnapped white heiress Patty Hearst, and committed armed robbery of banks. Hall, one of the majority of white members in the group, died on May 17, 1974, with five other SLA members in a shootout with the Los Angeles Police Department in that city. During this, the house where the SLA members were making their stand caught fire. Police fatally shot both Hall and Nancy Ling Perry as they left the house; according to police, the two women came out firing their own pistols. On March 24, 1945, Camilla Christine Hall was born in Saint Peter, Minnesota. Both her parents, George Fridolph Hall (1908–2000) and Lorena (Daeschner) Hall (1911–1995), were academics with positions at Gustavus Adolphus College in Saint Peter from 1938 to 1952. In addition, her father was a minister in the Augustana Evangelical Lutheran Church and later the Evangelical Lutheran Church in America. Her mother helped found Gustavus Adolphus College's Art Department and served as its head. Camilla Hall was the only surviving child of four. Firstborn son Terry died of congenital heart disease in 1948; Peter died in 1951, and Nan died in 1962, both of a congenital kidney disease. The family seemed burdened by grief. In 1952, the Hall family moved to what is now Tanzania in East Africa. George and Lorena Hall taught in schools and did mission work, while Camilla and Nan played with the local children. In 1954, when Camilla was nine, the family returned to Saint Peter because of seven-year-old Nan's poor health. While Camilla attended elementary school in Minnesota and lived with relatives, the rest of her family moved to Montclair, New Jersey. In Minnesota, Hall attended Washburn High School in Minneapolis, where she was involved in many activities. The 1963 Washburn Yearbook states, "Candy was a member of Blue Tri, Class Play, Poplars Staff, Quill Club, Forensics, Pep Club, and Hall of Fame". The Blue Tri club was an organization that encouraged Christian ideals and put together service projects. In addition, Camilla Hall was voted class clown in high school. In 1963, she graduated from Washburn High School. Hall attended Gustavus Adolphus College in St. Peter, Minnesota. She transferred to the University of Minnesota after her freshman year. On June 10, 1967, Hall graduated with a humanities degree. After graduation, Hall moved to Duluth, Minnesota, where she started as a caseworker for social services in St. Louis County. She also began to participate in Democratic Party activities. In early 1968, she was elected to carry the Eugene McCarthy banner for the St. Louis County precinct, in support of McCarthy's presidential campaign that year. Although Hall enjoyed helping people in her work, she found it difficult to keep her distance from some of their problems while being a caseworker. For her job in Duluth, Hall used her musical and poetic talents in an advertising campaign. In June 1968, Hall returned to Minneapolis, where she was a caseworker for the Hennepin County, Minnesota welfare office. Co-workers and friends of Hall described her as witty, sympathetic, helpful, and compassionate.
She had an outgoing personality and a passion for literature. At the same time, Hall frequently talked with family and friends about philosophy and how she was disappointed with the state of welfare. In 1968, aged 23, Hall carefully monitored the political situation in America, including the 1968 Democratic National Convention in Chicago, which was marked by violent clashes. She was active in the peace movement and food boycotts, including the Mobilization Committee to End the War in Vietnam. Despite Hall's participation in political activities, her urging of social change, and her work to aid individuals and families, her mother could see that Camilla had become dissatisfied with her work. In November 1969, Hall moved to Topanga, a northern suburb of Los Angeles, California. In March 1970, she moved into Los Angeles proper, settling in west Los Angeles. According to Rachael Hanel, "She lived off her savings, interest income from a trust, money from her parents, and selling her simple, Rubenesque line drawings." Although Hall did not express dissatisfaction with being an artist, she decided to move again. In February 1971, Hall moved to Berkeley in northern California, which had become a center of political activism and social movements. In May 1971, Hall moved into an apartment complex on Channing Way, where she met Patricia Soltysik. The two women began a lesbian relationship, the first that Hall had conducted openly. Hall wrote a love poem to Soltysik titled "Mizmoon", which became her nickname for Soltysik. In Berkeley, Hall continued being politically active. She participated in the People's Park reoccupation during the summer of 1972, following the shootings there the year before. She and Soltysik became involved with the Venceremos prison outreach project, through which they became associates of two white men, Russ Little and Willie Wolfe, who were also assisting in prisoner outreach. In October 1972, Hall traveled to Europe. She stayed with friends while she traveled for three months. Once she returned to California, she continued being politically active. Through her association with Soltysik, Little, and Wolfe, she became a founding member of the Symbionese Liberation Army, a small, radical leftist group. Joe Remiro and Thero Wheeler trained the other members in handling weapons and explosives. Remiro was a Vietnam War veteran. The SLA gained notoriety in November 1973 by claiming credit for the assassination of Marcus Foster, Superintendent of the Oakland Public Schools and the first black superintendent of any major city's school district. Three "soldiers" also wounded his deputy. In January 1974 the SLA base was moved to Concord, California, where Nancy Ling Perry rented a house under an assumed name. Russ Little and Joe Remiro were arrested after a police stop and confrontation, convicted and sentenced to prison. In February 1974 the SLA kidnapped heiress Patty Hearst. They indoctrinated her, and she later said she chose to join them. Hall and Hearst were identified from security camera images as participants in the April 15, 1974, armed robbery of the Hibernia Bank in San Francisco. Two civilians were shot during the robbery. The police kept up pressure on the group, which moved to a house in Los Angeles. There, on May 17, 1974, Hall died in a shootout with police in which five other SLA members also died. As their hideout burned, Hall and Nancy Ling Perry exited from the back door. Police claimed that Perry came out firing a revolver and Hall was firing an automatic pistol.
Police shot them immediately, killing both. Perry was shot twice: one shot hit her right lung, the other severed her spine. Hall was shot once in the forehead. Angela Atwood, another SLA member, pulled Hall's body back into the burning house. Atwood died in the fire. Investigators working for Hall's parents claimed that Perry had walked out of the house intending to surrender. Hall's parents held a funeral for their daughter on May 23, 1974, at St. John's Lutheran Church in Lincolnwood, Illinois, a Chicago suburb, where her father was pastor. Seven of his fellow Lutheran ministers conducted the service. Camilla Hall's name was not mentioned. Her ashes were buried on August 19, 1974, in a small country graveyard alongside her late siblings, each of whom had died before she was 16. Her parents also have plots there.
[ { "paragraph_id": 0, "text": "Camilla Christine Hall (March 24, 1945 – May 17, 1974) was a member of the Symbionese Liberation Army (SLA) and a social worker. She is best known for her membership in the SLA, a small, far-left militant group that committed violent acts between 1973 and 1975. They assassinated Marcus Foster, Superintendent of the Oakland Public Schools and the first black superintendent of any major school system, kidnapped white heiress Patty Hearst, and committed armed robbery of banks.", "title": "" }, { "paragraph_id": 1, "text": "Hall, one of the majority of white members in the group, died on May 17, 1974, with five other SLA members in a shootout with the Los Angeles Police Department in that city. During this, the house where the SLA members were making their stand caught fire. Police fatally shot both Hall and Nancy Ling Perry as they left the house, firing their own pistols.", "title": "" }, { "paragraph_id": 2, "text": "On March 24, 1945, Camilla Christine Hall was born in Saint Peter, Minnesota. Both her parents, George Fridolph Hall (1908-2000) and Lorena (Daeschner) Hall (1911-1995), were academics with positions at Gustavus Adolphus College in Saint Peter from 1938 to 1952. In addition, her father was a minister in the Augustana Evangelical Lutheran Church and later the Evangelical Lutheran Church in America. Her mother, Lorena (Daeschner) Hall, helped found Gustavus Adolphus College's Art Department and served as the department head.", "title": "Early life" }, { "paragraph_id": 3, "text": "Camilla Hall was the only surviving child of four. Firstborn son Terry died of congenital heart disease in 1948; Peter died in 1951, and Nan died in 1962, both of a congenital kidney disease. The family seemed burdened by grief.", "title": "Early life" }, { "paragraph_id": 4, "text": "In 1952, the Hall family moved to what is now Tanzania in East Africa. George and Lorena Hall taught in schools and did mission work, while Camilla and Nan played with the native children. In 1954, when Camilla was nine, the family returned to Saint Peter because of seven-year-old Nan's poor health. While Camilla attended elementary school in Minnesota and lived with relatives, her birth family moved to Montclair, New Jersey.", "title": "Early life" }, { "paragraph_id": 5, "text": "In Minnesota, Hall attended Washburn High School in Minneapolis, where she was involved in many activities. The 1963 Washburn Yearbook states, \"Candy was a member of Blue Tri, Class Play, Poplars Staff, Quill Club, Forensics, Pep Club, and Hall of Fame\". Blue Tri club was an organization that encouraged Christian ideals and put together service projects. In addition, Camilla Hall was voted class clown in high school. In 1963, she graduated from Washburn High School.", "title": "Early life" }, { "paragraph_id": 6, "text": "Hall attended Gustavus Adolphus College in St. Peter, Minnesota. She transferred to the University of Minnesota after her freshman year. On June 10, 1967, Hall graduated with a humanities degree.", "title": "Education" }, { "paragraph_id": 7, "text": "After graduation, Hall moved to Duluth, Minnesota, where she started as a caseworker for social services in St. Louis County. She also began to participate in Democratic Party activities. In early 1968, she was elected to carry the Eugene McCarthy banner for the St. 
Louis County precinct, in support of McCarthy's presidential campaign that year.", "title": "Post-college" }, { "paragraph_id": 8, "text": "Although Hall enjoyed helping people in her work, she found it difficult to keep her distance from some of their problems while being a caseworker. For her job in Duluth, Hall used her musical and poetic talents in an advertising campaign.", "title": "Post-college" }, { "paragraph_id": 9, "text": "In June 1968, Hall returned to Minneapolis, where she was a caseworker for the Hennepin County, Minnesota welfare office. Co-workers and friends of Hall described her as witty, sympathetic, helpful, and compassionate. She had an outgoing personality and a passion for literature. At the same time, Hall frequently talked with family and friends about philosophy and how she was disappointed with the state of welfare. In 1968, aged 23, Hall carefully monitored the political situation in America, including the 1968 Democratic National Convention in Chicago, which was marked by violent clashes. She was active in the peace movement and food boycotts, including the Mobilization Committee to End the War in Vietnam. Despite Hall's participation in political activities, her urging of social change, and her work to aid individuals and families, her mother could see that Camilla had become dissatisfied with her work.", "title": "Post-college" }, { "paragraph_id": 10, "text": "In November 1969, Hall moved to Topanga, a northern suburb of Los Angeles, California. In March 1970, she moved into Los Angeles proper, settling in west Los Angeles. According to Rachael Hanel, \"She lived off her savings, interest income from a trust, money from her parents, and selling her simple, Rubenesque line drawings.\" Although Hall did not express dissatisfaction with being an artist, she decided to move again.", "title": "Move to California" }, { "paragraph_id": 11, "text": "In February 1971, Hall moved to Berkeley in northern California, which had become a center of political activism and social movements. In May 1971, Hall moved into an apartment complex on Channing Way, where she met Patricia Soltysik. The two women began a lesbian relationship, the first that Hall had conducted openly. Hall wrote a love poem to Soltysik titled \"Mizmoon\", which became her nickname for Soltysik.", "title": "Move to California" }, { "paragraph_id": 12, "text": "In Berkeley, Hall continued being politically active. She participated in the People's Park reoccupation during the summer of 1972, following the shootings there the year before. She and Soltysik became involved with the Venceremos prison outreach project, through which they became associates of two white men, Russ Little and Willie Wolfe, who were also assisting in prisoner outreach.", "title": "Move to California" }, { "paragraph_id": 13, "text": "In October 1972, Hall traveled to Europe. She stayed with friends while she traveled for three months. Once she returned to California, she continued being politically active. Through her association with Soltysik, Little, and Wolfe, she became a founding member of the Symbionese Liberation Army, a small, radical leftist group. Joe Remiro and Thero Wheeler trained the other members in handling weapons and explosives.
Remiro was a Vietnam War veteran.", "title": "Move to California" }, { "paragraph_id": 14, "text": "The SLA gained notoriety in November 1973 by claiming credit for the assassination of Marcus Foster, Superintendent of the Oakland Public Schools and the first black superintendent of any major city's school district. Three \"soldiers\" also wounded his deputy. In January 1974 the SLA base was moved to Concord, California, where Nancy Ling Perry rented a house under an assumed name. Russ Little and Joe Remiro were arrested after a police stop and confrontation, convicted and sentenced to prison.", "title": "Move to California" }, { "paragraph_id": 15, "text": "In February 1974 the SLA kidnapped heiress Patty Hearst. They indoctrinated her, and she later said she chose to join them. Hall and Hearst were identified from security camera images as participants in the April 15, 1974, armed robbery of the Hibernia Bank in San Francisco. Two civilians were shot during the robbery.", "title": "Move to California" }, { "paragraph_id": 16, "text": "The police kept up pressure on the group, which moved to a house in Los Angeles. There, on May 17, 1974, Hall died in a shootout with police in which five other SLA members also died. As their hideout burned, Hall and Nancy Ling Perry exited from the back door. Police claimed that Perry came out firing a revolver and Hall was firing an automatic pistol. Police shot them immediately, killing both. Perry was shot twice: one shot hit her right lung, the other severed her spine. Hall was shot once in the forehead. Angela Atwood, another SLA member, pulled Hall's body back into the burning house. Atwood died in the fire.", "title": "LA shootout" }, { "paragraph_id": 17, "text": "Investigators working for Hall's parents claimed that Perry had walked out of the house intending to surrender.", "title": "LA shootout" }, { "paragraph_id": 18, "text": "Hall's parents held a funeral for their daughter on May 23, 1974, at St. John's Lutheran Church in Lincolnwood, Illinois, a Chicago suburb, where her father was pastor. Seven of his fellow Lutheran ministers conducted the service. Camilla Hall's name was not mentioned. Her ashes were buried on August 19, 1974, in a small country graveyard alongside her late siblings, each of whom had died before she was 16. Her parents also have plots there.", "title": "Funeral" } ]
Camilla Christine Hall was a member of the Symbionese Liberation Army (SLA) and a social worker. She is best known for her membership in the SLA, a small, far-left militant group that committed violent acts between 1973 and 1975. They assassinated Marcus Foster, Superintendent of the Oakland Public Schools and the first black superintendent of any major school system, kidnapped white heiress Patty Hearst, and committed armed robbery of banks. Hall, one of the majority of white members in the group, died on May 17, 1974, with five other SLA members in a shootout with the Los Angeles Police Department in that city. During this, the house where the SLA members were making their stand caught fire. Police fatally shot both Hall and Nancy Ling Perry as they left the house; according to police, the two women came out firing their own pistols.
2002-01-17T18:46:11Z
2023-09-12T07:54:47Z
[ "Template:Short description", "Template:Infobox person", "Template:Main", "Template:Commons category", "Template:Symbionese", "Template:Authority control", "Template:Clear left", "Template:Reflist", "Template:Cite web", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Camilla_Hall
7,800
Clone
Clone or Clones or Cloning or Cloned or The Clone may refer to:
[ { "paragraph_id": 0, "text": "Clone or Clones or Cloning or Cloned or The Clone may refer to:", "title": "" } ]
Clone or Clones or Cloning or Cloned or The Clone may refer to:
2002-02-25T15:43:11Z
2023-11-22T16:43:45Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Lang", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Clone
7,801
Critical psychology
Critical psychology is a perspective on psychology that draws extensively on critical theory. Critical psychology challenges the assumptions, theories and methods of mainstream psychology and attempts to apply psychological understandings in different ways, often looking towards social change as a means of preventing and treating psychopathology. Critical psychologists believe that mainstream psychology fails to consider how power differences and discrimination between social classes and groups can impact an individual's or a group's mental and physical well-being. Mainstream psychology addresses this only in part, by attempting to explain behavior at the individual level. However, it largely ignores institutional racism, postcolonialism and deficits in social justice for minority groups based on differences in observable characteristics such as gender, ethnicity, religion, sexual orientation, or disability. Criticisms of mainstream psychology consistent with current critical psychology usage have existed since psychology's modern development in the late 19th century. Use of the term critical psychology started in the 1970s at the Freie Universität Berlin. The German branch of critical psychology predates and has developed largely separately from the rest of the field. As of May 2007, only a few works have been translated into English. The German Critical Psychology movement is rooted in the post-war student revolt of the late 1960s; see German student movement. Marx's Critique of Political Economy played an important role in the German branch of the student revolt, which was centered in West Berlin. At that time, the capitalist city of West Berlin was surrounded by communist-ruled East Germany, and represented a "hot spot" of political and ideological controversy for the revolutionary German students. The sociological foundations of critical psychology are decidedly Marxist. One of the most important and sophisticated books in the German development of the field is the Grundlegung der Psychologie (Foundations of Psychology) by Klaus Holzkamp, who might be considered the theoretical founder of German critical psychology. Holzkamp wrote two books on theory of science and one on sensory perception before publishing the Grundlegung der Psychologie in 1983. Holzkamp believed his work provided a solid paradigm for psychological research because he viewed psychology as a pre-paradigmatic scientific discipline (T.S. Kuhn had used the term "pre-paradigmatic" for social science). Holzkamp mostly based his sophisticated attempt to provide a comprehensive and integrated set of categories defining the field of psychological research on Aleksey Leontyev's approach to cultural–historical psychology and activity theory. Leontyev had seen human action as a result of biological as well as cultural evolution and, drawing on Marx's materialist conception of culture, stressed that individual cognition is always part of social action which in turn is mediated by man-made tools (cultural artifacts), language and other man-made systems of symbols, which he viewed as a major distinguishing feature of human culture and, thus, human cognition. Another important source was Lucien Sève's theory of personality, which provided the concept of "social activity matrices" as a mediating structure between individual and social reproduction.
At the same time, the Grundlegung systematically integrated previous specialized work done at Free University of Berlin in the 1970s by critical psychologists who also had been influenced by Marx, Leontyev, and Sève. This included books on animal behavior/ethology, sensory perception, motivation and cognition. He also incorporated ideas from Freud's psychoanalysis and Merleau-Ponty's phenomenology into his approach. One core result of Holzkamp's historical and comparative analysis of human reproductive action, perception and cognition is a very specific concept of meaning that identifies symbolic meaning as historically and culturally constructed, purposeful conceptual structures that humans create in close relationship to material culture and within the context of historically specific formations of social reproduction. Coming from this phenomenological perspective on culturally mediated and socially situated action, Holzkamp launched a methodological attack on behaviorism (which he termed S–R (stimulus–response) psychology) based on linguistic analysis, showing in minute detail the rhetorical patterns by which this approach to psychology creates the illusion of "scientific objectivity" while at the same time losing relevance for understanding culturally situated, intentional human actions. Against this approach, he developed his own approach to generalization and objectivity, drawing on ideas from Kurt Lewin in Chapter 9 of Grundlegung der Psychologie. His last major publication before his death in 1995 was about learning. It appeared in 1993 and contained a phenomenological theory of learning from the standpoint of the subject. One important concept Holzkamp developed was "reinterpretation" of theories developed by conventional psychology. This meant looking at these concepts from the standpoint of the paradigm of critical psychology, thereby integrating their useful insights into critical psychology while at the same time identifying and criticizing their limiting implications: in the case of S–R psychology, the rhetorical elimination of the subject and of intentional action; in the case of cognitive psychology, which did take subjective motives and intentional actions into account, methodological individualism. The first part of the book thus contains an extensive look at the history of psychological theories of learning and a minute re-interpretation of those concepts from the perspective of critical psychology, which focuses on intentional action situated in specific socio-historical/cultural contexts. The conceptions of learning he found most useful in his own detailed analysis of "classroom learning" came from cognitive anthropologists Jean Lave (situated learning) and Edwin Hutchins (distributed cognition). The book's second part contained an extensive analysis of the modern state's institutionalized forms of "classroom learning" as the cultural–historical context that shapes much of modern learning and socialization. In this analysis, he heavily drew upon Michel Foucault's Discipline and Punish. Holzkamp felt that classroom learning, as the historically specific form of learning, does not make full use of students' potentials but rather limits them through a number of "teaching strategies." Part of his motivation for the book was to look for alternative forms of learning that made use of the enormous potential of the human psyche in more fruitful ways.
Consequently, in the last section of the book, Holzkamp discusses forms of "expansive learning" that seem to avoid the limitations of classroom learning, such as apprenticeship and learning in contexts other than classrooms. This search culminated in plans to write a major work on life leadership in the specific historical context of modern (capitalist) society. Due to his death in 1995, this work never got past the stage of early (and premature) conceptualizations, some of which were published in the journals Forum Kritische Psychologie and Argument. In the 1960s and 1970s, the term radical psychology was used by psychologists internationally to denote a branch of the field which rejected mainstream psychology's focus on the individual as the basic unit of analysis and sole source of psychopathology. Instead, radical psychologists examined the role of society in causing and treating problems and looked towards social change as an alternative to therapy to treat mental illness and as a means of preventing psychopathology. Within psychiatry the term anti-psychiatry was often used, and now British activists prefer the term critical psychiatry. Critical psychology is currently the preferred term for the branch of the discipline keen to find alternatives to the way mainstream psychology reduces human experience to the level of the individual and thereby strips away possibilities for radical social change. Starting in the 1990s a new wave of books started to appear on critical psychology, the most influential being the edited book Critical Psychology by Dennis Fox and Isaac Prilleltensky. Various introductory texts to critical psychology written in the United Kingdom have tended to focus on discourse, but this has been seen by some proponents of critical psychology as a reduction of human experience to language, which is as politically dangerous as the way mainstream psychology reduces experience to the individual mind. Attention to language and ideological processes, others would argue, is essential to effective critical psychology – it is not simply a matter of applying mainstream psychological concepts to issues of social change. In 1999 Ian Parker published an influential manifesto in both the online journal Radical Psychology and the Annual Review of Critical Psychology. This manifesto argues that critical psychology should include the following four components: There are a few international journals devoted to critical psychology, including the no longer published International Journal of Critical Psychology (continued in the journal Subjectivity) and the Annual Review of Critical Psychology. The journals still tend to be directed to an academic audience, though the Annual Review of Critical Psychology runs as an open-access online journal. There are close links between critical psychologists and critical psychiatrists in Britain through the Asylum Collective. David Smail was one of the founders of The Midlands Psychology Group, a critical psychology collective who produced a manifesto for a social materialist psychology of distress. Critical psychology courses and research concentrations are available at Manchester Metropolitan University, York St John University, the University of East London, the University of Edinburgh, the University of KwaZulu Natal, the City University of New York Graduate Center, the University of West Georgia, Point Park University, University of Guelph, York University, and Prescott College.
Undergraduate concentrations can also be found at the California Institute of Integral Studies and Prescott College. Like many critical applications, critical psychology has expanded beyond Marxist and feminist roots to benefit from other critical approaches, such as ecopsychology and transpersonal psychology. Critical psychology and related work have also sometimes been labelled radical psychology and liberation psychology. In the field of developmental psychology, the work of Erica Burman has been influential. Various sub-disciplines within psychology have begun to establish their own critical orientations. Perhaps the most extensive are critical health psychology, community psychology, and social psychology. An early international overview of critical psychology perspectives can be found in Critical Psychology: Voices for Change, edited by Tod Sloan (Macmillan, 2000). In 2015, Ian Parker edited the Handbook of Critical Psychology. At FU-Berlin, critical psychology was not really seen as a division of psychology and followed its own methodology, trying to reformulate traditional psychology on an unorthodox Marxist base and drawing from Soviet ideas of cultural–historical psychology, particularly Aleksey Leontyev. Some years ago the department of critical psychology at FU-Berlin was merged into the traditional psychology department. An April 2009 issue of the journal Theory & Psychology (edited by Desmond Painter, Athanasios Marvakis, and Leendert Mos) is devoted to an examination of German critical psychology. The complex sociopolitical history of South Africa, and its relationship with mainstream psychology, created a setting in which critical psychology could be impactful. South Africa is a good example of a context in which, during the country's apartheid era, mainstream psychology positioned itself alongside neo-colonialism, racism, and capitalist exploitation, which led to the need for critical alternatives within the field that could challenge its ideological complicities. During apartheid, mainstream psychology supported the oppressive political system, some psychologists actively and others passively. In the early 1980s, at the height of apartheid, progressive white psychologists and a growing number of black psychologists began to research and practice alternative programmes to critique and resist mainstream psychology's role in perpetuating apartheid in South Africa. In this way, critical psychology started to develop in South Africa. As is the case in other parts of the world, critical psychology in South Africa was born from interrogating psychology in relation to politics. Firstly, psychology was accused of being a product of, and supporter of, an oppressive political system in which its supposed neutrality and scientific objectivity were informed by the sectors of society that benefited from the ideological and economic dominance that it upheld. Secondly, once critical psychologists in South Africa revealed the ideological flaws in mainstream psychology within the country's context, work began to reconfigure the field as a progressive and socially relevant practice with theoretical and methodological approaches that could benefit all members of South African society. The establishment of critical psychology in South Africa took various forms between 1980 and 1994.
Although the field was not necessarily fully formalised during this time, spaces and organisations were created for its ideas to be expressed and developed: such as in the University of Cape Town's (UCT) psychology department, the formation of the Organisation for Appropriate Social Services in South Africa (OASSSA), Psychologists Against Apartheid, the South African Health and Social Services Organisation (SAHSSO), and the establishment of the academic journal Psychology in Society (PINS). Some of the main theoretical and practical achievements of these developments were: the forging of a way to critique the categories of class, race, gender, and other structural factors impacting the discipline of psychology, the encouragement of students to think critically about the politics of psychology, and the rebuilding of international links as well as relationships with other social and health sciences in South Africa. However, not all these initiatives continued after the end of the political struggle and the transition to democracy. After 1994, professional psychology in South Africa was reorganised through the establishment of the Professional Board for Psychology that exists within the Health Professions Council of South Africa (HPCSA). This statutory body regulates the profession with its systems of licensing and certification. Within these systems, critical psychology is more of an approach to the field than it is a professional category on its own. From the 2000s until recent times, critical psychology moved more toward studying certain domains, such as gender or race, and in the process, the overarching project of establishing a formalised field of critical psychology has either been discarded or broadened to refer to anything that is 'non-mainstream' in psychology. Critical psychology in South Africa is therefore mostly applied as a theoretical approach. The doctoral program in Critical Social/Personality Psychology and Environmental Psychology at the CUNY Graduate Center and the doctoral program in Critical Psychology at Point Park University in Pittsburgh, PA, are the only doctoral programs in the United States devoted specifically to critical psychology. Prescott College in Prescott, Arizona offers an online Master's program in Critical Psychology and Human Services and has a critically oriented undergraduate program. The California Institute of Integral Studies in San Francisco also offers the Bachelor's Completion Program with a minor in Critical Psychology, and critical perspectives are sometimes encountered in traditional universities, perhaps especially within community psychology programs. The University of West Georgia offers a Ph.D. in Consciousness and Society with critical psychology being one of the three main theoretical orientations. North American efforts include the 1993 founding of RadPsyNet, the 1997 publication of Critical Psychology: An Introduction (edited by Dennis Fox and Isaac Prilleltensky; expanded 2009 edition edited by Dennis Fox, Isaac Prilleltensky, and Stephanie Austin), the 2001 Monterey Conference on Critical Psychology, and underlying themes of many contributions to the Journal of Social Action in Counseling and Psychology.
[ { "paragraph_id": 0, "text": "Critical psychology is a perspective on psychology that draws extensively on critical theory. Critical psychology challenges the assumptions, theories and methods of mainstream psychology and attempts to apply psychological understandings in different ways, often looking towards social change as a means of preventing and treating psychopathology.", "title": "" }, { "paragraph_id": 1, "text": "Critical psychologists believe that mainstream psychology fails to consider how power differences and discrimination between social classes and groups can impact an individual's or a group's mental and physical well-being. Mainstream psychology does this only in part by attempting to explain behavior at the individual level. However, it largely ignores institutional racism, postcolonialism and deficits in social justice for minority groups based on differences in observable characteristics such as gender, ethnicity, religion religious minority, sexual orientation, LGBTQ+ or disability.", "title": "" }, { "paragraph_id": 2, "text": "Criticisms of mainstream psychology consistent with current critical psychology usage have existed since psychology's modern development in the late 19th century. Use of the term critical psychology started in the 1970s at the Freie Universität Berlin. The German branch of critical psychology predates and has developed largely separately from the rest of the field. As of May 2007, only a few works have been translated into English. The German Critical Psychology movement is rooted in the post-war student revolt of the late 1960s; see German student movement. Marx's Critique of Political Economy played an important role in the German branch of the student revolt, which was centered in West Berlin. At that time, the capitalist city of West Berlin was surrounded by communist-ruled East Germany, and represented a \"hot spot\" of political and ideological controversy for the revolutionary German students. The sociological foundations of critical psychology are decidedly Marxist.", "title": "Origins" }, { "paragraph_id": 3, "text": "One of the most important and sophisticated books in the German development of the field is the Grundlegung der Psychologie (Foundations of Psychology) by Klaus Holzkamp, who might be considered the theoretical founder of German critical psychology. Holzkamp wrote two books on theory of science and one on sensory perception before publishing the Grundlegung der Psychologie in 1983. Holzkamp believed his work provided a solid paradigm for psychological research because viewed psychology as a pre-paradigmatic scientific discipline (T.S. Kuhn had used the term \"pre-paradigmatic\" for social science).", "title": "Origins" }, { "paragraph_id": 4, "text": "Holzkamp mostly based his sophisticated attempt to provide a comprehensive and integrated set of categories defining the field of psychological research on Aleksey Leontyev's approach to cultural–historical psychology and activity theory. Leontyev had seen human action as a result of biological as well as cultural evolution and, drawing on Marx's materialist conception of culture, stressed that individual cognition is always part of social action which in turn is mediated by man-made tools (cultural artifacts), language and other man-made systems of symbols, which he viewed as a major distinguishing feature of human culture and, thus, human cognition. 
Another important source was Lucien Sève's theory of personality, which provided the concept of \"social activity matrices\" as a mediating structure between individual and social reproduction. At the same time, the Grundlegung systematically integrated previous specialized work done at Free University of Berlin in the 1970s by critical psychologists who also had been influenced by Marx, Leontyev, and Sève. This included books on animal behavior/ethology, sensory perception, motivation and cognition. He also incorporated ideas from Freud's psychoanalysis and Merleau-Ponty's phenomenology into his approach.", "title": "Origins" }, { "paragraph_id": 5, "text": "One core result of Holzkamp's historical and comparative analysis of human reproductive action, perception and cognition is a very specific concept of meaning that identifies symbolic meaning as historically and culturally constructed, purposeful conceptual structures that humans create in close relationship to material culture and within the context of historically specific formations of social reproduction.", "title": "Origins" }, { "paragraph_id": 6, "text": "Coming from this phenomenological perspective on culturally mediated and socially situated action, Holzkamp launched a methodological attack on behaviorism (which he termed S–R (stimulus–response) psychology) based on linguistic analysis, showing in minute detail the rhetorical patterns by which this approach to psychology creates the illusion of \"scientific objectivity\" while at the same time losing relevance for understanding culturally situated, intentional human actions. Against this approach, he developed his own approach to generalization and objectivity, drawing on ideas from Kurt Lewin in Chapter 9 of Grundlegung der Psychologie.", "title": "Origins" }, { "paragraph_id": 7, "text": "His last major publication before his death in 1995 was about learning. It appeared in 1993 and contained a phenomenological theory of learning from the standpoint of the subject. One important concept Holzkamp developed was \"reinterpretation\" of theories developed by conventional psychology. This meant looking at these concepts from the standpoint of the paradigm of critical psychology, thereby integrating their useful insights into critical psychology while at the same time identifying and criticizing their limiting implications: in the case of S–R psychology, the rhetorical elimination of the subject and of intentional action; in the case of cognitive psychology, which did take subjective motives and intentional actions into account, methodological individualism.", "title": "Origins" }, { "paragraph_id": 8, "text": "The first part of the book thus contains an extensive look at the history of psychological theories of learning and a minute re-interpretation of those concepts from the perspective of critical psychology, which focuses on intentional action situated in specific socio-historical/cultural contexts. The conceptions of learning he found most useful in his own detailed analysis of \"classroom learning\" came from cognitive anthropologists Jean Lave (situated learning) and Edwin Hutchins (distributed cognition).", "title": "Origins" }, { "paragraph_id": 9, "text": "The book's second part contained an extensive analysis of the modern state's institutionalized forms of \"classroom learning\" as the cultural–historical context that shapes much of modern learning and socialization. In this analysis, he heavily drew upon Michel Foucault's Discipline and Punish.
Holzkamp felt that classroom learning as the historically specific form of learning does not make full use of students' potentials, but rather limits their learning potentials by a number of \"teaching strategies.\" Part of his motivation for the book was to look for alternative forms of learning that made use of the enormous potential of the human psyche in more fruitful ways. Consequently, in the last section of the book, Holzkamp discusses forms of \"expansive learning\" that seem to avoid the limitations of classroom learning, such as apprenticeship and learning in contexts other than classrooms.", "title": "Origins" }, { "paragraph_id": 10, "text": "This search culminated in plans to write a major work on life leadership in the specific historical context of modern (capitalist) society. Due to his death in 1995, this work never got past the stage of early (and premature) conceptualizations, some of which were published in the journals Forum Kritische Psychologie and Argument.", "title": "Origins" }, { "paragraph_id": 11, "text": "In the 1960s and 1970s the term radical psychology was used by psychologists internationally to denote a branch of the field which rejected mainstream psychology's focus on the individual as the basic unit of analysis and sole source of psychopathology. Instead, radical psychologists examined the role of society in causing and treating problems and looked towards social change as an alternative to therapy to treat mental illness and as a means of preventing psychopathology. Within psychiatry the term anti-psychiatry was often used, and now British activists prefer the term critical psychiatry. Critical psychology is currently the preferred term for the discipline of psychology keen to find alternatives to the way the discipline of psychology reduces human experience to the level of the individual and thereby strips away possibilities for radical social change.", "title": "Origins" }, { "paragraph_id": 12, "text": "Starting in the 1990s, a new wave of books began to appear on critical psychology, the most influential being the edited book Critical Psychology by Dennis Fox and Isaac Prilleltensky. Various introductory texts to critical psychology written in the United Kingdom have tended to focus on discourse, but this has been seen by some proponents of critical psychology as a reduction of human experience to language, which is as politically dangerous as the way mainstream psychology reduces experience to the individual mind. Attention to language and ideological processes, others would argue, is essential to effective critical psychology – it is not simply a matter of applying mainstream psychological concepts to issues of social change.", "title": "Origins" }, { "paragraph_id": 13, "text": "In 1999, Ian Parker published an influential manifesto in both the online journal Radical Psychology and the Annual Review of Critical Psychology. This manifesto argues that critical psychology should include the following four components:", "title": "Origins" }, { "paragraph_id": 14, "text": "There are a few international journals devoted to critical psychology, including the no longer published International Journal of Critical Psychology (continued in the journal Subjectivity) and the Annual Review of Critical Psychology. The journals still tend to be directed to an academic audience, though the Annual Review of Critical Psychology runs as an open-access online journal.
There are close links between critical psychologists and critical psychiatrists in Britain through the Asylum Collective. David Smail was one of the founders of The Midlands Psychology Group, a critical psychology collective that produced a manifesto for a social materialist psychology of distress. Critical psychology courses and research concentrations are available at Manchester Metropolitan University, York St John University, the University of East London, the University of Edinburgh, the University of KwaZulu-Natal, the City University of New York Graduate Center, the University of West Georgia, Point Park University, University of Guelph, York University, and Prescott College. Undergraduate concentrations can also be found at the California Institute of Integral Studies and Prescott College.", "title": "Origins" }, { "paragraph_id": 15, "text": "Like many critical applications, critical psychology has expanded beyond Marxist and feminist roots to benefit from other critical approaches. Examples include ecopsychology and transpersonal psychology. Critical psychology and related work have also sometimes been labelled radical psychology and liberation psychology. In the field of developmental psychology, the work of Erica Burman has been influential.", "title": "Extensions" }, { "paragraph_id": 16, "text": "Various sub-disciplines within psychology have begun to establish their own critical orientations. Perhaps the most extensive are critical health psychology, community psychology, and social psychology.", "title": "Extensions" }, { "paragraph_id": 17, "text": "An early international overview of critical psychology perspectives can be found in Critical Psychology: Voices for Change, edited by Tod Sloan (Macmillan, 2000). In 2015, Ian Parker edited the Handbook of Critical Psychology.", "title": "Internationally" }, { "paragraph_id": 18, "text": "At FU-Berlin, critical psychology was not really seen as a division of psychology and followed its own methodology, trying to reformulate traditional psychology on an unorthodox Marxist base and drawing from Soviet ideas of cultural–historical psychology, particularly Aleksey Leontyev. Some years ago the department of critical psychology at FU-Berlin was merged into the traditional psychology department.", "title": "Internationally" }, { "paragraph_id": 19, "text": "An April 2009 issue of the journal Theory & Psychology (edited by Desmond Painter, Athanasios Marvakis, and Leendert Mos) is devoted to an examination of German critical psychology.", "title": "Internationally" }, { "paragraph_id": 20, "text": "The complex sociopolitical history of South Africa, and its relationship with mainstream psychology, created a setting in which critical psychology could be impactful. South Africa is a good example of a context in which mainstream psychology positioned itself alongside neo-colonialism, racism, and capitalist exploitation during the country's apartheid era, which led to the need for critical alternatives within the field that could challenge ideological complicities. During apartheid, mainstream psychology supported the oppressive political system, with some psychologists doing so actively and others passively. In the early 1980s, at the height of apartheid, progressive white psychologists and a growing number of black psychologists began to research and practice alternative programmes to critique and resist mainstream psychology's role in perpetuating apartheid in South Africa.
In this way, critical psychology started to develop in South Africa.", "title": "Internationally" }, { "paragraph_id": 21, "text": "As is the case in other parts of the world, critical psychology in South Africa was born from interrogating psychology in relation to politics. Firstly, psychology was accused of being a product of, and supporter of, an oppressive political system in which its supposed neutrality and scientific objectivity were informed by the sectors of society that benefited from the ideological and economic dominance that it upheld. Secondly, once critical psychologists in South Africa revealed the ideological flaws in mainstream psychology within the country's context, work began to reconfigure the field as a progressive and socially relevant practice with theoretical and methodological approaches that could benefit all members of South African society.", "title": "Internationally" }, { "paragraph_id": 22, "text": "The establishment of critical psychology in South Africa took various forms between 1980 and 1994. Although the field was not necessarily fully formalised during this time, spaces and organisations were created for its ideas to be expressed and developed, such as in the University of Cape Town's (UCT) psychology department, the formation of the Organisation for Appropriate Social Services in South Africa (OASSSA), Psychologists Against Apartheid, the South African Health and Social Services Organisation (SAHSSO), and the establishment of the academic journal Psychology in Society (PINS). Some of the main theoretical and practical achievements of these developments were: the forging of a way to critique the categories of class, race, gender, and other structural factors impacting the discipline of psychology, the encouragement of students to think critically about the politics of psychology, and rebuilding international links as well as relationships with other social and health sciences in South Africa.", "title": "Internationally" }, { "paragraph_id": 23, "text": "However, not all these initiatives continued after the end of the political struggle and the transition to democracy. After 1994, professional psychology in South Africa was reorganised through the establishment of the Professional Board for Psychology, which exists within the Health Professions Council of South Africa (HPCSA). This statutory body regulates the profession with its systems of licensing and certification. Within these systems, critical psychology is more of an approach to the field than it is a professional category on its own. From the 2000s until recent times, critical psychology moved more toward studying certain domains, such as gender or race, and in the process, the overarching project of establishing a formalised field of critical psychology has either been discarded or broadened to refer to anything that is 'non-mainstream' in psychology. Critical psychology in South Africa is therefore mostly applied as a theoretical approach.", "title": "Internationally" }, { "paragraph_id": 24, "text": "The doctoral program in Critical Social/Personality Psychology and Environmental Psychology at the CUNY Graduate Center and the doctoral program in Critical Psychology at Point Park University, in Pittsburgh, PA are the only critical psychology specific doctoral programs in the United States. Prescott College in Prescott, Arizona offers an online Master's program in Critical Psychology and Human Services and has a critically oriented undergraduate program.
The California Institute of Integral Studies in San Francisco also offers the Bachelor's Completion Program with a minor in Critical Psychology, and critical perspectives are sometimes encountered in traditional universities, perhaps especially within community psychology programs. The University of West Georgia offers a Ph.D. in Consciousness and Society with critical psychology being one of the three main theoretical orientations. North American efforts include the 1993 founding of RadPsyNet, the 1997 publication of Critical Psychology: An Introduction (edited by Dennis Fox and Isaac Prilleltensky; expanded 2009 edition edited by Dennis Fox, Isaac Prilleltensky, and Stephanie Austin), the 2001 Monterey Conference on Critical Psychology, and the underlying themes of many contributions to the Journal of Social Action in Counseling and Psychology.", "title": "Internationally" } ]
Critical psychology is a perspective on psychology that draws extensively on critical theory. Critical psychology challenges the assumptions, theories and methods of mainstream psychology and attempts to apply psychological understandings in different ways, often looking towards social change as a means of preventing and treating psychopathology. Critical psychologists believe that mainstream psychology fails to consider how power differences and discrimination between social classes and groups can impact an individual's or a group's mental and physical well-being. Mainstream psychology addresses this only in part, by attempting to explain behavior at the individual level. However, it largely ignores institutional racism, postcolonialism and deficits in social justice for minority groups based on differences in observable characteristics such as gender, ethnicity, religion, sexual orientation or disability.
2001-12-06T15:51:10Z
2023-12-29T06:22:54Z
[ "Template:Reflist", "Template:Cite web", "Template:Cite journal", "Template:Critical theory", "Template:Psychology sidebar", "Template:Lang", "Template:Portal", "Template:Wikiversity", "Template:Psychology", "Template:Short description", "Template:When", "Template:Citation" ]
https://en.wikipedia.org/wiki/Critical_psychology
7,803
Crossfire
A crossfire (also known as interlocking fire) is a military term for the siting of weapons (often automatic weapons such as assault rifles or sub-machine guns) so that their arcs of fire overlap. This tactic came to prominence in World War I. Siting weapons this way is an example of the application of the defensive principle of mutual support. The advantage of siting weapons that mutually support one another is that it is difficult for an attacker to find a covered approach to any one defensive position. Use of armour, air support, indirect fire support, and stealth are tactics that may be used to assault a defensive position. However, when combined with land mines, snipers, barbed wire, and air cover, crossfire became a difficult tactic to counter in the early 20th century. The tactic of using overlapping arcs of fire came to prominence during World War I, where it was a feature of trench warfare. Machine guns were placed in groups, called machine-gun nests, and they protected the front of the trenches. Many people died in futile attempts to charge across the no man's land where these crossfires were set up. After these attacks, many bodies could be found in no man's land. To be "caught in the crossfire" is an expression that often refers to unintended casualties (bystanders, etc.) who were killed or wounded by being exposed to the gunfire of a battle or gun fight, such as being in a position to be hit by bullets from either side. The phrase has come to mean any injury, damage or harm (physical or otherwise) caused to a third party due to the action of belligerents (collateral damage).
[ { "paragraph_id": 0, "text": "A crossfire (also known as interlocking fire) is a military term for the siting of weapons (often automatic weapons such as assault rifles or sub-machine guns) so that their arcs of fire overlap. This tactic came to prominence in World War I.", "title": "" }, { "paragraph_id": 1, "text": "Siting weapons this way is an example of the application of the defensive principle of mutual support. The advantage of siting weapons that mutually support one another is that it is difficult for an attacker to find a covered approach to any one defensive position. Use of armour, air support, indirect fire support, and stealth are tactics that may be used to assault a defensive position. However, when combined with land mines, snipers, barbed wire, and air cover, crossfire became a difficult tactic to counter in the early 20th century.", "title": "" }, { "paragraph_id": 2, "text": "The tactic of using overlapping arcs of fire came to prominence during World War I where it was a feature of trench warfare. Machine guns were placed in groups, called machine-gun nests, and they protected the front of the trenches. Many people died in futile attempts to charge across the no man's land where these crossfires were set up. After these attacks many bodies could be found in the no man's land.", "title": "Trench warfare" }, { "paragraph_id": 3, "text": "To be \"caught in the crossfire\" is an expression that often refers to unintended casualties (bystanders, etc.) who were killed or wounded by being exposed to the gunfire of a battle or gun fight, such as in a position to be hit by bullets of either side. The phrase has come to mean any injury, damage or harm (physical or otherwise) caused to a third party due to the action of belligerents (collateral damage).", "title": "\"Caught in the crossfire\"" } ]
A crossfire is a military term for the siting of weapons so that their arcs of fire overlap. This tactic came to prominence in World War I. Siting weapons this way is an example of the application of the defensive principle of mutual support. The advantage of siting weapons that mutually support one another is that it is difficult for an attacker to find a covered approach to any one defensive position. Use of armour, air support, indirect fire support, and stealth are tactics that may be used to assault a defensive position. However, when combined with land mines, snipers, barbed wire, and air cover, crossfire became a difficult tactic to counter in the early 20th century.
2002-01-18T14:22:03Z
2023-11-23T06:37:34Z
[ "Template:About", "Template:Wiktionary", "Template:Citation needed", "Template:Reflist", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Crossfire
7,805
CNO
CNO may refer to:
[ { "paragraph_id": 0, "text": "CNO may refer to:", "title": "" } ]
CNO may refer to:
C/N0, the carrier-to-noise-density ratio of a signal
Casualty notification officer, a person responsible for informing relatives of death or injury
Chief networking officer, a business role
Chief nursing officer, a nursing management position
Chief of Naval Operations, the head of the United States Navy
Chino Airport, in California, IATA symbol: CNO
Chronic nuisance ordinance, a law that aims to evict tenants for reporting crime
Cis-Neptunian object, an astronomical body within the orbit of Neptune
CNO cycle, a stellar nuclear fusion reaction
Coconut oil, an edible oil
Computer network operations, the optimization and use of digital telecommunications
CNO Financial Group, an American financial services holding company
CNO (gene), which encodes the protein cappuccino homolog
Clozapine N-oxide, a synthetic ligand which activates a receptor
Fulminate, a chemical compound containing the CNO− ion
2022-11-25T22:56:48Z
[ "Template:Chem2", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/CNO
7,806
Cruising (maritime)
Cruising is a maritime activity that involves staying aboard a watercraft for extended periods of time when the vessel is traveling on water at a steady speed. Cruising generally refers to leisurely trips on yachts and luxury cruiseships, with durations varying from day-trips to months-long round-the-world voyages. "The sea, the great unifier, is man's only hope. Now, as never before, the old phrase has a literal meaning: We are all in the same boat." Jacques Cousteau Boats were almost exclusively used for working purposes prior to the nineteenth century. The philosopher Henry David Thoreau, whose book Canoeing in the Wilderness chronicled his 1857 canoe voyaging in the wilderness of Maine, is considered the first to convey the enjoyment of the spiritual and lifestyle aspects of cruising. The modern conception of cruising for pleasure was first popularised by the Scottish explorer and sportsman John MacGregor. He was introduced to the canoes and kayaks of the Native Americans on a camping trip in 1858, and on his return to the United Kingdom constructed his own 'double-ended' canoe in Lambeth. The boat, nicknamed 'Rob Roy' after a famous relative of his, was built of lapstrake oak planking, decked in cedar covered with rubberized canvas with an open cockpit in the center. He cruised around the waterways of Britain, Europe and the Middle East and wrote a popular book about his experiences, A Thousand Miles in the Rob Roy Canoe. In 1866, MacGregor was a moving force behind the establishment of the Royal Canoe Club, the first club in the world to promote pleasure cruising. The first recorded regatta was held on April 27, 1867, and it received Royal patronage in 1873. The latter part of the century saw cruising for leisure being enthusiastically taken up by the middle class. The author Robert Louis Stevenson wrote An Inland Voyage in 1877 as a travelogue on his canoeing trip through France and Belgium. Stevenson and his companion, Sir Walter Grindlay Simpson, travelled in two 'Rob Roys' along the Oise River and witnessed the Romantic beauty of rural Europe. The Canadian-American Joshua Slocum was one of the first people to carry out a long-distance sailing voyage for pleasure, circumnavigating the world between 1895 and 1898. Despite the opinion that such a voyage was impossible, Slocum rebuilt a derelict 37-foot (11 m) sloop, Spray, and sailed her single-handed around the world. His book Sailing Alone Around the World was a classic adventure, and inspired many others to take to the seas. Other cruising authors have provided both inspiration and instruction to prospective cruisers. Key among these during the post-World War II period are Electa and Irving Johnson, Miles and Beryl Smeeton, Bernard Moitessier, Peter Pye, and Eric and Susan Hiscock. During the 1970s–1990s, Robin Lee Graham, Lin and Larry Pardey, Annie Hill, Herb Payson, Linda and Steve Dashew, Margaret and Hal Roth, and Beth Leonard & Evans Starzinger provided inspiration for people to set off voyaging. The development of ocean crossing rallies, most notably the ARC (Atlantic Rally for Cruisers), has encouraged less experienced sailors to undertake ocean crossings. These rallies provide a group of sailors crossing the same ocean at the same time with safety inspections, weather information and social functions. Cruising is done on both sail and power boats, monohulls and multihulls, although sail predominates over longer distances, as ocean-going power boats are considerably more expensive to purchase and operate.
The size of the typical cruising boat has increased over the years and is currently in the range of 10 to 15 metres (33 to 50 feet), although smaller boats have been used in around-the-world trips, but are generally not recommended given the dangers involved. Many cruisers are "long term" and travel for many years; the most adventurous among them circle the globe over a period of three to ten years. Many others take a year or two off from work and school for shorter trips and the chance to experience the cruising lifestyle. Blue-water cruising, defined as long-term open-sea cruising, is more involved and inherently more dangerous than coastal cruising. Before embarking on an open-ocean voyage, planning and preparation will include studying charts, weather reports/warnings, almanacs and navigation books of the route to be followed. In addition, supplies need to be stocked (including fresh water and fuel), navigation instruments checked, the ship itself inspected, and the crew given exact instructions on the jobs they are expected to perform (e.g. the watch, which is generally 4 hours on and 4 hours off, navigation, steering, rigging sails, etc.). The crew also needs to be well trained at working together and with the ship in question. Finally, the sailor must be mentally prepared for dealing with harsh situations. There have been many well-documented cases where sailors had to be rescued simply because they were not sufficiently prepared (the sailors as well as the ship) or lacked experience for their venture and ran into serious trouble. Sailing near the coast (coastal cruising) gives a certain amount of safety. A ship is always granted 'innocent passage' through a country's territorial waters (most countries usually claim up to 22 km (14 mi) off the coast). When this method is practiced, however, if the ship needs to stop (e.g. for repairs), a trip to a customs checkpoint to have passports checked would be required. Voyages along inland waterways are called river cruises, which often involve stopping at multiple ports along the way. As many cities and towns are built around rivers and historically have relied on maritime transport, river cruise docks are frequently located in the center of cities and towns. According to Douglas Ward, "A river cruise represents life in the slow lane, sailing along at a gentle pace, soaking up the scenery, with plentiful opportunities to explore riverside towns and cities en route. It is a supremely calming experience, an antidote to the pressures of life in a fast-paced world, in surroundings that are comfortable without being fussy or pretentious, with good food and enjoyable company." River cruising is a major component of the tourist industry in many parts of the world. Cruisers use a variety of equipment and techniques to make their voyages possible, or simply more comfortable. The use of wind vane self-steering was common on long distance cruising yachts but is increasingly being supplemented or replaced by electrical auto-pilots. Though in the past many cruisers had no means of generating electricity on board and depended on kerosene and dry-cell batteries, today electrical demands are much higher and nearly all cruisers have electrical devices such as lights, communications equipment and refrigeration. Although most boats can generate power from their inboard engines, an increasing number carry auxiliary generators.
Carrying sufficient fuel to power the engine and generator over a long voyage can be a problem, so many cruising boats are equipped with other ancillary generating devices such as solar panels, wind turbines and towed turbines. Cruisers choosing to spend extended time in very remote locations with minimal access to marinas can opt to equip their vessels with watermakers (reverse-osmosis seawater desalination units) used to convert sea water to potable fresh water. Satellite communications are becoming more common on cruising boats. Many boats are now equipped with satellite telephone systems; however, these systems can be expensive to use, and may operate only in certain areas. Many cruisers still use short wave maritime SSB and amateur radio, which have no running costs. These radios provide two-way voice communications, can receive weather fax graphics or GRIB files via a laptop computer, and with a compatible modem (e.g. PACTOR) can send and receive email at very slow speed. Such emails are usually limited to basic communication using plain text, without HTML formatting or attachments. Awareness of impending weather conditions is particularly important to cruising sailors who are often far from safe harbours and need to steer clear of dangerous weather conditions. Most cruising boats are equipped with a barometer or a weather station that records barometric pressure as well as temperature and provides rudimentary forecasting. For more sophisticated weather forecasting, cruisers rely on their ability to receive forecasts by radio, phone or satellite. In order to avoid collisions with other vessels, cruisers rely on maintaining a regular watch schedule. At night, color-coded running lights help determine the position and orientation of vessels. Radar and AIS systems are often employed to detect vessels' positions and movement in all conditions (day, night, rain and fog). Cruisers navigate using paper charts and radar. Modern yachts are often also equipped with a chartplotter, which enables the use of electronic charts and is linked to GPS satellites that provide position reports. Some chartplotters have the ability to interface charts and radar images. Those who still wish to work with traditional charts as well as with GPS may do so using a Yeoman Plotter. Certain advanced sailing vessels have a completely automated sailing system which includes a plotter, as well as course correcting through a link with the ship's steering organs (e.g. sails, propeller). One such device can be found aboard the Maltese Falcon. Purchasing and maintaining a yacht can be costly. Most cruising sailors do not own a house and consider their boat their home for the duration of their cruise. Many cruisers find they spend, on average, 4% of their boat's purchase price annually on boat maintenance. Like living a conventional life on land, the cost of cruising is variable. How much a person ends up spending depends largely on their spending habits (for example, eating out a lot and frequenting marinas vs. preparing local foods aboard and anchoring out) and the type of boat (fancy modern production boats are very expensive to purchase and maintain, while low-key cruising boats often involve much lower expenses). Most long-term cruisers prefer to live a simple life, usually with far lower expenses than people who live ashore. An alternative solution is to sail on someone else's yacht.
Those who know how to sail can sometimes find boats looking for an extra crewmember for a long trip, while some non-sailors are also able to find boats willing to carry a hitch-hiker. Crew-finding websites exist to help match up people looking for a crossing with yachts with a berth available or looking for a temporary crewmember, for example Find a Crew. Another common tactic for finding a yacht is to visit local yacht clubs and marinas and get to know the sailors there, in the hope that one of them will be able to provide a berth. Travel by water brings hazards: collision, weather, and equipment failure can lead to dangerous situations such as a sinking or severely disabled and dangerous vessel. For this reason, many long-distance cruising yachts carry with them emergency equipment such as SARTs, EPIRBs and liferafts or proactive lifeboats. Medical emergencies are also of concern, as a medical emergency can occur on a long passage when the closest port is over a week away. For this reason, before going cruising, many people go through first aid training and carry medical kits. In some parts of the world (e.g., near the Horn of Africa) piracy can be a problem.
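As a rough illustration of the maintenance rule of thumb quoted in the expense discussion above (about 4% of the boat's purchase price per year), here is a minimal budgeting sketch. The purchase price and trip length are hypothetical inputs for illustration, not figures from this article.

```python
# Minimal cruising-budget sketch based on the "~4% of purchase price per year"
# maintenance rule of thumb mentioned above. All input numbers are assumptions.

def annual_maintenance(purchase_price: float, rate: float = 0.04) -> float:
    """Estimated yearly maintenance spend (same currency as purchase_price)."""
    return purchase_price * rate

def trip_maintenance_budget(purchase_price: float, years: float) -> float:
    """Maintenance budget over a cruise of the given duration in years."""
    return annual_maintenance(purchase_price) * years

if __name__ == "__main__":
    price = 100_000.0   # hypothetical purchase price
    years = 3.0         # hypothetical circumnavigation duration
    print(f"~{annual_maintenance(price):,.0f} per year in maintenance")
    print(f"~{trip_maintenance_budget(price, years):,.0f} over {years:g} years")
```

For a hypothetical 100,000 boat this works out to roughly 4,000 per year, which is one reason long-term cruisers tend toward simpler, lower-cost boats.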
[ { "paragraph_id": 0, "text": "Cruising is a maritime activity that involves staying aboard a watercraft for extended periods of time when the vessel is traveling on water at a steady speed. Cruising generally refers to leisurely trips on yachts and luxury cruiseships, with durations varying from day-trips to months-long round-the-world voyages.", "title": "" }, { "paragraph_id": 1, "text": "\"The sea, the great unifier, is man's only hope. Now, as never before, the old phrase has a literal meaning: We are all in the same boat.\"", "title": "History" }, { "paragraph_id": 2, "text": "Jacques Cousteau", "title": "History" }, { "paragraph_id": 3, "text": "Boats were almost exclusively used for working purposes prior to the nineteenth century. In 1857, the philosopher Henry David Thoreau, with his book Canoeing in Wilderness chronicling his canoe voyaging in the wilderness of Maine, is considered the first to convey the enjoyment of spiritual and lifestyle aspects of cruising.", "title": "History" }, { "paragraph_id": 4, "text": "The modern conception of cruising for pleasure was first popularised by the Scottish explorer and sportsman John MacGregor. He was introduced to the canoes and kayaks of the Native Americans on a camping trip in 1858, and on his return to the United Kingdom constructed his own 'double-ended' canoe in Lambeth. The boat, nicknamed 'Rob Roy' after a famous relative of his, was built of lapstrake oak planking, decked in cedar covered with rubberized canvas with an open cockpit in the center. He cruised around the waterways of Britain, Europe and the Middle East and wrote a popular book about his experiences, A Thousand Miles in the Rob Roy Canoe.", "title": "History" }, { "paragraph_id": 5, "text": "In 1866, Macgregor was a moving force behind the establishment of the Royal Canoe Club, the first club in the world to promote pleasure cruising. The first recorded regatta was held on April 27, 1867, and it received Royal patronage in 1873. The latter part of the century saw cruising for leisure being enthusiastically taken up by the middle class. The author Robert Louis Stevenson wrote An Inland Voyage in 1877 as a travelogue on his canoeing trip through France and Belgium. Stevenson and his companion, Sir Walter Grindlay Simpson travelled in two 'Rob Roys' along the Oise River and witnessed the Romantic beauty of rural Europe.", "title": "History" }, { "paragraph_id": 6, "text": "The Canadian-American Joshua Slocum was one of the first people to carry out a long-distance sailing voyage for pleasure, circumnavigating the world between 1895 and 1898. Despite opinion that such a voyage was impossible, Slocum rebuilt a derelict 37-foot (11 m) sloop Spray and sailed her single-handed around the world. His book Sailing Alone Around the World was a classic adventure, and inspired many others to take to the seas.", "title": "History" }, { "paragraph_id": 7, "text": "Other cruising authors have provided both inspiration and instruction to prospective cruisers. Key among these during the post World War II period are Electa and Irving Johnson, Miles and Beryl Smeeton, Bernard Moitessier, Peter Pye, and Eric and Susan Hiscock. 
During the 1970s–1990s, Robin Lee Graham, Lin and Larry Pardey, Annie Hill, Herb Payson, Linda and Steve Dashew, Margaret and Hal Roth, and Beth Leonard & Evans Starzinger provided inspiration for people to set off voyaging.", "title": "History" }, { "paragraph_id": 8, "text": "The development of ocean crossing rallies, most notably the ARC (Atlantic Rally for Cruisers), has encouraged less experienced sailors to undertake ocean crossings. These rallies provide a group of sailors crossing the same ocean at the same time with safety inspections, weather information and social functions.", "title": "History" }, { "paragraph_id": 9, "text": "Cruising is done on both sail and power boats, monohulls and multihulls, although sail predominates over longer distances, as ocean-going power boats are considerably more expensive to purchase and operate. The size of the typical cruising boat has increased over the years and is currently in the range of 10 to 15 metres (33 to 50 feet), although smaller boats have been used in around-the-world trips, but are generally not recommended given the dangers involved. Many cruisers are \"long term\" and travel for many years; the most adventurous among them circle the globe over a period of three to ten years. Many others take a year or two off from work and school for shorter trips and the chance to experience the cruising lifestyle.", "title": "Types of boats used" }, { "paragraph_id": 10, "text": "Blue-water cruising, defined as long-term open-sea cruising, is more involved and inherently more dangerous than coastal cruising. Before embarking on an open-ocean voyage, planning and preparation will include studying charts, weather reports/warnings, almanacs and navigation books of the route to be followed. In addition, supplies need to be stocked (including fresh water and fuel), navigation instruments checked, the ship itself inspected, and the crew given exact instructions on the jobs they are expected to perform (e.g. the watch, which is generally 4 hours on and 4 hours off, navigation, steering, rigging sails, etc.). The crew also needs to be well trained at working together and with the ship in question. Finally, the sailor must be mentally prepared for dealing with harsh situations. There have been many well-documented cases where sailors had to be rescued simply because they were not sufficiently prepared (the sailors as well as the ship) or lacked experience for their venture and ran into serious trouble.", "title": "Types" }, { "paragraph_id": 11, "text": "Sailing near the coast (coastal cruising) gives a certain amount of safety. A ship is always granted 'innocent passage' through a country's territorial waters (most countries usually claim up to 22 km (14 mi) off the coast). When this method is practiced, however, if the ship needs to stop (e.g. for repairs), a trip to a customs checkpoint to have passports checked would be required.", "title": "Types" }, { "paragraph_id": 12, "text": "Voyages along inland waterways are called river cruises, which often involve stopping at multiple ports along the way.
As many cities and towns are built around rivers and historically have relied on maritime transport, river cruise docks are frequently located in the center of cities and towns.", "title": "Types" }, { "paragraph_id": 13, "text": "According to Douglas Ward, \"A river cruise represents life in the slow lane, sailing along at a gentle pace, soaking up the scenery, with plentiful opportunities to explore riverside towns and cities en route. It is a supremely calming experience, an antidote to the pressures of life in a fast-paced world, in surroundings that are comfortable without being fussy or pretentious, with good food and enjoyable company.\"", "title": "Types" }, { "paragraph_id": 14, "text": "River cruising is a major component of the tourist industry in many parts of the world.", "title": "Types" }, { "paragraph_id": 15, "text": "Cruisers use a variety of equipment and techniques to make their voyages possible, or simply more comfortable. The use of wind vane self-steering was common on long distance cruising yachts but is increasingly being supplemented or replaced by electrical auto-pilots.", "title": "Equipment" }, { "paragraph_id": 16, "text": "Though in the past many cruisers had no means of generating electricity on board and depended on kerosene and dry-cell batteries, today electrical demands are much higher and nearly all cruisers have electrical devices such as lights, communications equipment and refrigeration. Although most boats can generate power from their inboard engines, an increasing number carry auxiliary generators. Carrying sufficient fuel to power the engine and generator over a long voyage can be a problem, so many cruising boats are equipped with other ancillary generating devices such as solar panels, wind turbines and towed turbines. Cruisers choosing to spend extended time in very remote locations with minimal access to marinas can opt to equip their vessels with watermakers (reverse-osmosis seawater desalination units) used to convert sea water to potable fresh water.", "title": "Equipment" }, { "paragraph_id": 17, "text": "Satellite communications are becoming more common on cruising boats. Many boats are now equipped with satellite telephone systems; however, these systems can be expensive to use, and may operate only in certain areas. Many cruisers still use short wave maritime SSB and amateur radio, which have no running costs. These radios provide two-way voice communications, can receive weather fax graphics or GRIB files via a laptop computer, and with a compatible modem (e.g. PACTOR) can send and receive email at very slow speed. Such emails are usually limited to basic communication using plain text, without HTML formatting or attachments.", "title": "Equipment" }, { "paragraph_id": 18, "text": "Awareness of impending weather conditions is particularly important to cruising sailors who are often far from safe harbours and need to steer clear of dangerous weather conditions. Most cruising boats are equipped with a barometer or a weather station that records barometric pressure as well as temperature and provides rudimentary forecasting. For more sophisticated weather forecasting, cruisers rely on their ability to receive forecasts by radio, phone or satellite.", "title": "Equipment" }, { "paragraph_id": 19, "text": "In order to avoid collisions with other vessels, cruisers rely on maintaining a regular watch schedule. At night, color-coded running lights help determine the position and orientation of vessels.
Radar and AIS systems are often employed to detect vessels' positions and movement in all conditions (day, night, rain and fog).", "title": "Equipment" }, { "paragraph_id": 20, "text": "Cruisers navigate using paper charts and radar. Modern yachts are often also equipped with a chartplotter, which enables the use of electronic charts and is linked to GPS satellites that provide position reports. Some chartplotters have the ability to interface charts and radar images. Those who still wish to work with traditional charts as well as with GPS may do so using a Yeoman Plotter. Certain advanced sailing vessels have a completely automated sailing system which includes a plotter, as well as course correcting through a link with the ship's steering organs (e.g. sails, propeller). One such device can be found aboard the Maltese Falcon.", "title": "Equipment" }, { "paragraph_id": 21, "text": "Purchasing and maintaining a yacht can be costly. Most cruising sailors do not own a house and consider their boat their home for the duration of their cruise. Many cruisers find they spend, on average, 4% of their boat's purchase price annually on boat maintenance.", "title": "Expense" }, { "paragraph_id": 22, "text": "Like living a conventional life on land, the cost of cruising is variable. How much a person ends up spending depends largely on their spending habits (for example, eating out a lot and frequenting marinas vs. preparing local foods aboard and anchoring out) and the type of boat (fancy modern production boats are very expensive to purchase and maintain, while low-key cruising boats often involve much lower expenses). Most long-term cruisers prefer to live a simple life, usually with far lower expenses than people who live ashore.", "title": "Expense" }, { "paragraph_id": 23, "text": "An alternative solution is to sail on someone else's yacht. Those who know how to sail can sometimes find boats looking for an extra crewmember for a long trip, while some non-sailors are also able to find boats willing to carry a hitch-hiker. Crew-finding websites exist to help match up people looking for a crossing with yachts with a berth available or looking for a temporary crewmember, for example Find a Crew. Another common tactic for finding a yacht is to visit local yacht clubs and marinas and get to know the sailors there, in the hope that one of them will be able to provide a berth.", "title": "Expense" }, { "paragraph_id": 24, "text": "Travel by water brings hazards: collision, weather, and equipment failure can lead to dangerous situations such as a sinking or severely disabled and dangerous vessel. For this reason, many long-distance cruising yachts carry with them emergency equipment such as SARTs, EPIRBs and liferafts or proactive lifeboats. Medical emergencies are also of concern, as a medical emergency can occur on a long passage when the closest port is over a week away. For this reason, before going cruising, many people go through first aid training and carry medical kits. In some parts of the world (e.g., near the Horn of Africa) piracy can be a problem.", "title": "Safety" } ]
Cruising is a maritime activity that involves staying aboard a watercraft for extended periods of time when the vessel is traveling on water at a steady speed. Cruising generally refers to leisurely trips on yachts and luxury cruiseships, with durations varying from day-trips to months-long round-the-world voyages.
2002-01-19T01:20:11Z
2023-08-26T17:01:35Z
[ "Template:About", "Template:Wikivoyage", "Template:Quote box", "Template:Reflist", "Template:Cite book", "Template:Short description", "Template:Cite web", "Template:Cite news", "Template:Boats and boating", "Template:Authority control", "Template:Convert", "Template:Div col", "Template:Div col end", "Template:Curlie" ]
https://en.wikipedia.org/wiki/Cruising_(maritime)
7,807
Cavitation
Cavitation in fluid mechanics and engineering normally refers to the phenomenon in which the static pressure of a liquid falls below the liquid's vapor pressure, leading to the formation of small vapor-filled cavities in the liquid. When subjected to higher pressure, these cavities, called "bubbles" or "voids", collapse and can generate shock waves that may damage machinery. These shock waves are strong when they are very close to the imploded bubble, but rapidly weaken as they propagate away from the implosion. Cavitation is a significant cause of wear in some engineering contexts. Collapsing voids that implode near a metal surface cause cyclic stress through repeated implosion. This results in surface fatigue of the metal, causing a type of wear also called "cavitation". The most common examples of this kind of wear are pump impellers and bends, where a sudden change in the direction of the liquid occurs. Cavitation is usually divided into two classes of behavior: inertial (or transient) cavitation and non-inertial cavitation. The process in which a void or bubble in a liquid rapidly collapses, producing a shock wave, is called inertial cavitation. Inertial cavitation occurs in nature in the strikes of mantis shrimp and pistol shrimp, as well as in the vascular tissues of plants. In manufactured objects, it can occur in control valves, pumps, propellers and impellers. Non-inertial cavitation is the process in which a bubble in a fluid is forced to oscillate in size or shape due to some form of energy input, such as an acoustic field. The gas in the bubble may contain a portion of a different gas than the vapor phase of the liquid. Such cavitation is often employed in ultrasonic cleaning baths and can also be observed in pumps, propellers, etc. Since the shock waves formed by collapse of the voids are strong enough to cause significant damage to parts, cavitation is typically an undesirable phenomenon in machinery (although desirable if intentionally used, for example, to sterilize contaminated surgical instruments, break down pollutants in water purification systems, emulsify tissue for cataract surgery or kidney stone lithotripsy, or homogenize fluids). It is very often specifically prevented in the design of machines such as turbines or propellers, and eliminating cavitation is a major field in the study of fluid dynamics. However, it is sometimes useful and does not cause damage when the bubbles collapse away from machinery, such as in supercavitation. Inertial cavitation was first studied in the late 19th century, through consideration of the collapse of a spherical void within a liquid. When a volume of liquid is subjected to a sufficiently low pressure, it may rupture and form a cavity. This phenomenon is termed cavitation inception and may occur behind the blade of a rapidly rotating propeller or on any surface vibrating in the liquid with sufficient amplitude and acceleration. A fast-flowing river can cause cavitation on rock surfaces, particularly when there is a drop-off, such as on a waterfall. Vapors evaporate into the cavity from the surrounding medium; thus, the cavity is not a vacuum at all, but rather a low-pressure vapor (gas) bubble. Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due to its higher pressure, building up momentum as it moves inward.
As the bubble finally collapses, the inward momentum of the surrounding liquid causes a sharp increase in the pressure and temperature of the vapor within. The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible light. At the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres. The physical process of cavitation inception is similar to boiling. The major difference between the two is the thermodynamic paths that precede the formation of the vapor. Boiling occurs when the local temperature of the liquid reaches the saturation temperature, and further heat is supplied to allow the liquid to sufficiently phase change into a gas. Cavitation inception occurs when the local pressure falls sufficiently far below the saturated vapor pressure, a value given by the tensile strength of the liquid at a certain temperature. In order for cavitation inception to occur, the cavitation "bubbles" generally need a surface on which they can nucleate. This surface can be provided by the sides of a container, by impurities in the liquid, or by small undissolved microbubbles within the liquid. It is generally accepted that hydrophobic surfaces stabilize small bubbles. These pre-existing bubbles start to grow unbounded when they are exposed to a pressure below the threshold pressure, termed Blake's threshold. The presence of an incompressible core inside a cavitation nucleus substantially lowers the cavitation threshold below the Blake threshold. The vapor pressure here differs from the meteorological definition of vapor pressure, which describes the partial pressure of water in the atmosphere at some value less than 100% saturation. Vapor pressure as relating to cavitation refers to the vapor pressure in equilibrium conditions and can therefore be more accurately defined as the equilibrium (or saturated) vapor pressure. Non-inertial cavitation is the process in which small bubbles in a liquid are forced to oscillate in the presence of an acoustic field, when the intensity of the acoustic field is insufficient to cause total bubble collapse. This form of cavitation causes significantly less erosion than inertial cavitation, and is often used for the cleaning of delicate materials, such as silicon wafers. Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or an electrical discharge through a spark. These techniques have been used to study the evolution of the bubble that is created by locally boiling the liquid with a local increase in temperature. Hydrodynamic cavitation is the process of vaporisation, bubble generation and bubble implosion which occurs in a flowing liquid as a result of a decrease and subsequent increase in local pressure. Cavitation will only occur if the local pressure declines to some point below the saturated vapor pressure of the liquid and then recovers above the vapor pressure. If the recovery pressure is not above the vapor pressure then flashing is said to have occurred. In pipe systems, cavitation typically occurs either as the result of an increase in the kinetic energy (through an area constriction) or an increase in the pipe elevation.
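The violent inertial collapse described above is classically idealized as an empty spherical cavity collapsing under a constant pressure difference. As a hedged illustration, the sketch below evaluates the standard Rayleigh collapse-time estimate t = 0.915 R0 sqrt(rho / dp); this formula is a textbook result rather than something derived in this article, and the bubble radius and water properties used are assumed example values.

```python
# Minimal sketch of the classical Rayleigh collapse-time estimate for an empty
# spherical cavity. The 0.915 prefactor is the standard Rayleigh result; the
# numbers below are illustrative assumptions, not data from the article.

import math

def rayleigh_collapse_time(r0: float, rho: float,
                           p_ambient: float, p_vapor: float) -> float:
    """Collapse time (s) of a cavity of initial radius r0 (m) in a liquid of
    density rho (kg/m^3), driven by the ambient-minus-vapor pressure (Pa)."""
    dp = p_ambient - p_vapor
    if dp <= 0:
        raise ValueError("cavity collapses only if ambient pressure exceeds vapor pressure")
    return 0.915 * r0 * math.sqrt(rho / dp)

# Example: a 1 mm bubble in water at atmospheric pressure (vapor pressure ~2.3 kPa at 20 C)
t = rayleigh_collapse_time(r0=1e-3, rho=998.0, p_ambient=101_325.0, p_vapor=2_300.0)
print(f"collapse time ~ {t * 1e6:.0f} microseconds")  # on the order of 100 us
```

The microsecond-scale collapse implied by this estimate is consistent with the sharp, localized pressure and temperature spikes described above.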
Hydrodynamic cavitation can be produced by passing a liquid through a constricted channel at a specific flow velocity or by mechanical rotation of an object through a liquid. In the case of the constricted channel and based on the specific (or unique) geometry of the system, the combination of pressure and kinetic energy can create the hydrodynamic cavitation cavern downstream of the local constriction, generating high-energy cavitation bubbles. Based on the thermodynamic phase change diagram, an increase in temperature can initiate the phase change mechanism known as boiling. However, a decrease in static pressure can also move the system across the phase diagram and initiate another phase change mechanism, known as cavitation. Likewise, a local increase in flow velocity can lead to a static pressure drop to the critical point at which cavitation is initiated (based on Bernoulli's principle). The critical pressure point is the saturated vapor pressure. In a closed fluidic system where no flow leakage is detected, a decrease in cross-sectional area leads to a velocity increase and hence a static pressure drop. This is the working principle of many hydrodynamic cavitation-based reactors for different applications such as water treatment, energy harvesting, heat transfer enhancement, food processing, etc. There are different flow patterns detected as a cavitation flow progresses: inception, developed flow, supercavitation, and choked flow. Inception is the first moment that the second phase (gas phase) appears in the system. This is the weakest cavitating flow captured in a system, corresponding to the highest cavitation number. When the cavities grow larger in the orifice or venturi structures, developed flow is recorded. The most intense cavitating flow is known as supercavitation, where theoretically all the nozzle area of an orifice is filled with gas bubbles. This flow regime corresponds to the lowest cavitation number in a system. After supercavitation, the system is not capable of passing more flow. Hence, velocity does not change while the upstream pressure increases. This would lead to an increase in the cavitation number, which shows that choked flow has occurred. The process of bubble generation, and the subsequent growth and collapse of the cavitation bubbles, results in very high energy densities and in very high local temperatures and local pressures at the surface of the bubbles for a very short time. The overall liquid medium environment, therefore, remains at ambient conditions. When uncontrolled, cavitation is damaging; by controlling the flow of the cavitation, however, the power can be harnessed and rendered non-destructive. Controlled cavitation can be used to enhance chemical reactions or propagate certain unexpected reactions because free radicals are generated in the process due to the dissociation of vapors trapped in the cavitating bubbles. Orifices and venturis are reported to be widely used for generating cavitation. A venturi has an inherent advantage over an orifice because of its smooth converging and diverging sections, such that it can generate a higher flow velocity at the throat for a given pressure drop across it. On the other hand, an orifice has the advantage that it can accommodate a greater number of holes (larger perimeter of holes) in a given cross-sectional area of the pipe.
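A minimal sketch of the Bernoulli reasoning just described, using the cavitation number in its usual form sigma = (p - p_v) / (0.5 * rho * v^2). The loss-free venturi, inlet conditions and area ratio are illustrative assumptions, not values from the article.

```python
# Sketch: static pressure drop in an idealized (loss-free) venturi throat via
# Bernoulli, and the resulting cavitation number. All inputs are assumptions.

def throat_velocity(v_in: float, area_ratio: float) -> float:
    """Continuity: for incompressible flow, velocity scales inversely with area."""
    return v_in / area_ratio

def throat_pressure(p_in: float, rho: float, v_in: float, v_throat: float) -> float:
    """Bernoulli with no losses: static pressure falls as velocity rises."""
    return p_in + 0.5 * rho * (v_in**2 - v_throat**2)

def cavitation_number(p: float, p_vapor: float, rho: float, v: float) -> float:
    """sigma = (p - p_v) / (0.5 * rho * v^2); low sigma means intense cavitation."""
    return (p - p_vapor) / (0.5 * rho * v**2)

rho, p_vapor = 998.0, 2_300.0      # water at ~20 C
p_in, v_in = 300_000.0, 2.0        # assumed upstream pressure (Pa) and velocity (m/s)
v_t = throat_velocity(v_in, area_ratio=0.1)        # 10:1 contraction
p_t = throat_pressure(p_in, rho, v_in, v_t)
sigma = cavitation_number(p_t, p_vapor, rho, v_t)
print(f"throat pressure = {p_t:,.0f} Pa, sigma = {sigma:.2f}")
```

As the flow regimes above suggest, a high sigma corresponds to inception-level cavitation at most, while sigma approaching zero (throat pressure near the vapor pressure) marks the progression toward supercavitation.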
The cavitation phenomenon can be controlled to enhance the performance of high-speed marine vessels and projectiles, as well as in material processing technologies, in medicine, etc. Controlling the cavitating flows in liquids can be achieved only by advancing the mathematical foundation of the cavitation processes. These processes are manifested in different ways, the most common and most promising for control being bubble cavitation and supercavitation. The first exact classical solution is perhaps the well-known one derived by Hermann von Helmholtz in 1868. The earliest notable academic studies of the theory of cavitating flow with free boundaries and supercavitation were published in the book Jets, wakes and cavities, followed by Theory of jets of ideal fluid. Widely used in these books was the well-developed theory of conformal mappings of functions of a complex variable, allowing one to derive a large number of exact solutions of plane problems. Another avenue, combining the existing exact solutions with approximate and heuristic models, was explored in the work Hydrodynamics of Flows with Free Boundaries, which refined the applied calculation techniques based on the principle of cavity expansion independence, theory of pulsations and stability of elongated axisymmetric cavities, etc., and in Dimensionality and similarity methods in the problems of the hydromechanics of vessels. A natural continuation of these studies was recently presented in The Hydrodynamics of Cavitating Flows – an encyclopedic work encompassing all the best advances in this domain over the last three decades, and blending the classical methods of mathematical research with the modern capabilities of computer technologies. These include elaboration of nonlinear numerical methods of solving 3D cavitation problems, refinement of the known plane linear theories, development of asymptotic theories of axisymmetric and nearly axisymmetric flows, etc. As compared to the classical approaches, the new trend is characterized by expansion of the theory into 3D flows. It also reflects a certain correlation with current works of an applied character on the hydrodynamics of supercavitating bodies. Hydrodynamic cavitation can also improve some industrial processes. For instance, cavitated corn slurry shows higher yields in ethanol production compared to uncavitated corn slurry in dry milling facilities. Hydrodynamic cavitation is also used in the mineralization of bio-refractory compounds which would otherwise need extremely high temperature and pressure conditions, since free radicals are generated in the process due to the dissociation of vapors trapped in the cavitating bubbles; this results either in the intensification of the chemical reaction or even in the propagation of certain reactions not possible under otherwise ambient conditions. Inertial cavitation can also occur in the presence of an acoustic field. Microscopic gas bubbles that are generally present in a liquid will be forced to oscillate due to an applied acoustic field. If the acoustic intensity is sufficiently high, the bubbles will first grow in size and then rapidly collapse. Hence, inertial cavitation can occur even if the rarefaction in the liquid is insufficient for a Rayleigh-like void to occur. Ultrasonic cavitation inception will occur when the acceleration of the ultrasound source is enough to produce the needed pressure drop.
This pressure drop depends on the magnitude of the acceleration and on the size of the volume affected by the pressure wave. The dimensionless number that predicts ultrasonic cavitation is the Garcia-Atance number. High-power ultrasonic horns produce accelerations high enough to create a cavitating region that can be used for homogenization, dispersion, deagglomeration, erosion, cleaning, milling, emulsification, extraction, disintegration and sonochemistry. (A rough numerical sketch of such an estimate appears at the end of this section.)

In industry, cavitation is often used to homogenize, or mix and break down, suspended particles in a colloidal liquid such as paint mixtures or milk, and many industrial mixing machines are based on this design principle. It is usually achieved through impeller design or by forcing the mixture through an annular opening that has a narrow entrance orifice and a much larger exit orifice; in the latter case, the drastic decrease in pressure as the liquid accelerates into the larger volume induces cavitation. The method can be controlled with hydraulic devices that vary the inlet orifice size, allowing dynamic adjustment during the process or adaptation to different substances. The surface of this type of mixing valve, against which the cavitation bubbles are driven to implosion, undergoes tremendous localized mechanical and thermal stress; such valves are therefore often constructed of extremely strong and hard materials such as stainless steel, Stellite or even polycrystalline diamond (PCD).

Cavitating water purification devices have also been designed, in which the extreme conditions of cavitation can break down pollutants and organic molecules. Spectral analysis of the light emitted in sonochemical reactions reveals chemical and plasma-based mechanisms of energy transfer; the light emitted from cavitation bubbles is termed sonoluminescence. This technology has also been applied successfully to the alkali refining of vegetable oils.

Hydrophobic chemicals are attracted to one another underwater by cavitation, as the pressure difference between the bubbles and the liquid water forces them to join. This effect may assist in protein folding.

Cavitation plays an important role in the destruction of kidney stones in shock wave lithotripsy. Tests are currently being conducted into whether cavitation can be used to transfer large molecules into biological cells (sonoporation). Nitrogen cavitation is a method used in research to lyse cell membranes while leaving organelles intact. Cavitation plays a key role in the non-thermal, non-invasive fractionation of tissue for the treatment of a variety of diseases, and can be used to open the blood-brain barrier to increase the uptake of neurological drugs in the brain. Cavitation also plays a role in HIFU, a thermal non-invasive treatment methodology for cancer.

In wounds caused by high-velocity impacts (such as bullet wounds) there are also effects due to cavitation. The exact wounding mechanisms are not yet completely understood, as there is temporary cavitation and permanent cavitation together with crushing, tearing and stretching; the high variance in density within the body also makes its effects hard to determine. Ultrasound is sometimes used to increase bone formation, for instance in post-surgical applications. It has been suggested that the sound of "cracking" knuckles derives from the collapse of cavitation bubbles in the synovial fluid within the joint. Cavitation can also form ozone micro-nanobubbles, which show promise in dental applications.
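As a rough illustration of the ultrasonic case discussed at the start of this section, and of the collapse dynamics relevant to lithotripsy and HIFU, the following sketch pairs a crude plane-wave estimate of the pressure amplitude radiated by a vibrating horn face (p_a = rho * c * u) with Rayleigh's classical collapse time for an empty cavity (see the History section). The horn frequency, displacement amplitude and bubble radius are assumed values; the plane-wave relation is an idealization and is not the Garcia-Atance criterion mentioned above.

```python
import math

RHO = 998.0       # water density, kg/m^3 (assumed)
C_WATER = 1482.0  # speed of sound in water, m/s (assumed)

def horn_pressure_amplitude(freq_hz, disp_amp_m, rho=RHO, c=C_WATER):
    """Plane-wave estimate of the acoustic pressure amplitude radiated by a
    sinusoidally vibrating face: p_a = rho * c * u, with peak face velocity
    u = 2*pi*f*d. Near-field and focusing effects are ignored."""
    u = 2.0 * math.pi * freq_hz * disp_amp_m
    return rho * c * u

def rayleigh_collapse_time(r0, delta_p, rho=RHO):
    """Rayleigh's (1917) collapse time for an empty spherical cavity in an
    incompressible, inviscid liquid: t_c = 0.915 * R0 * sqrt(rho / dp).
    Surface tension, viscosity and bubble gas content are neglected, so
    treat this as an order-of-magnitude estimate only."""
    return 0.915 * r0 * math.sqrt(rho / delta_p)

if __name__ == "__main__":
    # Hypothetical 20 kHz horn with a 50-micron displacement amplitude:
    p_a = horn_pressure_amplitude(20e3, 50e-6)
    print(f"acoustic pressure amplitude ~ {p_a/1e6:.1f} MPa")  # ~9 MPa >> 0.1 MPa ambient
    # Hypothetical 100-micron bubble collapsing under ~1 atm overpressure:
    t_c = rayleigh_collapse_time(100e-6, 1.013e5)
    print(f"Rayleigh collapse time ~ {t_c*1e6:.1f} microseconds")  # ~9 us
```

With these assumed values the rarefaction amplitude far exceeds the roughly 0.1 MPa needed to pull the local pressure below the vapor pressure at ambient conditions, which is why high-power horns cavitate so readily, and the microsecond-scale collapse time is consistent with the very short-lived hot spots described earlier.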
In industrial cleaning applications, cavitation has sufficient power to overcome the particle-to-substrate adhesion forces, loosening contaminants. The threshold pressure required to initiate cavitation is a strong function of the pulse width and the power input. This method works by generating acoustic cavitation in the cleaning fluid, picking up contaminant particles and carrying them away in the hope that they do not reattach to the material being cleaned (a possibility when the object is immersed, for example in an ultrasonic cleaning bath). The same physical forces that remove contaminants can also damage the target being cleaned.

Cavitation has been applied to egg pasteurization. A hole-filled rotor produces cavitation bubbles, heating the liquid from within. Equipment surfaces stay cooler than the passing liquid, so eggs do not harden as they did on the hot surfaces of older equipment. The intensity of cavitation can be adjusted, making it possible to tune the process for minimum protein damage.

Cavitation has been applied to vegetable oil degumming and refining since 2011 and is considered a proven, standard technology in this application. Implementing hydrodynamic cavitation in the degumming and refining process allows a significant reduction in the use of process aids such as chemicals, water and bleaching clay. Similarly, cavitation has been applied to biodiesel production since 2011, where hydrodynamic cavitation in the transesterification process allows a significant reduction in catalyst use, improved quality and increased production capacity.

Cavitation is usually an undesirable occurrence. In devices such as propellers and pumps, cavitation causes a great deal of noise, damage to components, vibrations and a loss of efficiency. Noise caused by cavitation can be particularly undesirable in naval vessels, where it may render them more easily detectable by passive sonar. Cavitation has also become a concern in the renewable energy sector, as it may occur on the blade surfaces of tidal stream turbines.

When cavitation bubbles collapse, they force energetic liquid into very small volumes, creating spots of high temperature and emitting shock waves, the latter of which are a source of noise. The noise created by cavitation is a particular problem for military submarines, as it increases their chances of being detected by passive sonar. Although the collapse of a small cavity is a relatively low-energy event, highly localized collapses can erode metals such as steel over time, and the resulting pitting produces great wear on components and can dramatically shorten a propeller's or pump's lifetime.

After a surface is first affected by cavitation, it tends to erode at an accelerating pace. The cavitation pits increase the turbulence of the fluid flow and create crevices that act as nucleation sites for additional cavitation bubbles. The pits also increase the component's surface area and leave behind residual stresses, making the surface more prone to stress corrosion.

The major places where cavitation occurs are in pumps, on propellers, and at restrictions in a flowing liquid. As the blades of an impeller (in a pump) or a propeller (on a ship or submarine) move through a fluid, low-pressure areas form as the fluid accelerates around and past the blades.
The faster the blade moves, the lower the pressure around it can become. When it reaches the vapor pressure, the fluid vaporizes and forms small bubbles of gas: this is cavitation. When the bubbles later collapse, they typically cause very strong local shock waves in the fluid, which may be audible and may even damage the blades.

Cavitation in pumps may occur in two different forms:

Suction cavitation occurs when the pump suction is under a low-pressure/high-vacuum condition in which the liquid turns into vapor at the eye of the pump impeller. This vapor is carried over to the discharge side of the pump, where it no longer sees vacuum and is compressed back into a liquid by the discharge pressure. This imploding action occurs violently and attacks the face of the impeller. An impeller that has been operating under suction cavitation can have large chunks or very small bits of material removed from its face, leaving it looking sponge-like. Either case will cause premature failure of the pump, often due to bearing failure. Suction cavitation is often identified by a sound like gravel or marbles rattling in the pump casing. Common causes of suction cavitation include clogged filters, pipe blockage on the suction side, poor piping design, a pump running too far right on its pump curve, or conditions not meeting NPSH (net positive suction head) requirements.

In automotive applications, a clogged filter in a hydraulic system (power steering, power brakes) can cause suction cavitation that makes a noise rising and falling in sync with engine RPM, fairly often a high-pitched whine, like a set of nylon gears not quite meshing correctly.

Discharge cavitation occurs when the pump discharge pressure is extremely high, normally in a pump running at less than 10% of its best efficiency point. The high discharge pressure causes the majority of the fluid to circulate inside the pump instead of flowing out the discharge. As the liquid flows around the impeller, it must pass through the small clearance between the impeller and the pump housing at extremely high flow velocity. This flow velocity causes a vacuum to develop at the housing wall (similar to what occurs in a venturi), which turns the liquid into vapor. A pump that has been operating under these conditions shows premature wear of the impeller vane tips and the pump housing and, owing to the high-pressure conditions, premature failure of the pump's mechanical seal and bearings can be expected; under extreme conditions, the impeller shaft can break. Discharge cavitation in joint fluid is thought to cause the popping sound produced by bone-joint cracking, for example when one deliberately cracks one's knuckles.

Since all pumps require well-developed inlet flow to meet their potential, a pump may not perform or be as reliable as expected if the suction piping layout is faulty, for example with a close-coupled elbow on the inlet flange. When poorly developed flow enters the pump impeller, it strikes the vanes and is unable to follow the impeller passage. The liquid then separates from the vanes, causing mechanical problems due to cavitation and vibration, and performance problems due to turbulence and poor filling of the impeller. This results in premature seal, bearing and impeller failure, high maintenance costs, high power consumption, and less-than-specified head and/or flow. To obtain a well-developed flow pattern, pump manufacturers' manuals recommend about (10 diameters?)
of straight pipe run upstream of the pump inlet flange. Unfortunately, piping designers and plant personnel must contend with space and equipment layout constraints and usually cannot comply with this recommendation; instead, it is common to use an elbow close-coupled to the pump suction, which creates a poorly developed flow pattern at the pump suction.

With a double-suction pump tied to a close-coupled elbow, flow distribution to the impeller is poor, causing reliability and performance shortfalls. The elbow divides the flow unevenly, channeling more of it to the outside of the elbow. Consequently, one side of the double-suction impeller receives more flow at a higher flow velocity and pressure, while the starved side receives a highly turbulent and potentially damaging flow. This degrades overall pump performance (delivered head, flow and power consumption) and causes an axial imbalance that shortens seal, bearing and impeller life.

To overcome cavitation: increase the suction pressure if possible; decrease the liquid temperature if possible; throttle back on the discharge valve to decrease the flow rate; or vent gases off the pump casing.

Cavitation can also occur in control valves. If the actual pressure drop across the valve, as defined by the upstream and downstream pressures in the system, is greater than the sizing calculations allow, flashing or cavitation may occur. The change from a liquid to a vapor state results from the increase in flow velocity at, or just downstream of, the greatest flow restriction, which is normally the valve port. To maintain a steady flow of liquid through a valve, the flow velocity must be greatest at the vena contracta, the point where the cross-sectional area is smallest. This increase in flow velocity is accompanied by a substantial decrease in fluid pressure, which is partially recovered downstream as the area increases and the flow velocity decreases; this recovery never reaches the full upstream pressure. If the pressure at the vena contracta drops below the vapor pressure of the fluid, bubbles form in the flow stream. If the pressure then recovers downstream of the valve to a value above the vapor pressure, the vapor bubbles collapse and cavitation occurs (a numeric sketch of this check follows the spillway discussion below).

When water flows over a dam spillway, irregularities on the spillway surface cause small areas of flow separation in the high-speed flow, and in these regions the pressure is lowered. If the flow velocities are high enough, the pressure may fall below the local vapor pressure of the water and vapor bubbles will form; when these are carried downstream into a high-pressure region, the bubbles collapse, giving rise to high pressures and possible cavitation damage. Experimental investigations show that damage on concrete chute and tunnel spillways can start at clear-water flow velocities of between 12 and 15 m/s (27 and 34 mph); up to flow velocities of 20 m/s (45 mph), it may be possible to protect the surface by streamlining the boundaries, improving the surface finish or using resistant materials. When some air is present in the water, the resulting mixture is compressible, which damps the high pressure caused by the bubble collapses. If the flow velocities near the spillway invert are sufficiently high, aerators (or aeration devices) must be introduced to prevent cavitation.
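Returning to the control-valve case above, the vena contracta check can be sketched numerically. A common simplification in valve sizing expresses the pressure recovery through a liquid pressure recovery factor F_L, with F_L^2 = (p1 - p2) / (p1 - p_vc). The valve's F_L and the operating pressures below are assumed values, and real sizing standards apply further correction factors; this is a screening sketch, not a sizing procedure.

```python
def vena_contracta_pressure(p1, p2, fl):
    """Estimate the minimum (vena contracta) pressure in a control valve from
    the liquid pressure recovery factor F_L:
        F_L^2 = (p1 - p2) / (p1 - p_vc)  =>  p_vc = p1 - (p1 - p2) / F_L^2
    All pressures are absolute, in Pa."""
    return p1 - (p1 - p2) / fl ** 2

def flow_regime(p1, p2, p_vapor, fl):
    """Classify the valve flow per the criterion in the text: bubbles form if
    p_vc < p_vapor; they collapse (cavitation) only if p2 recovers above
    p_vapor, otherwise the vapor persists (flashing)."""
    p_vc = vena_contracta_pressure(p1, p2, fl)
    if p_vc >= p_vapor:
        return "liquid (no phase change expected)"
    if p2 < p_vapor:
        return "flashing (vapor persists downstream)"
    return "cavitation (bubbles collapse downstream)"

if __name__ == "__main__":
    # Hypothetical valve on cold water: F_L ~ 0.9 (assumed), 4 bar upstream,
    # 0.5 bar absolute downstream, vapor pressure ~2.3 kPa at 20 C.
    print(flow_regime(p1=4.0e5, p2=0.5e5, p_vapor=2.3e3, fl=0.9))
```

With these assumed numbers the estimated vena contracta pressure falls below the vapor pressure while the downstream pressure recovers above it, so the sketch flags cavitation, matching the qualitative criterion described in the control-valve paragraph above.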
Although such aerators have been installed for some years, the mechanisms of air entrainment at the aerators, and the slow movement of the air away from the spillway surface, are still not fully understood. The spillway aeration device design is based on a small deflection of the spillway bed (or sidewall), such as a ramp and offset, to deflect the high-velocity flow away from the spillway surface. In the cavity formed below the nappe, a local subpressure is produced, by which air is sucked into the flow. The complete design comprises the deflection device (ramp, offset) and the air supply system.

Some larger diesel engines suffer from cavitation because of high compression and undersized cylinder walls. Vibrations of the cylinder wall induce alternating low and high pressure in the coolant against the cylinder wall. The result is pitting of the cylinder wall, which will eventually let cooling fluid leak into the cylinder and combustion gases leak into the coolant. It is possible to prevent this with chemical additives in the cooling fluid that form a protective layer on the cylinder wall; this layer is exposed to the same cavitation but rebuilds itself. Additionally, a regulated overpressure in the cooling system (regulated and maintained by the coolant filler cap's spring pressure) prevents the formation of cavitation.

From about the 1980s, new designs of smaller gasoline engines also displayed cavitation phenomena. One answer to the need for smaller and lighter engines was a smaller coolant volume and a correspondingly higher coolant flow velocity. This gave rise to rapid changes in flow velocity, and therefore rapid changes of static pressure, in areas of high heat transfer. Where the resulting vapor bubbles collapsed against a surface, they first disrupted protective oxide layers (of cast aluminium materials) and then repeatedly damaged the newly formed surface, preventing the action of some types of corrosion inhibitor (such as silicate-based inhibitors). A further problem was the effect of increased material temperature on the relative electrochemical reactivity of the base metal and its alloying constituents. The result was deep pits that could form and penetrate the engine head in a matter of hours when the engine was running at high load and high speed. These effects could largely be avoided by the use of organic corrosion inhibitors or (preferably) by designing the engine head so as to avoid certain cavitation-inducing conditions.

Some hypotheses about diamond formation posit a possible role for cavitation, namely cavitation in kimberlite pipes providing the extreme pressure needed to change pure carbon into the rare allotrope that is diamond. The three loudest sounds ever recorded, during the 1883 eruption of Krakatoa, are now understood as the bursts of three huge cavitation bubbles, each larger than the last, formed in the volcano's throat. Rising magma, filled with dissolved gases and under immense pressure, encountered a different magma that compressed easily, allowing bubbles to grow and combine.

Cavitation can occur in the xylem of vascular plants. The sap vaporizes locally, so that either the vessel elements or the tracheids fill with water vapor. Plants are able to repair cavitated xylem in a number of ways. For plants less than 50 cm tall, root pressure can be sufficient to redissolve the vapor.
Larger plants direct solutes into the xylem via ray cells or, in tracheids, via osmosis through bordered pits. Solutes attract water, the pressure rises, and the vapor can redissolve. In some trees the sound of cavitation is audible, particularly in summer, when the rate of evapotranspiration is highest. Some deciduous trees have to shed their leaves in the autumn partly because cavitation increases as temperatures decrease.

Cavitation plays a role in the spore dispersal mechanisms of certain plants. In ferns, for example, the sporangium acts as a catapult that launches spores into the air. The charging phase of the catapult is driven by water evaporation from the annulus cells, which triggers a pressure decrease. When the negative pressure reaches approximately 9 MPa, cavitation occurs. This rapid event triggers spore dispersal through the elastic energy released by the annulus structure. The initial spore acceleration is extremely large, up to 10⁵ times the gravitational acceleration.

Just as cavitation bubbles form on a fast-spinning boat propeller, they may also form on the tails and fins of aquatic animals. This occurs primarily near the surface of the ocean, where the ambient water pressure is low. Cavitation may limit the maximum swimming speed of powerful swimmers such as dolphins and tuna. Dolphins may have to restrict their speed because collapsing cavitation bubbles on their tails are painful. Tuna, whose bony fins lack nerve endings, do not feel pain from cavitation, but they are slowed when cavitation bubbles create a vapor film around their fins, and lesions consistent with cavitation damage have been found on tuna.

Some sea animals have found ways to use cavitation to their advantage when hunting prey. The pistol shrimp snaps a specialized claw to create cavitation, which can kill small fish. The mantis shrimp (of the smasher variety) likewise uses cavitation to stun, smash open or kill the shellfish on which it feeds. Thresher sharks use 'tail slaps' to debilitate their small fish prey, and cavitation bubbles have been seen rising from the apex of the tail arc.

In the last half-decade, inertial cavitation has become generally accepted as a mechanism of coastal erosion. Bubbles in an incoming wave are forced into cracks in the cliff being eroded; varying pressure decompresses some vapor pockets, which subsequently implode. The resulting pressure peaks can blast apart fragments of the rock.

As early as 1754, the Swiss mathematician Leonhard Euler (1707–1783) speculated about the possibility of cavitation. In 1859, the English mathematician William Henry Besant (1828–1917) published a solution to the problem of the dynamics of the collapse of a spherical cavity in a fluid, which had been presented by the Anglo-Irish mathematician George Stokes (1819–1903) as one of the Cambridge [University] Senate-house problems and riders for the year 1847. In 1894, the Irish fluid dynamicist Osborne Reynolds (1842–1912) studied the formation and collapse of vapor bubbles in boiling liquids and in constricted tubes.

The term cavitation first appeared in 1895 in a paper by John Isaac Thornycroft (1843–1928) and Sydney Walker Barnaby (1855–1925), son of Sir Nathaniel Barnaby (1829–1915), who had been Chief Constructor of the Royal Navy; the term had been suggested to them by the British engineer Robert Edmund Froude (1846–1924), third son of the English hydrodynamicist William Froude (1810–1879).
Early experimental studies of cavitation were conducted in 1894–95 by Thornycroft and Barnaby and by the Anglo-Irish engineer Charles Algernon Parsons (1854–1931), who constructed a stroboscopic apparatus to study the phenomenon. Thornycroft and Barnaby were the first researchers to observe cavitation on the back sides of propeller blades.

In 1917, the British physicist Lord Rayleigh (1842–1919) extended Besant's work, publishing a mathematical model of cavitation in an incompressible fluid (ignoring surface tension and viscosity) in which he also determined the pressure in the fluid. The mathematical models of cavitation developed by the British engineer Stanley Smith Cook (1875–1952) and by Lord Rayleigh revealed that collapsing bubbles of vapor could generate very high pressures, capable of causing the damage that had been observed on ships' propellers. Experimental evidence of cavitation causing such high pressures was first collected in 1952 by Mark Harrison (a fluid dynamicist and acoustician at the U.S. Navy's David Taylor Model Basin at Carderock, Maryland, USA), who used acoustic methods, and in 1956 by Wernfried Güth (a physicist and acoustician at Göttingen University, Germany), who used optical Schlieren photography.

In 1944, the Soviet scientists Mark Iosifovich Kornfeld (1908–1993) and L. Suvorov of the Leningrad Physico-Technical Institute (now the Ioffe Physical-Technical Institute of the Russian Academy of Sciences, St. Petersburg, Russia) proposed that during cavitation, bubbles in the vicinity of a solid surface do not collapse symmetrically; instead, a dimple forms on the bubble at the point opposite the solid surface, and this dimple evolves into a jet of liquid that damages the solid surface. This hypothesis was supported in 1951 by theoretical studies by Maurice Rattray Jr., a doctoral student at the California Institute of Technology, and was confirmed experimentally in 1961 by Charles F. Naudé and Albert T. Ellis, fluid dynamicists at the California Institute of Technology.

Pioneering experimental investigations of the propagation of strong shock waves (SWs) in liquids containing gas bubbles were begun in 1957–1960 by the Soviet scientist V. F. Minin at the Institute of Hydrodynamics (Novosibirsk, Russia). These investigations established the basic laws governing the process, the mechanism by which the energy of the SW is transformed, the attenuation of the SW, and the formation of the wave structure, and included experiments on the attenuation of waves in bubble screens with different acoustic properties. Minin also examined the first convenient model of such a screen: a sequence of alternating flat one-dimensional liquid and gas layers. In experimental investigations of the dynamics of the shape of pulsating gaseous cavities and of the interaction of SWs with bubble clouds in 1957–1960, Minin discovered that under the action of a SW a bubble collapses asymmetrically, forming a cumulative jet that fragments the bubble.
[ { "paragraph_id": 0, "text": "Cavitation in fluid mechanics and engineering normally refers to the phenomenon in which the static pressure of a liquid reduces to below the liquid's vapour pressure, leading to the formation of small vapor-filled cavities in the liquid. When subjected to higher pressure, these cavities, called \"bubbles\" or \"voids\", collapse and can generate shock waves that may damage machinery. These shock waves are strong when they are very close to the imploded bubble, but rapidly weaken as they propagate away from the implosion. Cavitation is a significant cause of wear in some engineering contexts. Collapsing voids that implode near to a metal surface cause cyclic stress through repeated implosion. This results in surface fatigue of the metal, causing a type of wear also called \"cavitation\". The most common examples of this kind of wear are to pump impellers, and bends where a sudden change in the direction of liquid occurs. Cavitation is usually divided into two classes of behavior: inertial (or transient) cavitation and non-inertial cavitation.", "title": "" }, { "paragraph_id": 1, "text": "The process in which a void or bubble in a liquid rapidly collapses, producing a shock wave, is called inertial cavitation. Inertial cavitation occurs in nature in the strikes of mantis shrimp and pistol shrimp, as well as in the vascular tissues of plants. In manufactured objects, it can occur in control valves, pumps, propellers and impellers.", "title": "" }, { "paragraph_id": 2, "text": "Non-inertial cavitation is the process in which a bubble in a fluid is forced to oscillate in size or shape due to some form of energy input, such as an acoustic field. The gas in the bubble may contain a portion of a different gas than the vapor phase of the liquid. Such cavitation is often employed in ultrasonic cleaning baths and can also be observed in pumps, propellers, etc.", "title": "" }, { "paragraph_id": 3, "text": "Since the shock waves formed by collapse of the voids are strong enough to cause significant damage to parts, cavitation is typically an undesirable phenomenon in machinery (although desirable if intentionally used, for example, to sterilize contaminated surgical instruments, break down pollutants in water purification systems, emulsify tissue for cataract surgery or kidney stone lithotripsy, or homogenize fluids). It is very often specifically prevented in the design of machines such as turbines or propellers, and eliminating cavitation is a major field in the study of fluid dynamics. However, it is sometimes useful and does not cause damage when the bubbles collapse away from machinery, such as in supercavitation.", "title": "" }, { "paragraph_id": 4, "text": "Inertial cavitation was first observed in the late 19th century, considering the collapse of a spherical void within a liquid. When a volume of liquid is subjected to a sufficiently low pressure, it may rupture and form a cavity. This phenomenon is coined cavitation inception and may occur behind the blade of a rapidly rotating propeller or on any surface vibrating in the liquid with sufficient amplitude and acceleration. A fast-flowing river can cause cavitation on rock surfaces, particularly when there is a drop-off, such as on a waterfall.", "title": "Physics" }, { "paragraph_id": 5, "text": "Vapor gases evaporate into the cavity from the surrounding medium; thus, the cavity is not a vacuum at all, but rather a low-pressure vapor (gas) bubble. 
Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due its higher pressure, building up momentum as it moves inward. As the bubble finally collapses, the inward momentum of the surrounding liquid causes a sharp increase of pressure and temperature of the vapor within. The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible light. At the point of total collapse, the temperature of the vapor within the bubble may be several thousand Kelvin, and the pressure several hundred atmospheres.", "title": "Physics" }, { "paragraph_id": 6, "text": "The physical process of cavitation inception is similar to boiling. The major difference between the two is the thermodynamic paths that precede the formation of the vapor. Boiling occurs when the local temperature of the liquid reaches the saturation temperature, and further heat is supplied to allow the liquid to sufficiently phase change into a gas. Cavitation inception occurs when the local pressure falls sufficiently far below the saturated vapor pressure, a value given by the tensile strength of the liquid at a certain temperature.", "title": "Physics" }, { "paragraph_id": 7, "text": "In order for cavitation inception to occur, the cavitation \"bubbles\" generally need a surface on which they can nucleate. This surface can be provided by the sides of a container, by impurities in the liquid, or by small undissolved microbubbles within the liquid. It is generally accepted that hydrophobic surfaces stabilize small bubbles. These pre-existing bubbles start to grow unbounded when they are exposed to a pressure below the threshold pressure, termed Blake's threshold. The presence of an incompressible core inside a cavitation nucleus substantially lowers the cavitation threshold below the Blake threshold.", "title": "Physics" }, { "paragraph_id": 8, "text": "The vapor pressure here differs from the meteorological definition of vapor pressure, which describes the partial pressure of water in the atmosphere at some value less than 100% saturation. Vapor pressure as relating to cavitation refers to the vapor pressure in equilibrium conditions and can therefore be more accurately defined as the equilibrium (or saturated) vapor pressure.", "title": "Physics" }, { "paragraph_id": 9, "text": "Non-inertial cavitation is the process in which small bubbles in a liquid are forced to oscillate in the presence of an acoustic field, when the intensity of the acoustic field is insufficient to cause total bubble collapse. This form of cavitation causes significantly less erosion than inertial cavitation, and is often used for the cleaning of delicate materials, such as silicon wafers.", "title": "Physics" }, { "paragraph_id": 10, "text": "Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or with an electrical discharge through a spark. 
These techniques have been used to study the evolution of the bubble that is actually created by locally boiling the liquid with a local increment of temperature.", "title": "Physics" }, { "paragraph_id": 11, "text": "Hydrodynamic cavitation is the process of vaporisation, bubble generation and bubble implosion which occurs in a flowing liquid as a result of a decrease and subsequent increase in local pressure. Cavitation will only occur if the local pressure declines to some point below the saturated vapor pressure of the liquid and subsequent recovery above the vapor pressure. If the recovery pressure is not above the vapor pressure then flashing is said to have occurred. In pipe systems, cavitation typically occurs either as the result of an increase in the kinetic energy (through an area constriction) or an increase in the pipe elevation.", "title": "Physics" }, { "paragraph_id": 12, "text": "Hydrodynamic cavitation can be produced by passing a liquid through a constricted channel at a specific flow velocity or by mechanical rotation of an object through a liquid. In the case of the constricted channel and based on the specific (or unique) geometry of the system, the combination of pressure and kinetic energy can create the hydrodynamic cavitation cavern downstream of the local constriction generating high energy cavitation bubbles.", "title": "Physics" }, { "paragraph_id": 13, "text": "Based on the thermodynamic phase change diagram, an increase in temperature could initiate a known phase change mechanism known as boiling. However, a decrease in static pressure could also help one pass the multi-phase diagram and initiate another phase change mechanism known as cavitation. On the other hand, a local increase in flow velocity could lead to a static pressure drop to the critical point at which cavitation could be initiated (based on Bernoulli's principle). The critical pressure point is vapor saturated pressure. In a closed fluidic system where no flow leakage is detected, a decrease in cross-sectional area would lead to velocity increment and hence static pressure drop. This is the working principle of many hydrodynamic cavitation based reactors for different applications such as water treatment, energy harvesting, heat transfer enhancement, food processing, etc.", "title": "Physics" }, { "paragraph_id": 14, "text": "There are different flow patterns detected as a cavitation flow progresses: inception, developed flow, supercavitation, and choked flow. Inception is the first moment that the second phase (gas phase) appears in the system. This is the weakest cavitating flow captured in a system corresponding to the highest cavitation number. When the cavities grow and becomes larger in size in the orifice or venturi structures, developed flow is recorded. The most intense cavitating flow is known as supercavitation where theoretically all the nozzle area of an orifice is filled with gas bubbles. This flow regime corresponds to the lowest cavitation number in a system. After supercavitation, the system is not capable of passing more flow. Hence, velocity does not change while the upstream pressure increase. 
This would lead to an increase in cavitation number which shows that choked flow occurred.", "title": "Physics" }, { "paragraph_id": 15, "text": "The process of bubble generation, and the subsequent growth and collapse of the cavitation bubbles, results in very high energy densities and in very high local temperatures and local pressures at the surface of the bubbles for a very short time. The overall liquid medium environment, therefore, remains at ambient conditions. When uncontrolled, cavitation is damaging; by controlling the flow of the cavitation, however, the power can be harnessed and non-destructive. Controlled cavitation can be used to enhance chemical reactions or propagate certain unexpected reactions because free radicals are generated in the process due to disassociation of vapors trapped in the cavitating bubbles.", "title": "Physics" }, { "paragraph_id": 16, "text": "Orifices and venturi are reported to be widely used for generating cavitation. A venturi has an inherent advantage over an orifice because of its smooth converging and diverging sections, such that it can generate a higher flow velocity at the throat for a given pressure drop across it. On the other hand, an orifice has an advantage that it can accommodate a greater number of holes (larger perimeter of holes) in a given cross sectional area of the pipe.", "title": "Physics" }, { "paragraph_id": 17, "text": "The cavitation phenomenon can be controlled to enhance the performance of high-speed marine vessels and projectiles, as well as in material processing technologies, in medicine, etc. Controlling the cavitating flows in liquids can be achieved only by advancing the mathematical foundation of the cavitation processes. These processes are manifested in different ways, the most common ones and promising for control being bubble cavitation and supercavitation. The first exact classical solution should perhaps be credited to the well-known solution by Hermann von Helmholtz in 1868. The earliest distinguished studies of academic type on the theory of a cavitating flow with free boundaries and supercavitation were published in the book Jets, wakes and cavities followed by Theory of jets of ideal fluid. Widely used in these books was the well-developed theory of conformal mappings of functions of a complex variable, allowing one to derive a large number of exact solutions of plane problems. Another venue combining the existing exact solutions with approximated and heuristic models was explored in the work Hydrodynamics of Flows with Free Boundaries that refined the applied calculation techniques based on the principle of cavity expansion independence, theory of pulsations and stability of elongated axisymmetric cavities, etc. and in Dimensionality and similarity methods in the problems of the hydromechanics of vessels.", "title": "Physics" }, { "paragraph_id": 18, "text": "A natural continuation of these studies was recently presented in The Hydrodynamics of Cavitating Flows – an encyclopedic work encompassing all the best advances in this domain for the last three decades, and blending the classical methods of mathematical research with the modern capabilities of computer technologies. These include elaboration of nonlinear numerical methods of solving 3D cavitation problems, refinement of the known plane linear theories, development of asymptotic theories of axisymmetric and nearly axisymmetric flows, etc. 
As compared to the classical approaches, the new trend is characterized by expansion of the theory into the 3D flows. It also reflects a certain correlation with current works of an applied character on the hydrodynamics of supercavitating bodies.", "title": "Physics" }, { "paragraph_id": 19, "text": "Hydrodynamic cavitation can also improve some industrial processes. For instance, cavitated corn slurry shows higher yields in ethanol production compared to uncavitated corn slurry in dry milling facilities.", "title": "Physics" }, { "paragraph_id": 20, "text": "This is also used in the mineralization of bio-refractory compounds which otherwise would need extremely high temperature and pressure conditions since free radicals are generated in the process due to the dissociation of vapors trapped in the cavitating bubbles, which results in either the intensification of the chemical reaction or may even result in the propagation of certain reactions not possible under otherwise ambient conditions.", "title": "Physics" }, { "paragraph_id": 21, "text": "Inertial cavitation can also occur in the presence of an acoustic field. Microscopic gas bubbles that are generally present in a liquid will be forced to oscillate due to an applied acoustic field. If the acoustic intensity is sufficiently high, the bubbles will first grow in size and then rapidly collapse. Hence, inertial cavitation can occur even if the rarefaction in the liquid is insufficient for a Rayleigh-like void to occur.", "title": "Physics" }, { "paragraph_id": 22, "text": "Ultrasonic cavitation inception will occur when the acceleration of the ultrasound source is enough to produce the needed pressure drop. This pressure drop depends on the value of the acceleration and the size of the affected volume by the pressure wave. The dimensionless number that predicts ultrasonic cavitation is the Garcia-Atance number. High power ultrasonic horns produce accelerations high enough to create a cavitating region that can be used for homogenization, dispersion, deagglomeration, erosion, cleaning, milling, emulsification, extraction, disintegration, and sonochemistry.", "title": "Physics" }, { "paragraph_id": 23, "text": "In industry, cavitation is often used to homogenize, or mix and break down, suspended particles in a colloidal liquid compound such as paint mixtures or milk. Many industrial mixing machines are based upon this design principle. It is usually achieved through impeller design or by forcing the mixture through an annular opening that has a narrow entrance orifice with a much larger exit orifice. In the latter case, the drastic decrease in pressure as the liquid accelerates into a larger volume induces cavitation. This method can be controlled with hydraulic devices that control inlet orifice size, allowing for dynamic adjustment during the process, or modification for different substances. The surface of this type of mixing valve, against which surface the cavitation bubbles are driven causing their implosion, undergoes tremendous mechanical and thermal localized stress; they are therefore often constructed of extremely strong and hard materials such as stainless steel, Stellite, or even polycrystalline diamond (PCD).", "title": "Applications" }, { "paragraph_id": 24, "text": "Cavitating water purification devices have also been designed, in which the extreme conditions of cavitation can break down pollutants and organic molecules. 
Spectral analysis of light emitted in sonochemical reactions reveal chemical and plasma-based mechanisms of energy transfer. The light emitted from cavitation bubbles is termed sonoluminescence.", "title": "Applications" }, { "paragraph_id": 25, "text": "Use of this technology has been tried successfully in alkali refining of vegetable oils.", "title": "Applications" }, { "paragraph_id": 26, "text": "Hydrophobic chemicals are attracted underwater by cavitation as the pressure difference between the bubbles and the liquid water forces them to join. This effect may assist in protein folding.", "title": "Applications" }, { "paragraph_id": 27, "text": "Cavitation plays an important role for the destruction of kidney stones in shock wave lithotripsy. Currently, tests are being conducted as to whether cavitation can be used to transfer large molecules into biological cells (sonoporation). Nitrogen cavitation is a method used in research to lyse cell membranes while leaving organelles intact.", "title": "Applications" }, { "paragraph_id": 28, "text": "Cavitation plays a key role in non-thermal, non-invasive fractionation of tissue for treatment of a variety of diseases and can be used to open the blood-brain barrier to increase uptake of neurological drugs in the brain.", "title": "Applications" }, { "paragraph_id": 29, "text": "Cavitation also plays a role in HIFU, a thermal non-invasive treatment methodology for cancer.", "title": "Applications" }, { "paragraph_id": 30, "text": "In wounds caused by high velocity impacts (like for example bullet wounds) there are also effects due to cavitation. The exact wounding mechanisms are not completely understood yet as there is temporary cavitation, and permanent cavitation together with crushing, tearing and stretching. Also the high variance in density within the body makes it hard to determine its effects.", "title": "Applications" }, { "paragraph_id": 31, "text": "Ultrasound sometimes is used to increase bone formation, for instance in post-surgical applications.", "title": "Applications" }, { "paragraph_id": 32, "text": "It has been suggested that the sound of \"cracking\" knuckles derives from the collapse of cavitation in the synovial fluid within the joint.", "title": "Applications" }, { "paragraph_id": 33, "text": "Cavitation can also form Ozone micro-nanobubbles which shows promise in dental applications.", "title": "Applications" }, { "paragraph_id": 34, "text": "In industrial cleaning applications, cavitation has sufficient power to overcome the particle-to-substrate adhesion forces, loosening contaminants. The threshold pressure required to initiate cavitation is a strong function of the pulse width and the power input. This method works by generating acoustic cavitation in the cleaning fluid, picking up and carrying contaminant particles away in the hope that they do not reattach to the material being cleaned (which is a possibility when the object is immersed, for example in an ultrasonic cleaning bath). The same physical forces that remove contaminants also have the potential to damage the target being cleaned.", "title": "Applications" }, { "paragraph_id": 35, "text": "Cavitation has been applied to egg pasteurization. A hole-filled rotor produces cavitation bubbles, heating the liquid from within. Equipment surfaces stay cooler than the passing liquid, so eggs do not harden as they did on the hot surfaces of older equipment. 
The intensity of cavitation can be adjusted, making it possible to tune the process for minimum protein damage.", "title": "Applications" }, { "paragraph_id": 36, "text": "Cavitation has been applied to vegetable oil degumming and refining since 2011 and is considered a proven and standard technology in this application. The implementation of hydrodynamic cavitation in the degumming and refining process allows for a significant reduction in process aid, such as chemicals, water and bleaching clay, use.", "title": "Applications" }, { "paragraph_id": 37, "text": "Cavitation has been applied to Biodiesel production since 2011 and is considered a proven and standard technology in this application. The implementation of hydrodynamic cavitation in the transesterification process allows for a significant reduction in catalyst use, quality improvement and production capacity increase.", "title": "Applications" }, { "paragraph_id": 38, "text": "Cavitation is usually an undesirable occurrence. In devices such as propellers and pumps, cavitation causes a great deal of noise, damage to components, vibrations, and a loss of efficiency. Noise caused by cavitation can be particularly undesirable in naval vessels where such noise may render them more easily detectable by passive sonar. Cavitation has also become a concern in the renewable energy sector as it may occur on the blade surface of tidal stream turbines.", "title": "Cavitation damage" }, { "paragraph_id": 39, "text": "When the cavitation bubbles collapse, they force energetic liquid into very small volumes, thereby creating spots of high temperature and emitting shock waves, the latter of which are a source of noise. The noise created by cavitation is a particular problem for military submarines, as it increases the chances of being detected by passive sonar.", "title": "Cavitation damage" }, { "paragraph_id": 40, "text": "Although the collapse of a small cavity is a relatively low-energy event, highly localized collapses can erode metals, such as steel, over time. The pitting caused by the collapse of cavities produces great wear on components and can dramatically shorten a propeller's or pump's lifetime.", "title": "Cavitation damage" }, { "paragraph_id": 41, "text": "After a surface is initially affected by cavitation, it tends to erode at an accelerating pace. The cavitation pits increase the turbulence of the fluid flow and create crevices that act as nucleation sites for additional cavitation bubbles. The pits also increase the components' surface area and leave behind residual stresses. This makes the surface more prone to stress corrosion.", "title": "Cavitation damage" }, { "paragraph_id": 42, "text": "Major places where cavitation occurs are in pumps, on propellers, or at restrictions in a flowing liquid.", "title": "Cavitation damage" }, { "paragraph_id": 43, "text": "As an impeller's (in a pump) or propeller's (as in the case of a ship or submarine) blades move through a fluid, low-pressure areas are formed as the fluid accelerates around and moves past the blades. The faster the blade moves, the lower the pressure can become around it. As it reaches vapor pressure, the fluid vaporizes and forms small bubbles of gas. This is cavitation. 
When the bubbles collapse later, they typically cause very strong local shock waves in the fluid, which may be audible and may even damage the blades.", "title": "Cavitation damage" }, { "paragraph_id": 44, "text": "Cavitation in pumps may occur in two different forms:", "title": "Cavitation damage" }, { "paragraph_id": 45, "text": "Suction cavitation occurs when the pump suction is under a low-pressure/high-vacuum condition where the liquid turns into a vapor at the eye of the pump impeller. This vapor is carried over to the discharge side of the pump, where it no longer sees vacuum and is compressed back into a liquid by the discharge pressure. This imploding action occurs violently and attacks the face of the impeller. An impeller that has been operating under a suction cavitation condition can have large chunks of material removed from its face or very small bits of material removed, causing the impeller to look spongelike. Both cases will cause premature failure of the pump, often due to bearing failure. Suction cavitation is often identified by a sound like gravel or marbles in the pump casing.", "title": "Cavitation damage" }, { "paragraph_id": 46, "text": "Common causes of suction cavitation can include clogged filters, pipe blockage on the suction side, poor piping design, pump running too far right on the pump curve, or conditions not meeting NPSH (net positive suction head) requirements.", "title": "Cavitation damage" }, { "paragraph_id": 47, "text": "In automotive applications, a clogged filter in a hydraulic system (power steering, power brakes) can cause suction cavitation making a noise that rises and falls in synch with engine RPM. It is fairly often a high pitched whine, like set of nylon gears not quite meshing correctly.", "title": "Cavitation damage" }, { "paragraph_id": 48, "text": "Discharge cavitation occurs when the pump discharge pressure is extremely high, normally occurring in a pump that is running at less than 10% of its best efficiency point. The high discharge pressure causes the majority of the fluid to circulate inside the pump instead of being allowed to flow out the discharge. As the liquid flows around the impeller, it must pass through the small clearance between the impeller and the pump housing at extremely high flow velocity. This flow velocity causes a vacuum to develop at the housing wall (similar to what occurs in a venturi), which turns the liquid into a vapor. A pump that has been operating under these conditions shows premature wear of the impeller vane tips and the pump housing. In addition, due to the high pressure conditions, premature failure of the pump's mechanical seal and bearings can be expected. Under extreme conditions, this can break the impeller shaft.", "title": "Cavitation damage" }, { "paragraph_id": 49, "text": "Discharge cavitation in joint fluid is thought to cause the popping sound produced by bone joint cracking, for example by deliberately cracking one's knuckles.", "title": "Cavitation damage" }, { "paragraph_id": 50, "text": "Since all pumps require well-developed inlet flow to meet their potential, a pump may not perform or be as reliable as expected due to a faulty suction piping layout such as a close-coupled elbow on the inlet flange. When poorly developed flow enters the pump impeller, it strikes the vanes and is unable to follow the impeller passage. 
The liquid then separates from the vanes causing mechanical problems due to cavitation, vibration and performance problems due to turbulence and poor filling of the impeller. This results in premature seal, bearing and impeller failure, high maintenance costs, high power consumption, and less-than-specified head and/or flow.", "title": "Cavitation damage" }, { "paragraph_id": 51, "text": "To have a well-developed flow pattern, pump manufacturer's manuals recommend about (10 diameters?) of straight pipe run upstream of the pump inlet flange. Unfortunately, piping designers and plant personnel must contend with space and equipment layout constraints and usually cannot comply with this recommendation. Instead, it is common to use an elbow close-coupled to the pump suction which creates a poorly developed flow pattern at the pump suction.", "title": "Cavitation damage" }, { "paragraph_id": 52, "text": "With a double-suction pump tied to a close-coupled elbow, flow distribution to the impeller is poor and causes reliability and performance shortfalls. The elbow divides the flow unevenly with more channeled to the outside of the elbow. Consequently, one side of the double-suction impeller receives more flow at a higher flow velocity and pressure while the starved side receives a highly turbulent and potentially damaging flow. This degrades overall pump performance (delivered head, flow and power consumption) and causes axial imbalance which shortens seal, bearing and impeller life. To overcome cavitation: Increase suction pressure if possible. Decrease liquid temperature if possible. Throttle back on the discharge valve to decrease flow-rate. Vent gases off the pump casing.", "title": "Cavitation damage" }, { "paragraph_id": 53, "text": "Cavitation can occur in control valves. If the actual pressure drop across the valve as defined by the upstream and downstream pressures in the system is greater than the sizing calculations allow, pressure drop flashing or cavitation may occur. The change from a liquid state to a vapor state results from the increase in flow velocity at or just downstream of the greatest flow restriction which is normally the valve port. To maintain a steady flow of liquid through a valve the flow velocity must be greatest at the vena contracta or the point where the cross sectional area is the smallest. This increase in flow velocity is accompanied by a substantial decrease in the fluid pressure which is partially recovered downstream as the area increases and flow velocity decreases. This pressure recovery is never completely to the level of the upstream pressure. If the pressure at the vena contracta drops below the vapor pressure of the fluid bubbles will form in the flow stream. If the pressure recovers after the valve to a pressure that is once again above the vapor pressure, then the vapor bubbles will collapse and cavitation will occur.", "title": "Cavitation damage" }, { "paragraph_id": 54, "text": "When water flows over a dam spillway, the irregularities on the spillway surface will cause small areas of flow separation in a high-speed flow, and, in these regions, the pressure will be lowered. If the flow velocities are high enough the pressure may fall to below the local vapor pressure of the water and vapor bubbles will form. 
When these are carried downstream into a high pressure region the bubbles collapse giving rise to high pressures and possible cavitation damage.", "title": "Cavitation damage" }, { "paragraph_id": 55, "text": "Experimental investigations show that the damage on concrete chute and tunnel spillways can start at clear water flow velocities of between 12 and 15 m/s (27 and 34 mph), and, up to flow velocities of 20 m/s (45 mph), it may be possible to protect the surface by streamlining the boundaries, improving the surface finishes or using resistant materials.", "title": "Cavitation damage" }, { "paragraph_id": 56, "text": "When some air is present in the water the resulting mixture is compressible and this damps the high pressure caused by the bubble collapses. If the flow velocities near the spillway invert are sufficiently high, aerators (or aeration devices) must be introduced to prevent cavitation. Although these have been installed for some years, the mechanisms of air entrainment at the aerators and the slow movement of the air away from the spillway surface are still challenging.", "title": "Cavitation damage" }, { "paragraph_id": 57, "text": "The spillway aeration device design is based upon a small deflection of the spillway bed (or sidewall) such as a ramp and offset to deflect the high flow velocity flow away from the spillway surface. In the cavity formed below the nappe, a local subpressure beneath the nappe is produced by which air is sucked into the flow. The complete design includes the deflection device (ramp, offset) and the air supply system.", "title": "Cavitation damage" }, { "paragraph_id": 58, "text": "Some larger diesel engines suffer from cavitation due to high compression and undersized cylinder walls. Vibrations of the cylinder wall induce alternating low and high pressure in the coolant against the cylinder wall. The result is pitting of the cylinder wall, which will eventually let cooling fluid leak into the cylinder and combustion gases to leak into the coolant.", "title": "Cavitation damage" }, { "paragraph_id": 59, "text": "It is possible to prevent this from happening with the use of chemical additives in the cooling fluid that form a protective layer on the cylinder wall. This layer will be exposed to the same cavitation, but rebuilds itself. Additionally a regulated overpressure in the cooling system (regulated and maintained by the coolant filler cap spring pressure) prevents the forming of cavitation.", "title": "Cavitation damage" }, { "paragraph_id": 60, "text": "From about the 1980s, new designs of smaller gasoline engines also displayed cavitation phenomena. One answer to the need for smaller and lighter engines was a smaller coolant volume and a correspondingly higher coolant flow velocity. This gave rise to rapid changes in flow velocity and therefore rapid changes of static pressure in areas of high heat transfer. Where resulting vapor bubbles collapsed against a surface, they had the effect of first disrupting protective oxide layers (of cast aluminium materials) and then repeatedly damaging the newly formed surface, preventing the action of some types of corrosion inhibitor (such as silicate based inhibitors). A final problem was the effect that increased material temperature had on the relative electrochemical reactivity of the base metal and its alloying constituents. The result was deep pits that could form and penetrate the engine head in a matter of hours when the engine was running at high load and high speed. 
These effects could largely be avoided by the use of organic corrosion inhibitors or (preferably) by designing the engine head in such a way as to avoid certain cavitation-inducing conditions.", "title": "Cavitation damage" }, { "paragraph_id": 61, "text": "Some hypotheses relating to diamond formation posit a possible role for cavitation—namely cavitation in the kimberlite pipes providing the extreme pressure needed to change pure carbon into the rare allotrope that is diamond. The loudest three sounds ever recorded, during the 1883 eruption of Krakatoa, are now understood as the bursts of three huge cavitation bubbles, each larger than the last, formed in the volcano's throat. Rising magma, filled with dissolved gases and under immense pressure, encountered a different magma that compressed easily, allowing bubbles to grow and combine.", "title": "In nature" }, { "paragraph_id": 62, "text": "Cavitation can occur in the xylem of vascular plants. The sap vaporizes locally so that either the vessel elements or tracheids are filled with water vapor. Plants are able to repair cavitated xylem in a number of ways. For plants less than 50 cm tall, root pressure can be sufficient to redissolve the vapor. Larger plants direct solutes into the xylem via ray cells, or in tracheids, via osmosis through bordered pits. Solutes attract water, the pressure rises, and the vapor can redissolve. In some trees, the sound of the cavitation is audible, particularly in summer, when the rate of evapotranspiration is highest. Some deciduous trees have to shed leaves in the autumn partly because cavitation increases as temperatures decrease.", "title": "In nature" }, { "paragraph_id": 63, "text": "Cavitation plays a role in the spore dispersal mechanisms of certain plants. In ferns, for example, the fern sporangium acts as a catapult that launches spores into the air. The charging phase of the catapult is driven by water evaporation from the annulus cells, which triggers a pressure decrease. When the negative pressure reaches approximately 9 MPa, cavitation occurs. This rapid event triggers spore dispersal due to the elastic energy released by the annulus structure. The initial spore acceleration is extremely large – up to 10^5 times the gravitational acceleration.", "title": "In nature" }, { "paragraph_id": 64, "text": "Just as cavitation bubbles form on a fast-spinning boat propeller, they may also form on the tails and fins of aquatic animals. This primarily occurs near the surface of the ocean, where the ambient water pressure is low.", "title": "In nature" }, { "paragraph_id": 65, "text": "Cavitation may limit the maximum swimming speed of powerful swimming animals like dolphins and tuna. Dolphins may have to restrict their speed because collapsing cavitation bubbles on their tail are painful. Tuna have bony fins without nerve endings and do not feel pain from cavitation. They are slowed down when cavitation bubbles create a vapor film around their fins. Lesions have been found on tuna that are consistent with cavitation damage.", "title": "In nature" }, { "paragraph_id": 66, "text": "Some sea animals have found ways to use cavitation to their advantage when hunting prey. The pistol shrimp snaps a specialized claw to create cavitation, which can kill small fish. 
The mantis shrimp (of the smasher variety) uses cavitation as well to stun, smash open, or kill the shellfish that it feasts upon.", "title": "In nature" }, { "paragraph_id": 67, "text": "Thresher sharks use 'tail slaps' to debilitate their small fish prey, and cavitation bubbles have been seen rising from the apex of the tail arc.", "title": "In nature" }, { "paragraph_id": 68, "text": "In the last half-decade, the role of inertial cavitation in coastal erosion has become generally accepted. Bubbles in an incoming wave are forced into cracks in the cliff being eroded. Varying pressure decompresses some vapor pockets, which subsequently implode. The resulting pressure peaks can blast apart fragments of the rock.", "title": "In nature" }, { "paragraph_id": 69, "text": "As early as 1754, the Swiss mathematician Leonhard Euler (1707–1783) speculated about the possibility of cavitation. In 1859, the English mathematician William Henry Besant (1828–1917) published a solution to the problem of the dynamics of the collapse of a spherical cavity in a fluid, which had been presented by the Anglo-Irish mathematician George Stokes (1819–1903) as one of the Cambridge [University] Senate-house problems and riders for the year 1847. In 1894, Irish fluid dynamicist Osborne Reynolds (1842–1912) studied the formation and collapse of vapor bubbles in boiling liquids and in constricted tubes.", "title": "History" }, { "paragraph_id": 70, "text": "The term cavitation first appeared in 1895 in a paper by John Isaac Thornycroft (1843–1928) and Sydney Walker Barnaby (1855–1925)—son of Sir Nathaniel Barnaby (1829–1915), who had been Chief Constructor of the Royal Navy—to whom it had been suggested by the British engineer Robert Edmund Froude (1846–1924), third son of the English hydrodynamicist William Froude (1810–1879). Early experimental studies of cavitation were conducted in 1894–95 by Thornycroft and Barnaby and by the Anglo-Irish engineer Charles Algernon Parsons (1854–1931), who constructed a stroboscopic apparatus to study the phenomenon. Thornycroft and Barnaby were the first researchers to observe cavitation on the back sides of propeller blades.", "title": "History" }, { "paragraph_id": 71, "text": "In 1917, the British physicist Lord Rayleigh (1842–1919) extended Besant's work, publishing a mathematical model of cavitation in an incompressible fluid (ignoring surface tension and viscosity), in which he also determined the pressure in the fluid. The mathematical models of cavitation which were developed by British engineer Stanley Smith Cook (1875–1952) and by Lord Rayleigh revealed that collapsing bubbles of vapor could generate very high pressures, which were capable of causing the damage that had been observed on ships' propellers. Experimental evidence of cavitation causing such high pressures was initially collected in 1952 by Mark Harrison (a fluid dynamicist and acoustician at the U.S. Navy's David Taylor Model Basin at Carderock, Maryland, USA), who used acoustic methods, and in 1956 by Wernfried Güth (a physicist and acoustician of Göttingen University, Germany), who used optical Schlieren photography.", "title": "History" }, { "paragraph_id": 72, "text": "In 1944, Soviet scientists Mark Iosifovich Kornfeld (1908–1993) and L. Suvorov of the Leningrad Physico-Technical Institute (now: the Ioffe Physical-Technical Institute of the Russian Academy of Sciences, St. 
Petersburg, Russia) proposed that during cavitation, bubbles in the vicinity of a solid surface do not collapse symmetrically; instead, a dimple forms on the bubble at a point opposite the solid surface and this dimple evolves into a jet of liquid. This jet of liquid damages the solid surface. This hypothesis was supported in 1951 by theoretical studies by Maurice Rattray Jr., a doctoral student at the California Institute of Technology. Kornfeld and Suvorov's hypothesis was confirmed experimentally in 1961 by Charles F. Naudé and Albert T. Ellis, fluid dynamicists at the California Institute of Technology.", "title": "History" }, { "paragraph_id": 73, "text": "A series of experimental investigations of the propagation of strong shock waves (SW) in a liquid with gas bubbles was begun in 1957–1960 with the pioneering work of the Soviet scientist Prof. V. F. Minin at the Institute of Hydrodynamics (Novosibirsk, Russia). These investigations established the basic laws governing the process: the mechanism by which the energy of the SW is transformed, the attenuation of the SW, and the formation of its structure. They also included experiments on the attenuation of waves in bubble screens with different acoustic properties; Minin examined the first convenient model of such a screen, a sequence of alternating flat one-dimensional liquid and gas layers. In experimental investigations of the dynamics of pulsating gaseous cavities and of the interaction of SW with bubble clouds in 1957–1960, Minin discovered that under the action of a SW a bubble collapses asymmetrically, forming a cumulative jet in the process of collapse that causes fragmentation of the bubble.", "title": "History" } ]
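The pump-suction discussion above reduces to comparing the net positive suction head available (NPSHa) at the impeller against the NPSH required by the pump. The following is a minimal illustrative sketch, not material from the source article: every name and number in it is an assumption chosen for the example, and a real installation would use manufacturer data.

```python
# Illustrative NPSH-margin check for a pump drawing water from an open tank.
# All values are assumptions for this example, not data from the article.

G = 9.81  # gravitational acceleration, m/s^2

def npsh_available(p_surface_pa, p_vapor_pa, rho, static_head_m, friction_loss_m):
    """NPSHa (m): surface-pressure head, minus vapor-pressure head,
    plus static head of liquid above the impeller (negative for a suction lift),
    minus friction losses in the suction line."""
    return (p_surface_pa - p_vapor_pa) / (rho * G) + static_head_m - friction_loss_m

npsha = npsh_available(
    p_surface_pa=101_325,  # open tank at atmospheric pressure, Pa
    p_vapor_pa=2_339,      # vapor pressure of water at 20 degrees C, Pa
    rho=998.0,             # density of water at 20 degrees C, kg/m^3
    static_head_m=-1.5,    # liquid surface 1.5 m below the impeller (a lift)
    friction_loss_m=0.8,   # head lost to friction in the suction piping, m
)

npshr = 4.0  # hypothetical manufacturer-quoted NPSH required, m
print(f"NPSHa = {npsha:.2f} m, margin over NPSHr = {npsha - npshr:.2f} m")
```

Each remedy listed in the pump paragraphs maps onto one term of this sum: raising suction pressure increases the first term, cooling the liquid lowers the vapor pressure, and throttling back the discharge valve reduces both the friction loss and the pump's NPSH requirement.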
Cavitation in fluid mechanics and engineering normally refers to the phenomenon in which the static pressure of a liquid reduces to below the liquid's vapour pressure, leading to the formation of small vapor-filled cavities in the liquid. When subjected to higher pressure, these cavities, called "bubbles" or "voids", collapse and can generate shock waves that may damage machinery. These shock waves are strong when they are very close to the imploded bubble, but rapidly weaken as they propagate away from the implosion. Cavitation is a significant cause of wear in some engineering contexts. Collapsing voids that implode near a metal surface cause cyclic stress through repeated implosion. This results in surface fatigue of the metal, causing a type of wear also called "cavitation". The most common examples of this kind of wear are pump impellers, and bends where a sudden change in the direction of the liquid occurs. Cavitation is usually divided into two classes of behavior: inertial cavitation and non-inertial cavitation. The process in which a void or bubble in a liquid rapidly collapses, producing a shock wave, is called inertial cavitation. Inertial cavitation occurs in nature in the strikes of mantis shrimp and pistol shrimp, as well as in the vascular tissues of plants. In manufactured objects, it can occur in control valves, pumps, propellers and impellers. Non-inertial cavitation is the process in which a bubble in a fluid is forced to oscillate in size or shape due to some form of energy input, such as an acoustic field. The bubble may contain a portion of a gas other than the vapor phase of the liquid. Such cavitation is often employed in ultrasonic cleaning baths and can also be observed in pumps, propellers, etc. Since the shock waves formed by collapse of the voids are strong enough to cause significant damage to parts, cavitation is typically an undesirable phenomenon in machinery. It is very often specifically prevented in the design of machines such as turbines or propellers, and eliminating cavitation is a major field in the study of fluid dynamics. However, it is sometimes useful and does not cause damage when the bubbles collapse away from machinery, such as in supercavitation.
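The inception criterion described above (local static pressure falling to the vapor pressure) is commonly made dimensionless as the cavitation number. The sketch below is illustrative only; the velocities and pressures are assumptions, not values from the article, and the threshold at which cavitation actually appears depends on geometry and dissolved gas content.

```python
# Cavitation number: sigma = (p - p_v) / (0.5 * rho * v**2).
# Small sigma (approaching 0) means the local pressure is close to the
# vapor pressure and cavitation becomes likely. Values are illustrative.

def cavitation_number(p_static_pa, p_vapor_pa, rho, velocity_ms):
    return (p_static_pa - p_vapor_pa) / (0.5 * rho * velocity_ms**2)

RHO_WATER = 998.0  # kg/m^3 at 20 degrees C
P_VAPOR = 2_339    # Pa, water at 20 degrees C

for v in (5.0, 12.0, 20.0):  # compare with the spillway damage thresholds above
    sigma = cavitation_number(101_325, P_VAPOR, RHO_WATER, v)
    print(f"v = {v:4.1f} m/s -> sigma = {sigma:5.2f}")
```

Because sigma falls off as 1/v^2, doubling the flow speed quarters the pressure margin, which is consistent with spillway damage being reported only above flow velocities in the 12–20 m/s range.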
2002-01-19T02:48:49Z
2023-12-07T12:45:09Z
[ "Template:Cite journal", "Template:Cite web", "Template:Cite magazine", "Template:Other uses", "Template:Use mdy dates", "Template:Short description", "Template:Nbsp", "Template:Annotated link", "Template:Wiktionary", "Template:When", "Template:Cvt", "Template:Example needed", "Template:Cite news", "Template:Cite conference", "Template:Cbignore", "Template:Fluid Mechanics", "Template:Citation needed", "Template:Sup", "Template:Dead link", "Template:By whom", "Template:Reflist", "Template:Authority control", "Template:Anchor", "Template:Sfnp", "Template:Cite book", "Template:Verify source", "Template:Commons category", "Template:Webarchive", "Template:Expert needed" ]
https://en.wikipedia.org/wiki/Cavitation
7,808
Cyprinodontiformes
Cyprinodontiformes /ˌsɪprɪnoʊˈdɒntɪfɔːrmiːz/ is an order of ray-finned fish, comprising mostly small, freshwater fish. Many popular aquarium fish, such as killifish and live-bearers, are included. They are closely related to the Atheriniformes and are occasionally included with them. A colloquial term for the order as a whole is toothcarps, though they are not actually close relatives of the true carps – the latter belong to the superorder Ostariophysi, while the toothcarps are Acanthopterygii. The families of Cyprinodontiformes can be informally divided into three groups based on reproductive strategy: viviparous and ovoviviparous (all species give live birth), and oviparous (all species are egg-laying). The live-bearing groups differ in whether the young are carried to term within (ovoviviparous) or without (viviparous) an enclosing eggshell. Phylogenetically, however, one of the two suborders – the Aplocheiloidei – contains oviparous species exclusively, as do two of the four superfamilies of the other suborder (the Cyprinodontoidea and Valencioidea of the Cyprinodontoidei). Vivipary and ovovivipary have evolved independently from oviparous ancestors, the latter possibly twice. Some members of this order are notable for inhabiting extreme environments, such as saline or very warm waters, heavily polluted waters, rain water pools devoid of minerals and made acidic by decaying vegetation, or isolated situations where no other types of fish occur. They are typically carnivores, and often live near the surface, where the oxygen-rich water compensates for environmental disadvantages. Scheel (1968) observed that the gut contents were invariably ants; others have reported insects, worms and aquatic crustaceans. Aquarium specimens are invariably seen eating protozoans from the water column and the surfaces of leaves; however, these are not apparent as stomach contents. Many members of the family Cyprinodontidae (the pupfishes) eat plant material as well, and some have adapted to a diet very high in algae, to the point where one, the American Flag Fish, is a renowned algae eater in the aquarium, in spite of belonging to an order of fishes that do not generally consume any plant material. In addition, killifish derive some of the carotenoids and other chemicals required to make their body pigments from pollen grains on the surface of and in the gut of insects they eat from the surface of the water; this can be simulated in culture by the use of special color-enhancing foods that contain these compounds. Although the Cyprinodontiformes are a diverse group, most species contained within are small to medium-sized fish, with small mouths, large eyes, a single dorsal fin, and a rounded caudal fin. The largest species is the cuatro ojos (Anableps dowei), which measures 34 cm (13 in) in length, while the smallest, the least killifish (Heterandria formosa), is just 8 mm (0.31 in) long as an adult. CYPRINODONTIFORMES The family Aplocheilidae has been expanded by some authorities to include all the killifishes with three subfamilies, Aplocheilinae, Cynolebiinae and Nothobranchiinae, but this is not the classification adopted in the 5th Edition of Fishes of the World.
[ { "paragraph_id": 0, "text": "Cyprinodontiformes /ˌsɪprɪnoʊˈdɒntɪfɔːrmiːz/ is an order of ray-finned fish, comprising mostly small, freshwater fish. Many popular aquarium fish, such as killifish and live-bearers, are included. They are closely related to the Atheriniformes and are occasionally included with them. A colloquial term for the order as a whole is toothcarps, though they are not actually close relatives of the true carps – the latter belong to the superorder Ostariophysi, while the toothcarps are Acanthopterygii.", "title": "" }, { "paragraph_id": 1, "text": "The families of Cyprinodontiformes can be informally divided into three groups based on reproductive strategy: viviparous and ovoviviparous (all species give live birth), and oviparous (all species are egg-laying). The live-bearing groups differ in whether the young are carried to term within (ovoviviparous) or without (viviparous) an enclosing eggshell. Phylogenetically however, one of the two suborders – the Aplocheiloidei – contains oviparous species exclusively, as do two of the four superfamilies of the other suborder (the Cyprinodontoidea and Valencioidea of the Cyprinodontoidei). Vivipary and ovovivipary have evolved independently from oviparous ancestors, the latter possibly twice.", "title": "" }, { "paragraph_id": 2, "text": "Some members of this order are notable for inhabiting extreme environments, such as saline or very warm waters, heavily polluted waters, rain water pools devoid of minerals and made acidic by decaying vegetation, or isolated situations where no other types of fish occur.", "title": "Description" }, { "paragraph_id": 3, "text": "They are typically carnivores, and often live near the surface, where the oxygen-rich water compensates for environmental disadvantages. Scheel (1968) observed the gut contents were invariably ants, others have reported insects, worms and aquatic crustaceans. Aquarium specimens are invariably seen eating protozoans from the water column and the surfaces of leaves, however these are not apparent as stomach contents. Many members of the family Cyprinodontidae (the pupfishes) eat plant material as well and some have adapted to a diet very high in algae to the point where one, the American Flag Fish, is a renowned algae eater in the aquarium, in spite of belonging to an order of fishes that do not generally consume any plant material. In addition, killifish derive some of the carotenoids and other chemicals required to make their body pigments from pollen grains on the surface of and in the gut of insects they eat from the surface of the water; this can be simulated in culture by the use of special color enhancing foods that contain these compounds.", "title": "Description" }, { "paragraph_id": 4, "text": "Although the Cyprinodontiformes are a diverse group, most species contained within are small to medium-sized fish, with small mouths, large eyes, a single dorsal fin, and a rounded caudal fin. 
The largest species is the cuatro ojos (Anableps dowei), which measures 34 cm (13 in) in length, while the smallest, the least killifish (Heterandria formosa), is just 8 mm (0.31 in) long as an adult.", "title": "Description" }, { "paragraph_id": 5, "text": "CYPRINODONTIFORMES", "title": "Systematics" }, { "paragraph_id": 6, "text": "The family Aplocheilidae has been expanded by some authorities to include all the killifishes with three subfamilies, Aplocheilinae, Cynolebiinae and Nothobranchiinae, but this is not the classification adopted in the 5th Edition of Fishes of the World.", "title": "Systematics" } ]
Cyprinodontiformes is an order of ray-finned fish, comprising mostly small, freshwater fish. Many popular aquarium fish, such as killifish and live-bearers, are included. They are closely related to the Atheriniformes and are occasionally included with them. A colloquial term for the order as a whole is toothcarps, though they are not actually close relatives of the true carps – the latter belong to the superorder Ostariophysi, while the toothcarps are Acanthopterygii. The families of Cyprinodontiformes can be informally divided into three groups based on reproductive strategy: viviparous and ovoviviparous, and oviparous. The live-bearing groups differ in whether the young are carried to term within (ovoviviparous) or without (viviparous) an enclosing eggshell. Phylogenetically, however, one of the two suborders – the Aplocheiloidei – contains oviparous species exclusively, as do two of the four superfamilies of the other suborder. Vivipary and ovovivipary have evolved independently from oviparous ancestors, the latter possibly twice.
2002-02-25T15:43:11Z
2023-09-08T13:07:38Z
[ "Template:IPAc-en", "Template:Convert", "Template:Commons category", "Template:Reflist", "Template:Actinopterygii", "Template:Taxonbar", "Template:Authority control", "Template:Short description", "Template:Automatic taxobox", "Template:FishBase order", "Template:Cite web", "Template:Cite book", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Cyprinodontiformes
7,810
Church of the Holy Sepulchre
The Church of the Holy Sepulchre, also known as the Church of the Resurrection, is a fourth-century church in the Christian Quarter of the Old City of Jerusalem. It is considered to be the holiest site for Christians in the world, as it has been the most important pilgrimage site for Christianity since the fourth century. According to traditions dating back to the fourth century, it contains two sites considered holy in Christianity: the site where Jesus was crucified, at a place known as Calvary or Golgotha, and Jesus's empty tomb, which is where he was buried and resurrected. Each time the church was rebuilt, some of the antiquities from the preceding structure were used in the newer renovation. The tomb itself is enclosed by a 19th-century shrine called the Aedicule. The Status Quo, an understanding between religious communities dating to 1757, applies to the site. Within the church proper are the last four stations of the Cross of the Via Dolorosa, representing the final episodes of the Passion of Jesus. The church has been a major Christian pilgrimage destination since its creation in the fourth century, as the traditional site of the resurrection of Christ, thus its original Greek name, Church of the Anastasis ('Resurrection'). Control of the church itself is shared among several Christian denominations and secular entities in complicated arrangements essentially unchanged for over 160 years, and some for much longer. The main denominations sharing property over parts of the church are the Roman Catholic, Greek Orthodox and Armenian Apostolic, and to a lesser degree the Coptic, Syriac, and Ethiopian Orthodox churches. The church was historically named either for the Resurrection of Jesus, or for his tomb, which is located at its focal point. The Church of the Holy Sepulchre is also known as the Basilica of the Holy Sepulchre, or simply the Holy Sepulchre. Eastern Christians also call it the Church of the Resurrection or Church of the Anastasis, Anastasis being Greek for Resurrection. Following the siege of Jerusalem in AD 70 during the First Jewish–Roman War, Jerusalem had been reduced to ruins. In AD 130, the Roman emperor Hadrian began the building of a Roman colony, the new city of Aelia Capitolina, on the site. Circa AD 135, he ordered that a cave containing a rock-cut tomb be filled in to create a flat foundation for a temple dedicated to Jupiter or Venus. The temple remained until the early fourth century. After seeing a vision of a cross in the sky in 312, Constantine the Great began to favor Christianity, signed the Edict of Milan legalising the religion, and sent his mother, Helena, to Jerusalem to look for Christ's tomb. With the help of Bishop of Caesarea Eusebius and Bishop of Jerusalem Macarius, three crosses were found near a tomb; one, which allegedly cured people of death, was presumed to be the True Cross Jesus was crucified on, leading the Romans to believe that they had found Calvary. Constantine ordered in about 326 that the temple to Jupiter/Venus be replaced by a church. After the temple was torn down and its ruins removed, the soil was removed from the cave, revealing a rock-cut tomb that Helena and Macarius identified as the burial site of Jesus. A shrine was built on the site of the tomb Helena and Macarius had identified as that of Jesus, enclosing the rock tomb walls within its own. 
The Church of the Holy Sepulchre, planned by the architect Zenobius, was built as separate constructs over two holy sites: The Church of the Holy Sepulchre site has been recognized since early in the fourth century as the place where Jesus was crucified, buried, and rose from the dead. The church was consecrated on 13 September 335. In 327, Constantine and Helena separately commissioned the Church of the Nativity in Bethlehem to commemorate the birth of Jesus. The Constantinian sanctuary in Jerusalem was destroyed by a fire in May of 614, when the Sassanid Empire, under Khosrau II, invaded Jerusalem and captured the True Cross. In 630, the Emperor Heraclius rebuilt the church after recapturing the city. After Jerusalem came under Islamic rule, it remained a Christian church, with the early Muslim rulers protecting the city's Christian sites, prohibiting their destruction or use as living quarters. A story reports that the caliph Umar ibn al-Khattab visited the church and stopped to pray on the balcony, but at the time of prayer, turned away from the church and prayed outside. He feared that future generations would misinterpret this gesture, taking it as a pretext to turn the church into a mosque. Eutychius of Alexandria adds that Umar wrote a decree saying that Muslims would not inhabit this location. The building suffered severe damage from an earthquake in 746. Early in the ninth century, another earthquake damaged the dome of the Anastasis. The damage was repaired in 810 by Patriarch Thomas I. In 841, the church suffered a fire. In 935, the Christians prevented the construction of a Muslim mosque adjacent to the Church. In 938, a new fire damaged the inside of the basilica and came close to the rotunda. In 966, due to a defeat of Muslim armies in the region of Syria, a riot broke out, which was followed by reprisals. The basilica was burned again. The doors and roof were burnt, and Patriarch John VII was murdered. On 18 October 1009, Fatimid caliph al-Hakim bi-Amr Allah ordered the complete destruction of the church as part of a more general campaign against Christian places of worship in Palestine and Egypt. The damage was extensive, with few parts of the early church remaining, and the roof of the rock-cut tomb damaged; the original shrine was destroyed. Some partial repairs followed. Christian Europe reacted with shock: it was a spur to expulsions of Jews and, later on, the Crusades. In wide-ranging negotiations between the Fatimids and the Byzantine Empire in 1027–1028, an agreement was reached whereby the new Caliph Ali az-Zahir (al-Hakim's son) agreed to allow the rebuilding and redecoration of the church. The rebuilding was finally completed during the tenures of Emperor Constantine IX Monomachos and Patriarch Nicephorus of Jerusalem in 1048. As a concession, the mosque in Constantinople was reopened and the khutba sermons were to be pronounced in az-Zahir's name. Muslim sources say a by-product of the agreement was the renunciation of Islam by many Christians who had been forced to convert under al-Hakim's persecutions. In addition, the Byzantines, while releasing 5,000 Muslim prisoners, made demands for the restoration of other churches destroyed by al-Hakim and the reestablishment of a patriarch in Jerusalem. Contemporary sources credit the emperor with spending vast sums in an effort to restore the Church of the Holy Sepulchre after this agreement was made. Still, "a total replacement was far beyond available resources. 
The new construction was concentrated on the rotunda and its surrounding buildings: the great basilica remained in ruins." The rebuilt church site consisted of "a court open to the sky, with five small chapels attached to it." The chapels were east of the court of resurrection (when reconstructed, the location of the tomb was under open sky), where the western wall of the great basilica had been. They commemorated scenes from the passion, such as the location of the prison of Christ and his flagellation, and presumably were so placed because of the difficulties of free movement among shrines in the city streets. The dedication of these chapels indicates the importance of the pilgrims' devotion to the suffering of Christ. They have been described as "a sort of Via Dolorosa in miniature" since little or no rebuilding took place on the site of the great basilica. Western pilgrims to Jerusalem during the 11th century found much of the sacred site in ruins. Control of Jerusalem, and thereby the Church of the Holy Sepulchre, continued to change hands several times between the Fatimids and the Seljuk Turks (loyal to the Abbasid caliph in Baghdad) until the Crusaders' arrival in 1099. Many historians maintain that the main concern of Pope Urban II when calling for the First Crusade, in response to the appeal of Byzantine Emperor Alexios I Komnenos, was the threat to Constantinople from the Seljuk invasion of Asia Minor. Historians agree that the fate of Jerusalem and thereby the Church of the Holy Sepulchre was also of concern, if not the immediate goal of papal policy in 1095. The idea of taking Jerusalem gained more focus as the Crusade was underway. The rebuilt church site was taken from the Fatimids (who had recently taken it from the Abbasids) by the knights of the First Crusade on 15 July 1099. The First Crusade was envisioned as an armed pilgrimage, and no crusader could consider his journey complete unless he had prayed as a pilgrim at the Holy Sepulchre. The classical theory is that Crusader leader Godfrey of Bouillon, who became the first Latin ruler of Jerusalem, decided not to use the title "king" during his lifetime, and declared himself Advocatus Sancti Sepulchri ('Protector [or Defender] of the Holy Sepulchre'). According to the German priest and pilgrim Ludolf von Sudheim, the keys of the Chapel of the Holy Sepulchre were in the hands of the "ancient Georgians", and the food, alms, candles and oil for lamps were given to them by the pilgrims at the south door of the church. By the Crusader period, a cistern under the former basilica was rumoured to have been where Helena had found the True Cross, and began to be venerated as such; the cistern later became the Chapel of the Invention of the Cross, but there is no evidence of the site's identification before the 11th century, and modern archaeological investigation has now dated the cistern to 11th-century repairs by Monomachos. William of Tyre, chronicler of the Crusader Kingdom of Jerusalem, reports on the rebuilding of the church in the mid-12th century. The Crusaders investigated the eastern ruins on the site, occasionally excavating through the rubble, and while attempting to reach the cistern, they discovered part of the original ground level of Hadrian's temple enclosure; they transformed this space into a chapel dedicated to Helena, widening their original excavation tunnel into a proper staircase. The Crusaders began to refurnish the church in Romanesque style and added a bell tower. 
These renovations unified the small chapels on the site and were completed during the reign of Queen Melisende in 1149, placing all the holy places under one roof for the first time. The church became the seat of the first Latin patriarchs and the site of the kingdom's scriptorium. Eight 11th- and 12th-century Crusader leaders (Godfrey, Baldwin I, Baldwin II, Fulk, Baldwin III, Amalric, Baldwin IV and Baldwin V — the first eight rulers of the Kingdom of Jerusalem) were buried in the south transept and inside the Chapel of Adam. The royal tombs were looted during the Khwarizmian sack of Jerusalem in 1244 but probably remained mostly intact until 1808, when a fire damaged the church. The tombs may have been destroyed by the fire, or during renovations by the Greek Orthodox custodians of the church in 1809–1810. The remains of the kings may still be in unmarked pits under the church's pavement. The church was lost to Saladin, along with the rest of the city, in 1187, although the treaty established after the Third Crusade allowed Christian pilgrims to visit the site. Emperor Frederick II (r. 1220–50) regained the city and the church by treaty in the 13th century while under a ban of excommunication, with the consequence that the holiest church in Christianity was laid under interdict. The church seems to have been largely in the hands of Greek Orthodox patriarch Athanasius II of Jerusalem (c. 1231–47) during the last period of Latin control over Jerusalem. Both city and church were captured by the Khwarezmians in 1244. There was certainly a recognisable Nestorian (Church of the East) presence at the Holy Sepulchre from 1348 through 1575, as contemporary Franciscan accounts indicate. The Franciscan friars renovated the church in 1555, as it had been neglected despite increased numbers of pilgrims. The Franciscans rebuilt the Aedicule, extending the structure to create an antechamber. A marble shrine commissioned by Friar Boniface of Ragusa was placed to envelop the remains of Christ's tomb, probably to prevent pilgrims from touching the original rock or taking small pieces as souvenirs. A marble slab was placed over the limestone burial bed where Jesus's body is believed to have lain. After the renovation of 1555, control of the church oscillated between the Franciscans and the Orthodox, depending on which community could obtain a favorable firman from the "Sublime Porte" at a particular time, often through outright bribery. Violent clashes were not uncommon. There was no agreement about this question, although it was discussed at the negotiations to the Treaty of Karlowitz in 1699. During the Holy Week of 1757, Orthodox Christians reportedly took over some of the Franciscan-controlled church. This may have been the cause of the sultan's firman (decree) that later developed into the Status Quo. A fire severely damaged the structure again in 1808, causing the dome of the Rotunda to collapse and smashing the Aedicule's exterior decoration. The Rotunda and the Aedicule's exterior were rebuilt in 1809–10 by architect Nikolaos Ch. Komnenos of Mytilene in the contemporary Ottoman Baroque style. The interior of the antechamber, now known as the Chapel of the Angel, was partly rebuilt to a square ground plan in place of the previously semicircular western end. Another decree in 1853 from the sultan solidified the existing territorial division among the communities and solidified the Status Quo for arrangements to "remain in their present state", requiring consensus to make even minor changes. 
The dome was restored by Catholics, Greeks, and Turks in 1868, and has been made of iron ever since. By the time of the British Mandate for Palestine following the end of World War I, the cladding of red limestone applied to the Aedicule by Komnenos had deteriorated badly and was detaching from the underlying structure; from 1947 until restoration work in 2016–17, it was held in place with an exterior scaffolding of iron girders installed by the British authorities. Under the British Mandate, the Church of England played an important role in the upkeep of the Holy Sepulchre, providing funds for the maintenance of its external infrastructure and renouncing territorial claims near the church; the Protestant authorities also helped secure the church's exemption from taxes. Currently, the Anglican and Lutheran dioceses of Jerusalem are permitted to attend Armenian services. In 1948, Jerusalem was divided between Israel and Jordan, and the Old City, with the church, was made part of Jordan. In 1967, Israeli forces captured East Jerusalem in the Six Day War, and that area has remained under Israeli control ever since. Under Israeli rule, legal arrangements relating to the churches of East Jerusalem were maintained in coordination with the Jordanian government. The dome at the Church of the Holy Sepulchre was restored again in 1994–97 as part of extensive modern renovations that have been ongoing since 1959. During the 1970–78 restoration works and excavations inside the building, and under the nearby Muristan bazaar, it was found that the area was originally a quarry, from which white meleke limestone was struck. East of the Chapel of Saint Helena, the excavators discovered a void containing a second-century drawing of a Roman pilgrim ship, two low walls supporting the platform of Hadrian's second-century temple, and a higher fourth-century wall built to support Constantine's basilica. After the excavations of the early 1970s, the Armenian authorities converted this archaeological space into the Chapel of Saint Vartan, and created an artificial walkway over the quarry on the north of the chapel, so that the new chapel could be accessed (by permission) from the Chapel of Saint Helena. After the structure had been held together for seven decades by steel girders, the Israel Antiquities Authority (IAA) declared the visibly deteriorating Aedicule unsafe. A restoration of the Aedicule was agreed upon and executed from May 2016 to March 2017. Much of the $4 million project was funded by the World Monuments Fund, as well as $1.3 million from Mica Ertegun and a significant sum from King Abdullah II of Jordan. The existence of the original limestone cave walls within the Aedicule was confirmed, and a window was created to view this from the inside. The presence of moisture led to the discovery of an underground shaft resembling an escape tunnel carved into the bedrock, seeming to lead from the tomb. For the first time since at least 1555, on 26 October 2016, marble cladding that protects the supposed burial bed of Jesus was removed. Members of the National Technical University of Athens were present. Initially, only a layer of debris was visible. This was cleared over the next day, and a partially broken marble slab with a Crusader-style cross carved in it was revealed. By the night of 28 October, the original limestone burial bed was shown to be intact. The tomb was resealed shortly thereafter. Mortar from just above the burial bed was later dated to the mid-fourth century. 
On 25 March 2020, Israeli health officials ordered the site closed to the public due to the COVID-19 pandemic. According to the keeper of the keys, it was the first such closure since 1349, during the Black Death. Clerics continued regular prayers inside the building, and it reopened to visitors two months later, on 24 May. During church renovations in 2022, a stone slab covered in modern graffiti was moved from a wall, revealing Cosmatesque-style decoration on one face. According to an IAA archaeologist, the decoration was once inlaid with pieces of glass and fine marble; it indicates that the relic was the front of the church's high altar from the Crusader era (c. 1149), which was later used by the Greek Orthodox until being damaged in the 1808 fire. The courtyard facing the entrance to the church is known as the parvis. Two streets open into the parvis: St Helena Road (west) and Suq ed-Dabbagha (east). Around the parvis are a few smaller structures: some to the south of the parvis, opposite the church; others along its eastern side, from south to north; and others to the north of the parvis, in front of the church façade or against it. A group of three chapels borders the parvis on its west side. They originally formed the baptistery complex of the Constantinian church. The southernmost chapel was the vestibule, the middle chapel the baptistery, and the north chapel the chamber in which the patriarch chrismated the newly baptized before leading them into the rotunda north of this complex. Now each of the three is dedicated as a chapel. The 12th-century Crusader bell tower is just south of the Rotunda, to the left of the entrance. Its upper level was lost in a 1545 collapse. In 1719, another two storeys were lost. The wooden doors that compose the main entrance are the original, highly carved arched doors. Today, only the left-hand entrance is accessible, as the right doorway has long since been bricked up. The entrance to the church leads to the south transept, through the crusader façade in the parvis of a larger courtyard. This is found past a group of streets winding through the outer Via Dolorosa by way of a souq in the Muristan. This narrow way of access to such a large structure has proven to be hazardous at times. For example, when a fire broke out in 1840, dozens of pilgrims were trampled to death. According to their own family lore, the Muslim Nuseibeh family has been responsible for opening the door as an impartial party to the church's denominations since as early as the seventh century. However, they themselves admit that the documents held by various Christian denominations only mention their role since the 12th century, in the time of Saladin, which is the date more generally accepted. After retaking Jerusalem from the Crusaders in 1187, Saladin entrusted the Joudeh family with the key to the church, which is made of iron and 30 centimetres (12 in) long; the Nuseibehs either became or remained its doorkeepers. The 'immovable ladder' stands beneath a window on the façade. Just inside the church entrance is a stairway leading up to Calvary (Golgotha), traditionally regarded as the site of Jesus's crucifixion and the most lavishly decorated part of the church. The exit is via another stairway opposite the first, leading down to the ambulatory. Golgotha and its chapels are just south of the main altar of the catholicon. Calvary is split into two chapels: one Greek Orthodox and one Catholic, each with its own altar. 
On the left (north) side, the Greek Orthodox chapel's altar is placed over the supposed rock of Calvary (the 12th Station of the Cross), which can be touched through a hole in the floor beneath the altar. The rock can be seen under protective glass on both sides of the altar. The softer surrounding stone was removed when the church was built. The Roman Catholic (Franciscan) Chapel of the Nailing of the Cross (the 11th Station of the Cross) stretches to the south. Between the Catholic Altar of the Nailing to the Cross and the Orthodox altar is the Catholic Altar of the Stabat Mater, which holds an 18th-century bust of Mary; this middle altar marks the 13th Station of the Cross. On the ground floor, just underneath the Golgotha chapel, is the Chapel of Adam. According to tradition, Jesus was crucified over the place where Adam's skull was buried. According to some, the blood of Christ ran down the cross and through the rocks to fill Adam's skull. Through a window at the back of the 11th-century apse, the rock of Calvary can be seen with a crack traditionally held to be caused by the earthquake that followed Jesus's death; some scholars claim it is the result of quarrying against a natural flaw in the rock. Behind the Chapel of Adam is the Greek Treasury (Treasury of the Greek Patriarch). Some of its relics, such as a 12th-century crystal mitre, were transferred to the Greek Orthodox Patriarchate Museum (the Patriarchal Museum) on Greek Orthodox Patriarchate Street. Just inside the entrance to the church is the Stone of Anointing (also Stone of the Anointing or Stone of Unction), which tradition holds to be where Jesus's body was prepared for burial by Joseph of Arimathea, though this tradition is only attested since the Crusader era (notably by the Italian Dominican pilgrim Riccoldo da Monte di Croce in 1288), and the present stone was only added in the 1810 reconstruction. The wall behind the stone is defined by its striking blue balconies and taphos symbol-bearing red banners (depicting the insignia of the Brotherhood of the Holy Sepulchre), and is decorated with lamps. The modern mosaic along the wall depicts the anointing of Jesus's body, preceded on the right by the Descent from the Cross, and succeeded on the left by the Burial of Jesus. The wall was a temporary addition to support the arch above it, which had been weakened after the damage in the 1808 fire; it blocks the view of the rotunda, separates the entrance from the catholicon, sits on top of four of the now empty and desecrated Crusader graves and is no longer structurally necessary. Opinions differ as to whether it is to be seen as the 13th Station of the Cross, which others identify as the lowering of Jesus from the cross, located between the 11th and 12th stations on Calvary. The lamps that hang over the Stone of Unction, adorned with cross-bearing chain links, are contributed by Armenians, Copts, Greeks and Latins. Immediately inside and to the left of the entrance is a bench (formerly a divan) that has traditionally been used by the church's Muslim doorkeepers, along with some Christian clergy, as well as electrical wiring. To the right of the entrance is a wall along the ambulatory containing the staircase leading to Golgotha. Further along the same wall is the entrance to the Chapel of Adam. The rotunda is the building of the larger dome located on the far west side. In the centre of the rotunda is a small chapel called the Aedicule in English, from the Latin aedicula, in reference to a small shrine. 
The Aedicule has two rooms: the first holds a relic called the Angel's Stone, which is believed to be a fragment of the large stone that sealed the tomb; the second, smaller room contains the tomb of Jesus. Possibly to prevent pilgrims from removing bits of the original rock as souvenirs, a surface of marble cladding had been placed on the tomb by 1555 to protect it from further damage. In October 2016, the top slab was pulled back to reveal an older, partially broken marble slab with a Crusader-style cross carved in it. Beneath it, the limestone burial bed was revealed to be intact. Under the Status Quo, the Eastern Orthodox, Roman Catholic, and Armenian Apostolic Churches all have rights to the interior of the tomb, and all three communities celebrate the Divine Liturgy or Holy Mass there daily. It is also used for other ceremonies on special occasions, such as the Holy Saturday ceremony of the Holy Fire led by the Greek Orthodox patriarch (with the participation of the Coptic and Armenian patriarchs). To its rear, in the Coptic Chapel, constructed of iron latticework, lies the altar used by the Coptic Orthodox. Historically, the Georgians also retained the key to the Aedicule. To the right of the sepulchre on the northwestern edge of the Rotunda is the Chapel of the Apparition, which is reserved for Roman Catholic use. In the central nave of the Crusader-era church, just east of the larger rotunda, is the Crusader structure housing the main altar of the Church, today the Greek Orthodox catholicon. Its dome is 19.8 metres (65 ft) in diameter, and is set directly over the centre of the transept crossing of the choir, where the compas is situated: an omphalos ("navel") stone once thought to be the center of the world and still venerated as such by Orthodox Christians (associated with the site of the Crucifixion and the Resurrection). Since 1996, this dome has been topped by the monumental Golgotha Crucifix, which the Greek Patriarch Diodoros I of Jerusalem consecrated. The new crucifix was erected at the initiative of Israeli professor Gustav Kühnel, intended not only to be worthy of the singularity of the site, but also to become a symbol of the efforts toward unity in the community of Christian faith. The catholicon's iconostasis demarcates the Orthodox sanctuary behind it, to its east. The iconostasis is flanked to the front by two episcopal thrones: the southern seat (cathedra) is the patriarchal throne of the Greek Orthodox patriarch of Jerusalem, and the northern seat is for an archbishop or bishop. (There is also a popular claim that both are patriarchal thrones, with the northern one being for the patriarch of Antioch; this has been described as a misstatement, however.) South of the Aedicule is the "Place of the Three Marys", marked by a stone canopy (the Station of the Holy Women) and a large modern wall mosaic. From here one can enter the Armenian monastery, which stretches over the ground and first upper floor of the church's southeastern part. West of the Aedicule, to the rear of the Rotunda, is the Syriac Chapel with the Tomb of Joseph of Arimathea, located in a Constantinian apse and containing an opening to an ancient Jewish rock-cut tomb. This chapel is where the Syriac Orthodox celebrate their Liturgy on Sundays. It is formally the Syriac Orthodox Chapel of Saint Joseph of Arimathea and Saint Nicodemus; on Sundays and feast days it is furnished for the celebration of Mass. It is accessed from the Rotunda, by a door west of the Aedicule. 
On the far side of the chapel is the low entrance to an almost complete first-century Jewish tomb, initially holding six kokh-type funeral shafts radiating from a central chamber, two of which are still exposed. Although this space was discovered relatively recently and contains no identifying marks, some believe that Joseph of Arimathea and Nicodemus were buried here. Since Jews always buried their dead outside the city, the presence of this tomb seems to prove that the Holy Sepulchre site was outside the city walls at the time of the crucifixion. The Arches of the Virgin are seven arches (an arcade) at the northern end of the north transept, which is to the catholicon's north. Disputed between the Orthodox and the Latins, the area is used to store ladders. On the northeast side of the complex is the Prison of Christ, alleged to be where Jesus was held. The Greek Orthodox show pilgrims yet another place where Jesus was allegedly held, the similarly named Prison of Christ in their Monastery of the Praetorium, located near the Church of Ecce Homo, between the Second and Third Stations of the Via Dolorosa. The Armenians regard a recess in the Monastery of the Flagellation at the Second Station of the Via Dolorosa as the Prison of Christ. A cistern among the ruins beneath the Church of St. Peter in Gallicantu on Mount Zion is also alleged to have been the Prison of Christ. To reconcile the traditions, some allege that Jesus was held in the Mount Zion cell in connection with his trial by the Jewish high priest, at the Praetorium in connection with his trial by the Roman governor Pilate, and near the Golgotha before crucifixion. The chapels in the ambulatory are, from north to south: the Greek Chapel of Saint Longinus (named after Longinus), the Armenian Chapel of the Division of Robes, the entrance to the Chapel of Saint Helena, and the Greek Chapel of the Derision. An Ottoman decree of 1757 helped establish a status quo upholding the state of affairs for various Holy Land sites. The status quo was upheld in Sultan Abdülmecid I's firman (decree) of 1852/3, which pinned down the now-permanent statutes of property and the regulations concerning the roles of the different denominations and other custodians. The primary custodians are the Roman Catholic, Greek Orthodox and Armenian Apostolic churches. The Greek Orthodox act through the Greek Orthodox Patriarchate as well as through the Brotherhood of the Holy Sepulchre. Roman Catholics act through the Franciscan Custody of the Holy Land. In the 19th century, the Coptic Orthodox, the Ethiopian Orthodox and the Syriac Orthodox also acquired lesser responsibilities, which include shrines and other structures in and around the building. None of these controls the main entrance. In 1192, Saladin assigned door-keeping responsibilities to the Muslim Nuseibeh family. The wooden doors that compose the main entrance are the original, highly carved doors. The Joudeh al-Goudia (al-Ghodayya) family were entrusted as custodians of the keys of the Holy Sepulchre by Saladin in 1187. Despite occasional disagreements, religious services take place in the Church with regularity, and coexistence is generally peaceful. An example of concord between the Church custodians is the full restoration of the Aedicule from 2016 to 2017. The establishment of the modern Status Quo in 1853 did not halt controversy and occasional violence. 
In 1902, 18 friars were hospitalized and some monks were jailed after the Franciscans and Greeks disagreed over who could clean the lowest step of the Chapel of the Franks. In the aftermath, the Greek patriarch, Franciscan custos, Ottoman governor and French consul general signed a convention that both denominations could sweep it. On a hot summer day in 2002, a Coptic monk moved his chair from its agreed spot into the shade. The Ethiopians interpreted this as a hostile move, and eleven people were hospitalized after the resulting fight. In another incident in 2004, during Orthodox celebrations of the Exaltation of the Holy Cross, a door to the Franciscan chapel was left open. The Orthodox took this as a sign of disrespect, and a fistfight broke out. Some people were arrested, but no one was seriously injured. On Palm Sunday, in April 2008, a brawl broke out when a Greek monk was ejected from the building by a rival faction. Police were called to the scene but were also attacked by the enraged brawlers. On Sunday, 9 November 2008, a clash erupted between Armenian and Greek monks during celebrations for the Feast of the Cross. In February 2018, the church was closed following a tax dispute over 152 million euros of uncollected taxes on church properties. The city hall stressed that the Church of the Holy Sepulchre and all other churches are exempt from the taxes, with the changes only affecting establishments like "hotels, halls and businesses" owned by the churches. NPR reported that the Greek Orthodox Church calls itself the second-largest landowner in Israel, after the Israeli government. There was a lock-in protest against an Israeli legislative proposal which would expropriate church lands that had been sold to private companies since 2010, a measure which church leaders assert constitutes a serious violation of their property rights and the Status Quo. In a joint official statement the church authorities protested what they considered to be the peak of a systematic campaign in: a discriminatory and racist bill that targets solely the properties of the Christian community in the Holy Land ... This reminds us all of laws of a similar nature which were enacted against the Jews during dark periods in Europe. The 2018 taxation affair did not cover church buildings or religion-related facilities (because they are exempt by law), but rather commercial facilities such as the Notre Dame Hotel, which was not paying the municipal property tax, and any land owned and used commercially. The church holds the rights to land where private homes have been constructed, and some of the disagreement arose after the Knesset proposed a bill that would make it harder for a private company to decline to extend a lease for land used by homeowners. The church leaders have said that such a bill would make it harder for them to sell church-owned lands. According to The Jerusalem Post: The stated aim of the bill is to protect homeowners against the possibility that private companies will not extend their leases of land on which their houses or apartments stand. In June 2019, a number of Christian denominations in Jerusalem raised their voices against the Supreme Court's decision to uphold the sale of three properties by the Greek Orthodox Patriarchate to Ateret Cohanim – an organization that seeks to increase the number of Jews living in the Old City and East Jerusalem. 
The church leaders warned that if the organization gets to control the sites, Christians could lose access to the Church of the Holy Sepulchre. In June 2022, the Supreme Court upheld the sale and ended the legal battle. The site of the church had been a temple to Jupiter or Venus built by Hadrian before Constantine's edifice was built. Hadrian's temple had been located there because it was the junction of the main north–south road with one of the two main east–west roads and directly adjacent to the forum (now the location of the Muristan, which is smaller than the former forum). The forum itself had been placed, as is traditional in Roman towns, at the junction of the main north–south road with the other main east–west road (which is now El-Bazar/David Street). The temple and forum together took up the entire space between the two main east–west roads (a few above-ground remains of the east end of the temple precinct still survive in the Alexander Nevsky Church complex of the Russian Mission in Exile). From the archaeological excavations in the 1970s, it is clear that construction took over most of the site of the earlier temple enclosure and that the Triportico and Rotunda roughly overlapped with the temple building itself; the excavations indicate that the temple extended at least as far back as the Aedicule, and the temple enclosure would have reached back slightly further. Virgilio Canio Corbo, a Franciscan priest and archaeologist, who was present at the excavations, estimated from the archaeological evidence that the western retaining wall of the temple itself would have passed extremely close to the east side of the supposed tomb; if the wall had been any further west any tomb would have been crushed under the weight of the wall (which would be immediately above it) if it had not already been destroyed when foundations for the wall were made. Other archaeologists have criticized Corbo's reconstructions. Dan Bahat, the former city archaeologist of Jerusalem, regards them as unsatisfactory, as there is no known temple of Aphrodite (Venus) matching Corbo's design, and no archaeological evidence for Corbo's suggestion that the temple building was on a platform raised high enough to avoid including anything sited where the Aedicule is now; indeed Bahat notes that many temples to Aphrodite have a rotunda-like design, and argues that there is no archaeological reason to assume that the present rotunda was not based on a rotunda in the temple previously on the site. The New Testament describes Jesus's tomb as being outside the city wall, as was normal for burials across the ancient world, which were regarded as unclean. Today, the site of the Church is within the current walls of the old city of Jerusalem. It has been well documented by archaeologists that in the time of Jesus, the walled city was smaller and the wall then was to the east of the current site of the Church. In other words, the city had been much narrower in Jesus's time, with the site then having been outside the walls; since Herod Agrippa (41–44) is recorded by history as extending the city to the north (beyond the present northern walls), the required repositioning of the western wall is traditionally attributed to him as well. The area immediately to the south and east of the sepulchre was a quarry and outside the city during the early first century as excavations under the Lutheran Church of the Redeemer across the street demonstrated. The church is a part of the UNESCO World Heritage Site Old City of Jerusalem. 
The Christian Quarter and the (also Christian) Armenian Quarter of the Old City of Jerusalem are both located in the western and northwestern parts of the Old City because the Holy Sepulchre stands close to the northwestern corner of the walled city. The adjacent neighbourhood within the Christian Quarter is called the Muristan, a term derived from the Persian word for hospital – Christian pilgrim hospices have been maintained in this area near the Holy Sepulchre since at least the time of Charlemagne. From the ninth century onward, churches inspired by the Anastasis were built across Europe. One example is Santo Stefano in Bologna, Italy, an agglomeration of seven churches recreating shrines of Jerusalem. Several churches and monasteries in Europe (for instance in Germany and Russia), and at least one church in the United States, have been wholly or partially modeled on the Church of the Resurrection, some even reproducing other holy places for the benefit of pilgrims who could not travel to the Holy Land. They include the Heiliges Grab ("Holy Tomb") of Görlitz, constructed between 1481 and 1504; the New Jerusalem Monastery in Moscow Oblast, constructed by Patriarch Nikon between 1656 and 1666; and the Mount St. Sepulchre Franciscan Monastery, built by the Franciscans in Washington, DC, in 1898. Author Andrew Holt writes that the church is the most important in all Christendom.
[ { "paragraph_id": 0, "text": "The Church of the Holy Sepulchre, also known as the Church of the Resurrection, is a fourth-century church in the Christian Quarter of the Old City of Jerusalem. It is considered to be the holiest site for Christians in the world, as it has been the most important pilgrimage site for Christianity since the fourth century.", "title": "" }, { "paragraph_id": 1, "text": "According to traditions dating back to the fourth century, it contains two sites considered holy in Christianity: the site where Jesus was crucified, at a place known as Calvary or Golgotha, and Jesus's empty tomb, which is where he was buried and resurrected. Each time the church was rebuilt, some of the antiquities from the preceding structure were used in the newer renovation. The tomb itself is enclosed by a 19th-century shrine called the Aedicule. The Status Quo, an understanding between religious communities dating to 1757, applies to the site.", "title": "" }, { "paragraph_id": 2, "text": "Within the church proper are the last four stations of the Cross of the Via Dolorosa, representing the final episodes of the Passion of Jesus. The church has been a major Christian pilgrimage destination since its creation in the fourth century, as the traditional site of the resurrection of Christ, thus its original Greek name, Church of the Anastasis ('Resurrection').", "title": "" }, { "paragraph_id": 3, "text": "Control of the church itself is shared among several Christian denominations and secular entities in complicated arrangements essentially unchanged for over 160 years, and some for much longer. The main denominations sharing property over parts of the church are the Roman Catholic, Greek Orthodox and Armenian Apostolic, and to a lesser degree the Coptic, Syriac, and Ethiopian Orthodox churches.", "title": "" }, { "paragraph_id": 4, "text": "The church was historically named either for the Resurrection of Jesus, or for his tomb, which is located at its focal point.", "title": "Name" }, { "paragraph_id": 5, "text": "The Church of the Holy Sepulchre, is also known as the Basilica of the Holy Sepulchre, or simply the Holy Sepulchre.", "title": "Name" }, { "paragraph_id": 6, "text": "Eastern Christians also call it the Church of the Resurrection or Church of the Anastasis, Anastasis being Greek for Resurrection.", "title": "Name" }, { "paragraph_id": 7, "text": "Following the siege of Jerusalem in AD 70 during the First Jewish–Roman War, Jerusalem had been reduced to ruins. In AD 130, the Roman emperor Hadrian began the building of a Roman colony, the new city of Aelia Capitolina, on the site. Circa AD 135, he ordered that a cave containing a rock-cut tomb be filled in to create a flat foundation for a temple dedicated to Jupiter or Venus. The temple remained until the early fourth century.", "title": "History" }, { "paragraph_id": 8, "text": "After seeing a vision of a cross in the sky in 312, Constantine the Great began to favor Christianity, signed the Edict of Milan legalising the religion, and sent his mother, Helena, to Jerusalem to look for Christ's tomb. With the help of Bishop of Caesarea Eusebius and Bishop of Jerusalem Macarius, three crosses were found near a tomb; one which allegedly cured people of death was presumed to be the True Cross Jesus was crucified on, leading the Romans to believe that they had found Calvary.", "title": "History" }, { "paragraph_id": 9, "text": "Constantine ordered in about 326 that the temple to Jupiter/Venus be replaced by a church. 
After the temple was torn down and its ruins removed, the soil was removed from the cave, revealing a rock-cut tomb that Helena and Macarius identified as the burial site of Jesus.", "title": "History" }, { "paragraph_id": 10, "text": "A shrine was built on the site of the tomb Helena and Macarius had identified as that of Jesus, enclosing the rock tomb walls within its own.", "title": "History" }, { "paragraph_id": 11, "text": "The Church of the Holy Sepulchre, planned by the architect Zenobius, was built as separate constructs over two holy sites:", "title": "History" }, { "paragraph_id": 12, "text": "The Church of the Holy Sepulchre site has been recognized since early in the fourth century as the place where Jesus was crucified, buried, and rose from the dead. The church was consecrated on 13 September 335.", "title": "History" }, { "paragraph_id": 13, "text": "In 327, Constantine and Helena separately commissioned the Church of the Nativity in Bethlehem to commemorate the birth of Jesus.", "title": "History" }, { "paragraph_id": 14, "text": "The Constantinian sanctuary in Jerusalem was destroyed by a fire in May of 614, when the Sassanid Empire, under Khosrau II, invaded Jerusalem and captured the True Cross. In 630, the Emperor Heraclius rebuilt the church after recapturing the city.", "title": "History" }, { "paragraph_id": 15, "text": "After Jerusalem came under Islamic rule, it remained a Christian church, with the early Muslim rulers protecting the city's Christian sites, prohibiting their destruction or use as living quarters. A story reports that the caliph Umar ibn al-Khattab visited the church and stopped to pray on the balcony, but at the time of prayer, turned away from the church and prayed outside. He feared that future generations would misinterpret this gesture, taking it as a pretext to turn the church into a mosque. Eutychius of Alexandria adds that Umar wrote a decree saying that Muslims would not inhabit this location. The building suffered severe damage from an earthquake in 746.", "title": "History" }, { "paragraph_id": 16, "text": "Early in the ninth century, another earthquake damaged the dome of the Anastasis. The damage was repaired in 810 by Patriarch Thomas I. In 841, the church suffered a fire. In 935, the Christians prevented the construction of a Muslim mosque adjacent to the Church. In 938, a new fire damaged the inside of the basilica and came close to the rotunda. In 966, due to a defeat of Muslim armies in the region of Syria, a riot broke out, which was followed by reprisals. The basilica was burned again. The doors and roof were burnt, and Patriarch John VII was murdered.", "title": "History" }, { "paragraph_id": 17, "text": "On 18 October 1009, Fatimid caliph al-Hakim bi-Amr Allah ordered the complete destruction of the church as part of a more general campaign against Christian places of worship in Palestine and Egypt. The damage was extensive, with few parts of the early church remaining, and the roof of the rock-cut tomb damaged; the original shrine was destroyed. Some partial repairs followed. Christian Europe reacted with shock: it was a spur to expulsions of Jews and, later on, the Crusades.", "title": "History" }, { "paragraph_id": 18, "text": "In wide-ranging negotiations between the Fatimids and the Byzantine Empire in 1027–1028, an agreement was reached whereby the new Caliph Ali az-Zahir (al-Hakim's son) agreed to allow the rebuilding and redecoration of the church. 
The rebuilding was finally completed during the tenures of Emperor Constantine IX Monomachos and Patriarch Nicephorus of Jerusalem in 1048. As a concession, the mosque in Constantinople was reopened and the khutba sermons were to be pronounced in az-Zahir's name. Muslim sources say a by-product of the agreement was the renunciation of Islam by many Christians who had been forced to convert under al-Hakim's persecutions. In addition, the Byzantines, while releasing 5,000 Muslim prisoners, made demands for the restoration of other churches destroyed by al-Hakim and the reestablishment of a patriarch in Jerusalem. Contemporary sources credit the emperor with spending vast sums in an effort to restore the Church of the Holy Sepulchre after this agreement was made. Still, \"a total replacement was far beyond available resources. The new construction was concentrated on the rotunda and its surrounding buildings: the great basilica remained in ruins.\"", "title": "History" }, { "paragraph_id": 19, "text": "The rebuilt church site consisted of \"a court open to the sky, with five small chapels attached to it.\" The chapels were east of the court of resurrection (when reconstructed, the location of the tomb was under open sky), where the western wall of the great basilica had been. They commemorated scenes from the passion, such as the location of the prison of Christ and his flagellation, and presumably were so placed because of the difficulties of free movement among shrines in the city streets. The dedication of these chapels indicates the importance of the pilgrims' devotion to the suffering of Christ. They have been described as \"a sort of Via Dolorosa in miniature\" since little or no rebuilding took place on the site of the great basilica. Western pilgrims to Jerusalem during the 11th century found much of the sacred site in ruins. Control of Jerusalem, and thereby the Church of the Holy Sepulchre, continued to change hands several times between the Fatimids and the Seljuk Turks (loyal to the Abbasid caliph in Baghdad) until the Crusaders' arrival in 1099.", "title": "History" }, { "paragraph_id": 20, "text": "Many historians maintain that the main concern of Pope Urban II, when calling for the First Crusade, was the threat to Constantinople from the Seljuk invasion of Asia Minor in response to the appeal of Byzantine Emperor Alexios I Komnenos. Historians agree that the fate of Jerusalem and thereby the Church of the Holy Sepulchre was also of concern, if not the immediate goal of papal policy in 1095. The idea of taking Jerusalem gained more focus as the Crusade was underway. The rebuilt church site was taken from the Fatimids (who had recently taken it from the Abbasids) by the knights of the First Crusade on 15 July 1099.", "title": "History" }, { "paragraph_id": 21, "text": "The First Crusade was envisioned as an armed pilgrimage, and no crusader could consider his journey complete unless he had prayed as a pilgrim at the Holy Sepulchre. 
The classical theory is that Crusader leader Godfrey of Bouillon, who became the first Latin ruler of Jerusalem, decided not to use the title \"king\" during his lifetime, and declared himself Advocatus Sancti Sepulchri ('Protector [or Defender] of the Holy Sepulchre').", "title": "History" }, { "paragraph_id": 22, "text": "According to the German priest and pilgrim Ludolf von Sudheim, the keys of the Chapel of the Holy Sepulchre were in hands of the \"ancient Georgians\", and the food, alms, candles and oil for lamps were given to them by the pilgrims at the south door of the church.", "title": "History" }, { "paragraph_id": 23, "text": "By the Crusader period, a cistern under the former basilica was rumoured to have been where Helena had found the True Cross, and began to be venerated as such; the cistern later became the Chapel of the Invention of the Cross, but there is no evidence of the site's identification before the 11th century, and modern archaeological investigation has now dated the cistern to 11th-century repairs by Monomachos.", "title": "History" }, { "paragraph_id": 24, "text": "William of Tyre, chronicler of the Crusader Kingdom of Jerusalem, reports on the rebuilding of the church in the mid-12th century. The Crusaders investigated the eastern ruins on the site, occasionally excavating through the rubble, and while attempting to reach the cistern, they discovered part of the original ground level of Hadrian's temple enclosure; they transformed this space into a chapel dedicated to Helena, widening their original excavation tunnel into a proper staircase.", "title": "History" }, { "paragraph_id": 25, "text": "The Crusaders began to refurnish the church in Romanesque style and added a bell tower. These renovations unified the small chapels on the site and were completed during the reign of Queen Melisende in 1149, placing all the holy places under one roof for the first time.", "title": "History" }, { "paragraph_id": 26, "text": "The church became the seat of the first Latin patriarchs and the site of the kingdom's scriptorium.", "title": "History" }, { "paragraph_id": 27, "text": "Eight 11th- and 12th-century Crusader leaders (Godfrey, Baldwin I, Baldwin II, Fulk, Baldwin III, Amalric, Baldwin IV and Baldwin V — the first eight rulers of the Kingdom of Jerusalem) were buried in the south transept and inside the Chapel of Adam. The royal tombs were looted during the Khwarizmian sack of Jerusalem in 1244 but probably remained mostly intact until 1808 when a fire damaged the church. The tombs may have been destroyed by the fire, or during renovations by the Greek Orthodox custodians of the church in 1809-1810. The remains of the kings may still be in unmarked pits under the church's pavement.", "title": "History" }, { "paragraph_id": 28, "text": "The church was lost to Saladin, along with the rest of the city, in 1187, although the treaty established after the Third Crusade allowed Christian pilgrims to visit the site. Emperor Frederick II (r. 1220–50) regained the city and the church by treaty in the 13th century while under a ban of excommunication, with the consequence that the holiest church in Christianity was laid under interdict. The church seems to have been largely in the hands of Greek Orthodox patriarch Athanasius II of Jerusalem (c. 1231–47) during the last period of Latin control over Jerusalem. 
Both city and church were captured by the Khwarezmians in 1244.", "title": "History" }, { "paragraph_id": 29, "text": "There was certainly a recognisable Nestorian (Church of the East) presence at the Holy Sepulchre from the years 1348 through 1575, as contemporary Franciscan accounts indicate. The Franciscan friars renovated the church in 1555, as it had been neglected despite increased numbers of pilgrims. The Franciscans rebuilt the Aedicule, extending the structure to create an antechamber. A marble shrine commissioned by Friar Boniface of Ragusa was placed to envelop the remains of Christ's tomb, probably to prevent pilgrims from touching the original rock or taking small pieces as souvenirs. A marble slab was placed over the limestone burial bed where Jesus's body is believed to have lain.", "title": "History" }, { "paragraph_id": 30, "text": "After the renovation of 1555, control of the church oscillated between the Franciscans and the Orthodox, depending on which community could obtain a favorable firman from the \"Sublime Porte\" at a particular time, often through outright bribery. Violent clashes were not uncommon. There was no agreement about this question, although it was discussed at the negotiations to the Treaty of Karlowitz in 1699. During the Holy Week of 1757, Orthodox Christians reportedly took over some of the Franciscan-controlled church. This may have been the cause of the sultan's firman (decree) later developed into the Status Quo.", "title": "History" }, { "paragraph_id": 31, "text": "A fire severely damaged the structure again in 1808, causing the dome of the Rotunda to collapse and smashing the Aedicule's exterior decoration. The Rotunda and the Aedicule's exterior were rebuilt in 1809–10 by architect Nikolaos Ch. Komnenos of Mytilene in the contemporary Ottoman Baroque style. The interior of the antechamber, now known as the Chapel of the Angel, was partly rebuilt to a square ground plan in place of the previously semicircular western end.", "title": "History" }, { "paragraph_id": 32, "text": "Another decree in 1853 from the sultan solidified the existing territorial division among the communities and solidified the Status Quo for arrangements to \"remain in their present state\", requiring consensus to make even minor changes.", "title": "History" }, { "paragraph_id": 33, "text": "The dome was restored by Catholics, Greeks, and Turks in 1868, being made of iron ever since.", "title": "History" }, { "paragraph_id": 34, "text": "By the time of the British Mandate for Palestine following the end of World War I, the cladding of red limestone applied to the Aedicule by Komnenos had deteriorated badly and was detaching from the underlying structure; from 1947 until restoration work in 2016–17, it was held in place with an exterior scaffolding of iron girders installed by the British authorities.", "title": "History" }, { "paragraph_id": 35, "text": "After the care of the British Empire, the Church of England had an important role in the appropriation of the Holy Sepulcher, such as funds for the maintenance of external infrastructures, and the abolition of territorial claims near the Temple of the Holy Sepulcher, the Protestant Church allowed to carry out the elimination of taxes from the Holy Sepulcher, currently the Anglican and Lutheran dioceses of Jerusalem are allowed to attend Armenian cults.", "title": "History" }, { "paragraph_id": 36, "text": "In 1948, Jerusalem was divided between Israel and Jordan and the Old City with the church were made part of Jordan. 
In 1967, Israeli forces captured East Jerusalem in the Six Day War, and that area has remained under Israeli control ever since. Under Israeli rule, legal arrangements relating to the churches of East Jerusalem were maintained in coordination with the Jordanian government. The dome at the Church of the Holy Sepulchre was restored again in 1994–97 as part of extensive modern renovations that have been ongoing since 1959. During the 1970–78 restoration works and excavations inside the building, and under the nearby Muristan bazaar, it was found that the area was originally a quarry, from which white meleke limestone was struck.", "title": "History" }, { "paragraph_id": 37, "text": "East of the Chapel of Saint Helena, the excavators discovered a void containing a second-century drawing of a Roman pilgrim ship, two low walls supporting the platform of Hadrian's second-century temple, and a higher fourth-century wall built to support Constantine's basilica. After the excavations of the early 1970s, the Armenian authorities converted this archaeological space into the Chapel of Saint Vartan, and created an artificial walkway over the quarry on the north of the chapel, so that the new chapel could be accessed (by permission) from the Chapel of Saint Helena.", "title": "History" }, { "paragraph_id": 38, "text": "After seven decades of being held together by steel girders, the Israel Antiquities Authority (IAA) declared the visibly deteriorating Aedicule structure unsafe. A restoration of the Aedicule was agreed upon and executed from May 2016 to March 2017. Much of the $4 million project was funded by the World Monuments Fund, as well as $1.3 million from Mica Ertegun and a significant sum from King Abdullah II of Jordan. The existence of the original limestone cave walls within the Aedicule was confirmed, and a window was created to view this from the inside. The presence of moisture led to the discovery of an underground shaft resembling an escape tunnel carved into the bedrock, seeming to lead from the tomb. For the first time since at least 1555, on 26 October 2016, marble cladding that protects the supposed burial bed of Jesus was removed. Members of the National Technical University of Athens were present. Initially, only a layer of debris was visible. This was cleared in the next day, and a partially broken marble slab with a Crusader-style cross carved was revealed. By the night of 28 October, the original limestone burial bed was shown to be intact. The tomb was resealed shortly thereafter. Mortar from just above the burial bed was later dated to the mid-fourth century.", "title": "History" }, { "paragraph_id": 39, "text": "On 25 March 2020, Israeli health officials ordered the site closed to the public due to the COVID-19 pandemic. According to the keeper of the keys, it was the first such closure since 1349, during the Black Death. Clerics continued regular prayers inside the building, and it reopened to visitors two months later, on 24 May.", "title": "History" }, { "paragraph_id": 40, "text": "During church renovations in 2022, a stone slab covered in modern graffiti was moved from a wall, revealing Cosmatesque-style decoration on one face. According to an IAA archaeologist, the decoration was once inlaid with pieces of glass and fine marble; it indicates that the relic was the front of the church's high altar from the Crusader era (c. 
1149), which was later used by the Greek Orthodox until being damaged in the 1808 fire.", "title": "History" }, { "paragraph_id": 41, "text": "The courtyard facing the entrance to the church is known as the parvis. Two streets open into the parvis: St Helena Road (west) and Suq ed-Dabbagha (east). Around the parvis are a few smaller structures.", "title": "Description" }, { "paragraph_id": 42, "text": "South of the parvis, opposite the church:", "title": "Description" }, { "paragraph_id": 43, "text": "On the eastern side of the parvis, south to north:", "title": "Description" }, { "paragraph_id": 44, "text": "North of the parvis, in front of the church façade or against it:", "title": "Description" }, { "paragraph_id": 45, "text": "A group of three chapels borders the parvis on its west side. They originally formed the baptistery complex of the Constantinian church. The southernmost chapel was the vestibule, the middle chapel the baptistery, and the north chapel the chamber in which the patriarch chrismated the newly baptized before leading them into the rotunda north of this complex. Now they are dedicated as (from south to north)", "title": "Description" }, { "paragraph_id": 46, "text": "The 12th-century Crusader bell tower is just south of the Rotunda, to the left of the entrance. Its upper level was lost in a 1545 collapse. In 1719, another two storeys were lost.", "title": "Description" }, { "paragraph_id": 47, "text": "The wooden doors that compose the main entrance are the original, highly carved arched doors. Today, only the left-hand entrance is currently accessible, as the right doorway has long since been bricked up. The entrance to the church leads to the south transept, through the crusader façade in the parvis of a larger courtyard. This is found past a group of streets winding through the outer Via Dolorosa by way of a souq in the Muristan. This narrow way of access to such a large structure has proven to be hazardous at times. For example, when a fire broke out in 1840, dozens of pilgrims were trampled to death.", "title": "Description" }, { "paragraph_id": 48, "text": "According to their own family lore, the Muslim Nuseibeh family has been responsible for opening the door as an impartial party to the church's denominations already since the seventh century. However, they themselves admit that the documents held by various Christian denominations only mention their role since the 12th century, in the time of Saladin, which is the date more generally accepted. After retaking Jerusalem from the Crusaders in 1187, Saladin entrusted the Joudeh family with the key to the church, which is made of iron and 30 centimetres (12 in) long; the Nuseibehs either became or remained its doorkeepers.", "title": "Description" }, { "paragraph_id": 49, "text": "The 'immovable ladder' stands beneath a window on the façade.", "title": "Description" }, { "paragraph_id": 50, "text": "", "title": "Description" }, { "paragraph_id": 51, "text": "Just inside the church entrance is a stairway leading up to Calvary (Golgotha), traditionally regarded as the site of Jesus's crucifixion and the most lavishly decorated part of the church. The exit is via another stairway opposite the first, leading down to the ambulatory. Golgotha and its chapels are just south of the main altar of the catholicon.", "title": "Description" }, { "paragraph_id": 52, "text": "Calvary is split into two chapels: one Greek Orthodox and one Catholic, each with its own altar. 
On the left (north) side, the Greek Orthodox chapel's altar is placed over the supposed rock of Calvary (the 12th Station of the Cross), which can be touched through a hole in the floor beneath the altar. The rock can be seen under protective glass on both sides of the altar. The softer surrounding stone was removed when the church was built. The Roman Catholic (Franciscan) Chapel of the Nailing of the Cross (the 11th Station of the Cross) stretches to the south. Between the Catholic Altar of the Nailing to the Cross and the Orthodox altar is the Catholic Altar of the Stabat Mater, which has a statue of Mary with an 18th-century bust; this middle altar marks the 13th Station of the Cross.", "title": "Description" }, { "paragraph_id": 53, "text": "On the ground floor, just underneath the Golgotha chapel, is the Chapel of Adam. According to tradition, Jesus was crucified over the place where Adam's skull was buried. According to some, the blood of Christ ran down the cross and through the rocks to fill Adam's skull. Through a window at the back of the 11th-century apse, the rock of Calvary can be seen with a crack traditionally held to be caused by the earthquake that followed Jesus's death; some scholars claim it is the result of quarrying against a natural flaw in the rock.", "title": "Description" }, { "paragraph_id": 54, "text": "Behind the Chapel of Adam is the Greek Treasury (Treasury of the Greek Patriarch). Some of its relics, such as a 12th-century crystal mitre, were transferred to the Greek Orthodox Patriarchate Museum (the Patriarchal Museum) on Greek Orthodox Patriarchate Street.", "title": "Description" }, { "paragraph_id": 55, "text": "Just inside the entrance to the church is the Stone of Anointing (also Stone of the Anointing or Stone of Unction), which tradition holds to be where Jesus's body was prepared for burial by Joseph of Arimathea, though this tradition is only attested since the crusader era (notably by the Italian Dominican pilgrim Riccoldo da Monte di Croce in 1288), and the present stone was only added in the 1810 reconstruction.", "title": "Description" }, { "paragraph_id": 56, "text": "The wall behind the stone is defined by its striking blue balconies and taphos symbol-bearing red banners (depicting the insignia of the Brotherhood of the Holy Sepulchre), and is decorated with lamps. The modern mosaic along the wall depicts the anointing of Jesus's body, preceded on the right by the Descent from the Cross, and succeeded on the left by the Burial of Jesus.", "title": "Description" }, { "paragraph_id": 57, "text": "The wall was a temporary addition to support the arch above it, which had been weakened after the damage in the 1808 fire; it blocks the view of the rotunda, separates the entrance from the catholicon, sits on top of four of the now empty and desecrated Crusader graves and is no longer structurally necessary. 
Opinions differ as to whether it is to be seen as the 13th Station of the Cross, which others identify as the lowering of Jesus from the cross and located between the 11th and 12th stations on Calvary.", "title": "Description" }, { "paragraph_id": 58, "text": "The lamps that hang over the Stone of Unction, adorned with cross-bearing chain links, are contributed by Armenians, Copts, Greeks and Latins.", "title": "Description" }, { "paragraph_id": 59, "text": "Immediately inside and to the left of the entrance is a bench (formerly a divan) that has traditionally been used by the church's Muslim doorkeepers, along with some Christian clergy, as well as electrical wiring. To the right of the entrance is a wall along the ambulatory containing the staircase leading to Golgotha. Further along the same wall is the entrance to the Chapel of Adam.", "title": "Description" }, { "paragraph_id": 60, "text": "", "title": "Description" }, { "paragraph_id": 61, "text": "The rotunda is the building of the larger dome located on the far west side. In the centre of the rotunda is a small chapel called the Aedicule in English, from the Latin aedicula, in reference to a small shrine. The Aedicule has two rooms: the first holds a relic called the Angel's Stone, which is believed to be a fragment of the large stone that sealed the tomb; the second, smaller room contains the tomb of Jesus. Possibly to prevent pilgrims from removing bits of the original rock as souvenirs, by 1555, a surface of marble cladding was placed on the tomb to prevent further damage to the tomb. In October 2016, the top slab was pulled back to reveal an older, partially broken marble slab with a Crusader-style cross carved in it. Beneath it, the limestone burial bed was revealed to be intact.", "title": "Description" }, { "paragraph_id": 62, "text": "Under the Status Quo, the Eastern Orthodox, Roman Catholic, and Armenian Apostolic Churches all have rights to the interior of the tomb, and all three communities celebrate the Divine Liturgy or Holy Mass there daily. It is also used for other ceremonies on special occasions, such as the Holy Saturday ceremony of the Holy Fire led by the Greek Orthodox patriarch (with the participation of the Coptic and Armenian patriarchs). To its rear, in the Coptic Chapel, constructed of iron latticework, lies the altar used by the Coptic Orthodox. Historically, the Georgians also retained the key to the Aedicule.", "title": "Description" }, { "paragraph_id": 63, "text": "To the right of the sepulchre on the northwestern edge of the Rotunda is the Chapel of the Apparition, which is reserved for Roman Catholic use.", "title": "Description" }, { "paragraph_id": 64, "text": "In the central nave of the Crusader-era church, just east of the larger rotunda, is the Crusader structure housing the main altar of the Church, today the Greek Orthodox catholicon. Its dome is 19.8 metres (65 ft) in diameter, and is set directly over the centre of the transept crossing of the choir where the compas is situated, an omphalos (\"navel\") stone once thought to be the center of the world and still venerated as such by Orthodox Christians (associated with the site of the Crucifixion and the Resurrection).", "title": "Description" }, { "paragraph_id": 65, "text": "Since 1996 this dome is topped by the monumental Golgotha Crucifix, which the Greek Patriarch Diodoros I of Jerusalem consecrated. 
It was at the initiative of Israeli professor Gustav Kühnel to erect a new crucifix at the church that would not only be worthy of the singularity of the site, but that would also become a symbol of the efforts of unity in the community of Christian faith.", "title": "Description" }, { "paragraph_id": 66, "text": "The catholicon's iconostasis demarcates the Orthodox sanctuary behind it, to its east. The iconostasis is flanked to the front by two episcopal thrones: the southern seat (cathedra) is the patriarchal throne of the Greek Orthodox patriarch of Jerusalem, and the northern seat is for an archbishop or bishop. (There is also a popular claim that both are patriarchal thrones, with the northern one being for the patriarch of Antioch — which has been described as a misstatement, however.)", "title": "Description" }, { "paragraph_id": 67, "text": "South of the Aedicule is the \"Place of the Three Marys\", marked by a stone canopy (the Station of the Holy Women) and a large modern wall mosaic. From here one can enter the Armenian monastery, which stretches over the ground and first upper floor of the church's southeastern part.", "title": "Description" }, { "paragraph_id": 68, "text": "West of the Aedicule, to the rear of the Rotunda, is the Syriac Chapel with the Tomb of Joseph of Arimathea, located in a Constantinian apse and containing an opening to an ancient Jewish rock-cut tomb. This chapel is where the Syriac Orthodox celebrate their Liturgy on Sundays.", "title": "Description" }, { "paragraph_id": 69, "text": "The Syriac Orthodox Chapel of Saint Joseph of Arimathea and Saint Nicodemus. On Sundays and feast days it is furnished for the celebration of Mass. It is accessed from the Rotunda, by a door west of the Aedicule.", "title": "Description" }, { "paragraph_id": 70, "text": "On the far side of the chapel is the low entrance to an almost complete first-century Jewish tomb, initially holding six kokh-type funeral shafts radiating from a central chamber, two of which are still exposed. Although this space was discovered relatively recently and contains no identifying marks, some believe that Joseph of Arimathea and Nicodemus were buried here. Since Jews always buried their dead outside the city, the presence of this tomb seems to prove that the Holy Sepulchre site was outside the city walls at the time of the crucifixion.", "title": "Description" }, { "paragraph_id": 71, "text": "The Arches of the Virgin are seven arches (an arcade) at the northern end of the north transept, which is to the catholicon's north. Disputed by the Orthodox and the Latin, the area is used to store ladders.", "title": "Description" }, { "paragraph_id": 72, "text": "In the northeast side of the complex, there is the Prison of Christ, alleged to be where Jesus was held. The Greek Orthodox are showing pilgrims yet another place where Jesus was allegedly held, the similarly named Prison of Christ in their Monastery of the Praetorium [C], located near the Church of Ecce Homo, between the Second and Third Stations of the Via Dolorosa. The Armenians regard a recess in the Monastery of the Flagellation at the Second Station of the Via Dolorosa as the Prison of Christ. A cistern among the ruins beneath the Church of St. Peter in Gallicantu on Mount Zion is also alleged to have been the Prison of Christ. 
To reconcile the traditions, some allege that Jesus was held in the Mount Zion cell in connection with his trial by the Jewish high priest, at the Praetorium in connection with his trial by the Roman governor Pilate, and near the Golgotha before crucifixion.", "title": "Description" }, { "paragraph_id": 73, "text": "The chapels in the ambulatory are, from north to south: the Greek Chapel of Saint Longinus (named after Longinus), the Armenian Chapel of the Division of Robes, the entrance to the Chapel of Saint Helena, and the Greek Chapel of the Derision.", "title": "Description" }, { "paragraph_id": 74, "text": "An Ottoman decree of 1757 helped establish a status quo upholding the state of affairs for various Holy Land sites. The status quo was upheld in Sultan Abdülmecid I's firman (decree) of 1852/3, which pinned down the now-permanent statutes of property and the regulations concerning the roles of the different denominations and other custodians.", "title": "Status Quo" }, { "paragraph_id": 75, "text": "The primary custodians are the Roman Catholic, Greek Orthodox and Armenian Apostolic churches. The Greek Orthodox act through the Greek Orthodox Patriarchate as well as through the Brotherhood of the Holy Sepulchre. Roman Catholics act through the Franciscan Custody of the Holy Land. In the 19th century, the Coptic Orthodox, the Ethiopian Orthodox and the Syriac Orthodox also acquired lesser responsibilities, which include shrines and other structures in and around the building.", "title": "Status Quo" }, { "paragraph_id": 76, "text": "None of these controls the main entrance. In 1192, Saladin assigned door-keeping responsibilities to the Muslim Nusaybah family. The wooden doors that compose the main entrance are the original, highly carved doors. The Joudeh al-Goudia (al-Ghodayya) family were entrusted as custodian to the keys of the Holy Sepulchre by Saladin in 1187. Despite occasional disagreements, religious services take place in the Church with regularity and coexistence is generally peaceful. An example of concord between the Church custodians is the full restoration of the Aedicule from 2016 to 2017.", "title": "Status Quo" }, { "paragraph_id": 77, "text": "The establishment of the modern Status Quo in 1853 did not halt controversy and occasional violence. In 1902, 18 friars were hospitalized and some monks were jailed after the Franciscans and Greeks disagreed over who could clean the lowest step of the Chapel of the Franks. In the aftermath, the Greek patriarch, Franciscan custos, Ottoman governor and French consul general signed a convention that both denominations could sweep it. On a hot summer day in 2002, a Coptic monk moved his chair from its agreed spot into the shade. This was interpreted as a hostile move by the Ethiopians and eleven were hospitalized after the resulting fight. In another incident in 2004, during Orthodox celebrations of the Exaltation of the Holy Cross, a door to the Franciscan chapel was left open. This was taken as a sign of disrespect by the Orthodox and a fistfight broke out. Some people were arrested, but no one was seriously injured.", "title": "Status Quo" }, { "paragraph_id": 78, "text": "On Palm Sunday, in April 2008, a brawl broke out when a Greek monk was ejected from the building by a rival faction. Police were called to the scene but were also attacked by the enraged brawlers. 
On Sunday, 9 November 2008, a clash erupted between Armenian and Greek monks during celebrations for the Feast of the Cross.", "title": "Status Quo" }, { "paragraph_id": 79, "text": "In February 2018, the church was closed following a tax dispute over 152 million euros of uncollected taxes on church properties. The city hall stressed that the Church of the Holy Sepulchre and all other churches are exempt from the taxes, with the changes only affecting establishments like \"hotels, halls and businesses\" owned by the churches. NPR had reported that the Greek Orthodox Church calls itself the second-largest landowner in Israel, after the Israeli government.", "title": "Status Quo" }, { "paragraph_id": 80, "text": "There was a lock-in protest against an Israeli legislative proposal which would expropriate church lands that had been sold to private companies since 2010, a measure which church leaders assert constitutes a serious violation of their property rights and the Status Quo. In a joint official statement the church authorities protested what they considered to be the peak of a systematic campaign in:", "title": "Status Quo" }, { "paragraph_id": 81, "text": "a discriminatory and racist bill that targets solely the properties of the Christian community in the Holy Land ... This reminds us all of laws of a similar nature which were enacted against the Jews during dark periods in Europe.", "title": "Status Quo" }, { "paragraph_id": 82, "text": "The 2018 taxation affair does not cover any church buildings or religious related facilities (because they are exempt by law), but commercial facilities such as the Notre Dame Hotel which was not paying the municipal property tax, and any land which is owned and used as a commercial land. The church holds the rights to land where private homes have been constructed, and some of the disagreement had been raised after the Knesset had proposed a bill that will make it harder for a private company not to extend a lease for land used by homeowners. The church leaders have said that such a bill will make it harder for them to sell church-owned lands. According to The Jerusalem Post:", "title": "Status Quo" }, { "paragraph_id": 83, "text": "The stated aim of the bill is to protect homeowners against the possibility that private companies will not extend their leases of land on which their houses or apartments stand.", "title": "Status Quo" }, { "paragraph_id": 84, "text": "In June 2019, a number of Christian denominations in Jerusalem raised their voice against the Supreme Court's decision to uphold the sale of three properties by the Greek Orthodox Patriarchate to Ateret Cohanim – an organization that seeks to increase the number of Jews living in the Old City and East Jerusalem. The church leaders warned that if the organization gets to control the sites, Christians could lose access to the Church of the Holy Sepulchre. In June 2022, the Supreme Court upheld the sale and ended the legal battle.", "title": "Status Quo" }, { "paragraph_id": 85, "text": "The site of the church had been a temple to Jupiter or Venus built by Hadrian before Constantine's edifice was built. Hadrian's temple had been located there because it was the junction of the main north–south road with one of the two main east–west roads and directly adjacent to the forum (now the location of the Muristan, which is smaller than the former forum). 
The forum itself had been placed, as is traditional in Roman towns, at the junction of the main north–south road with the other main east–west road (which is now El-Bazar/David Street). The temple and forum together took up the entire space between the two main east–west roads (a few above-ground remains of the east end of the temple precinct still survive in the Alexander Nevsky Church complex of the Russian Mission in Exile).", "title": "Connection to Roman temple" }, { "paragraph_id": 86, "text": "From the archaeological excavations in the 1970s, it is clear that construction took over most of the site of the earlier temple enclosure and that the Triportico and Rotunda roughly overlapped with the temple building itself; the excavations indicate that the temple extended at least as far back as the Aedicule, and the temple enclosure would have reached back slightly further. Virgilio Canio Corbo, a Franciscan priest and archaeologist, who was present at the excavations, estimated from the archaeological evidence that the western retaining wall of the temple itself would have passed extremely close to the east side of the supposed tomb; if the wall had been any further west any tomb would have been crushed under the weight of the wall (which would be immediately above it) if it had not already been destroyed when foundations for the wall were made.", "title": "Connection to Roman temple" }, { "paragraph_id": 87, "text": "Other archaeologists have criticized Corbo's reconstructions. Dan Bahat, the former city archaeologist of Jerusalem, regards them as unsatisfactory, as there is no known temple of Aphrodite (Venus) matching Corbo's design, and no archaeological evidence for Corbo's suggestion that the temple building was on a platform raised high enough to avoid including anything sited where the Aedicule is now; indeed Bahat notes that many temples to Aphrodite have a rotunda-like design, and argues that there is no archaeological reason to assume that the present rotunda was not based on a rotunda in the temple previously on the site.", "title": "Connection to Roman temple" }, { "paragraph_id": 88, "text": "The New Testament describes Jesus's tomb as being outside the city wall, as was normal for burials across the ancient world, which were regarded as unclean. Today, the site of the Church is within the current walls of the old city of Jerusalem. It has been well documented by archaeologists that in the time of Jesus, the walled city was smaller and the wall then was to the east of the current site of the Church. 
In other words, the city had been much narrower in Jesus's time, with the site then having been outside the walls; since Herod Agrippa (41–44) is recorded by history as extending the city to the north (beyond the present northern walls), the required repositioning of the western wall is traditionally attributed to him as well.", "title": "Location" }, { "paragraph_id": 89, "text": "The area immediately to the south and east of the sepulchre was a quarry and outside the city during the early first century as excavations under the Lutheran Church of the Redeemer across the street demonstrated.", "title": "Location" }, { "paragraph_id": 90, "text": "The church is a part of the UNESCO World Heritage Site Old City of Jerusalem.", "title": "Location" }, { "paragraph_id": 91, "text": "The Christian Quarter and the (also Christian) Armenian Quarter of the Old City of Jerusalem are both located in the northwestern and western part of the Old City, due to the fact that the Holy Sepulchre is located close to the northwestern corner of the walled city. The adjacent neighbourhood within the Christian Quarter is called the Muristan, a term derived from the Persian word for hospital – Christian pilgrim hospices have been maintained in this area near the Holy Sepulchre since at least the time of Charlemagne.", "title": "Location" }, { "paragraph_id": 92, "text": "From the ninth century onward, the construction of churches inspired by the Anastasis was extended across Europe. One example is Santo Stefano in Bologna, Italy, an agglomeration of seven churches recreating shrines of Jerusalem.", "title": "Influence" }, { "paragraph_id": 93, "text": "Several churches and monasteries in Europe, for instance, in Germany and Russia, and at least one church in the United States have been wholly or partially modeled on the Church of the Resurrection, some even reproducing other holy places for the benefit of pilgrims who could not travel to the Holy Land. They include the Heiliges Grab [de] (\"Holy Tomb\") of Görlitz, constructed between 1481 and 1504, the New Jerusalem Monastery in Moscow Oblast, constructed by Patriarch Nikon between 1656 and 1666, and Mount St. Sepulchre Franciscan Monastery built by the Franciscans in Washington, DC in 1898.", "title": "Influence" }, { "paragraph_id": 94, "text": "Author Andrew Holt writes that the church is the most important in all Christendom.", "title": "Influence" }, { "paragraph_id": 95, "text": "Custodians", "title": "External links" }, { "paragraph_id": 96, "text": "Virtual tours", "title": "External links" } ]
[ "Template:Portal", "Template:Cite web", "Template:Cite news", "Template:Cemeteries in Jerusalem", "Template:Convert", "Template:Interlanguage link", "Template:Dead", "Template:Use dmy dates", "Template:Failed verification", "Template:Main", "Template:ISBN", "Template:Sfn", "Template:Dubious", "Template:Clarify", "Template:Cite EB1911", "Template:Commons category", "Template:Jerusalem sidebar", "Template:See also", "Template:Better source needed", "Template:More citations needed", "Template:Refend", "Template:Authority control", "Template:Efn", "Template:Lang", "Template:Gallery", "Template:Ill", "Template:Cite journal", "Template:Multiple image", "Template:Citation needed", "Template:Reflist", "Template:Redirect", "Template:Div col", "Template:Div col end", "Template:Webarchive", "Template:Refbegin", "Template:CathEncy", "Template:Subscription required", "Template:Citation", "Template:Infobox church", "Template:Blockquote", "Template:Cite book", "Template:Cbignore", "Template:Wikisource", "Template:Anchor", "Template:Notelist", "Template:Jerusalem Old City", "Template:Short description", "Template:Transliteration", "Template:Wikt-lang", "Template:Harvnb", "Template:Full citation needed", "Template:Further", "Template:Cn", "Template:Cite AV media" ]
https://en.wikipedia.org/wiki/Church_of_the_Holy_Sepulchre
Cernunnos
In ancient Celtic and Gallo-Roman religion, Cernunnos or Carnonos is a god depicted with antlers and seated cross-legged, who is associated with stags, horned serpents, dogs and bulls. He is usually shown holding or wearing a torc and sometimes holding a bag of coins (or grain) and a cornucopia. He is believed to have originally been a Proto-Celtic god. There are more than fifty depictions and inscriptions referring to him, mainly in the north-eastern region of Gaul. The Gaulish form of the name Cernunnos is Karnonos, from the stem karnon, which means "horn" or "antler", suffixed with the augmentative -no- that is characteristic of theonyms. Karnon is cognate with Latin cornu and Germanic *hurnaz, ultimately from Proto-Indo-European *k̑r̥no-. The etymon karn- "horn" appears in both the Gaulish and Galatian branches of Continental Celtic. Hesychius of Alexandria glosses the Galatian word karnon (κάρνον) as "Gallic trumpet", that is, the Celtic military horn listed as the carnyx (κάρνυξ) by Eustathius of Thessalonica, who notes the instrument's animal-shaped bell. The root also appears in the names of Celtic polities, most prominent among them the Carnutes, meaning something like "the Horned Ones", and in several personal names found in inscriptions. Maier (2010) states that the etymology of Cernunnos is unclear, but seems to be rooted in the Celtic word for "horn" or "antler" (as in Carnonos). "Cernunnos" is believed by some Celticists to be an obscure epithet of a better-attested Gaulish deity, perhaps the god described in the interpretatio Romana as Mercury or Dis Pater, both of whom are considered to share Cernunnos's psychopomp or chthonic associations. The name has appeared only once together with an image, when it was inscribed on the pillar of the Nautae Parisiaci (the sailors of the Parisii, a tribe of Gauls). Otherwise, a variation of the name Cernunnos has also been found in a Celtic inscription written in Greek characters at Montagnac, Hérault (as καρνονου, karnonou, in the dative case). A Gallo-Latin adjective carnuātus, "horned", is also found. Due to the lack of surviving Gaulish literature regarding mythologies about Cernunnos, the various possible epithets he might have had, or his religious practices and followers, his overall significance in Gaulish religious traditions is unknown. Interpretations of his role within Gaulish culture vary from a god of animals, nature, fertility and prosperity to a symbol of authority, strength, unyielding endurance and virility; he has also been portrayed as a god of travel, commerce and bi-directionality, or associated with crossroads, the underworld and reincarnation, symbolizing the cycle of life and death. The only evidence that has survived consists of inscriptions found on various artifacts. The Nautae Parisiaci monument was probably constructed by Gaulish sailors in 14 CE. It was discovered in 1710 within the foundations of the cathedral of Notre-Dame de Paris, site of ancient Lutetia, the civitas capital of the Celtic Parisii. It is now displayed in the Musée National du Moyen Age in Paris. The distinctive stone pillar is an important monument of Gallo-Roman religion. Its low reliefs depict and label by name several Roman deities such as Jupiter, Vulcan, and Castor and Pollux, along with Gallic deities such as Esus, Smertrios, and Tarvos Trigaranus. The name Cernunnos can be read clearly on 18th-century drawings of the inscriptions, but the initial letter has since been obscured, so that today only the reading "[_]ernunnos" can be verified.
Additional evidence is given by an inscription on a metal plaque from Steinsel-Rëlent in Luxembourg, in the territory of the Celtic Treveri. This inscription reads Deo Ceruninco, "to the God Cerunincos", assumed to be the same deity. The Gaulish inscription from Montagnac reads αλλετ[ει]νος καρνονου αλ[ι]σο[ντ]εας (Alletinos [dedicated this] to Carnonos of Alisontea), with the last word possibly a place name based on Alisia, "service-tree" or "rock" (compare Alesia, Gaulish Alisiia). On the Pillar of the Boatmen, the inscription "[C]ernunnos" accompanies an image of a figure with stag's antlers, a torc hanging from each antler. The lower part of the relief is lost, but the dimensions suggest that the god was sitting cross-legged, in the depiction traditionally called the "Buddhic posture", providing a direct parallel to the antlered figure on the Gundestrup cauldron. In his iconography, Cernunnos is often portrayed with a stag and the ram-horned serpent; less frequently, there are bulls (at Rheims), dogs and rats. Because of the image of him on the Gundestrup cauldron, some scholars describe Cernunnos as the Lord of the Animals or the Lord of Wild Things, and Miranda Green describes him as a "peaceful god of nature and fruitfulness" who seems to be seated in a manner suggesting traditional shamans, who were often depicted surrounded by animals. Other academics, such as Ceisiwr Serith, describe Cernunnos as a god of bi-directionality and a mediator between opposites, seeing the animal symbolism in the artwork as reflecting this idea. The Pilier des nautes links him with sailors and with commerce, suggesting that he was also associated with material wealth, as do the coin pouch from the Cernunnos of Rheims (Marne, Champagne, France) – in antiquity, Durocortorum, the civitas capital of the Remi tribe – and the stag vomiting coins from Niedercorn-Turbelslach (Luxembourg) in the lands of the Treveri. The god may have symbolized the fecundity of the stag-inhabited forest. Other examples of Cernunnos imagery include a petroglyph in Val Camonica in Cisalpine Gaul; the antlered human figure there has been dated as early as the 7th century BCE or as late as the 4th. Two goddesses with antlers appear at Besançon and Clermont-Ferrand, France. An antlered god appears on a relief in Cirencester, Britain, dated to Roman times, and on a coin from Petersfield, Hampshire. An antlered child appears on a relief from Vendeuvres, flanked by serpents and holding a purse and a torc. The best-known image appears on the Gundestrup cauldron found in Jutland, dating to the 1st century BCE, thought to depict Celtic subject matter though usually regarded as of Thracian workmanship. Among the Celtiberians, horned or antlered figures of the Cernunnos type include a "Janus-like" god from Candelario (Salamanca) with two faces and two small horns; a horned god from the hills of Ríotinto (Huelva); and a possible representation of the deity Vestius Aloniecus near his altars in Lourizán (Pontevedra). The horns are taken to represent "aggressive power, genetic vigor and fecundity." Divine representations of the Cernunnos type are exceptions to the often-expressed view that the Celts only began to picture their gods in human form after the Roman conquest of Gaul.
The Celtic "horned god", while well attested in iconography, cannot be identified in description of Celtic religion in Roman ethnography and does not appear to have been given any interpretatio romana, perhaps due to being too distinctive to be translatable into the Roman pantheon. While Cernunnos was never assimilated, scholars have sometimes compared him functionally to Greek and Roman divine figures such as Mercury, Actaeon, specialized forms of Jupiter, and Dis Pater, the latter of whom Julius Caesar said was considered the ancestor of the Gauls. An image of Cernunnos survives in the Stuttgart Psalter, a 9th century Christian manuscript. The god is recognizably depicted with cross-legged posture, horns, and a ram-headed serpent. He sits in an arcaded niche of hades within the Descent into Limbo scene. This later image is a representation of Cernunnos as lord of the underworld, firmly planted in the funerary sphere. There have been attempts to find the cern root in the name of Conall Cernach, the foster brother of the Irish hero Cuchulainn in the Ulster Cycle. In this line of interpretation, Cernach is taken as an epithet with a wide semantic field—"angular; victorious; prominent," though there is little evidence that the figures of Conall and Cernunnos are related. A brief passage involving Conall in an eighth-century story entitled Táin Bó Fraích ("The Cattle Raid on Fraech") has been taken as evidence that Conall bore attributes of a "master of beasts." In this passage Conall Cernach is portrayed as a hero and mighty warrior who assists the protagonist Fraech in rescuing his wife and son, and reclaiming his cattle. The fort that Conall must penetrate is guarded by a mighty serpent. The supposed anti-climax of this tale is when the fearsome serpent, instead of attacking Conall, darts to Conall's waist and girdles him as a belt. Rather than killing the serpent, Conall allows it to live, and then proceeds to attack and rob the fort of its great treasures the serpent previously protected. The figure of Conall Cernach is not associated with animals or forestry elsewhere; and the epithet "Cernach" has historically been explained as a description of Conall's impenetrable "horn-like" skin which protected him from injury. Some see the qualities of Cernunnos subsumed into the life of Saint Ciarán of Saighir, one of the Twelve Apostles of Ireland. When he was building his first tiny cell, as his hagiography goes, his first disciple and monk was a boar that had been rendered gentle by God. This was followed by a fox, a badger, a wolf and a stag. Within Neopaganism, specifically the Wiccan tradition, The Horned God is a deity that is believed to be the consort of the Great Goddess and syncretizes various horned or antlered gods from various cultures. The name Cernunnos became associated with the Wiccan Horned God through the adoption of the writings of Margaret Murray, an Egyptologist and folklorist of the early 20th century. Murray, through her Witch-cult hypothesis, believed that the various horned deities found in Europe were expressions of a "proto-horned god" and in 1931 published her theory in "The God of the Witches". Her work was considered highly controversial at the time, but was adopted by Gerald Gardner in his development of the religious movement of Wicca. 
Within the Wiccan tradition, the Horned God reflects the seasons of the year in an annual cycle of life, death and rebirth, and his imagery is a blend of the Gaulish god Cernunnos, the Greek god Pan, the Green Man motif, and various other horned spirit imagery.
7,816
Click consonant
Click consonants, or clicks, are speech sounds that occur as consonants in many languages of Southern Africa and in three languages of East Africa. Examples familiar to English-speakers are the tut-tut (British spelling) or tsk! tsk! (American spelling) used to express disapproval or pity (IPA [ǀ]), the tchick! used to spur on a horse (IPA [ǁ]), and the clip-clop! sound children make with their tongue to imitate a horse trotting (IPA [ǃ]). However, these paralinguistic sounds in English are not full click consonants, as they only involve the front of the tongue, without the release of the back of the tongue that is required for clicks to combine with vowels and form syllables.

Anatomically, clicks are obstruents articulated with two closures (points of contact) in the mouth, one forward and one at the back. The enclosed pocket of air is rarefied by a sucking action of the tongue (in technical terminology, clicks have a lingual ingressive airstream mechanism). The forward closure is then released, producing what may be the loudest consonants in the language, although in some languages such as Hadza and Sandawe, clicks can be more subtle and may even be mistaken for ejectives.

Click consonants occur at six principal places of articulation. The International Phonetic Alphabet (IPA) provides five letters for these places (there is as yet no dedicated symbol for the sixth).

The labial, dental and lateral clicks sound like affricates, in that they involve a lot of friction; the alveolar and palatal clicks are more abrupt sounds that do not have this friction.

Technically, these IPA letters transcribe only the forward articulation of the click, not the entire consonant. As the Handbook states,

Since any click involves a velar or uvular closure [as well], it is possible to symbolize factors such as voicelessness, voicing or nasality of the click by combining the click symbol with the appropriate velar or uvular symbol: [k͡ǂ ɡ͡ǂ ŋ͡ǂ], [q͡ǃ].

Thus technically [ǂ] is not a consonant, but only one part of the articulation of a consonant, and one may speak of "ǂ-clicks" to mean any of the various click consonants that share the [ǂ] place of articulation. In practice, however, the simple letter ⟨ǂ⟩ has long been used as an abbreviation for [k͡ǂ], and in that role it is sometimes seen combined with diacritics for voicing (e.g. ⟨ǂ̬⟩ for [ɡ͡ǂ]), nasalization (e.g. ⟨ǂ̃⟩ for [ŋ͡ǂ]), etc. These differing transcription conventions may reflect differing theoretical analyses of the nature of click consonants, or attempts to address common misunderstandings of clicks.

Clicks occur in all three Khoisan language families of southern Africa, where they may be the most numerous consonants. To a lesser extent they occur in three neighbouring groups of Bantu languages—which borrowed them, directly or indirectly, from Khoisan. In the southeast, in eastern South Africa, Eswatini, Lesotho, Zimbabwe and southern Mozambique, they were adopted from a Tuu language (or languages) by the languages of the Nguni cluster (especially Zulu, Xhosa and Phuthi, but also to a lesser extent Swazi and Ndebele), and spread from them in a reduced fashion to the Zulu-based pidgin Fanagalo, Sesotho, Tsonga, Ronga, the Mzimba dialect of Tumbuka and more recently to Ndau and urban varieties of Pedi, where the spread of clicks continues. 
The second point of transfer was near the Caprivi Strip and the Okavango River where, apparently, the Yeyi language borrowed the clicks from a West Kalahari Khoe language; a separate development led to a smaller click inventory in the neighbouring Mbukushu, Kwangali, Gciriku, Kuhane and Fwe languages in Angola, Namibia, Botswana and Zambia. These sounds occur not only in borrowed vocabulary, but have spread to native Bantu words as well, in the case of Nguni at least partially due to a type of word taboo called hlonipha. Some creolised varieties of Afrikaans, such as Oorlams, retain clicks in Khoekhoe words.

Three languages in East Africa use clicks: Sandawe and Hadza of Tanzania, and Dahalo, an endangered South Cushitic language of Kenya that has clicks in only a few dozen words. It is thought the latter may remain from an episode of language shift.

The only non-African language known to have clicks as regular speech sounds is Damin, a ritual code once used by speakers of Lardil in Australia. In addition, one consonant in Damin is the egressive equivalent of a click, using the tongue to compress the air in the mouth for an outward (egressive) "spurt".

Once clicks are borrowed into a language as regular speech sounds, they may spread to native words, as has happened due to hlonipha word-taboo in the Nguni languages. In Gciriku, for example, the European loanword tomate (tomato) appears as cumáte with a click [ǀ], though it begins with a t in all neighbouring languages.

Scattered clicks are found in ideophones and mimesis in other languages, such as Kongo /ᵑǃ/, Mijikenda /ᵑǀ/ and Hadza /ᵑʘʷ/ (Hadza does not otherwise have labial clicks). Ideophones often use phonemic distinctions not found in normal vocabulary.

English and many other languages may use bare click releases in interjections, without an accompanying rear release or transition into a vowel, such as the dental "tsk-tsk" sound used to express disapproval, or the lateral tchick used with horses. In a number of languages ranging from the central Mediterranean to Iran, a bare dental click release accompanied by tipping the head upwards signifies "no". Libyan Arabic apparently has three such sounds. A voiceless nasal back-released velar click [ʞ] is used throughout Africa for backchanneling. This sound starts off as a typical click, but the action is reversed and it is the rear velar or uvular closure that is released, drawing in air from the throat and nasal passages.

Clicks occasionally turn up elsewhere, as in the special registers twins sometimes develop with each other. In West Africa, clicks have been reported allophonically, and similarly in French and German, faint clicks have been recorded in rapid speech where consonants such as /t/ and /k/ overlap between words. In Rwanda, the sequence /mŋ/ may be pronounced either with an epenthetic vowel, [mᵊ̃ŋ], or with a light bilabial click, [m̃ŋ]—often by the same speaker.

Speakers of Gan Chinese from Ningdu county, as well as speakers of Mandarin from Beijing and Jilin and presumably people from other parts of the country, produce flapped nasal clicks in nursery rhymes with varying degrees of competence, in the words for 'goose' and 'duck', both of which begin with /ŋ/ in Gan and until recently began with /ŋ/ in Mandarin as well. In the Gan version of the rhyme, the /ŋ/ onsets are all pronounced [ᵑǃ¡].

Occasionally other languages are claimed to have click sounds in general vocabulary. This is usually a misnomer for ejective consonants, which are found across much of the world. 
For the most part, the Southern African Khoisan languages only use root-initial clicks. Hadza, Sandawe and several Bantu languages also allow syllable-initial clicks within roots. In no language does a click close a syllable or end a word, but since the languages of the world that happen to have clicks consist mostly of CV syllables and allow at most only a limited set of consonants (such as a nasal or a glottal stop) to close a syllable or end a word, most consonants share the distribution of clicks in these languages.

Most languages of the Khoisan families (Tuu, Kxʼa and Khoe) have four click types: { ǀ ǁ ǃ ǂ } or variants thereof, though a few have three or five, the last supplemented with either bilabial { ʘ } or retroflex clicks. Hadza and Sandawe in Tanzania have three, { ǀ ǁ ǃ }. Yeyi is the only Bantu language with four, { ǀ ǁ ǃ ǂ }, while Xhosa and Zulu have three, { ǀ ǁ ǃ }, and most other Bantu languages with clicks have fewer.

Like other consonants, clicks can be described using four parameters: place of articulation, manner of articulation, phonation (including glottalisation) and airstream mechanism. As noted above, clicks necessarily involve at least two closures, which in some cases operate partially independently: an anterior articulation, traditionally represented by the special click symbol in the IPA, and a posterior articulation, traditionally transcribed for convenience as oral or nasal, voiced or voiceless, though such features actually apply to the entire consonant. The literature also describes a contrast between velar and uvular rear articulations for some languages.

In some languages that have been reported to make this distinction, such as Nǁng, all clicks have a uvular rear closure, and the clicks explicitly described as uvular are in fact cases where the uvular closure is independently audible: contours of a click into a pulmonic or ejective component, in which the click has two release bursts, the forward (click-type) and then the rearward (uvular) component. "Velar" clicks in these languages have only a single release burst, that of the forward release, and the release of the rear articulation is not audible. However, in other languages all clicks are velar, and a few languages, such as Taa, have a true velar–uvular distinction that depends on the place rather than the timing of the rear articulation and that is audible in the quality of the vowel.

Regardless, in most of the literature the stated place of the click is the anterior articulation (called the release or influx), whereas the manner is ascribed to the posterior articulation (called the accompaniment or efflux). The anterior articulation defines the click type and is written with the IPA letter for the click (dental ⟨ǀ⟩, alveolar ⟨ǃ⟩, etc.), whereas the traditional term 'accompaniment' conflates the categories of manner (nasal, affricated), phonation (voiced, aspirated, breathy voiced, glottalised), as well as any change in the airstream with the release of the posterior articulation (pulmonic, ejective), all of which are transcribed with additional letters or diacritics, as in the nasal alveolar click, ⟨ǃŋ⟩ or ⟨ᵑǃ⟩, or—to take an extreme example—the voiced (uvular) ejective alveolar click, ⟨ᶢǃ͡qʼ⟩.

The size of click inventories ranges from as few as three (in Sesotho) or four (in Dahalo) to dozens in the Kxʼa and Tuu (Northern and Southern Khoisan) languages. Taa, the last vibrant language in the latter family, has 45 to 115 click phonemes, depending on analysis (clusters vs. contours), and over 70% of words in the dictionary of this language begin with a click. 
Clicks appear more stop-like (sharp/abrupt) or affricate-like (noisy) depending on their place of articulation: in southern Africa, clicks involving an apical alveolar or laminal postalveolar closure are acoustically abrupt and sharp, like stops, whereas labial, dental and lateral clicks typically have longer and acoustically noisier click types that are superficially more like affricates. In East Africa, however, the alveolar clicks tend to be flapped, whereas the lateral clicks tend to be more sharp.

The five click places of articulation with dedicated symbols in the International Phonetic Alphabet (IPA) are labial ʘ, dental ǀ, palatal ("palato-alveolar") ǂ, (post)alveolar ("retroflex") ǃ and lateral ǁ. In most languages, the alveolar and palatal types are abrupt; that is, they are sharp popping sounds with little frication (turbulent airflow). The labial, dental and lateral types, on the other hand, are typically noisy: they are longer, lip- or tooth-sucking sounds with turbulent airflow, and are sometimes called affricates. (This applies to the forward articulation; both may also have either an affricate or non-affricate rear articulation as well.) The apical places, ǃ and ǁ, are sometimes called "grave", because their pitch is dominated by low frequencies; whereas the laminal places, ǀ and ǂ, are sometimes called "acute", because they are dominated by high frequencies. (At least in the Nǁng language and Juǀʼhoan, this is associated with a difference in the placement of the rear articulation: "grave" clicks are uvular, whereas "acute" clicks are pharyngeal.) Thus the alveolar click /ǃ/ sounds something like a cork pulled from a bottle (a low-pitch pop), at least in Xhosa; whereas the dental click /ǀ/ is like English tsk! tsk!, a high-pitched sucking on the incisors. The lateral clicks are pronounced by sucking on the molars of one or both sides. The labial click /ʘ/ is different from what many people associate with a kiss: the lips are pressed more-or-less flat together, as they are for a [p] or an [m], not rounded as they are for a [w].

The most populous languages with clicks, Zulu and Xhosa, use the letters c, q, x, by themselves and in digraphs, to write click consonants. Most Khoisan languages, on the other hand (with the notable exceptions of Naro and Sandawe), use a more iconic system based on the pipe ⟨|⟩. (The exclamation point for the "retroflex" click was originally a pipe with a subscript dot, along the lines of ṭ, ḍ, ṇ used to transcribe the retroflex consonants of India.) There are also two main conventions for the second letter of the digraph as well: voicing may be written with g and uvular affrication with x, or voicing with d and affrication with g (a convention of Afrikaans). In two orthographies of Juǀʼhoan, for example, voiced /ᶢǃ/ is written g! or dq, and /ᵏǃ͡χ/ !x or qg. In languages without /ᵏǃ͡χ/, such as Zulu, /ᶢǃ/ may be written gq. (A toy transliteration along these lines is sketched below.)

There are a few less-well-attested articulations. A reported subapical retroflex articulation ⟨⟩ in Grootfontein !Kung turns out to be alveolar with lateral release, ⟨ǃ⟩; Ekoka !Kung has a fricated alveolar click with an s-like release, provisionally transcribed ⟨ǃ͡s⟩; and Sandawe has a "slapped" alveolar click, provisionally transcribed ⟨ǃ¡⟩ (in turn, the lateral clicks in Sandawe are more abrupt and less noisy than in southern Africa). However, the Khoisan languages are poorly attested, and it is quite possible that, as they become better described, more click articulations will be found.
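As a concrete illustration of these practical orthographies, here is a minimal Python sketch that transliterates the Nguni (Zulu/Xhosa) click letters and digraphs into the IPA notation used in this article. Only c, q, x, gq, nx and xh are directly attested in the examples given in this article; the remaining digraphs follow the usual Nguni pattern but should be checked against a reference grammar, and details such as slack voice, breathy release and labialisation are deliberately ignored:

    # Toy mapping from Nguni orthography to IPA click notation.
    # c = dental, q = (post)alveolar, x = lateral; g- = voiced (cf. Zulu gq
    # for /ᶢǃ/), n- = nasal (cf. nx = /ᵑǁ/ in ugqwanxa), -h = aspirated
    # (cf. xh = /ǁʰ/ in isiXhosa [isǁʰɔsa]).
    CLICKS = {
        "c": "ᵏǀ", "q": "ᵏǃ", "x": "ᵏǁ",
        "gc": "ᶢǀ", "gq": "ᶢǃ", "gx": "ᶢǁ",
        "nc": "ᵑǀ", "nq": "ᵑǃ", "nx": "ᵑǁ",
        "ch": "ǀʰ", "qh": "ǃʰ", "xh": "ǁʰ",
    }

    def clicks_to_ipa(word: str) -> str:
        """Greedy longest-match transliteration of click (di)graphs."""
        out, i = [], 0
        while i < len(word):
            for length in (2, 1):  # try digraphs before single letters
                chunk = word[i:i + length].lower()
                if chunk in CLICKS:
                    out.append(CLICKS[chunk])
                    i += length
                    break
            else:
                out.append(word[i])  # not a click letter; copy as-is
                i += 1
        return "".join(out)

    print(clicks_to_ipa("ugqwanxa"))  # uᶢǃwaᵑǁa (the article gives /uᶢ̊ǃʱʷaᵑǁa/)

The mismatch in the final comment is the point of the hedge: the toy table does not model the slack voice, breathy release or labialisation of the actual Xhosa form.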
Formerly, when a click consonant was transcribed, two symbols were used, one for each articulation, connected with a tie bar. This is because a click such as [ɢ͡ǀ] was analysed as a voiced uvular rear articulation [ɢ] pronounced simultaneously with the forward ingressive release [ǀ]. The symbols may be written in either order, depending on the analysis: ⟨ɢ͡ǀ⟩ or ⟨ǀ͡ɢ⟩. However, a tie bar was not often used in practice, and when the manner is tenuis (a simple [k]), it was often omitted as well. That is, ⟨ǂ⟩ = ⟨kǂ⟩ = ⟨ǂk⟩ = ⟨k͡ǂ⟩ = ⟨ǂ͡k⟩. Regardless, elements that do not overlap with the forward release are usually written according to their temporal order: prenasalisation is always written first (⟨ɴɢ͡ǀ⟩ = ⟨ɴǀ͡ɢ⟩ = ⟨ɴǀ̬⟩), and the non-lingual part of a contour is always written second (⟨k͡ǀʼqʼ⟩ = ⟨ǀ͡kʼqʼ⟩ = ⟨ǀ͡qʼ⟩).

However, it is common to analyse clicks as simplex segments, despite the fact that the front and rear articulations are independent, and to use diacritics to indicate the rear articulation and the accompaniment. At first this tended to be ⟨ᵏǀ, ᶢǀ, ᵑǀ⟩ for ⟨k͡ǀ, ɡ͡ǀ, ŋ͡ǀ⟩, based on the belief that the rear articulation was velar; but as it has become clear that the rear articulation is often uvular or even pharyngeal even when there is no velar–uvular contrast, voicing and nasalisation diacritics more in keeping with the IPA have started to appear: ⟨ǀ̥, ǀ̬, ǀ̃, ŋǀ̬⟩ for ⟨ᵏǀ, ᶢǀ, ᵑǀ, ŋᶢǀ⟩.

In practical orthography, the voicing or nasalisation is sometimes given the anterior place of articulation: dc for ᶢǀ and mʘ for ᵑʘ, for example.

In the literature on Damin, the clicks are transcribed by adding ⟨!⟩ to the homorganic nasal: ⟨m!, nh!, n!, rn!⟩.

Places of articulation are often called click types, releases, or influxes, though 'release' is also used for the accompaniment/efflux. There are seven or eight known places of articulation, not counting slapped or egressive clicks. These are (bi)labial affricated ʘ, or "bilabial"; laminal denti-alveolar affricated ǀ, or "dental"; apical (post)alveolar plosive ǃ, or "alveolar"; laminal palatal plosive ǂ, or "palatal"; laminal palatal affricated ǂᶴ (known only from Ekoka !Kung); subapical postalveolar, or "retroflex" (only known from Central !Kung and possibly Damin); and apical (post)alveolar lateral ǁ, or "lateral".

Languages illustrating each of these articulations are listed below. Given the poor state of documentation of Khoisan languages, it is quite possible that additional places of articulation will turn up. No language is known to contrast more than five.

Extra-linguistically, Coatlán Zapotec of Mexico uses a linguolabial click, [ǀ̼ʔ], as mimesis for a pig drinking water, and several languages, such as Wolof, use a velar click [ʞ], long judged to be physically impossible, for backchanneling and to express approval. An extended dental click with lip pursing or compression ("sucking-teeth"), variable in sound and sometimes described as intermediate between [ǀ] and [ʘ], is found across West Africa, the Caribbean and into the United States.

The exact place of the alveolar clicks varies between languages. The lateral, for example, is alveolar in Khoekhoe but postalveolar or even palatal in Sandawe; the central is alveolar in Nǀuu but postalveolar in Juǀʼhoan.

The terms for the click types were originally developed by Bleek in 1862. Since then there has been some conflicting variation. However, apart from "cerebral" (retroflex), which was found to be an inaccurate label when true retroflex clicks were discovered, Bleek's terms are still considered normative today. (The comparative table of the terms used in the main references is not reproduced here.) The dental, lateral and bilabial clicks are rarely confused, but the palatal and alveolar clicks frequently have conflicting names in older literature, and non-standard terminology is fossilized in Unicode.
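That fossilization is easy to demonstrate: the official Unicode character names for the click letters preserve the older terminology, naming the letter now used for the palatal click "alveolar" and the letter now used for the alveolar click "retroflex". The short Python snippet below prints each letter's Unicode name next to the modern place label used in this article (the pairing with modern labels follows the usage described above; the names themselves come straight from the Unicode database):

    import unicodedata

    # IPA click letters paired with the modern place terms used in this article.
    CLICK_LETTERS = [
        ("ʘ", "labial"),          # U+0298
        ("ǀ", "dental"),          # U+01C0
        ("ǁ", "lateral"),         # U+01C1
        ("ǃ", "(post)alveolar"),  # U+01C3 -- Unicode still says RETROFLEX
        ("ǂ", "palatal"),         # U+01C2 -- Unicode still says ALVEOLAR
    ]

    for letter, modern_term in CLICK_LETTERS:
        print(f"U+{ord(letter):04X} {unicodedata.name(letter)}: now called {modern_term}")

    # Output includes, e.g.:
    #   U+01C3 LATIN LETTER RETROFLEX CLICK: now called (post)alveolar
    #   U+01C2 LATIN LETTER ALVEOLAR CLICK: now called palatal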
However, since Ladefoged & Traill (1984) clarified the places of articulation, the terms used by Vossen (2013) have become standard, apart from such details as whether in a particular language ǃ and ǁ are alveolar or postalveolar, or whether the rear articulation is velar, uvular or pharyngeal, which again varies between languages (or may even be contrastive within a language).

In several languages, including Nama and Juǀʼhoan, the alveolar click types [ǃ] and [ǁ] only occur, or preferentially occur, before back vowels, whereas the dental and palatal clicks occur before any vowel. The effect is most noticeable with the high front vowel [i]. In Nama, for example, the diphthong [əi] is common but [i] is rare after alveolar clicks, whereas the opposite is true after dental and palatal clicks. This is a common effect of uvular or uvularised consonants on vowels in both click and non-click languages. In Taa, for example, the back-vowel constraint is triggered by both alveolar clicks and uvular stops, but not by palatal clicks or velar stops: sequences such as */ǃi/ and */qi/ are rare to non-existent, whereas sequences such as /ǂi/ and /ki/ are common. The back-vowel constraint is also triggered by labial clicks, though not by labial stops. Clicks subject to this constraint involve a sharp retraction of the tongue during release.

Miller and colleagues (2003) used ultrasound imaging to show that the rear articulation of the alveolar clicks ([ǃ]) in Nama is substantially different from that of palatal and dental clicks. Specifically, the shape of the body of the tongue in palatal clicks is very similar to that of the vowel [i], and involves the same tongue muscles, so that sequences such as [ǂi] involve a simple and quick transition. The rear articulation of the alveolar clicks, however, is several centimetres further back, and involves a different set of muscles in the uvular region. The part of the tongue required to approach the palate for the vowel [i] is deeply retracted in [ǃ], as it lies at the bottom of the air pocket used to create the vacuum required for the click airstream. This makes the transition required for [ǃi] much more complex and the timing more difficult than the shallower and more forward tongue position of the palatal clicks. Consequently, [ǃi] takes 50 ms longer to pronounce than [ǂi], the same amount of time required to pronounce [ǃəi].

Languages do not all behave alike. In Nǀuu, the simple clicks /ʘ, ǃ, ǁ/ trigger the [əi] and [æ] allophones of /i/ and /e/, whereas /ǀ, ǂ/ do not. All of the affricated contour clicks, such as /ǂ͡χ/, do so as well, as do the uvular stops /q, χ/. However, the occlusive contour clicks pattern like the simple clicks, and /ǂ͡q/ does not trigger the back-vowel constraint. This is because they involve tongue-root raising rather than tongue-root retraction in the uvular-pharyngeal region. However, in Gǀwi, which is otherwise largely similar, both /ǂ͡q/ and /ǂ͡χ/ trigger the back-vowel constraint (Miller 2009).
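The Nǀuu pattern just described can be stated compactly as data. The following Python sketch encodes only the segments explicitly named above; it is a simplification of Miller's description, not a complete phonology of Nǀuu:

    # Onsets said above to trigger the back-vowel allophones in Nǀuu:
    # the simple clicks /ʘ, ǃ, ǁ/, affricated contour clicks such as /ǂ͡χ/,
    # and the uvular stops /q, χ/. The sharp simple clicks /ǀ, ǂ/ and the
    # occlusive contour /ǂ͡q/ do not trigger the constraint.
    TRIGGERS = {"ʘ", "ǃ", "ǁ", "ǂ͡χ", "q", "χ"}
    NON_TRIGGERS = {"ǀ", "ǂ", "ǂ͡q"}

    ALLOPHONES = {"i": "əi", "e": "æ"}  # lowered/backed variants of /i/, /e/

    def surface_vowel(onset: str, vowel: str) -> str:
        """Surface form of /i/ or /e/ after the given onset (toy model)."""
        if onset in TRIGGERS and vowel in ALLOPHONES:
            return ALLOPHONES[vowel]
        return vowel

    print(surface_vowel("ǃ", "i"))  # əi -- back-vowel constraint applies
    print(surface_vowel("ǂ", "i"))  # i  -- palatal click does not trigger it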
Click manners are often called click accompaniments or effluxes, but both terms have met with objections on theoretical grounds. There is a great variety of click manners, both simplex and complex, the latter variously analysed as consonant clusters or contours. With so few click languages, and so little study of them, it is also unclear to what extent clicks in different languages are equivalent. For example, the [ǃkˀ] of Khoekhoe, [ǃkˀ ~ ŋˀǃk] of Sandawe and [ŋ̊ǃˀ ~ ŋǃkˀ] of Hadza may be essentially the same phone; no language distinguishes them, and the differences in transcription may have more to do with the approach of the linguist than with actual differences in the sounds. (The comparative table that listed such suspected allophones/allographs on a common row is not reproduced here.)

Some Khoisan languages are typologically unusual in allowing mixed voicing in non-click consonant clusters/contours, such as dt͡sʼk͡xʼ, so it is not surprising that they would allow mixed voicing in clicks as well. This may be an effect of epiglottalised voiced consonants, because voicing is incompatible with epiglottalisation.

As do other consonants, clicks vary in phonation. Oral clicks are attested with four phonations: tenuis, aspirated, voiced and breathy voiced (murmured). Nasal clicks may also vary, with plain voiced, breathy voiced / murmured nasal, aspirated and unaspirated voiceless clicks attested (the last only in Taa). The aspirated nasal clicks are often said to have 'delayed aspiration'; there is nasal airflow throughout the click, which may become voiced between vowels, though the aspiration itself is voiceless. A few languages also have pre-glottalised nasal clicks, which have very brief prenasalisation but have not been phonetically analysed to the extent that other types of clicks have.

All click languages have nasal clicks, and all but Dahalo and Damin also have oral clicks. All languages but Damin have at least one phonation contrast as well.

Clicks may be pronounced with a third place of articulation, glottal. A glottal stop is made during the hold of the click; the (necessarily voiceless) click is released, and then the glottal hold is released into the vowel. Glottalised clicks are very common, and they are generally nasalised as well. The nasalisation cannot be heard during the click release, as there is no pulmonic airflow, and generally not at all when the click occurs at the beginning of an utterance, but it has the effect of nasalising preceding vowels, to the extent that the glottalised clicks of Sandawe and Hadza are often described as prenasalised when in medial position. Two languages, Gǀwi and Yeyi, contrast plain and nasal glottalised clicks, but in languages without such a contrast, the glottalised click is nasal. Miller (2011) analyses the glottalisation as phonation, and so considers these to be simple clicks.

Various languages also have prenasalised clicks, which may be analysed as consonant sequences. Sotho, for example, allows a syllabic nasal before its three clicks, as in nnqane 'the other side' (prenasalised nasal) and seqhenqha 'hunk'.

There is ongoing discussion as to how the distinction between what were historically described as 'velar' and 'uvular' clicks is best described. The 'uvular' clicks are only found in some languages, and have an extended pronunciation that suggests that they are more complex than the simple ('velar') clicks, which are found in all. 
Nakagawa (1996) describes the extended clicks in Gǀwi as consonant clusters, sequences equivalent to English st or pl, whereas Miller (2011) analyses similar sounds in several languages as click–non-click contours, where a click transitions into a pulmonic or ejective articulation within a single segment, analogous to how English ch and j transition from occlusive to fricative but still behave as unitary sounds. With ejective clicks, for example, Miller finds that although the ejective release follows the click release, it is the rear closure of the click that is ejective, not an independently articulated consonant. That is, in a simple click, the release of the rear articulation is not audible, whereas in a contour click, the rear (uvular) articulation is audibly released after the front (click) articulation, resulting in a double release.

These contour clicks may be linguo-pulmonic, that is, they may transition from a click (lingual) articulation to a normal pulmonic consonant like [ɢ] (e.g. [ǀ͡ɢ]); or linguo-glottalic, transitioning from lingual to an ejective consonant like [qʼ] (e.g. [ǀ͡qʼ]): that is, a sequence of ingressive (lingual) release + egressive (pulmonic or glottalic) release. In some cases there is a shift in place of articulation as well, and instead of a uvular release, the uvular click transitions to a velar or epiglottal release (depending on the description, [ǂ͡kxʼ] or [ǂᴴ]). Although homorganic [ǂ͡χʼ] does not contrast with heterorganic [ǂ͡kxʼ] in any known language, they are phonetically quite distinct (Miller 2011).

Implosive clicks, i.e. velar [ɠ͡ʘ ɠ͡ǀ ɠ͡ǃ ɠ͡ǂ ɠ͡ǁ], uvular [ʛ͡ʘ ʛ͡ǀ ʛ͡ǃ ʛ͡ǂ ʛ͡ǁ], and de facto front-closed palatal [ʄ͡ʘ ʄ͡ǀ ʄ͡ǃ ʄ͡ǁ], are not only possible but easier to produce than modally voiced clicks. However, they are not attested in any language.

Apart from Dahalo, Damin and many of the Bantu languages (Yeyi and Xhosa being exceptions), 'click' languages have glottalized nasal clicks. Contour clicks are restricted to southern Africa, but are very common there: they are found in all members of the Tuu, Kxʼa and Khoe families, as well as in the Bantu language Yeyi.

In a comparative study of clicks across various languages, using her own field work as well as phonetic descriptions and data by other field researchers, Miller (2011) posits 21 types of clicks that contrast in manner or airstream. The homorganic and heterorganic affricated ejective clicks do not contrast in any known language, but are judged dissimilar enough to keep separate. Miller's conclusions differ from those of the primary researcher of a language; see the individual languages for details.

(Comparative tables of click manners in individual languages are not reproduced here. They used Ʞ as a placeholder for the click type and gave, under each language, the orthography, the researchers' transcription, or allophonic variation. Their notes record that the languages of the first group are spoken primarily in South Africa, Namibia and Botswana; that Khoekhoe is similar to Korana except that it has lost ejective /ᵏꞰ͡χʼ/; that Zulu is similar to Xhosa apart from not having /ᵑꞰˀ/; that some languages also have labialised or prenasalised clicks, Yeyi among them with prenasalised /ŋᶢꞰ/; and that the original researchers believe [Ʞʰ] and [Ʞχ] are allophones.)

A DoBeS (2008) study of the Western ǃXoo dialect of Taa found several new manners: creaky voiced (the voiced equivalent of glottalised oral), breathy-voiced nasal, prenasalised glottalised (the voiced equivalent of glottalised) and a (pre)voiced ejective. 
These extra voiced clicks reflect Western ǃXoo morphology, where many nouns form their plural by voicing their initial consonant. DoBeS analyses most Taa clicks as clusters, leaving nine basic manners (marked with asterisks in the original table). This comes close to Miller's distinction between simple and contour clicks.

Languages of the southern African Khoisan families only permit clicks at the beginning of a word root. However, they also restrict other classes of consonant, such as ejectives and affricates, to root-initial position. The Bantu languages, Hadza and Sandawe allow clicks within roots.

In some languages, all click consonants within known roots are the same phoneme, as in Hadza cikiringcingca /ǀikiɺiN.ǀiN.ǀa/ 'pinkie finger', which has three tenuis dental clicks. Other languages are known to have the occasional root with different clicks, as in Xhosa ugqwanxa /uᶢ̊ǃʱʷaᵑǁa/ 'black ironwood', which has a slack-voiced alveolar click and a nasal lateral click.

No natural language allows clicks at the ends of syllables or words, but then no language with clicks allows many consonants at all in those positions. Similarly, clicks are not found in underlying consonant clusters apart from /Cw/ (and, depending on the analysis, /Cχ/), as languages with clicks have no other consonant clusters. Due to vowel elision, however, there are cases where clicks are pronounced in cross-linguistically common types of consonant clusters, such as Xhosa [sᵑǃɔɓilɛ] Snqobile, from Sinqobile (a name), and [isǁʰɔsa] isXhosa, from isiXhosa (the Xhosa language).

Like other articulatorily complex consonants, clicks tend to be found in lexical words rather than in grammatical words, but this is only a tendency. In Nǁng, for example, there are two sets of personal pronouns, a full one without clicks and a partial set with clicks (ńg 'I', á 'thou', í 'we all', ú 'you', vs. nǀǹg 'I', gǀà 'thou', gǀì 'we all', gǀù 'you'), as well as other grammatical words with clicks such as ǁu 'not' and nǀa 'with, and'.

One genetic study concluded that clicks, which occur in the languages of the genetically divergent populations Hadza and !Kung, may be an ancient element of human language. However, this conclusion relies on several dubious assumptions (see Hadza language), and most linguists assume that clicks, being quite complex consonants, arose relatively late in human history. How they arose is not known, but it is generally assumed that they developed from sequences of non-click consonants, as they are found allophonically for doubly articulated consonants in West Africa, for /tk/ sequences that overlap at word boundaries in German, and for the sequence /mw/ in Ndau and Tonga. Such developments have also been posited in historical reconstruction. For example, the Sandawe word for 'horn', /tɬana/, with a lateral affricate, may be cognate with the root /ᵑǁaː/ found throughout the Khoe family, which has a lateral click. This and other words suggest that at least some Khoe clicks may have formed from consonant clusters when the first vowel of a word was lost; in this instance *[tɬana] > *[tɬna] > [ǁŋa] ~ [ᵑǁa].

On the other side of the equation, several non-endangered languages in vigorous use demonstrate click loss. For example, the East Kalahari languages have lost clicks from a large percentage of their vocabulary, presumably due to Bantu influence. As a rule, a click is replaced by a consonant close to the click in manner of articulation, with the place of articulation of the forward release: alveolar click releases (the [ǃ] family) tend to mutate into a velar stop or affricate, such as [k], [ɡ], [ŋ], [k͡x]; palatal clicks ([ǂ] etc.) tend to mutate into a palatal stop such as [c], [ɟ], [ɲ], [cʼ], or a post-alveolar affricate [tʃ], [dʒ]; and dental clicks ([ǀ] etc.) tend to mutate into an alveolar affricate [ts].
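These replacement tendencies amount to a simple mapping from click type to candidate non-click reflexes. The Python sketch below encodes only the tendencies just listed; which reflex a given language actually selects depends on the accompaniment of the original click, so this is an illustration, not a predictive model:

    # Typical non-click reflexes of each click type under click loss,
    # as described above. The choice among candidates in a real language
    # depends on the click's accompaniment (voicing, nasality, etc.).
    CLICK_LOSS_REFLEXES = {
        "ǃ": ["k", "ɡ", "ŋ", "k͡x"],             # alveolar -> velar stop/affricate
        "ǂ": ["c", "ɟ", "ɲ", "cʼ", "tʃ", "dʒ"],  # palatal -> palatal/postalveolar
        "ǀ": ["ts"],                             # dental -> alveolar affricate
    }

    def candidate_reflexes(click_type: str) -> list[str]:
        """Plausible replacements for a lost click of the given type."""
        return CLICK_LOSS_REFLEXES.get(click_type, [])

    print(candidate_reflexes("ǃ"))  # ['k', 'ɡ', 'ŋ', 'k͡x']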
Clicks are often presented as difficult sounds to articulate within words. However, children acquire them readily; a two-year-old, for example, may be able to pronounce a word with a lateral click [ǁ] with no problem, but still be unable to pronounce [s]. Lucy Lloyd reported that after long contact with the Khoi and San, it was difficult for her to refrain from using clicks when speaking English.
[ { "paragraph_id": 0, "text": "Click consonants, or clicks, are speech sounds that occur as consonants in many languages of Southern Africa and in three languages of East Africa. Examples familiar to English-speakers are the tut-tut (British spelling) or tsk! tsk! (American spelling) used to express disapproval or pity (IPA [ǀ]), the tchick! used to spur on a horse (IPA [ǁ]), and the clip-clop! sound children make with their tongue to imitate a horse trotting (IPA [ǃ]). However, these paralinguistic sounds in English are not full click consonants, as they only involve the front of the tongue, without the release of the back of the tongue that is required for clicks to combine with vowels and form syllables.", "title": "" }, { "paragraph_id": 1, "text": "Anatomically, clicks are obstruents articulated with two closures (points of contact) in the mouth, one forward and one at the back. The enclosed pocket of air is rarefied by a sucking action of the tongue (in technical terminology, clicks have a lingual ingressive airstream mechanism). The forward closure is then released, producing what may be the loudest consonants in the language, although in some languages such as Hadza and Sandawe, clicks can be more subtle and may even be mistaken for ejectives.", "title": "" }, { "paragraph_id": 2, "text": "Click consonants occur at six principal places of articulation. The International Phonetic Alphabet (IPA) provides five letters for these places (there is as yet no dedicated symbol for the sixth).", "title": "Phonetics and IPA notation" }, { "paragraph_id": 3, "text": "The above clicks sound like affricates, in that they involve a lot of friction. The next two families of clicks are more abrupt sounds that do not have this friction.", "title": "Phonetics and IPA notation" }, { "paragraph_id": 4, "text": "Technically, these IPA letters transcribe only the forward articulation of the click, not the entire consonant. As the Handbook states,", "title": "Phonetics and IPA notation" }, { "paragraph_id": 5, "text": "Since any click involves a velar or uvular closure [as well], it is possible to symbolize factors such as voicelessness, voicing or nasality of the click by combining the click symbol with the appropriate velar or uvular symbol: [k͡ǂ ɡ͡ǂ ŋ͡ǂ], [q͡ǃ].", "title": "Phonetics and IPA notation" }, { "paragraph_id": 6, "text": "Thus technically [ǂ] is not a consonant, but only one part of the articulation of a consonant, and one may speak of \"ǂ-clicks\" to mean any of the various click consonants that share the [ǂ] place of articulation. In practice, however, the simple letter ⟨ǂ⟩ has long been used as an abbreviation for [k͡ǂ], and in that role it is sometimes seen combined with diacritics for voicing (e.g. ⟨ǂ̬⟩ for [ɡ͡ǂ]), nasalization (e.g. ⟨ǂ̃⟩ for [ŋ͡ǂ]), etc. These differing transcription conventions may reflect differing theoretical analyses of the nature of click consonants, or attempts to address common misunderstandings of clicks.", "title": "Phonetics and IPA notation" }, { "paragraph_id": 7, "text": "Clicks occur in all three Khoisan language families of southern Africa, where they may be the most numerous consonants. To a lesser extent they occur in three neighbouring groups of Bantu languages—which borrowed them, directly or indirectly, from Khoisan. 
In the southeast, in eastern South Africa, Eswatini, Lesotho, Zimbabwe and southern Mozambique, they were adopted from a Tuu language (or languages) by the languages of the Nguni cluster (especially Zulu, Xhosa and Phuthi, but also to a lesser extent Swazi and Ndebele), and spread from them in a reduced fashion to the Zulu-based pidgin Fanagalo, Sesotho, Tsonga, Ronga, the Mzimba dialect of Tumbuka and more recently to Ndau and urban varieties of Pedi, where the spread of clicks continues. The second point of transfer was near the Caprivi Strip and the Okavango River where, apparently, the Yeyi language borrowed the clicks from a West Kalahari Khoe language; a separate development led to a smaller click inventory in the neighbouring Mbukushu, Kwangali, Gciriku, Kuhane and Fwe languages in Angola, Namibia, Botswana and Zambia. These sounds occur not only in borrowed vocabulary, but have spread to native Bantu words as well, in the case of Nguni at least partially due to a type of word taboo called hlonipha. Some creolised varieties of Afrikaans, such as Oorlams, retain clicks in Khoekhoe words.", "title": "Languages with clicks" }, { "paragraph_id": 8, "text": "Three languages in East Africa use clicks: Sandawe and Hadza of Tanzania, and Dahalo, an endangered South Cushitic language of Kenya that has clicks in only a few dozen words. It is thought the latter may remain from an episode of language shift.", "title": "Languages with clicks" }, { "paragraph_id": 9, "text": "The only non-African language known to have clicks as regular speech sounds is Damin, a ritual code once used by speakers of Lardil in Australia. In addition, one consonant in Damin is the egressive equivalent of a click, using the tongue to compress the air in the mouth for an outward (egressive) \"spurt\".", "title": "Languages with clicks" }, { "paragraph_id": 10, "text": "Once clicks are borrowed into a language as regular speech sounds, they may spread to native words, as has happened due to hlonipa word-taboo in the Nguni languages. In Gciriku, for example, the European loanword tomate (tomato) appears as cumáte with a click [ǀ], though it begins with a t in all neighbouring languages.", "title": "Use" }, { "paragraph_id": 11, "text": "Scattered clicks are found in ideophones and mimesis in other languages, such as Kongo /ᵑǃ/, Mijikenda /ᵑǀ/ and Hadza /ᵑʘʷ/ (Hadza does not otherwise have labial clicks). Ideophones often use phonemic distinctions not found in normal vocabulary.", "title": "Use" }, { "paragraph_id": 12, "text": "English and many other languages may use bare click releases in interjections, without an accompanying rear release or transition into a vowel, such as the dental \"tsk-tsk\" sound used to express disapproval, or the lateral tchick used with horses. In a number of languages ranging from the central Mediterranean to Iran, a bare dental click release accompanied by tipping the head upwards signifies \"no\". Libyan Arabic apparently has three such sounds. A voiceless nasal back-released velar click [ʞ] is used throughout Africa for backchanneling. This sound starts off as a typical click, but the action is reversed and it is the rear velar or uvular closure that is released, drawing in air from the throat and nasal passages.", "title": "Use" }, { "paragraph_id": 13, "text": "Clicks occasionally turn up elsewhere, as in the special registers twins sometimes develop with each other. 
In West Africa, clicks have been reported allophonically, and similarly in French and German, faint clicks have been recorded in rapid speech where consonants such as /t/ and /k/ overlap between words. In Rwanda, the sequence /mŋ/ may be pronounced either with an epenthetic vowel, [mᵊ̃ŋ], or with a light bilabial click, [m̃ŋ]—often by the same speaker.", "title": "Use" }, { "paragraph_id": 14, "text": "Speakers of Gan Chinese from Ningdu county, as well as speakers of Mandarin from Beijing and Jilin and presumably people from other parts of the country, produce flapped nasal clicks in nursery rhymes with varying degrees of competence, in the words for 'goose' and 'duck', both of which begin with /ŋ/ in Gan and until recently began with /ŋ/ in Mandarin as well. In Gan, the nursery rhyme is,", "title": "Use" }, { "paragraph_id": 15, "text": "where the /ŋ/ onsets are all pronounced [ᵑǃ¡].", "title": "Use" }, { "paragraph_id": 16, "text": "Occasionally other languages are claimed to have click sounds in general vocabulary. This is usually a misnomer for ejective consonants, which are found across much of the world.", "title": "Use" }, { "paragraph_id": 17, "text": "For the most part, the Southern African Khoisan languages only use root-initial clicks. Hadza, Sandawe and several Bantu languages also allow syllable-initial clicks within roots. In no language does a click close a syllable or end a word, but since the languages of the world that happen to have clicks consist mostly of CV syllables and allow at most only a limited set of consonants (such as a nasal or a glottal stop) to close a syllable or end a word, most consonants share the distribution of clicks in these languages.", "title": "Use" }, { "paragraph_id": 18, "text": "Most languages of the Khoesan families (Tuu, Kxʼa and Khoe) have four click types: { ǀ ǁ ǃ ǂ } or variants thereof, though a few have three or five, the last supplemented with either bilabial { ʘ } or retroflex { }. Hadza and Sandawe in Tanzania have three, { ǀ ǁ ǃ }. Yeyi is the only Bantu language with four, { ǀ ǁ ǃ ǂ }, while Xhosa and Zulu have three, { ǀ ǁ ǃ }, and most other Bantu languages with clicks have fewer.", "title": "Use" }, { "paragraph_id": 19, "text": "Like other consonants, clicks can be described using four parameters: place of articulation, manner of articulation, phonation (including glottalisation) and airstream mechanism. As noted above, clicks necessarily involve at least two closures, which in some cases operate partially independently: an anterior articulation traditionally represented by the special click symbol in the IPA—and a posterior articulation traditionally transcribed for convenience as oral or nasal, voiced or voiceless, though such features actually apply to the entire consonant. The literature also describes a contrast between velar and uvular rear articulations for some languages.", "title": "Types of clicks" }, { "paragraph_id": 20, "text": "In some languages that have been reported to make this distinction, such as Nǁng, all clicks have a uvular rear closure, and the clicks explicitly described as uvular are in fact cases where the uvular closure is independently audible: contours of a click into a pulmonic or ejective component, in which the click has two release bursts, the forward (click-type) and then the rearward (uvular) component. \"Velar\" clicks in these languages have only a single release burst, that of the forward release, and the release of the rear articulation isn't audible. 
However, in other languages all clicks are velar, and a few languages, such as Taa, have a true velar–uvular distinction that depends on the place rather than the timing of rear articulation and that is audible in the quality of the vowel.", "title": "Types of clicks" }, { "paragraph_id": 21, "text": "Regardless, in most of the literature the stated place of the click is the anterior articulation (called the release or influx), whereas the manner is ascribed to the posterior articulation (called the accompaniment or efflux). The anterior articulation defines the click type and is written with the IPA letter for the click (dental ⟨ǀ⟩, alveolar ⟨ǃ⟩, etc.), whereas the traditional term 'accompaniment' conflates the categories of manner (nasal, affricated), phonation (voiced, aspirated, breathy voiced, glottalised), as well as any change in the airstream with the release of the posterior articulation (pulmonic, ejective), all of which are transcribed with additional letters or diacritics, as in the nasal alveolar click, ⟨ǃŋ⟩ or ⟨ᵑǃ⟩ or—to take an extreme example—the voiced (uvular) ejective alveolar click, ⟨ᶢǃ͡qʼ⟩.", "title": "Types of clicks" }, { "paragraph_id": 22, "text": "The size of click inventories ranges from as few as three (in Sesotho) or four (in Dahalo), to dozens in the Kxʼa and Tuu (Northern and Southern Khoisan) languages. Taa, the last vibrant language in the latter family, has 45 to 115 click phonemes, depending on analysis (clusters vs. contours), and over 70% of words in the dictionary of this language begin with a click.", "title": "Types of clicks" }, { "paragraph_id": 23, "text": "Clicks appear more stop-like (sharp/abrupt) or affricate-like (noisy) depending on their place of articulation: In southern Africa, clicks involving an apical alveolar or laminal postalveolar closure are acoustically abrupt and sharp, like stops, whereas labial, dental and lateral clicks typically have longer and acoustically noisier click types that are superficially more like affricates. In East Africa, however, the alveolar clicks tend to be flapped, whereas the lateral clicks tend to be more sharp.", "title": "Types of clicks" }, { "paragraph_id": 24, "text": "The five click places of articulation with dedicated symbols in the International Phonetic Alphabet (IPA) are labial ʘ, dental ǀ, palatal (\"palato-alveolar\") ǂ, (post)alveolar (\"retroflex\") ǃ and lateral ǁ. In most languages, the alveolar and palatal types are abrupt; that is, they are sharp popping sounds with little frication (turbulent airflow). The labial, dental and lateral types, on the other hand, are typically noisy: they are longer, lip- or tooth-sucking sounds with turbulent airflow, and are sometimes called affricates. (This applies to the forward articulation; both may also have either an affricate or non-affricate rear articulation as well.) The apical places, ǃ and ǁ, are sometimes called \"grave\", because their pitch is dominated by low frequencies; whereas the laminal places, ǀ and ǂ, are sometimes called \"acute\", because they are dominated by high frequencies. (At least in the Nǁng language and Juǀʼhoan, this is associated with a difference in the placement of the rear articulation: \"grave\" clicks are uvular, whereas \"acute\" clicks are pharyngeal.) Thus the alveolar click /ǃ/ sounds something like a cork pulled from a bottle (a low-pitch pop), at least in Xhosa; whereas the dental click /ǀ/ is like English tsk! tsk!, a high-pitched sucking on the incisors. 
The lateral clicks are pronounced by sucking on the molars of one or both sides. The labial click /ʘ/ is different from what many people associate with a kiss: the lips are pressed more-or-less flat together, as they are for a [p] or an [m], not rounded as they are for a [w].", "title": "Transcription" }, { "paragraph_id": 25, "text": "The most populous languages with clicks, Zulu and Xhosa, use the letters c, q, x, by themselves and in digraphs, to write click consonants. Most Khoisan languages, on the other hand (with the notable exceptions of Naro and Sandawe), use a more iconic system based on the pipe ⟨|⟩. (The exclamation point for the \"retroflex\" click was originally a pipe with a subscript dot, along the lines of ṭ, ḍ, ṇ used to transcribe the retroflex consonants of India.) There are also two main conventions for the second letter of the digraph: voicing may be written with g and uvular affrication with x, or voicing with d and affrication with g (a convention of Afrikaans). In two orthographies of Juǀʼhoan, for example, voiced /ᶢǃ/ is written g! or dq, and /ᵏǃ͡χ/ !x or qg. In languages without /ᵏǃ͡χ/, such as Zulu, /ᶢǃ/ may be written gq.", "title": "Transcription" }, { "paragraph_id": 26, "text": "There are a few less-well-attested articulations. A reported subapical retroflex articulation ⟨⟩ in Grootfontein !Kung turns out to be alveolar with lateral release, ⟨ǃ⟩; Ekoka !Kung has a fricated alveolar click with an s-like release, provisionally transcribed ⟨ǃ͡s⟩; and Sandawe has a \"slapped\" alveolar click, provisionally transcribed ⟨ǃ¡⟩ (in turn, the lateral clicks in Sandawe are more abrupt and less noisy than in southern Africa). However, the Khoisan languages are poorly attested, and it is quite possible that, as they become better described, more click articulations will be found.", "title": "Transcription" }, { "paragraph_id": 27, "text": "Formerly, when a click consonant was transcribed, two symbols were used, one for each articulation, and connected with a tie bar. This is because a click such as [ɢ͡ǀ] was analysed as a voiced uvular rear articulation [ɢ] pronounced simultaneously with the forward ingressive release [ǀ]. The symbols may be written in either order, depending on the analysis: ⟨ɢ͡ǀ⟩ or ⟨ǀ͡ɢ⟩. However, a tie bar was not often used in practice, and when the manner was tenuis (a simple [k]), it was often omitted as well. That is, ⟨ǂ⟩ = ⟨kǂ⟩ = ⟨ǂk⟩ = ⟨k͡ǂ⟩ = ⟨ǂ͡k⟩. Regardless, elements that do not overlap with the forward release are usually written according to their temporal order: Prenasalisation is always written first (⟨ɴɢ͡ǀ⟩ = ⟨ɴǀ͡ɢ⟩ = ⟨ɴǀ̬⟩), and the non-lingual part of a contour is always written second (⟨k͡ǀʼqʼ⟩ = ⟨ǀ͡kʼqʼ⟩ = ⟨ǀ͡qʼ⟩).", "title": "Transcription" }, { "paragraph_id": 28, "text": "However, it is common to analyse clicks as simplex segments, despite the fact that the front and rear articulations are independent, and to use diacritics to indicate the rear articulation and the accompaniment. 
At first this tended to be ⟨ᵏǀ, ᶢǀ, ᵑǀ⟩ for ⟨k͡ǀ, ɡ͡ǀ, ŋ͡ǀ⟩, based on the belief that the rear articulation was velar; but as it has become clear that the rear articulation is often uvular or even pharyngeal even when there is no velar–uvular contrast, voicing and nasalisation diacritics more in keeping with the IPA have started to appear: ⟨ǀ̥, ǀ̬, ǀ̃, ŋǀ̬⟩ for ⟨ᵏǀ, ᶢǀ, ᵑǀ, ŋᶢǀ⟩.", "title": "Transcription" }, { "paragraph_id": 29, "text": "In practical orthography, the voicing or nasalisation is sometimes given the anterior place of articulation: dc for ᶢǀ and mʘ for ᵑʘ, for example.", "title": "Transcription" }, { "paragraph_id": 30, "text": "In the literature on Damin, the clicks are transcribed by adding ⟨!⟩ to the homorganic nasal: ⟨m!, nh!, n!, rn!⟩.", "title": "Transcription" }, { "paragraph_id": 31, "text": "Places of articulation are often called click types, releases, or influxes, though 'release' is also used for the accompaniment/efflux. There are seven or eight known places of articulation, not counting slapped or egressive clicks. These are (bi)labial affricated ʘ, or \"bilabial\"; laminal denti-alveolar affricated ǀ, or \"dental\"; apical (post)alveolar plosive ǃ, or \"alveolar\"; laminal palatal plosive ǂ, or \"palatal\"; laminal palatal affricated ǂᶴ (known only from Ekoka !Kung); subapical postalveolar , or \"retroflex\" (only known from Central !Kung and possibly Damin); and apical (post)alveolar lateral ǁ, or \"lateral\".", "title": "Places of articulation" }, { "paragraph_id": 32, "text": "Languages illustrating each of these articulations are listed below. Given the poor state of documentation of Khoisan languages, it is quite possible that additional places of articulation will turn up. No language is known to contrast more than five.", "title": "Places of articulation" }, { "paragraph_id": 33, "text": "Extra-linguistically, Coatlán Zapotec of Mexico uses a linguolabial click, [ǀ̼ʔ], as mimesis for a pig drinking water, and several languages, such as Wolof, use a velar click [ʞ], long judged to be physically impossible, for backchanneling and to express approval. An extended dental click with lip pursing or compression (\"sucking-teeth\"), variable in sound and sometimes described as intermediate between [ǀ] and [ʘ], is found across West Africa, the Caribbean and into the United States.", "title": "Places of articulation" }, { "paragraph_id": 34, "text": "The exact place of the alveolar clicks varies between languages. The lateral, for example, is alveolar in Khoekhoe but postalveolar or even palatal in Sandawe; the central is alveolar in Nǀuu but postalveolar in Juǀʼhoan.", "title": "Places of articulation" }, { "paragraph_id": 35, "text": "The terms for the click types were originally developed by Bleek in 1862. Since then there has been some conflicting variation. However, apart from \"cerebral\" (retroflex), which was found to be an inaccurate label when true retroflex clicks were discovered, Bleek's terms are still considered normative today. Here are the terms used in some of the main references.", "title": "Places of articulation" }, { "paragraph_id": 36, "text": "The dental, lateral and bilabial clicks are rarely confused, but the palatal and alveolar clicks frequently have conflicting names in older literature, and non-standard terminology is fossilized in Unicode. 
However, since Ladefoged & Traill (1984) clarified the places of articulation, the terms listed under Vossen (2013) in the table above have become standard, apart from such details as whether in a particular language ǃ and ǁ are alveolar or postalveolar, or whether the rear articulation is velar, uvular or pharyngeal, which again varies between languages (or may even be contrastive within a language).", "title": "Places of articulation" }, { "paragraph_id": 37, "text": "In several languages, including Nama and Juǀʼhoan, the alveolar click types [ǃ] and [ǁ] only occur, or preferentially occur, before back vowels, whereas the dental and palatal clicks occur before any vowel. The effect is most noticeable with the high front vowel [i]. In Nama, for example, the diphthong [əi] is common but [i] is rare after alveolar clicks, whereas the opposite is true after dental and palatal clicks. This is a common effect of uvular or uvularised consonants on vowels in both click and non-click languages. In Taa, for example, the back-vowel constraint is triggered by both alveolar clicks and uvular stops, but not by palatal clicks or velar stops: sequences such as */ǃi/ and */qi/ are rare to non-existent, whereas sequences such as /ǂi/ and /ki/ are common. The back-vowel constraint is also triggered by labial clicks, though not by labial stops. Clicks subject to this constraint involve a sharp retraction of the tongue during release.", "title": "The back-vowel constraint" }, { "paragraph_id": 38, "text": "Miller and colleagues (2003) used ultrasound imaging to show that the rear articulation of the alveolar clicks ([ǃ]) in Nama is substantially different from that of palatal and dental clicks. Specifically, the shape of the body of the tongue in palatal clicks is very similar to that of the vowel [i], and involves the same tongue muscles, so that sequences such as [ǂi] involve a simple and quick transition. The rear articulation of the alveolar clicks, however, is several centimetres further back, and involves a different set of muscles in the uvular region. The part of the tongue required to approach the palate for the vowel [i] is deeply retracted in [ǃ], as it lies at the bottom of the air pocket used to create the vacuum required for the click airstream. This makes the transition required for [ǃi] much more complex, and its timing more difficult, than with the shallower and more forward tongue position of the palatal clicks. Consequently, [ǃi] takes 50 ms longer to pronounce than [ǂi], the same amount of time required to pronounce [ǃəi].", "title": "The back-vowel constraint" }, { "paragraph_id": 39, "text": "Languages do not all behave alike. In Nǀuu, the simple clicks /ʘ, ǃ, ǁ/ trigger the [əi] and [æ] allophones of /i/ and /e/, whereas /ǀ, ǂ/ do not. All of the affricated contour clicks, such as /ǂ͡χ/, do as well, as do the uvular stops /q, χ/. However, the occlusive contour clicks pattern like the simple clicks, and /ǂ͡q/ does not trigger the back-vowel constraint. This is because they involve tongue-root raising rather than tongue-root retraction in the uvular-pharyngeal region. 
However, in Gǀwi, which is otherwise largely similar, both /ǂ͡q/ and /ǂ͡χ/ trigger the back-vowel constraint (Miller 2009).", "title": "The back-vowel constraint" }, { "paragraph_id": 40, "text": "Click manners are often called click accompaniments or effluxes, but both terms have met with objections on theoretical grounds.", "title": "Manners of articulation" }, { "paragraph_id": 41, "text": "There is a great variety of click manners, both simplex and complex, the latter variously analysed as consonant clusters or contours. With so few click languages, and so little study of them, it is also unclear to what extent clicks in different languages are equivalent. For example, the [ǃkˀ] of Khoekhoe, [ǃkˀ ~ ŋˀǃk] of Sandawe and [ŋ̊ǃˀ ~ ŋǃkˀ] of Hadza may be essentially the same phone; no language distinguishes them, and the differences in transcription may have more to do with the approach of the linguist than with actual differences in the sounds. Such suspected allophones/allographs are listed on a common row in the table below.", "title": "Manners of articulation" }, { "paragraph_id": 42, "text": "Some Khoisan languages are typologically unusual in allowing mixed voicing in non-click consonant clusters/contours, such as dt͡sʼk͡xʼ, so it is not surprising that they would allow mixed voicing in clicks as well. This may be an effect of epiglottalised voiced consonants, because voicing is incompatible with epiglottalisation.", "title": "Manners of articulation" }, { "paragraph_id": 43, "text": "As do other consonants, clicks vary in phonation. Oral clicks are attested with four phonations: tenuis, aspirated, voiced and breathy voiced (murmured). Nasal clicks may also vary, with plain voiced, breathy voiced / murmured nasal, aspirated and unaspirated voiceless clicks attested (the last only in Taa). The aspirated nasal clicks are often said to have 'delayed aspiration'; there is nasal airflow throughout the click, which may become voiced between vowels, though the aspiration itself is voiceless. A few languages also have pre-glottalised nasal clicks, which have very brief prenasalisation but have not been phonetically analysed to the extent that other types of clicks have.", "title": "Manners of articulation" }, { "paragraph_id": 44, "text": "All click languages have nasal clicks, and all but Dahalo and Damin also have oral clicks. All click languages but Damin have at least one phonation contrast as well.", "title": "Manners of articulation" }, { "paragraph_id": 45, "text": "Clicks may be pronounced with a third place of articulation, glottal. A glottal stop is made during the hold of the click; the (necessarily voiceless) click is released, and then the glottal hold is released into the vowel. Glottalised clicks are very common, and they are generally nasalised as well. The nasalisation cannot be heard during the click release, as there is no pulmonic airflow, and generally not at all when the click occurs at the beginning of an utterance, but it has the effect of nasalising preceding vowels, to the extent that the glottalised clicks of Sandawe and Hadza are often described as prenasalised when in medial position. Two languages, Gǀwi and Yeyi, contrast plain and nasal glottalised clicks, but in languages without such a contrast, the glottalised click is nasal. 
Miller (2011) analyses the glottalisation as phonation, and so considers these to be simple clicks.", "title": "Manners of articulation" }, { "paragraph_id": 46, "text": "Various languages also have prenasalised clicks, which may be analysed as consonant sequences. Sotho, for example, allows a syllabic nasal before its three clicks, as in nnqane 'the other side' (prenasalised nasal) and seqhenqha 'hunk'.", "title": "Manners of articulation" }, { "paragraph_id": 47, "text": "There is ongoing discussion as to how the distinction between what were historically described as 'velar' and 'uvular' clicks is best described. The 'uvular' clicks are only found in some languages, and have an extended pronunciation that suggests that they are more complex than the simple ('velar') clicks, which are found in all. Nakagawa (1996) describes the extended clicks in Gǀwi as consonant clusters, sequences equivalent to English st or pl, whereas Miller (2011) analyses similar sounds in several languages as click–non-click contours, where a click transitions into a pulmonic or ejective articulation within a single segment, analogous to how English ch and j transition from occlusive to fricative but still behave as unitary sounds. With ejective clicks, for example, Miller finds that although the ejective release follows the click release, it is the rear closure of the click that is ejective, not an independently articulated consonant. That is, in a simple click, the release of the rear articulation is not audible, whereas in a contour click, the rear (uvular) articulation is audibly released after the front (click) articulation, resulting in a double release.", "title": "Manners of articulation" }, { "paragraph_id": 48, "text": "These contour clicks may be linguo-pulmonic, that is, they may transition from a click (lingual) articulation to a normal pulmonic consonant like [ɢ] (e.g. [ǀ͡ɢ]); or linguo-glottalic and transition from lingual to an ejective consonant like [qʼ] (e.g. [ǀ͡qʼ]): that is, a sequence of ingressive (lingual) release + egressive (pulmonic or glottalic) release. In some cases there is a shift in place of articulation as well, and instead of a uvular release, the uvular click transitions to a velar or epiglottal release (depending on the description, [ǂ͡kxʼ] or [ǂᴴ]). Although homorganic [ǂ͡χʼ] does not contrast with heterorganic [ǂ͡kxʼ] in any known language, they are phonetically quite distinct (Miller 2011).", "title": "Manners of articulation" }, { "paragraph_id": 49, "text": "Implosive clicks, i.e. velar [ɠ͡ʘ ɠ͡ǀ ɠ͡ǃ ɠ͡ǂ ɠ͡ǁ], uvular [ʛ͡ʘ ʛ͡ǀ ʛ͡ǃ ʛ͡ǂ ʛ͡ǁ], and de facto front-closed palatal [ʄ͡ʘ ʄ͡ǀ ʄ͡ǃ ʄ͡ǁ] are not only possible but easier to produce than modally voiced clicks. However, they are not attested in any language.", "title": "Manners of articulation" }, { "paragraph_id": 50, "text": "Apart from Dahalo, Damin and many of the Bantu languages (Yeyi and Xhosa being exceptions), 'click' languages have glottalized nasal clicks. Contour clicks are restricted to southern Africa, but are very common there: they are found in all members of the Tuu, Kxʼa and Khoe families, as well as in the Bantu language Yeyi.", "title": "Manners of articulation" }, { "paragraph_id": 51, "text": "In a comparative study of clicks across various languages, using her own field work as well as phonetic descriptions and data by other field researchers, Miller (2011) posits 21 types of clicks that contrast in manner or airstream. 
The homorganic and heterorganic affricated ejective clicks do not contrast in any known language, but are judged dissimilar enough to keep separate. Miller's conclusions differ from those of the primary researcher of a language; see the individual languages for details.", "title": "Manners of articulation" }, { "paragraph_id": 52, "text": "(all spoken primarily in South Africa, Namibia and Botswana; Khoekhoe is similar to Korana except it has lost ejective /ᵏꞰ͡χʼ/)", "title": "Manners of articulation" }, { "paragraph_id": 53, "text": "(Zulu is similar to Xhosa apart from not having /ᵑꞰˀ/)", "title": "Manners of articulation" }, { "paragraph_id": 54, "text": "Each language below is illustrated with Ʞ as a placeholder for the different click types. Under each language are the orthography (in italics, with old forms in parentheses), the researchers' transcription (in ⟨angle brackets⟩), or allophonic variation (in [brackets]). Some languages also have labialised or prenasalised clicks as well as those listed below.", "title": "Manners of articulation" }, { "paragraph_id": 55, "text": "Yeyi also has prenasalised /ŋᶢꞰ/. The original researchers believe that [Ʞʰ] and [Ʞχ] are allophones.", "title": "Manners of articulation" }, { "paragraph_id": 56, "text": "A DoBeS (2008) study of the Western ǃXoo dialect of Taa found several new manners: creaky voiced (the voiced equivalent of glottalised oral), breathy-voiced nasal, prenasalised glottalised (the voiced equivalent of glottalised) and a (pre)voiced ejective. These extra voiced clicks reflect Western ǃXoo morphology, where many nouns form their plural by voicing their initial consonant. DoBeS analyses most Taa clicks as clusters, leaving nine basic manners (marked with asterisks in the table). This comes close to Miller's distinction between simple and contour clicks, shaded light and medium grey in the table.", "title": "Manners of articulation" }, { "paragraph_id": 57, "text": "Languages of the southern African Khoisan families only permit clicks at the beginning of a word root. However, they also restrict other classes of consonant, such as ejectives and affricates, to root-initial position. The Bantu languages, Hadza and Sandawe allow clicks within roots.", "title": "Phonotactics" }, { "paragraph_id": 58, "text": "In some languages, all click consonants within known roots are the same phoneme, as in Hadza cikiringcingca /ǀikiɺiN.ǀiN.ǀa/ 'pinkie finger', which has three tenuis dental clicks. Other languages are known to have the occasional root with different clicks, as in Xhosa ugqwanxa /uᶢ̊ǃʱʷaᵑǁa/ 'black ironwood', which has a slack-voiced alveolar click and a nasal lateral click.", "title": "Phonotactics" }, { "paragraph_id": 59, "text": "No natural language allows clicks at the ends of syllables or words, but then no language with clicks allows many consonants at all in those positions. Similarly, clicks are not found in underlying consonant clusters apart from /Cw/ (and, depending on the analysis, /Cχ/), as languages with clicks do not have consonant clusters other than these. Due to vowel elision, however, there are cases where clicks are pronounced in cross-linguistically common types of consonant clusters, such as Xhosa [sᵑǃɔɓilɛ] Snqobile, from Sinqobile (a name), and [isǁʰɔsa] isXhosa, from isiXhosa (the Xhosa language).", "title": "Phonotactics" }, { "paragraph_id": 60, "text": "Like other articulatorily complex consonants, clicks tend to be found in lexical words rather than in grammatical words, but this is only a tendency. 
In Nǁng, for example, there are two sets of personal pronouns, a full one without clicks and a partial set with clicks (ńg 'I', á 'thou', í 'we all', ú 'you', vs. nǀǹg 'I', gǀà 'thou', gǀì 'we all', gǀù 'you'), as well as other grammatical words with clicks such as ǁu 'not' and nǀa 'with, and'.", "title": "Phonotactics" }, { "paragraph_id": 61, "text": "One genetic study concluded that clicks, which occur in the languages of the genetically divergent populations Hadza and !Kung, may be an ancient element of human language. However, this conclusion relies on several dubious assumptions (see Hadza language), and most linguists assume that clicks, being quite complex consonants, arose relatively late in human history. How they arose is not known, but it is generally assumed that they developed from sequences of non-click consonants, as they are found allophonically for doubly articulated consonants in West Africa, for /tk/ sequences that overlap at word boundaries in German, and for the sequence /mw/ in Ndau and Tonga. Such developments have also been posited in historical reconstruction. For example, the Sandawe word for 'horn', /tɬana/, with a lateral affricate, may be a cognate with the root /ᵑǁaː/ found throughout the Khoe family, which has a lateral click. This and other words suggest that at least some Khoe clicks may have formed from consonant clusters when the first vowel of a word was lost; in this instance *[tɬana] > *[tɬna] > [ǁŋa] ~ [ᵑǁa].", "title": "Click genesis and click loss" }, { "paragraph_id": 62, "text": "On the other side of the equation, several non-endangered languages in vigorous use demonstrate click loss. For example, the East Kalahari languages have lost clicks from a large percentage of their vocabulary, presumably due to Bantu influence. As a rule, a click is replaced by a consonant close to the manner of articulation of the click and the place of articulation of the forward release: alveolar click releases (the [ǃ] family) tend to mutate into a velar stop or affricate, such as [k], [ɡ], [ŋ], [k͡x]; palatal clicks ([ǂ] etc.) tend to mutate into a palatal stop such as [c], [ɟ], [ɲ], [cʼ], or a post-alveolar affricate [tʃ], [dʒ]; and dental clicks ([ǀ] etc.) tend to mutate into an alveolar affricate [ts].", "title": "Click genesis and click loss" }, { "paragraph_id": 63, "text": "Clicks are often presented as difficult sounds to articulate within words. However, children acquire them readily; a two-year-old, for example, may be able to pronounce a word with a lateral click [ǁ] with no problem, but still be unable to pronounce [s]. Lucy Lloyd reported that after long contact with the Khoi and San, it was difficult for her to refrain from using clicks when speaking English.", "title": "Difficulty" } ]
Click consonants, or clicks, are speech sounds that occur as consonants in many languages of Southern Africa and in three languages of East Africa. Examples familiar to English-speakers are the tut-tut or tsk! tsk! used to express disapproval or pity, the tchick! used to spur on a horse, and the clip-clop! sound children make with their tongue to imitate a horse trotting. However, these paralinguistic sounds in English are not full click consonants, as they only involve the front of the tongue, without the release of the back of the tongue that is required for clicks to combine with vowels and form syllables. Anatomically, clicks are obstruents articulated with two closures in the mouth, one forward and one at the back. The enclosed pocket of air is rarefied by a sucking action of the tongue. The forward closure is then released, producing what may be the loudest consonants in the language, although in some languages such as Hadza and Sandawe, clicks can be more subtle and may even be mistaken for ejectives.
2002-01-20T17:40:58Z
2023-12-14T17:11:27Z
[ "Template:Cite book", "Template:Short description", "Template:IPA", "Template:See also", "Template:Further", "Template:Angbr", "Template:Which", "Template:Cite web", "Template:Div col", "Template:Authority control", "Template:Reflist", "Template:Cite conference", "Template:Note", "Template:SOWL", "Template:Infobox symbol", "Template:Angbr IPA", "Template:Blockquote", "Template:Use dmy dates", "Template:Anchor", "Template:Ref", "Template:Div col end", "Template:Webarchive", "Template:IPA navigation", "Template:Articulation navbox", "Template:Citation needed", "Template:Main", "Template:Clarify", "Template:IPAblink", "Template:Refn", "Template:Sfn", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Click_consonant
7,817
The Cider House Rules
The Cider House Rules (1985) is a novel by American writer John Irving, a Bildungsroman that was later adapted into a 1999 film and a stage play by Peter Parnell. The story, set in the pre– and post–World War II era, tells of a young man, Homer Wells, growing up under the guidance of Dr. Wilbur Larch, an obstetrician and abortion provider. The story relates his early life at Larch's orphanage in Maine and follows Homer as he eventually leaves the nest and comes of age. Homer Wells is shown growing up in an orphanage where he spends his childhood trying to be "of use" as a medical assistant to director Dr. Wilbur Larch, whose history is told in flashbacks: After a traumatic misadventure with a prostitute as a young man, Wilbur turns his back on sex and love, choosing instead to help women with unwanted pregnancies give birth and then keeping the babies in an orphanage. He makes a point of maintaining an emotional distance from the orphans, so that they can more easily make the transition into an adoptive family, but when it becomes clear that Homer is going to spend his childhood at the orphanage, Wilbur trains the orphan as an obstetrician and comes to love him like a son. Wilbur's and Homer's lives are complicated by the abortions Wilbur provides. Wilbur came to this work reluctantly, but is driven by having seen the horrors of back-alley operations. Homer, upon learning Wilbur's secret, considers it morally wrong. As a young man, Homer befriends a young couple, Candy Kendall and Wally Worthington, who come to St. Cloud's for an abortion. Homer leaves the orphanage, and returns with them to Wally's family's orchard in Heart's Rock, near the Maine coast. Wally and Homer become best friends and Homer develops a secret love for Candy. Wally goes off to serve in the Second World War and his plane is shot down over Burma. He is declared missing by the military, but Homer and Candy both believe he is dead and move on with their lives, which includes beginning a romantic relationship. When Candy becomes pregnant, they go back to St. Cloud's Orphanage, where their son is born and named Angel. Subsequently, Wally is found in Burma and returns home, paralyzed from the waist down. He is still able to have sexual intercourse but is sterile due to an infection caught in Burma. Homer and Candy lie to the family about Angel's parentage, claiming that Homer had adopted him. Wally and Candy marry shortly afterward, but Candy and Homer maintain a secret affair that lasts some 15 years. Many years later, teenaged Angel falls in love with Rose, the daughter of the head migrant worker at the apple orchard. Rose becomes pregnant by her father, and Homer aborts her fetus. Homer decides to return to the orphanage after Wilbur's death, to work as the new director. Though he maintains his distaste for abortions, he continues Dr. Larch's practice of performing abortions, and he dreams of the day when abortions are legal. The name "The Cider House Rules" refers to the list of rules that migrant workers are supposed to follow at the Ocean View Orchards. However, none of them can read, and they are completely unaware of the rules – which have been posted for years. A subplot follows the character Melony, who grew up alongside Homer in the orphanage. She was Homer's first girlfriend. After Homer leaves the orphanage, so does she in an effort to find him. She eventually becomes an electrician and takes a female lover, Lorna. Melony is stoic; she refuses to press charges against a man who brutally broke her nose and arm, intending instead to take revenge later. She is the catalyst that transforms Homer from his comfortable, but not entirely admirable, position at the apple orchard into Dr. Larch's replacement. Wally's experience getting shot down over Burma was based in part on that of Irving's biological father (whom he never met), who was shot down over Burma and survived. The novel was adapted into a film of the same name, released in 1999 and directed by Lasse Hallström. It starred Tobey Maguire as Homer Wells.
[ { "paragraph_id": 0, "text": "The Cider House Rules (1985) is a novel by American writer John Irving, a Bildungsroman that was later adapted into a 1999 film and a stage play by Peter Parnell. The story, set in the pre– and post–World War II era, tells of a young man, Homer Wells, growing up under the guidance of Dr. Wilbur Larch, an obstetrician and abortion provider. The story relates his early life at Larch's orphanage in Maine and follows Homer as he eventually leaves the nest and comes of age.", "title": "" }, { "paragraph_id": 1, "text": "Homer Wells is shown growing up in an orphanage where he spends his childhood trying to be \"of use\" as a medical assistant to director Dr. Wilbur Larch, whose history is told in flashbacks: After a traumatic misadventure with a prostitute as a young man, Wilbur turns his back on sex and love, choosing instead to help women with unwanted pregnancies give birth and then keeping the babies in an orphanage.", "title": "Plot" }, { "paragraph_id": 2, "text": "He makes a point of maintaining an emotional distance from the orphans, so that they can more easily make the transition into an adoptive family, but when it becomes clear that Homer is going to spend his childhood at the orphanage, Wilbur trains the orphan as an obstetrician and comes to love him like a son.", "title": "Plot" }, { "paragraph_id": 3, "text": "Wilbur's and Homer's lives are complicated by the abortions Wilbur provides. Wilbur came to this work reluctantly, but is driven by having seen the horrors of back-alley operations. Homer, upon learning Wilbur's secret, considers it morally wrong.", "title": "Plot" }, { "paragraph_id": 4, "text": "As a young man, Homer befriends a young couple, Candy Kendall and Wally Worthington, who come to St. Cloud's for an abortion. Homer leaves the orphanage, and returns with them to Wally's family's orchard in Heart's Rock, near the Maine coast. Wally and Homer become best friends and Homer develops a secret love for Candy. Wally goes off to serve in the Second World War and his plane is shot down over Burma. He is declared missing by the military, but Homer and Candy both believe he is dead and move on with their lives, which includes beginning a romantic relationship. When Candy becomes pregnant, they go back to St. Cloud's Orphanage, where their son is born and named Angel.", "title": "Plot" }, { "paragraph_id": 5, "text": "Subsequently, Wally is found in Burma and returns home, paralyzed from the waist down. He is still able to have sexual intercourse but is sterile due to an infection caught in Burma. Homer and Candy lie to the family about Angel's parentage, claiming that Homer had adopted him. Wally and Candy marry shortly afterward, but Candy and Homer maintain a secret affair that lasts some 15 years.", "title": "Plot" }, { "paragraph_id": 6, "text": "Many years later, teenaged Angel falls in love with Rose, the daughter of the head migrant worker at the apple orchard. Rose becomes pregnant by her father, and Homer aborts her fetus. Homer decides to return to the orphanage after Wilbur's death, to work as the new director. Though he maintains his distaste for abortions, he continues Dr. Larch's practice of performing abortions, and he dreams of the day when abortions are legal.", "title": "Plot" }, { "paragraph_id": 7, "text": "The name \"The Cider House Rules\" refers to the list of rules that migrant workers are supposed to follow at the Ocean View Orchards. 
However, none of them can read, and they are completely unaware of the rules – which have been posted for years.", "title": "Plot" }, { "paragraph_id": 8, "text": "A subplot follows the character Melony, who grew up alongside Homer in the orphanage. She was Homer's first girlfriend. After Homer leaves the orphanage, so does she in an effort to find him. She eventually becomes an electrician and takes a female lover, Lorna. Melony is stoic; she refuses to press charges against a man who brutally broke her nose and arm, intending instead to take revenge later. She is the catalyst that transforms Homer from his comfortable, but not entirely admirable, position at the apple orchard into Dr. Larch's replacement.", "title": "Plot" }, { "paragraph_id": 9, "text": "Wally's experience getting shot down over Burma was based in part on that of Irving's biological father (whom he never met), who was shot down over Burma and survived.", "title": "Background" }, { "paragraph_id": 10, "text": "The novel was adapted into a film of the same name, released in 1999 and directed by Lasse Hallström. It starred Tobey Maguire as Homer Wells.", "title": "Film adaptation" } ]
The Cider House Rules (1985) is a novel by American writer John Irving, a Bildungsroman that was later adapted into a 1999 film and a stage play by Peter Parnell. The story, set in the pre– and post–World War II era, tells of a young man, Homer Wells, growing up under the guidance of Dr. Wilbur Larch, an obstetrician and abortion provider. The story relates his early life at Larch's orphanage in Maine and follows Homer as he eventually leaves the nest and comes of age.
2002-02-25T15:51:15Z
2023-09-14T19:52:12Z
[ "Template:For", "Template:Infobox book", "Template:Main", "Template:Cite web", "Template:John Irving", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/The_Cider_House_Rules
7,818
Consumer
A consumer is a person or a group who intends to order or use purchased goods, products, or services primarily for personal, social, family, household and similar needs, not directly related to entrepreneurial or business activities. The term most commonly refers to a person who purchases goods and services for personal use. "Consumers, by definition, include us all," said President John F. Kennedy, offering his definition to the United States Congress on March 15, 1962. This speech became the basis for the creation of World Consumer Rights Day, now celebrated on March 15. In his speech, John Fitzgerald Kennedy outlined the responsibility that governments owe to consumers in helping them exercise their rights, including: In an economy, a consumer buys goods or services primarily for consumption and not for resale or for commercial purposes. Consumers pay some amount of money (or equivalent) for goods or services, then consume (use up) them. As such, consumers play a vital role in the economic system of a capitalist society and form a fundamental part of any economy. Without consumer demand, producers would lack one of the key motivations to produce: to sell to consumers. The consumer also forms one end of the chain of distribution. Recently in marketing, instead of marketers generating broad demographic and psychographic profiles of market segments, marketers have started to engage in personalized marketing, permission marketing, and mass customization to target potential consumers. Largely due to the rise of the Internet, consumers are shifting more and more towards becoming prosumers, consumers who are also producers (often of information and media on the social web) – they influence the products created (e.g. by customization, crowdfunding or publishing their preferences), actively participate in the production process, or use interactive products. The law primarily uses a notion of the consumer in relation to consumer protection laws, and the definition of consumer is often restricted to living persons (not corporations or businesses) and excludes commercial users. A typical legal rationale for protecting the consumer is based on the notion of policing market failures and inefficiencies, such as inequalities of bargaining power between a consumer and a business. As all potential voters are also consumers, consumer protection has a clear political significance. Concern over the interests of consumers has spawned consumer activism, where organized activists do research, education and advocacy to improve the offer of products and services. Consumer education has been incorporated into some school curricula. There are also various non-profit publications, such as Which?, Consumer Reports and Choice magazine, dedicated to assisting in consumer education and decision making. In India, the Consumer Protection Act 1986 differentiates between the consumption of a commodity or service for personal use and consumption to earn a livelihood. Only consumers are protected under this act; any person, entity or organization purchasing a commodity for commercial reasons is exempted from any benefits of this act.
[ { "paragraph_id": 0, "text": "A consumer is a person or a group who intends to order or use purchased goods, products, or services primarily for personal, social, family, household and similar needs, not directly related to entrepreneurial or business activities. The term most commonly refers to a person who purchases goods and services for personal use.", "title": "" }, { "paragraph_id": 1, "text": "\"Consumers, by definition, include us all,\" said President John F. Kennedy, offering his definition to the United States Congress on March 15, 1962. This speech became the basis for the creation of World Consumer Rights Day, now celebrated on March 15. In his speech, John Fitzgerald Kennedy outlined the responsibility that governments owe to consumers in helping them exercise their rights, including:", "title": "Consumer rights" }, { "paragraph_id": 2, "text": "In an economy, a consumer buys goods or services primarily for consumption and not for resale or for commercial purposes. Consumers pay some amount of money (or equivalent) for goods or services, then consume (use up) them. As such, consumers play a vital role in the economic system of a capitalist society and form a fundamental part of any economy.", "title": "Economics and marketing" }, { "paragraph_id": 3, "text": "Without consumer demand, producers would lack one of the key motivations to produce: to sell to consumers. The consumer also forms one end of the chain of distribution.", "title": "Economics and marketing" }, { "paragraph_id": 4, "text": "Recently in marketing, instead of marketers generating broad demographic and psychographic profiles of market segments, marketers have started to engage in personalized marketing, permission marketing, and mass customization to target potential consumers.", "title": "Economics and marketing" }, { "paragraph_id": 5, "text": "Largely due to the rise of the Internet, consumers are shifting more and more towards becoming prosumers, consumers who are also producers (often of information and media on the social web) – they influence the products created (e.g. by customization, crowdfunding or publishing their preferences), actively participate in the production process, or use interactive products.", "title": "Economics and marketing" }, { "paragraph_id": 6, "text": "The law primarily uses a notion of the consumer in relation to consumer protection laws, and the definition of consumer is often restricted to living persons (not corporations or businesses) and excludes commercial users. A typical legal rationale for protecting the consumer is based on the notion of policing market failures and inefficiencies, such as inequalities of bargaining power between a consumer and a business. As all potential voters are also consumers, consumer protection has a clear political significance.", "title": "Law and politics" }, { "paragraph_id": 7, "text": "Concern over the interests of consumers has spawned consumer activism, where organized activists do research, education and advocacy to improve the offer of products and services. Consumer education has been incorporated into some school curricula. There are also various non-profit publications, such as Which?, Consumer Reports and Choice magazine, dedicated to assisting in consumer education and decision making.", "title": "Law and politics" }, { "paragraph_id": 8, "text": "In India, the Consumer Protection Act 1986 differentiates between the consumption of a commodity or service for personal use and consumption to earn a livelihood. 
Only consumers are protected under this act; any person, entity or organization purchasing a commodity for commercial reasons is exempted from any benefits of this act.", "title": "Law and politics" } ]
A consumer is a person or a group who intends to order or use purchased goods, products, or services primarily for personal, social, family, household and similar needs, not directly related to entrepreneurial or business activities. The term most commonly refers to a person who purchases goods and services for personal use.
2002-01-21T15:15:56Z
2023-12-31T02:09:45Z
[ "Template:Short description", "Template:Div col end", "Template:Consumerism", "Template:Cite web", "Template:Cite book", "Template:Cite journal", "Template:About", "Template:Reflist", "Template:Cite magazine", "Template:Authority control", "Template:When?", "Template:Quantify", "Template:Citation needed", "Template:Div col" ]
https://en.wikipedia.org/wiki/Consumer
7,819
Cactus
A cactus (pl.: cacti, cactuses, or less commonly, cactus) is a member of the plant family Cactaceae (/kæˈkteɪsiaɪ, -siːiː/), a family comprising about 127 genera with some 1,750 known species of the order Caryophyllales. The word cactus derives, through Latin, from the Ancient Greek word κάκτος (káktos), a name originally used by Theophrastus for a spiny plant whose identity is now not certain. Cacti occur in a wide range of shapes and sizes. They are native to the Americas, ranging from Patagonia in the south to parts of western Canada in the north, with the exception of Rhipsalis baccifera, which is also found in Africa and Sri Lanka. Cacti are adapted to live in very dry environments, including the Atacama Desert, one of the driest places on Earth. Because of this, cacti show many adaptations to conserve water. For example, almost all cacti are succulents, meaning they have thickened, fleshy parts adapted to store water. Unlike many other succulents, the stem is the only part of most cacti where this vital process takes place. Most species of cacti have lost true leaves, retaining only spines, which are highly modified leaves. As well as defending against herbivores, spines help prevent water loss by reducing air flow close to the cactus and providing some shade. In the absence of true leaves, cacti's enlarged stems carry out photosynthesis. Cactus spines are produced from specialized structures called areoles, a kind of highly reduced branch. Areoles are an identifying feature of cacti. As well as spines, areoles give rise to flowers, which are usually tubular and multipetaled. Many cacti have short growing seasons and long dormancies and are able to react quickly to any rainfall, helped by an extensive but relatively shallow root system that quickly absorbs any water reaching the ground surface. Cactus stems are often ribbed or fluted, with the number of ribs typically corresponding to a term of the Fibonacci sequence (2, 3, 5, 8, 13, 21, 34, etc.). This allows them to expand and contract easily for quick water absorption after rain, followed by retention over long drought periods. Like other succulent plants, most cacti employ a special mechanism called "crassulacean acid metabolism" (CAM) as part of photosynthesis. Transpiration, during which carbon dioxide enters the plant and water escapes, does not take place during the day at the same time as photosynthesis, but instead occurs at night. The plant stores the carbon dioxide it takes in as malic acid, retaining it until daylight returns, and only then using it in photosynthesis. Because transpiration takes place during the cooler, more humid night hours, water loss is significantly reduced. Many smaller cacti have globe-shaped stems, combining the highest possible volume for water storage with the lowest possible surface area for water loss from transpiration. The tallest free-standing cactus is Pachycereus pringlei, with a maximum recorded height of 19.2 m (63 ft), and the smallest is Blossfeldia liliputiana, only about 1 cm (0.4 in) in diameter at maturity. A fully grown saguaro (Carnegiea gigantea) is said to be able to absorb as much as 200 U.S. gallons (760 L; 170 imp gal) of water during a rainstorm. A few species differ significantly in appearance from most of the family. At least superficially, plants of the genera Leuenbergeria, Rhodocactus and Pereskia resemble other trees and shrubs growing around them. They have persistent leaves, and when older, bark-covered stems. 
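The rib counts cited above are consecutive terms of the Fibonacci sequence, generated by the standard recurrence in which each term is the sum of the two preceding terms:

\[ F_n = F_{n-1} + F_{n-2}, \qquad \text{e.g. } 2 + 3 = 5,\quad 3 + 5 = 8,\quad 5 + 8 = 13. \]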
Their areoles identify them as cacti, and in spite of their appearance, they, too, have many adaptations for water conservation. Leuenbergeria is considered close to the ancestral species from which all cacti evolved. In tropical regions, other cacti grow as forest climbers and epiphytes (plants that grow on trees). Their stems are typically flattened, almost leaf-like in appearance, with fewer or even no spines, such as the well-known Christmas cactus or Thanksgiving cactus (in the genus Schlumbergera). Cacti have a variety of uses: many species are used as ornamental plants, others are grown for fodder or forage, and others for food (particularly their fruit). Cochineal is the product of an insect that lives on some cacti. Many succulent plants in both the Old and New World – such as some Euphorbiaceae (euphorbias) – are also spiny stem succulents and because of this are sometimes incorrectly referred to as "cactus". The 1,500 to 1,800 species of cacti mostly fall into one of two groups of "core cacti": opuntias (subfamily Opuntioideae) and "cactoids" (subfamily Cactoideae). Most members of these two groups are easily recognizable as cacti. They have fleshy succulent stems that are major organs of photosynthesis. They have absent, small, or transient leaves. They have flowers with ovaries that lie below the sepals and petals, often deeply sunken into a fleshy receptacle (the part of the stem from which the flower parts grow). All cacti have areoles—highly specialized short shoots with extremely short internodes that produce spines, normal shoots, and flowers. The remaining cacti fall into only two groups: three tree-like genera, Leuenbergeria, Pereskia and Rhodocactus (all formerly placed in Pereskia), and the much smaller Maihuenia. These two groups are rather different from other cacti, which means any description of cacti as a whole must frequently make exceptions for them. Species of the first three genera superficially resemble other tropical forest trees. When mature, they have woody stems that may be covered with bark and long-lasting leaves that provide the main means of photosynthesis. Their flowers may have superior ovaries (i.e., above the points of attachment of the sepals and petals) and areoles that produce further leaves. The two species of Maihuenia have succulent but non-photosynthetic stems and prominent succulent leaves. Cacti show a wide variety of growth habits, which are difficult to divide into clear, simple categories. Cacti can be tree-like (arborescent), meaning they typically have a single more-or-less woody trunk topped by several to many branches. In the genera Leuenbergeria, Pereskia and Rhodocactus, the branches are covered with leaves, so the species of these genera may not be recognized as cacti. In most other cacti, the branches are more typically cactus-like, bare of leaves and bark and covered with spines, as in Pachycereus pringlei or the larger opuntias. Some cacti may become tree-sized but without branches, such as larger specimens of Echinocactus platyacanthus. Cacti may also be described as shrubby, with several stems coming from the ground or from branches very low down, such as in Stenocereus thurberi. Smaller cacti may be described as columnar. They consist of erect, cylinder-shaped stems, which may or may not branch, without a very clear division into trunk and branches. The boundary between columnar forms and tree-like or shrubby forms is difficult to define. 
Smaller and younger specimens of Cephalocereus senilis, for example, are columnar, whereas older and larger specimens may become tree-like. In some cases, the "columns" may be horizontal rather than vertical. Thus, Stenocereus eruca can be described as columnar even though it has stems growing along the ground, rooting at intervals. Cacti whose stems are even smaller may be described as globular (or globose). They consist of shorter, more ball-shaped stems than columnar cacti. Globular cacti may be solitary, such as Ferocactus latispinus, or their stems may form clusters that can create large mounds. All or some stems in a cluster may share a common root. Other cacti have a quite different appearance. In tropical regions, some grow as forest climbers and epiphytes. Their stems are typically flattened and almost leaf-like in appearance, with few or even no spines. Climbing cacti can be very large; a specimen of Hylocereus was reported as 100 meters (330 ft) long from root to the most distant stem. Epiphytic cacti, such as species of Rhipsalis or Schlumbergera, often hang downwards, forming dense clumps where they grow in trees high above the ground. The leafless, spiny stem is the characteristic feature of the majority of cacti (all belonging to the largest subfamily, the Cactoideae). The stem is typically succulent, meaning it is adapted to store water. The surface of the stem may be smooth (as in some species of Opuntia) or covered with protuberances of various kinds, which are usually called tubercles. These vary from small "bumps" to prominent, nipple-like shapes in the genus Mammillaria and outgrowths almost like leaves in Ariocarpus species. The stem may also be ribbed or fluted in shape. The prominence of these ribs depends on how much water the stem is storing: when full (up to 90% of the mass of a cactus may be water), the ribs may be almost invisible on the swollen stem, whereas when the cactus is short of water and the stems shrink, the ribs may be very visible. The stems of most cacti are some shade of green, often bluish or brownish green. Such stems contain chlorophyll and are able to carry out photosynthesis; they also have stomata (small structures that can open and close to allow passage of gases). Cactus stems are often visibly waxy. Areoles are structures unique to cacti. Although variable, they typically appear as woolly or hairy areas on the stems from which spines emerge. Flowers are also produced from areoles. In the genus Leuenbergeria, believed similar to the ancestor of all cacti, the areoles occur in the axils of leaves (i.e. in the angle between the leaf stalk and the stem). In leafless cacti, areoles are often borne on raised areas on the stem where leaf bases would have been. Areoles are highly specialized and very condensed shoots or branches. In a normal shoot, nodes bearing leaves or flowers would be separated by lengths of stem (internodes). In an areole, the nodes are so close together, they form a single structure. The areole may be circular, elongated into an oval shape, or even separated into two parts; the two parts may be visibly connected in some way (e.g. by a groove in the stem) or appear entirely separate (a dimorphic areole). The part nearer the top of the stem then produces flowers, the other part spines. Areoles often have multicellular hairs (trichomes) that give the areole a hairy or woolly appearance, sometimes of a distinct color such as yellow or brown. 
In most cacti, the areoles produce new spines or flowers only for a few years and then become inactive. This results in a relatively fixed number of spines, with flowers being produced only from the ends of stems, which are still growing and forming new areoles. In Pereskia, a genus close to the ancestor of cacti, areoles remain active for much longer; this is also the case in Opuntia and Neoraimondia. The great majority of cacti have no visible leaves; photosynthesis takes place in the stems (which may be flattened and leaflike in some species). Exceptions occur in three (taxonomically, four) groups of cacti. All the species of Leuenbergeria, Pereskia and Rhodocactus are superficially like normal trees or shrubs and have numerous leaves with a midrib and a flattened blade (lamina) on either side. This group is paraphyletic, forming two taxonomic clades. Many cacti in the opuntia group (subfamily Opuntioideae) also have visible leaves, which may be long-lasting (as in Pereskiopsis species) or produced only during the growing season and then lost (as in many species of Opuntia). The small genus Maihuenia also relies on leaves for photosynthesis. The structure of the leaves varies somewhat between these groups. Opuntioids and Maihuenia have leaves that appear to consist only of a midrib. Even those cacti without visible photosynthetic leaves do usually have very small leaves, less than 0.5 mm (0.02 in) long in about half of the species studied and almost always less than 1.5 mm (0.06 in) long. The function of such leaves cannot be photosynthesis; a role in the production of plant hormones, such as auxin, and in defining axillary buds has been suggested. Botanically, "spines" are distinguished from "thorns": spines are modified leaves, and thorns are modified branches. Cacti produce spines, always from areoles as noted above. Spines are present even in those cacti with leaves, such as Pereskia, Pereskiopsis and Maihuenia, so they clearly evolved before complete leaflessness. Some cacti only have spines when young, possibly only when seedlings. This is particularly true of tree-living cacti, such as Rhipsalis and Schlumbergera, but also of some ground-living cacti, such as Ariocarpus. The spines of cacti are often useful in identification, since they vary greatly between species in number, color, size, shape and hardness, as well as in whether all the spines produced by an areole are similar or whether they are of distinct kinds. Most spines are straight or at most slightly curved, and are described as hair-like, bristle-like, needle-like or awl-like, depending on their length and thickness. Some cacti have flattened spines (e.g. Sclerocactus papyracanthus). Other cacti have hooked spines. Sometimes, one or more central spines are hooked, while outer spines are straight (e.g., Mammillaria rekoi). In addition to normal-length spines, members of the subfamily Opuntioideae have relatively short spines, called glochids, that are barbed along their length and easily shed. These enter the skin and are difficult to remove due to being very fine and easily broken, causing long-lasting irritation. Most ground-living cacti have only fine roots, which spread out around the base of the plant for varying distances, close to the surface. Some cacti have taproots; in genera such as Ariocarpus, these are considerably larger and of a greater volume than the body. Taproots may aid in stabilizing the larger columnar cacti. 
Climbing, creeping and epiphytic cacti may have only adventitious roots, produced along the stems where these come into contact with a rooting medium. Like their spines, cactus flowers are variable. Typically, the ovary is surrounded by material derived from stem or receptacle tissue, forming a structure called a pericarpel. Tissue derived from the petals and sepals continues the pericarpel, forming a composite tube—the whole may be called a floral tube, although strictly speaking only the part furthest from the base is floral in origin. The outside of the tubular structure often has areoles that produce wool and spines. Typically, the tube also has small scale-like bracts, which gradually change into sepal-like and then petal-like structures, so the sepals and petals cannot be clearly differentiated (and hence are often called "tepals"). Some cacti produce floral tubes without wool or spines (e.g. Gymnocalycium) or completely devoid of any external structures (e.g. Mammillaria). Unlike the flowers of most other cacti, Pereskia flowers may be borne in clusters. Cactus flowers usually have many stamens, but only a single style, which may branch at the end into more than one stigma. The stamens usually arise from all over the inner surface of the upper part of the floral tube, although in some cacti, the stamens are produced in one or more distinct "series" in more specific areas of the inside of the floral tube. The flower as a whole is usually radially symmetrical (actinomorphic), but may be bilaterally symmetrical (zygomorphic) in some species. Flower colors range from white through yellow and red to magenta. All cacti have some adaptations to promote efficient water use. Most cacti—opuntias and cactoids—specialize in surviving in hot and dry environments (i.e. are xerophytes), but the first ancestors of modern cacti were already adapted to periods of intermittent drought. A small number of cactus species in the tribes Hylocereeae and Rhipsalideae have become adapted to life as climbers or epiphytes, often in tropical forests, where water conservation is less important. The absence of visible leaves is one of the most striking features of most cacti. Pereskia (which is close to the ancestral species from which all cacti evolved) does have long-lasting leaves, which are, however, thickened and succulent in many species. Other species of cactus with long-lasting leaves, such as the opuntioid Pereskiopsis, also have succulent leaves. A key issue in retaining water is the ratio of surface area to volume. Water loss is proportional to surface area, whereas the amount of water present is proportional to volume. Structures with a high surface area-to-volume ratio, such as thin leaves, necessarily lose water at a higher rate than structures with a low area-to-volume ratio, such as thickened stems. Spines, which are modified leaves, are present on even those cacti with true leaves, showing the evolution of spines preceded the loss of leaves. Although spines have a high surface area-to-volume ratio, at maturity they contain little or no water, being composed of fibers made up of dead cells. Spines provide protection from herbivores and camouflage in some species, and assist in water conservation in several ways. They trap air near the surface of the cactus, creating a moister layer that reduces evaporation and transpiration. They can provide some shade, which lowers the temperature of the surface of the cactus, also reducing water loss. 
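The surface area-to-volume argument above can be made concrete with an idealised geometric sketch; the shapes and dimensions here are illustrative assumptions, not measurements of any particular species. For a sphere of radius r and for a thin flat leaf of one-sided area A and thickness t:

\[ \frac{S_{\text{sphere}}}{V_{\text{sphere}}} = \frac{4\pi r^2}{\tfrac{4}{3}\pi r^3} = \frac{3}{r}, \qquad \frac{S_{\text{leaf}}}{V_{\text{leaf}}} \approx \frac{2A}{At} = \frac{2}{t}. \]

A globular stem with r = 10 cm has a ratio of 0.3 cm⁻¹, while a leaf 0.5 mm thick has a ratio of about 40 cm⁻¹ – over a hundred times more evaporative surface per unit of stored water, which is why thin leaves are so costly in dry climates.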
When sufficiently moist air is present, such as during fog or early morning mist, spines can condense moisture, which then drips onto the ground and is absorbed by the roots.

The majority of cacti are stem succulents, i.e., plants in which the stem is the main organ used to store water. Water may form up to 90% of the total mass of a cactus. Stem shapes vary considerably among cacti. The cylindrical shape of columnar cacti and the spherical shape of globular cacti produce a low surface area-to-volume ratio, thus reducing water loss, as well as minimizing the heating effects of sunlight. The ribbed or fluted stems of many cacti allow the stem to shrink during periods of drought and then swell as it fills with water during periods of availability. A mature saguaro (Carnegiea gigantea) is said to be able to absorb as much as 200 U.S. gallons (760 L; 170 imp gal) of water during a rainstorm. The outer layer of the stem usually has a tough cuticle, reinforced with waxy layers, which reduce water loss. These layers are responsible for the grayish or bluish tinge to the stem color of many cacti.

The stems of most cacti have adaptations to allow them to conduct photosynthesis in the absence of leaves. This is discussed further below under Metabolism.

Many cacti have roots that spread out widely, but only penetrate a short distance into the soil. In one case, a young saguaro only 12 cm (4.7 in) tall had a root system with a diameter of 2 m (7 ft), but no more than 10 cm (4 in) deep. Cacti can also form new roots quickly when rain falls after a drought. The concentration of salts in the root cells of cacti is relatively high. All these adaptations enable cacti to absorb water rapidly during periods of brief or light rainfall. Thus, Ferocactus cylindraceus reportedly can take up a significant amount of water within 12 hours from as little as 7 mm (0.3 in) of rainfall, becoming fully hydrated in a few days.

Although in most cacti the stem acts as the main organ for storing water, some cacti have, in addition, large taproots. These may be several times the length of the above-ground body in the case of species such as Copiapoa atacamensis, which grows in one of the driest places in the world, the Atacama Desert in northern Chile.

Photosynthesis requires plants to take in carbon dioxide gas (CO2). As they do so, they lose water through transpiration. Like other types of succulents, cacti reduce this water loss by the way in which they carry out photosynthesis. "Normal" leafy plants use the C3 mechanism: during daylight hours, CO2 is continually drawn out of the air present in spaces inside leaves and converted first into a compound containing three carbon atoms (3-phosphoglycerate) and then into products such as carbohydrates. The access of air to internal spaces within a plant is controlled by stomata, which are able to open and close. The need for a continuous supply of CO2 during photosynthesis means the stomata must be open, so water vapor is continuously being lost. Plants using the C3 mechanism lose as much as 97% of the water taken up through their roots in this way. A further problem is that as temperatures rise, the enzyme that captures CO2 starts to capture more and more oxygen instead, reducing the efficiency of photosynthesis by up to 25%.

Crassulacean acid metabolism (CAM) is a mechanism adopted by cacti and other succulents to avoid the problems of the C3 mechanism. In full CAM, the stomata open only at night, when temperatures and water loss are lowest.
CO2 enters the plant and is captured in the form of organic acids stored inside cells (in vacuoles). The stomata remain closed throughout the day, and photosynthesis uses only this stored CO2. CAM uses water much more efficiently at the price of limiting the amount of carbon fixed from the atmosphere and thus available for growth. CAM-cycling is a less water-efficient system whereby stomata open in the day, just as in plants using the C3 mechanism. At night, or when the plant is short of water, the stomata close and the CAM mechanism is used to store CO2 produced by respiration for use later in photosynthesis. CAM-cycling is present in Pereskia species.

By studying the ratio of ¹³C to ¹²C incorporated into a plant—its isotopic signature—it is possible to deduce how much CO2 is taken up at night and how much in the daytime. Using this approach, most of the Pereskia species investigated exhibit some degree of CAM-cycling, suggesting this ability was present in the ancestor of all cacti. Pereskia leaves are claimed to have only the C3 mechanism, with CAM restricted to stems. More recent studies show that "it is highly unlikely that significant carbon assimilation occurs in the stem"; Pereskia species are described as having "C3 with inducible CAM." Leafless cacti carry out all their photosynthesis in the stem, using full CAM. As of February 2012, it is not clear whether stem-based CAM evolved once only in the core cacti, or separately in the opuntias and cactoids; CAM is known to have evolved convergently many times.

To carry out photosynthesis, cactus stems have undergone many adaptations. Early in their evolutionary history, the ancestors of modern cacti (other than Leuenbergeria species) developed stomata on their stems and began to delay developing bark. However, this alone was not sufficient; cacti with only these adaptations appear to do very little photosynthesis in their stems. Stems needed to develop structures similar to those normally found only in leaves. Immediately below the outer epidermis, a hypodermal layer developed, made up of cells with thickened walls, offering mechanical support. Air spaces were needed between the cells to allow carbon dioxide to diffuse inwards. The center of the stem, the cortex, developed "chlorenchyma" – a plant tissue made up of relatively unspecialized cells containing chloroplasts, arranged into a "spongy layer" and a "palisade layer" where most of the photosynthesis occurs.

Naming and classifying cacti has been both difficult and controversial since the first cacti were discovered for science. The difficulties began with Carl Linnaeus. In 1737, he placed the cacti he knew into two genera, Cactus and Pereskia. However, when he published Species Plantarum in 1753—the starting point for modern botanical nomenclature—he relegated them all to one genus, Cactus. The word "cactus" is derived through Latin from the Ancient Greek κάκτος (kaktos), a name used by Theophrastus for a spiny plant, which may have been the cardoon (Cynara cardunculus). Later botanists, such as Philip Miller in 1754, divided cacti into several genera, which, in 1789, Antoine Laurent de Jussieu placed in his newly created family Cactaceae. By the early 20th century, botanists came to feel Linnaeus's name Cactus had become so confused as to its meaning (was it the genus or the family?) that it should not be used as a genus name. The 1905 Vienna botanical congress rejected the name Cactus and instead declared Mammillaria was the type genus of the family Cactaceae.
It did, however, conserve the name Cactaceae, leading to the unusual situation in which the family Cactaceae no longer contains the genus after which it was named.

The difficulties continued, partly because giving plants scientific names relies on "type specimens". Ultimately, if botanists want to know whether a particular plant is an example of, say, Mammillaria mammillaris, they should be able to compare it with the type specimen to which this name is permanently attached. Type specimens are normally prepared by compression and drying, after which they are stored in herbaria to act as definitive references. However, cacti are very difficult to preserve in this way; they have evolved to resist drying and their bodies do not easily compress. A further difficulty is that many cacti were given names by growers and horticulturalists rather than botanists; as a result, the provisions of the International Code of Nomenclature for algae, fungi, and plants (which governs the names of cacti, as well as other plants) were often ignored. Curt Backeberg, in particular, is said to have named or renamed 1,200 species without one of his names ever being attached to a specimen, which, according to David Hunt, ensured he "left a trail of nomenclatural chaos that will probably vex cactus taxonomists for centuries."

In 1984, it was decided that the Cactaceae Section of the International Organization for Succulent Plant Study should set up a working party, now called the International Cactaceae Systematics Group (ICSG), to produce consensus classifications down to the level of genera. Their system has been used as the basis of subsequent classifications. Detailed treatments published in the 21st century have divided the family into around 125–130 genera and 1,400–1,500 species, which are then arranged into a number of tribes and subfamilies. The ICSG classification of the cactus family recognized four subfamilies, the largest of which was divided into nine tribes. The subfamilies were Pereskioideae, Maihuenioideae, Opuntioideae and Cactoideae. Molecular phylogenetic studies have supported the monophyly of three of these subfamilies (not Pereskioideae), but have not supported all of the tribes or even genera below this level; indeed, a 2011 study found only 39% of the genera in the subfamily Cactoideae sampled in the research were monophyletic. Classification of the cacti currently remains uncertain and is likely to change.

A 2005 study suggested the genus Pereskia as then circumscribed (Pereskia sensu lato) was basal within the Cactaceae, but confirmed earlier suggestions it was not monophyletic, i.e., did not include all the descendants of a common ancestor. The Bayesian consensus cladogram from this study, with subsequent generic changes added, underlies the clades referred to below. A 2011 study using fewer genes but more species also found that Pereskia s.l. was divided into the same clades, but was unable to resolve the members of the "core cacti" clade. These relationships were accepted as "the most robust to date."

Leuenbergeria species (Pereskia s.l. Clade A) always lack two key features of the stem present in most of the remaining "caulocacti": like most non-cacti, their stems begin to form bark early in the plants' life and also lack stomata—structures that control admission of air into a plant and hence control photosynthesis.
By contrast, caulocacti, including species of Rhodocactus and the remaining species of Pereskia s.s., typically delay forming bark and have stomata on their stems, thus giving the stem the potential to become a major organ for photosynthesis. (The two highly specialized species of Maihuenia are something of an exception.)

The first cacti are thought to have been only slightly succulent shrubs or small trees whose leaves carried out photosynthesis. They lived in tropical areas that experienced periodic drought. If Leuenbergeria is a good model of these early cacti, then, although they would have appeared superficially similar to other trees growing nearby, they had already evolved strategies to conserve water (some of which are present in members of other families in the order Caryophyllales). These strategies included being able to respond rapidly to periods of rain, and keeping transpiration low by using water very efficiently during photosynthesis. The latter was achieved by tightly controlling the opening of stomata. Like Pereskia species today, early ancestors may have been able to switch from the normal C3 mechanism, where carbon dioxide is used continuously in photosynthesis, to CAM cycling, in which, when the stomata are closed, carbon dioxide produced by respiration is stored for later use in photosynthesis.

The clade containing Rhodocactus and Pereskia s.s. marks the beginnings of an evolutionary switch to using stems as photosynthetic organs. Stems have stomata and the formation of bark takes place later than in normal trees. The "core cacti" show a steady increase in both stem succulence and photosynthesis accompanied by multiple losses of leaves, more-or-less complete in the Cactoideae. One evolutionary question at present unanswered is whether the switch to full CAM photosynthesis in stems occurred only once in the core cacti, in which case it has been lost in Maihuenia, or separately in Opuntioideae and Cactoideae, in which case it never evolved in Maihuenia.

Understanding evolution within the core cacti clade is difficult as of February 2012, since phylogenetic relationships are still uncertain and not well related to current classifications. Thus, a 2011 study found "an extraordinarily high proportion of genera" were not monophyletic, so were not all descendants of a single common ancestor. For example, of the 36 genera in the subfamily Cactoideae sampled in the research, 22 (61%) were found not monophyletic. Nine tribes are recognized within Cactoideae in the International Cactaceae Systematics Group (ICSG) classification; one, Calymmantheae, comprises a single genus, Calymmanthium. Only two of the remaining eight – Cacteae and Rhipsalideae – were shown to be monophyletic in a 2011 study by Hernández-Hernández et al. For a more detailed discussion of the phylogeny of the cacti, see Classification of the Cactaceae.

No known fossils of cacti exist to throw light on their evolutionary history. However, the geographical distribution of cacti offers some evidence. Except for a relatively recent spread of Rhipsalis baccifera to parts of the Old World, cacti are plants of South America and mainly southern regions of North America. This suggests the family must have evolved after the ancient continent of Gondwana split into South America and Africa, which occurred during the Early Cretaceous, around 145 to 101 million years ago. Precisely when after this split cacti evolved is less clear.
Older sources suggest an early origin around 90–66 million years ago, during the Late Cretaceous. More recent molecular studies suggest a much younger origin, perhaps in the very Late Eocene to early Oligocene, around 35–30 million years ago. Based on the phylogeny of the cacti, the earliest diverging group (Leuenbergeria) may have originated in Central America and northern South America, whereas the caulocacti, those with more-or-less succulent stems, evolved later in the southern part of South America, and then moved northwards. Core cacti, those with strongly succulent stems, are estimated to have evolved around 25 million years ago. A possible stimulus to their evolution may have been uplifting in the central Andes, some 25–20 million years ago, which was associated with increasing and varying aridity. However, the current species diversity of cacti is thought to have arisen only in the last 10–5 million years (from the late Miocene into the Pliocene). Other succulent plants, such as the Aizoaceae in South Africa, the Didiereaceae in Madagascar and the genus Agave in the Americas, appear to have diversified at the same time, which coincided with a global expansion of arid environments.

Cacti inhabit diverse regions, from coastal plains to high mountain areas. With one exception, they are native to the Americas, where their range extends from Patagonia to British Columbia and Alberta in western Canada. A number of centers of diversity exist. For cacti adapted to drought, the three main centers are Mexico and the southwestern United States; the southwestern Andes, where they are found in Peru, Bolivia, Chile and Argentina; and eastern Brazil, away from the Amazon Basin. Tree-living epiphytic and climbing cacti necessarily have different centers of diversity, as they require moister environments. They are mainly found in the coastal mountains and Atlantic forests of southeastern Brazil; in Bolivia, which is the center of diversity for the tribe Rhipsalideae; and in forested regions of Central America, where the climbing Hylocereeae are most diverse.

Rhipsalis baccifera is the exception; it is native to both the Americas and the Old World, where it is found in tropical Africa, Madagascar, and Sri Lanka. One theory is it was spread by being carried as seeds in the digestive tracts of migratory birds; the seeds of Rhipsalis are adapted for bird distribution. Old World populations are polyploid, and regarded as distinct subspecies, supporting the idea that the spread was not recent. The alternative theory is the species initially crossed the Atlantic on European ships trading between South America and Africa, after which birds may have spread it more widely.

Many other species have become naturalized outside the Americas after having been introduced by people, especially in Australia, Hawaii, and the Mediterranean region. In Australia, species of Opuntia, particularly Opuntia stricta, were introduced in the 19th century for use as natural agricultural fences and in an attempt to establish a cochineal industry. They rapidly became a major weed problem, but are now controlled by biological agents, particularly the moth Cactoblastis cactorum. The weed potential of Opuntia species in Australia continues, however, leading to all opuntioid cacti except O. ficus-indica being declared Weeds of National Significance by the Australian Weeds Committee in April 2012. The Arabian Peninsula has a wide and increasing variety of introduced cactus populations.
Some of these are cultivated, some are escapes from cultivation, and some are invasives that are presumed to be ornamental escapes.

Cactus flowers are pollinated by insects, birds and bats. None are known to be wind-pollinated, and self-pollination occurs in only a very few species; for example, the flowers of some species of Frailea do not open (cleistogamy). The need to attract pollinators has led to the evolution of pollination syndromes, which are defined as groups of "floral traits, including rewards, associated with the attraction and utilization of a specific group of animals as pollinators."

Bees are the most common pollinators of cacti; bee-pollination is considered to have been the first to evolve. Day-flying butterflies and nocturnal moths are associated with different pollination syndromes. Butterfly-pollinated flowers are usually brightly colored, opening during the day, whereas moth-pollinated flowers are often white or pale in color, opening only in the evening and at night. As an example, Lophocereus schottii is pollinated by a particular species of moth, Upiga virescens, which also lays its eggs among the developing seeds its caterpillars later consume. The flowers of this cactus are funnel-shaped, white to deep pink, up to 4 cm (1.6 in) long, and open at night.

Hummingbirds are significant pollinators of cacti. Species showing the typical hummingbird-pollination syndrome have flowers with colors towards the red end of the spectrum, anthers and stamens that protrude from the flower, and a shape that is not radially symmetrical, with a lower lip that bends downwards; they produce large amounts of nectar with a relatively low sugar content. Schlumbergera species, such as S. truncata, have flowers that correspond closely to this syndrome. Other hummingbird-pollinated genera include Cleistocactus and Disocactus.

Bat-pollination is relatively uncommon in flowering plants, but about a quarter of the genera of cacti are known to be pollinated by bats—an unusually high proportion, exceeded among eudicots by only two other families, both with very few genera. Columnar cacti growing in semidesert areas are among those most likely to be bat-pollinated; this may be because bats are able to travel considerable distances, so are effective pollinators of plants growing widely separated from one another. The pollination syndrome associated with bats includes a tendency for flowers to open in the evening and at night, when bats are active. Other features include a relatively dull color, often white or green; a radially symmetrical shape, often tubular; a smell described as "musty"; and the production of a large amount of sugar-rich nectar. Carnegiea gigantea is an example of a bat-pollinated cactus, as are many species of Pachycereus and Pilosocereus.

The fruits produced by cacti after the flowers have been fertilized vary considerably; many are fleshy, although some are dry. All contain a large number of seeds. Fleshy, colorful and sweet-tasting fruits are associated with seed dispersal by birds. The seeds pass through their digestive systems and are deposited in their droppings. Fruit that falls to the ground may be eaten by other animals; giant tortoises are reported to distribute Opuntia seeds in the Galápagos Islands. Ants appear to disperse the seeds of a few genera, such as Blossfeldia. Drier spiny fruits may cling to the fur of mammals or be moved around by the wind.
As of March 2012, there is still controversy as to the precise dates when humans first entered those areas of the New World where cacti are commonly found, and hence when they might first have used them. An archaeological site in Chile has been dated to around 15,000 years ago, suggesting cacti would have been encountered before then. Early evidence of the use of cacti includes cave paintings in the Serra da Capivara in Brazil, and seeds found in ancient middens (waste dumps) in Mexico and Peru, with dates estimated at 12,000–9,000 years ago. Hunter-gatherers likely collected cactus fruits in the wild and brought them back to their camps. It is not known when cacti were first cultivated.

Opuntias (prickly pears) were used for a variety of purposes by the Aztecs, whose empire, lasting from the 14th to the 16th century, had a complex system of horticulture. Their capital from the 15th century was Tenochtitlan (now Mexico City); one explanation for the origin of the name is that it includes the Nahuatl word nōchtli, referring to the fruit of an opuntia. The coat of arms of Mexico shows an eagle perched on a cactus while holding a snake, an image at the center of the myth of the founding of Tenochtitlan. The Aztecs symbolically linked the ripe red fruits of an opuntia to human hearts; just as the fruit quenches thirst, so offering human hearts to the sun god ensured the sun would keep moving.

Europeans first encountered cacti when they arrived in the New World late in the 15th century. Their first landfalls were in the West Indies, where relatively few cactus genera are found; one of the most common is the genus Melocactus. Thus, melocacti were possibly among the first cacti seen by Europeans. Melocactus species were present in English collections of cacti before the end of the 16th century (by 1570, according to one source), where they were called Echinomelocactus, later shortened to Melocactus by Joseph Pitton de Tournefort in the early 18th century. Cacti, both purely ornamental species and those with edible fruit, continued to arrive in Europe, so Carl Linnaeus was able to name 22 species by 1753. One of these, his Cactus opuntia (now part of Opuntia ficus-indica), was described as "fructu majore ... nunc in Hispania et Lusitania" (with larger fruit ... now in Spain and Portugal), indicative of its early use in Europe.

The plant now known as Opuntia ficus-indica, or the Indian fig cactus, has long been an important source of food. The original species is thought to have come from central Mexico, although this is now obscure because the indigenous people of southern North America developed and distributed a range of horticultural varieties (cultivars), including forms of the species and hybrids with other opuntias. Both the fruit and pads are eaten, the former often under the Spanish name tuna, the latter under the name nopal. Cultivated forms are often significantly less spiny or even spineless. The nopal industry in Mexico was said to be worth US$150 million in 2007. The Indian fig cactus was probably already present in the Caribbean when the Spanish arrived, and was soon after brought to Europe. It spread rapidly in the Mediterranean area, both naturally and by being introduced—so much so that early botanists assumed it was native to the area. Outside the Americas, the Indian fig cactus is an important commercial crop in Sicily, Algeria and other North African countries. Fruits of other opuntias are also eaten, generally under the same name, tuna.
Flower buds, particularly of Cylindropuntia species, are also consumed. Almost any fleshy cactus fruit is edible. The word pitaya or pitahaya (usually considered to have been taken into Spanish from Haitian creole) can be applied to a range of "scaly fruit", particularly those of columnar cacti. The fruit of the saguaro (Carnegiea gigantea) has long been important to the indigenous peoples of northwestern Mexico and the southwestern United States, including the Sonoran Desert. It can be preserved by boiling to produce syrup and by drying. The syrup can also be fermented to produce an alcoholic drink. Fruits of Stenocereus species have also been important food sources in similar parts of North America; Stenocereus queretaroensis is cultivated for its fruit. In more tropical southern areas, the climber Selenicereus undatus provides pitahaya orejona, now widely grown in Asia under the name dragon fruit. Other cacti providing edible fruit include species of Echinocereus, Ferocactus, Mammillaria, Myrtillocactus, Pachycereus, Peniocereus and Selenicereus. The bodies of cacti other than opuntias are less often eaten, although Anderson reported that Neowerdermannia vorwerkii is prepared and eaten like potatoes in upland Bolivia.

A number of species of cacti have been shown to contain psychoactive agents, chemical compounds that can cause changes in mood, perception and cognition through their effects on the brain. Two species have a long history of use by the indigenous peoples of the Americas: peyote, Lophophora williamsii, in North America, and the San Pedro cactus, Trichocereus macrogonus var. pachanoi, in South America. Both contain mescaline.

L. williamsii is native to northern Mexico and southern Texas. Individual stems are about 2–6 cm (0.8–2.4 in) high with a diameter of 4–11 cm (1.6–4.3 in), and may be found in clumps up to 1 m (3 ft) wide. A large part of the stem is usually below ground. Mescaline is concentrated in the photosynthetic portion of the stem above ground. The center of the stem, which contains the growing point (the apical meristem), is sunken. Experienced collectors of peyote remove a thin slice from the top of the plant, leaving the growing point intact, thus allowing the plant to regenerate. Evidence indicates peyote was in use more than 5,500 years ago; dried peyote buttons presumed to be from a site on the Rio Grande, Texas, were radiocarbon dated to around 3780–3660 BC. Peyote is perceived as a means of accessing the spirit world. Attempts by the Roman Catholic church to suppress its use after the Spanish conquest were largely unsuccessful, and by the middle of the 20th century, peyote was more widely used than ever by indigenous peoples as far north as Canada. It is now used formally by the Native American Church.

Trichocereus macrogonus var. pachanoi (syn. Echinopsis pachanoi) is native to Ecuador and Peru. It is very different in appearance from L. williamsii. It has tall stems, up to 6 m (20 ft) high, with a diameter of 6–15 cm (2.4–5.9 in), which branch from the base, giving the whole plant a shrubby or tree-like appearance. Archaeological evidence of the use of this cactus appears to date back to 2,000–2,300 years ago, with carvings and ceramic objects showing columnar cacti. Although church authorities under the Spanish attempted to suppress its use, this failed, as shown by the Christian element in the common name "San Pedro cactus"—Saint Peter cactus.
Anderson attributes the name to the belief that just as St Peter holds the keys to heaven, the effects of the cactus allow users "to reach heaven while still on earth." It continues to be used for its psychoactive effects, both for spiritual and for healing purposes, often combined with other psychoactive agents, such as Datura ferox and tobacco. Several other species of Echinopsis, including E. peruviana, also contain mescaline.

Cacti were cultivated as ornamental plants from the time they were first brought from the New World. By the early 1800s, enthusiasts in Europe had large collections (often including other succulents alongside cacti). Rare plants were sold for very high prices. Suppliers of cacti and other succulents employed collectors to obtain plants from the wild, in addition to growing their own. In the late 1800s, collectors turned to orchids, and cacti became less popular, although never disappearing from cultivation.

Cacti are often grown in greenhouses, particularly in regions unsuited to the cultivation of cacti outdoors, such as the northern parts of Europe and North America. Here, they may be kept in pots or grown in the ground. Cacti are also grown as houseplants, many being tolerant of the often dry atmosphere. Cacti in pots may be placed outside in the summer to ornament gardens or patios, and then kept under cover during the winter. Less drought-resistant epiphytes, such as epiphyllum hybrids, Schlumbergera (the Thanksgiving or Christmas cactus) and Hatiora (the Easter cactus), are widely cultivated as houseplants.

Cacti may also be planted outdoors in regions with suitable climates. Concern for water conservation in arid regions has led to the promotion of gardens requiring less watering (xeriscaping). For example, in California, the East Bay Municipal Utility District sponsored the publication of a book on plants and landscapes for summer-dry climates. Cacti are one group of drought-resistant plants recommended for dry landscape gardening.

Cacti have many other uses. They are used for human food and as fodder for animals, usually after burning off their spines. In addition to their use as psychoactive agents, some cacti are employed in herbal medicine. The practice of using various species of Opuntia in this way has spread from the Americas, where they naturally occur, to other regions where they grow, such as India.

Cochineal is a red dye produced by a scale insect that lives on species of Opuntia. The dye was long used by the peoples of Central and North America, but demand fell rapidly when European manufacturers began to produce synthetic dyes in the middle of the 19th century. Commercial production has now increased following a rise in demand for natural dyes.

Cacti are used as construction materials. Living cactus fences are employed as barricades around buildings to prevent people breaking in. They are also used to corral animals. The woody parts of cacti, such as Cereus repandus and Echinopsis atacamensis, are used in buildings and in furniture. The frames of wattle and daub houses built by the Seri people of Mexico may use parts of the saguaro (Carnegiea gigantea). The very fine spines and hairs (trichomes) of some cacti were used as a source of fiber for filling pillows and in weaving.

All cacti are included in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which "lists species that are not necessarily now threatened with extinction but that may become so unless trade is closely controlled."
Control is exercised by making international trade in most specimens of cacti illegal unless permits have been issued, at least for exports. Some exceptions are allowed, e.g., for "naturalized or artificially propagated plants". Some cacti, such as all Ariocarpus and Discocactus species, are included in the more restrictive Appendix I, used for the "most endangered" species. These may only be moved between countries for non-commercial purposes, and only then when accompanied by both export and import permits.

The three main threats to cacti in the wild are development, grazing and over-collection. Development takes many forms. The construction of a dam near Zimapan, Mexico, caused the destruction of a large part of the natural habitat of Echinocactus grusonii. Urban development and highways have destroyed cactus habitats in parts of Mexico, New Mexico and Arizona, including the Sonoran Desert. The conversion of land to agriculture has affected populations of Ariocarpus kotschoubeyanus in Mexico, where dry plains were plowed for maize cultivation, and of Copiapoa and Eulychnia in Chile, where valley slopes were planted with vines.

Grazing, in many areas by introduced animals, such as goats, has caused serious damage to populations of cacti (as well as other plants); two examples cited by Anderson are the Galápagos Islands generally and the effect on Browningia candelaris in Peru. Over-collection of cacti for sale has greatly affected some species. For example, the type locality of Pelecyphora strobiliformis near Miquihuana, Mexico, was virtually denuded of plants, which were dug up for sale in Europe. Illegal collecting of cacti from the wild continues to pose a threat.

Conservation of cacti can be in situ or ex situ. In situ conservation involves preserving habitats through enforcement of legal protection and the creation of specially protected areas such as national parks and reserves. Examples of such protected areas in the United States include Big Bend National Park, Texas; Joshua Tree National Park, California; and Saguaro National Park, Arizona. Latin American examples include Parque Nacional del Pinacate, Sonora, Mexico and Pan de Azúcar National Park, Chile. Ex situ conservation aims to preserve plants and seeds outside their natural habitats, often with the intention of later reintroduction. Botanical gardens play an important role in ex situ conservation; for example, seeds of cacti and other succulents are kept in long-term storage at the Desert Botanical Garden, Arizona.

The popularity of cacti means many books are devoted to their cultivation. Cacti naturally occur in a wide range of habitats and are then grown in many countries with different climates, so precisely replicating the conditions in which a species normally grows is usually not practical. A broad distinction can be made between semidesert cacti and epiphytic cacti, which need different conditions and are best grown separately. This section is primarily concerned with the cultivation of semidesert cacti in containers and under protection, such as in a greenhouse or in the home, rather than cultivation outside in the ground in those climates that permit it. For the cultivation of epiphytic cacti, see Cultivation of Schlumbergera (Christmas or Thanksgiving cacti), and Cultivation of epiphyllum hybrids.

The purpose of the growing medium is to provide support and to store water, oxygen and dissolved minerals to feed the plant.
In the case of cacti, there is general agreement that an open medium with a high air content is important. When cacti are grown in containers, recommendations as to how this should be achieved vary greatly; Miles Anderson says that if asked to describe a perfect growing medium, "ten growers would give 20 different answers". Roger Brown suggests a mixture of two parts commercial soilless growing medium, one part hydroponic clay and one part coarse pumice or perlite, with the addition of soil from earthworm castings. The general recommendation of 25–75% organic-based material, the rest being inorganic such as pumice, perlite or grit, is supported by other sources. However, the use of organic material is rejected altogether by others; Hecht says that cacti (other than epiphytes) "want soil that is low in or free of humus", and recommends coarse sand as the basis of a growing medium.

Semi-desert cacti need careful watering. General advice is hard to give, since the frequency of watering required depends on where the cacti are being grown, the nature of the growing medium, and the original habitat of the cacti. Brown says that more cacti are lost through the "untimely application of water than for any other reason" and that even during the dormant winter season, cacti need some water. Other sources say that water can be withheld during winter (November to March in the Northern Hemisphere). Another issue is the hardness of the water; where it is necessary to use hard water, regular re-potting is recommended to avoid the build-up of salts. The general advice given is that during the growing season, cacti should be allowed to dry out between thorough waterings. A water meter can help in determining when the soil is dry.

Although semi-desert cacti may be exposed to high light levels in the wild, they may still need some shading when subjected to the higher light levels and temperatures of a greenhouse in summer. Allowing the temperature to rise above 32 °C (90 °F) is not recommended. The minimum winter temperature required depends very much on the species of cactus involved. For a mixed collection, a minimum temperature of between 5 °C (41 °F) and 10 °C (50 °F) is often suggested, except for cold-sensitive genera such as Melocactus and Discocactus. Some cacti, particularly those from the high Andes, are fully frost-hardy when kept dry (e.g. Rebutia minuscula survives temperatures down to −9 °C (16 °F) in cultivation) and may flower better when exposed to a period of cold.

Cacti can be propagated by seed, cuttings or grafting. Seed sown early in the year produces seedlings that benefit from a longer growing period. Seed is sown in a moist growing medium and then kept in a covered environment, until 7–10 days after germination, to avoid drying out. A very wet growing medium can cause both seeds and seedlings to rot. A temperature range of 18–30 °C (64–86 °F) is suggested for germination; soil temperatures of around 22 °C (72 °F) promote the best root growth. Low light levels are sufficient during germination, but afterwards semi-desert cacti need higher light levels to produce strong growth, although acclimatization is needed to conditions in a greenhouse, such as higher temperatures and strong sunlight.

Reproduction by cuttings makes use of parts of a plant that can grow roots. Some cacti produce "pads" or "joints" that can be detached or cleanly cut off. Other cacti produce offsets that can be removed. Otherwise, stem cuttings can be made, ideally from relatively new growth.
It is recommended that any cut surfaces be allowed to dry for a period of several days to several weeks until a callus forms over the cut surface. Rooting can then take place in an appropriate growing medium at a temperature of around 22 °C (72 °F).

Grafting is used for species difficult to grow well in cultivation or that cannot grow independently, such as some chlorophyll-free forms with white, yellow or red bodies, or some forms that show abnormal growth (e.g., cristate or monstrose forms). For the host plant (the stock), growers choose one that grows strongly in cultivation and is compatible with the plant to be propagated: the scion. The grower makes cuts on both stock and scion and joins the two, binding them together while they unite. Various kinds of graft are used—flat grafts, where both scion and stock are of similar diameters, and cleft grafts, where a smaller scion is inserted into a cleft made in the stock. Commercially, huge numbers of cacti are produced annually. For example, in 2002 in Korea alone, 49 million plants were propagated, with a value of almost US$9 million. Most of them (31 million plants) were propagated by grafting.

A range of pests attack cacti in cultivation. Those that feed on sap include mealybugs, living on both stems and roots; scale insects, generally only found on stems; whiteflies, which are said to be an "infrequent" pest of cacti; red spider mites, which are very small but can occur in large numbers, constructing a fine web around themselves and badly marking the cactus with their sap-sucking, even if they do not kill it; and thrips, which particularly attack flowers. Some of these pests are resistant to many insecticides, although there are biological controls available. Roots of cacti can be eaten by the larvae of sciarid flies and fungus gnats. Slugs and snails also eat cacti.

Fungi, bacteria and viruses attack cacti, the first two particularly when plants are over-watered. Fusarium rot can gain entry through a wound and cause rotting accompanied by red-violet mold. "Helminosporium rot" is caused by Bipolaris cactivora (syn. Helminosporium cactivorum); Phytophthora species also cause similar rotting in cacti. Fungicides may be of limited value in combating these diseases. Several viruses have been found in cacti, including cactus virus X. These appear to cause only limited visible symptoms, such as chlorotic (pale green) spots and mosaic effects (streaks and patches of paler color). However, in an Agave species, cactus virus X has been shown to reduce growth, particularly when the roots are dry. There are no treatments for virus diseases.
[ { "paragraph_id": 0, "text": "A cactus (pl.: cacti, cactuses, or less commonly, cactus) is a member of the plant family Cactaceae (/kæˈkteɪsiaɪ, -siːiː/), a family comprising about 127 genera with some 1,750 known species of the order Caryophyllales. The word cactus derives, through Latin, from the Ancient Greek word κάκτος (káktos), a name originally used by Theophrastus for a spiny plant whose identity is now not certain. Cacti occur in a wide range of shapes and sizes. They are native to the Americas, ranging from Patagonia in the south to parts of western Canada in the north, with the exception of Rhipsalis baccifera, which is also found in Africa and Sri Lanka. Cacti are adapted to live in very dry environments, including the Atacama Desert, one of the driest places on Earth. Because of this, cacti show many adaptations to conserve water. For example, almost all cacti are succulents, meaning they have thickened, fleshy parts adapted to store water. Unlike many other succulents, the stem is the only part of most cacti where this vital process takes place. Most species of cacti have lost true leaves, retaining only spines, which are highly modified leaves. As well as defending against herbivores, spines help prevent water loss by reducing air flow close to the cactus and providing some shade. In the absence of true leaves, cacti's enlarged stems carry out photosynthesis.", "title": "" }, { "paragraph_id": 1, "text": "Cactus spines are produced from specialized structures called areoles, a kind of highly reduced branch. Areoles are an identifying feature of cacti. As well as spines, areoles give rise to flowers, which are usually tubular and multipetaled. Many cacti have short growing seasons and long dormancies and are able to react quickly to any rainfall, helped by an extensive but relatively shallow root system that quickly absorbs any water reaching the ground surface. Cactus stems are often ribbed or fluted with a number of ribs which corresponds to a number in the Fibonacci numbers (2, 3, 5, 8, 13, 21, 34 etc). This allows them to expand and contract easily for quick water absorption after rain, followed by retention over long drought periods. Like other succulent plants, most cacti employ a special mechanism called \"crassulacean acid metabolism\" (CAM) as part of photosynthesis. Transpiration, during which carbon dioxide enters the plant and water escapes, does not take place during the day at the same time as photosynthesis, but instead occurs at night. The plant stores the carbon dioxide it takes in as malic acid, retaining it until daylight returns, and only then using it in photosynthesis. Because transpiration takes place during the cooler, more humid night hours, water loss is significantly reduced.", "title": "" }, { "paragraph_id": 2, "text": "Many smaller cacti have globe-shaped stems, combining the highest possible volume for water storage with the lowest possible surface area for water loss from transpiration. The tallest free-standing cactus is Pachycereus pringlei, with a maximum recorded height of 19.2 m (63 ft), and the smallest is Blossfeldia liliputiana, only about 1 cm (0.4 in) in diameter at maturity. A fully grown saguaro (Carnegiea gigantea) is said to be able to absorb as much as 200 U.S. gallons (760 L; 170 imp gal) of water during a rainstorm. A few species differ significantly in appearance from most of the family. At least superficially, plants of the genera Leuenbergeria, Rhodocactus and Pereskia resemble other trees and shrubs growing around them. 
They have persistent leaves, and when older, bark-covered stems. Their areoles identify them as cacti, and in spite of their appearance, they, too, have many adaptations for water conservation. Leuenbergeria is considered close to the ancestral species from which all cacti evolved. In tropical regions, other cacti grow as forest climbers and epiphytes (plants that grow on trees). Their stems are typically flattened, almost leaf-like in appearance, with fewer or even no spines, such as the well-known Christmas cactus or Thanksgiving cactus (in the genus Schlumbergera).", "title": "" }, { "paragraph_id": 3, "text": "Cacti have a variety of uses: many species are used as ornamental plants, others are grown for fodder or forage, and others for food (particularly their fruit). Cochineal is the product of an insect that lives on some cacti.", "title": "" }, { "paragraph_id": 4, "text": "Many succulent plants in both the Old and New World – such as some Euphorbiaceae (euphorbias) – are also spiny stem succulents and because of this are sometimes incorrectly referred to as \"cactus\".", "title": "" }, { "paragraph_id": 5, "text": "The 1,500 to 1,800 species of cacti mostly fall into one of two groups of \"core cacti\": opuntias (subfamily Opuntioideae) and \"cactoids\" (subfamily Cactoideae). Most members of these two groups are easily recognizable as cacti. They have fleshy succulent stems that are major organs of photosynthesis. They have absent, small, or transient leaves. They have flowers with ovaries that lie below the sepals and petals, often deeply sunken into a fleshy receptacle (the part of the stem from which the flower parts grow). All cacti have areoles—highly specialized short shoots with extremely short internodes that produce spines, normal shoots, and flowers.", "title": "Morphology" }, { "paragraph_id": 6, "text": "The remaining cacti fall into only two groups: three tree-like genera, Leuenbergeria, Pereskia and Rhodocactus (all formerly placed in Pereskia), and the much smaller Maihuenia. These two groups are rather different from other cacti, which means any description of cacti as a whole must frequently make exceptions for them. Species of the first three genera superficially resemble other tropical forest trees. When mature, they have woody stems that may be covered with bark and long-lasting leaves that provide the main means of photosynthesis. Their flowers may have superior ovaries (i.e., above the points of attachment of the sepals and petals) and areoles that produce further leaves. The two species of Maihuenia have succulent but non-photosynthetic stems and prominent succulent leaves.", "title": "Morphology" }, { "paragraph_id": 7, "text": "Cacti show a wide variety of growth habits, which are difficult to divide into clear, simple categories.", "title": "Morphology" }, { "paragraph_id": 8, "text": "Cacti can be tree-like (arborescent), meaning they typically have a single more-or-less woody trunk topped by several to many branches. In the genera Leuenbergeria, Pereskia and Rhodocactus, the branches are covered with leaves, so the species of these genera may not be recognized as cacti. In most other cacti, the branches are more typically cactus-like, bare of leaves and bark and covered with spines, as in Pachycereus pringlei or the larger opuntias. Some cacti may become tree-sized but without branches, such as larger specimens of Echinocactus platyacanthus. 
Cacti may also be described as shrubby, with several stems coming from the ground or from branches very low down, such as in Stenocereus thurberi.", "title": "Morphology" }, { "paragraph_id": 9, "text": "Smaller cacti may be described as columnar. They consist of erect, cylinder-shaped stems, which may or may not branch, without a very clear division into trunk and branches. The boundary between columnar forms and tree-like or shrubby forms is difficult to define. Smaller and younger specimens of Cephalocereus senilis, for example, are columnar, whereas older and larger specimens may become tree-like. In some cases, the \"columns\" may be horizontal rather than vertical. Thus, Stenocereus eruca can be described as columnar even though it has stems growing along the ground, rooting at intervals.", "title": "Morphology" }, { "paragraph_id": 10, "text": "Cacti whose stems are even smaller may be described as globular (or globose). They consist of shorter, more ball-shaped stems than columnar cacti. Globular cacti may be solitary, such as Ferocactus latispinus, or their stems may form clusters that can create large mounds. All or some stems in a cluster may share a common root.", "title": "Morphology" }, { "paragraph_id": 11, "text": "Other cacti have a quite different appearance. In tropical regions, some grow as forest climbers and epiphytes. Their stems are typically flattened and almost leaf-like in appearance, with few or even no spines. Climbing cacti can be very large; a specimen of Hylocereus was reported as 100 meters (330 ft) long from root to the most distant stem. Epiphytic cacti, such as species of Rhipsalis or Schlumbergera, often hang downwards, forming dense clumps where they grow in trees high above the ground.", "title": "Morphology" }, { "paragraph_id": 12, "text": "The leafless, spiny stem is the characteristic feature of the majority of cacti (all belonging to the largest subfamily, the Cactoideae). The stem is typically succulent, meaning it is adapted to store water. The surface of the stem may be smooth (as in some species of Opuntia) or covered with protuberances of various kinds, which are usually called tubercles. These vary from small \"bumps\" to prominent, nipple-like shapes in the genus Mammillaria and outgrowths almost like leaves in Ariocarpus species. The stem may also be ribbed or fluted in shape. The prominence of these ribs depends on how much water the stem is storing: when full (up to 90% of the mass of a cactus may be water), the ribs may be almost invisible on the swollen stem, whereas when the cactus is short of water and the stems shrink, the ribs may be very visible.", "title": "Morphology" }, { "paragraph_id": 13, "text": "The stems of most cacti are some shade of green, often bluish or brownish green. Such stems contain chlorophyll and are able to carry out photosynthesis; they also have stomata (small structures that can open and close to allow passage of gases). Cactus stems are often visibly waxy.", "title": "Morphology" }, { "paragraph_id": 14, "text": "Areoles are structures unique to cacti. Although variable, they typically appear as woolly or hairy areas on the stems from which spines emerge. Flowers are also produced from areoles. In the genus Leuenbergeria, believed similar to the ancestor of all cacti, the areoles occur in the axils of leaves (i.e. in the angle between the leaf stalk and the stem). 
In leafless cacti, areoles are often borne on raised areas on the stem where leaf bases would have been.", "title": "Morphology" }, { "paragraph_id": 15, "text": "Areoles are highly specialized and very condensed shoots or branches. In a normal shoot, nodes bearing leaves or flowers would be separated by lengths of stem (internodes). In an areole, the nodes are so close together, they form a single structure. The areole may be circular, elongated into an oval shape, or even separated into two parts; the two parts may be visibly connected in some way (e.g. by a groove in the stem) or appear entirely separate (a dimorphic areole). The part nearer the top of the stem then produces flowers, the other part spines. Areoles often have multicellular hairs (trichomes) that give the areole a hairy or woolly appearance, sometimes of a distinct color such as yellow or brown.", "title": "Morphology" }, { "paragraph_id": 16, "text": "In most cacti, the areoles produce new spines or flowers only for a few years and then become inactive. This results in a relatively fixed number of spines, with flowers being produced only from the ends of stems, which are still growing and forming new areoles. In Pereskia, a genus close to the ancestor of cacti, areoles remain active for much longer; this is also the case in Opuntia and Neoraimondia.", "title": "Morphology" }, { "paragraph_id": 17, "text": "The great majority of cacti have no visible leaves; photosynthesis takes place in the stems (which may be flattened and leaflike in some species). Exceptions occur in three (taxonomically, four) groups of cacti. All the species of Leuenbergeria, Pereskia and Rhodocactus are superficially like normal trees or shrubs and have numerous leaves with a midrib and a flattened blade (lamina) on either side. This group is paraphyletic, forming two taxonomic clades. Many cacti in the opuntia group (subfamily Opuntioideae) also have visible leaves, which may be long-lasting (as in Pereskiopsis species) or produced only during the growing season and then lost (as in many species of Opuntia). The small genus Maihuenia also relies on leaves for photosynthesis. The structure of the leaves varies somewhat between these groups. Opuntioids and Maihuenia have leaves that appear to consist only of a midrib.", "title": "Morphology" }, { "paragraph_id": 18, "text": "Even those cacti without visible photosynthetic leaves do usually have very small leaves, less than 0.5 mm (0.02 in) long in about half of the species studied and almost always less than 1.5 mm (0.06 in) long. The function of such leaves cannot be photosynthesis; a role in the production of plant hormones, such as auxin, and in defining axillary buds has been suggested.", "title": "Morphology" }, { "paragraph_id": 19, "text": "Botanically, \"spines\" are distinguished from \"thorns\": spines are modified leaves, and thorns are modified branches. Cacti produce spines, always from areoles as noted above. Spines are present even in those cacti with leaves, such as Pereskia, Pereskiopsis and Maihuenia, so they clearly evolved before complete leaflessness. Some cacti only have spines when young, possibly only when seedlings. 
This is particularly true of tree-living cacti, such as Rhipsalis and Schlumbergera, but also of some ground-living cacti, such as Ariocarpus.", "title": "Morphology" }, { "paragraph_id": 20, "text": "The spines of cacti are often useful in identification, since they vary greatly between species in number, color, size, shape and hardness, as well as in whether all the spines produced by an areole are similar or whether they are of distinct kinds. Most spines are straight or at most slightly curved, and are described as hair-like, bristle-like, needle-like or awl-like, depending on their length and thickness. Some cacti have flattened spines (e.g. Sclerocactus papyracanthus). Other cacti have hooked spines. Sometimes, one or more central spines are hooked, while outer spines are straight (e.g., Mammillaria rekoi).", "title": "Morphology" }, { "paragraph_id": 21, "text": "In addition to normal-length spines, members of the subfamily Opuntioideae have relatively short spines, called glochids, that are barbed along their length and easily shed. These enter the skin and are difficult to remove due to being very fine and easily broken, causing long-lasting irritation.", "title": "Morphology" }, { "paragraph_id": 22, "text": "Most ground-living cacti have only fine roots, which spread out around the base of the plant for varying distances, close to the surface. Some cacti have taproots; in genera such as Ariocarpus, these are considerably larger and of a greater volume than the body. Taproots may aid in stabilizing the larger columnar cacti. Climbing, creeping and epiphytic cacti may have only adventitious roots, produced along the stems where these come into contact with a rooting medium.", "title": "Morphology" }, { "paragraph_id": 23, "text": "Like their spines, cactus flowers are variable. Typically, the ovary is surrounded by material derived from stem or receptacle tissue, forming a structure called a pericarpel. Tissue derived from the petals and sepals continues the pericarpel, forming a composite tube—the whole may be called a floral tube, although strictly speaking only the part furthest from the base is floral in origin. The outside of the tubular structure often has areoles that produce wool and spines. Typically, the tube also has small scale-like bracts, which gradually change into sepal-like and then petal-like structures, so the sepals and petals cannot be clearly differentiated (and hence are often called \"tepals\"). Some cacti produce floral tubes without wool or spines (e.g. Gymnocalycium) or completely devoid of any external structures (e.g. Mammillaria). Unlike the flowers of most other cacti, Pereskia flowers may be borne in clusters.", "title": "Morphology" }, { "paragraph_id": 24, "text": "Cactus flowers usually have many stamens, but only a single style, which may branch at the end into more than one stigma. The stamens usually arise from all over the inner surface of the upper part of the floral tube, although in some cacti, the stamens are produced in one or more distinct \"series\" in more specific areas of the inside of the floral tube.", "title": "Morphology" }, { "paragraph_id": 25, "text": "The flower as a whole is usually radially symmetrical (actinomorphic), but may be bilaterally symmetrical (zygomorphic) in some species. 
Flower colors range from white through yellow and red to magenta.", "title": "Morphology" }, { "paragraph_id": 26, "text": "", "title": "Adaptations for water conservation" }, { "paragraph_id": 27, "text": "All cacti have some adaptations to promote efficient water use. Most cacti—opuntias and cactoids—specialize in surviving in hot and dry environments (i.e. are xerophytes), but the first ancestors of modern cacti were already adapted to periods of intermittent drought. A small number of cactus species in the tribes Hylocereeae and Rhipsalideae have become adapted to life as climbers or epiphytes, often in tropical forests, where water conservation is less important.", "title": "Adaptations for water conservation" }, { "paragraph_id": 28, "text": "The absence of visible leaves is one of the most striking features of most cacti. Pereskia (which is close to the ancestral species from which all cacti evolved) does have long-lasting leaves, which are, however, thickened and succulent in many species. Other species of cactus with long-lasting leaves, such as the opuntioid Pereskiopsis, also have succulent leaves. A key issue in retaining water is the ratio of surface area to volume. Water loss is proportional to surface area, whereas the amount of water present is proportional to volume. Structures with a high surface area-to-volume ratio, such as thin leaves, necessarily lose water at a higher rate than structures with a low area-to-volume ratio, such as thickened stems.", "title": "Adaptations for water conservation" }, { "paragraph_id": 29, "text": "Spines, which are modified leaves, are present on even those cacti with true leaves, showing the evolution of spines preceded the loss of leaves. Although spines have a high surface area-to-volume ratio, at maturity they contain little or no water, being composed of fibers made up of dead cells. Spines provide protection from herbivores and camouflage in some species, and assist in water conservation in several ways. They trap air near the surface of the cactus, creating a moister layer that reduces evaporation and transpiration. They can provide some shade, which lowers the temperature of the surface of the cactus, also reducing water loss. When sufficiently moist air is present, such as during fog or early morning mist, spines can condense moisture, which then drips onto the ground and is absorbed by the roots.", "title": "Adaptations for water conservation" }, { "paragraph_id": 30, "text": "The majority of cacti are stem succulents, i.e., plants in which the stem is the main organ used to store water. Water may form up to 90% of the total mass of a cactus. Stem shapes vary considerably among cacti. The cylindrical shape of columnar cacti and the spherical shape of globular cacti produce a low surface area-to-volume ratio, thus reducing water loss, as well as minimizing the heating effects of sunlight. The ribbed or fluted stems of many cacti allow the stem to shrink during periods of drought and then swell as it fills with water during periods of availability. A mature saguaro (Carnegiea gigantea) is said to be able to absorb as much as 200 U.S. gallons (760 L; 170 imp gal) of water during a rainstorm. The outer layer of the stem usually has a tough cuticle, reinforced with waxy layers, which reduce water loss. 
These layers are responsible for the grayish or bluish tinge to the stem color of many cacti.", "title": "Adaptations for water conservation" }, { "paragraph_id": 31, "text": "The stems of most cacti have adaptations to allow them to conduct photosynthesis in the absence of leaves. This is discussed further below under Metabolism.", "title": "Adaptations for water conservation" }, { "paragraph_id": 32, "text": "Many cacti have roots that spread out widely, but only penetrate a short distance into the soil. In one case, a young saguaro only 12 cm (4.7 in) tall had a root system with a diameter of 2 m (7 ft), but no more than 10 cm (4 in) deep. Cacti can also form new roots quickly when rain falls after a drought. The concentration of salts in the root cells of cacti is relatively high. All these adaptations enable cacti to absorb water rapidly during periods of brief or light rainfall. Thus, Ferocactus cylindraceus reportedly can take up a significant amount of water within 12 hours from as little as 7 mm (0.3 in) of rainfall, becoming fully hydrated in a few days.", "title": "Adaptations for water conservation" }, { "paragraph_id": 33, "text": "Although in most cacti, the stem acts as the main organ for storing water, some cacti have in addition large taproots. These may be several times the length of the above-ground body in the case of species such as Copiapoa atacamensis, which grows in one of the driest places in the world, the Atacama Desert in northern Chile.", "title": "Adaptations for water conservation" }, { "paragraph_id": 34, "text": "Photosynthesis requires plants to take in carbon dioxide gas (CO2). As they do so, they lose water through transpiration. Like other types of succulents, cacti reduce this water loss by the way in which they carry out photosynthesis. \"Normal\" leafy plants use the C3 mechanism: during daylight hours, CO2 is continually drawn out of the air present in spaces inside leaves and converted first into a compound containing three carbon atoms (3-phosphoglycerate) and then into products such as carbohydrates. The access of air to internal spaces within a plant is controlled by stomata, which are able to open and close. The need for a continuous supply of CO2 during photosynthesis means the stomata must be open, so water vapor is continuously being lost. Plants using the C3 mechanism lose as much as 97% of the water taken up through their roots in this way. A further problem is that as temperatures rise, the enzyme that captures CO2 starts to capture more and more oxygen instead, reducing the efficiency of photosynthesis by up to 25%.", "title": "Adaptations for water conservation" }, { "paragraph_id": 35, "text": "Crassulacean acid metabolism (CAM) is a mechanism adopted by cacti and other succulents to avoid the problems of the C3 mechanism. In full CAM, the stomata open only at night, when temperatures and water loss are lowest. CO2 enters the plant and is captured in the form of organic acids stored inside cells (in vacuoles). The stomata remain closed throughout the day, and photosynthesis uses only this stored CO2. CAM uses water much more efficiently at the price of limiting the amount of carbon fixed from the atmosphere and thus available for growth. CAM-cycling is a less water-efficient system whereby stomata open in the day, just as in plants using the C3 mechanism. At night, or when the plant is short of water, the stomata close and the CAM mechanism is used to store CO2 produced by respiration for use later in photosynthesis. 
CAM-cycling is present in Pereskia species.", "title": "Adaptations for water conservation" }, { "paragraph_id": 36, "text": "By studying the ratio of ¹³C to ¹²C incorporated into a plant—its isotopic signature—it is possible to deduce how much CO2 is taken up at night and how much in the daytime (a computational sketch of this inference appears after the paragraph list below). Using this approach, most of the Pereskia species investigated exhibit some degree of CAM-cycling, suggesting this ability was present in the ancestor of all cacti. Pereskia leaves are claimed to only have the C3 mechanism with CAM restricted to stems. More recent studies show that \"it is highly unlikely that significant carbon assimilation occurs in the stem\"; Pereskia species are described as having \"C3 with inducible CAM.\" Leafless cacti carry out all their photosynthesis in the stem, using full CAM. As of February 2012, it is not clear whether stem-based CAM evolved once only in the core cacti, or separately in the opuntias and cactoids; CAM is known to have evolved convergently many times.", "title": "Adaptations for water conservation" }, { "paragraph_id": 37, "text": "To carry out photosynthesis, cactus stems have undergone many adaptations. Early in their evolutionary history, the ancestors of modern cacti (other than Leuenbergeria species) developed stomata on their stems and began to delay developing bark. However, this alone was not sufficient; cacti with only these adaptations appear to do very little photosynthesis in their stems. Stems needed to develop structures similar to those normally found only in leaves. Immediately below the outer epidermis, a hypodermal layer developed made up of cells with thickened walls, offering mechanical support. Air spaces were needed between the cells to allow carbon dioxide to diffuse inwards. The center of the stem, the cortex, developed \"chlorenchyma\" – a plant tissue made up of relatively unspecialized cells containing chloroplasts, arranged into a \"spongy layer\" and a \"palisade layer\" where most of the photosynthesis occurs.", "title": "Adaptations for water conservation" }, { "paragraph_id": 38, "text": "Naming and classifying cacti has been both difficult and controversial since the first cacti were discovered for science. The difficulties began with Carl Linnaeus. In 1737, he placed the cacti he knew into two genera, Cactus and Pereskia. However, when he published Species Plantarum in 1753—the starting point for modern botanical nomenclature—he relegated them all to one genus, Cactus. The word \"cactus\" is derived through Latin from the Ancient Greek κάκτος (kaktos), a name used by Theophrastus for a spiny plant, which may have been the cardoon (Cynara cardunculus).", "title": "Taxonomy and classification" }, { "paragraph_id": 39, "text": "Later botanists, such as Philip Miller in 1754, divided cacti into several genera, which, in 1789, Antoine Laurent de Jussieu placed in his newly created family Cactaceae. By the early 20th century, botanists came to feel Linnaeus's name Cactus had become so confused as to its meaning (was it the genus or the family?) that it should not be used as a genus name. The 1905 Vienna botanical congress rejected the name Cactus and instead declared Mammillaria was the type genus of the family Cactaceae. 
It did, however, conserve the name Cactaceae, leading to the unusual situation in which the family Cactaceae no longer contains the genus after which it was named.", "title": "Taxonomy and classification" }, { "paragraph_id": 40, "text": "The difficulties continued, partly because giving plants scientific names relies on \"type specimens\". Ultimately, if botanists want to know whether a particular plant is an example of, say, Mammillaria mammillaris, they should be able to compare it with the type specimen to which this name is permanently attached. Type specimens are normally prepared by compression and drying, after which they are stored in herbaria to act as definitive references. However, cacti are very difficult to preserve in this way; they have evolved to resist drying and their bodies do not easily compress. A further difficulty is that many cacti were given names by growers and horticulturalists rather than botanists; as a result, the provisions of the International Code of Nomenclature for algae, fungi, and plants (which governs the names of cacti, as well as other plants) were often ignored. Curt Backeberg, in particular, is said to have named or renamed 1,200 species without one of his names ever being attached to a specimen, which, according to David Hunt, ensured he \"left a trail of nomenclatural chaos that will probably vex cactus taxonomists for centuries.\"", "title": "Taxonomy and classification" }, { "paragraph_id": 41, "text": "In 1984, it was decided that the Cactaceae Section of the International Organization for Succulent Plant Study should set up a working party, now called the International Cactaceae Systematics Group (ICSG), to produce consensus classifications down to the level of genera. Their system has been used as the basis of subsequent classifications. Detailed treatments published in the 21st century have divided the family into around 125–130 genera and 1,400–1,500 species, which are then arranged into a number of tribes and subfamilies. The ICSG classification of the cactus family recognized four subfamilies, the largest of which was divided into nine tribes. The subfamilies were:", "title": "Taxonomy and classification" }, { "paragraph_id": 42, "text": "Molecular phylogenetic studies have supported the monophyly of three of these subfamilies (not Pereskioideae), but have not supported all of the tribes or even genera below this level; indeed, a 2011 study found only 39% of the genera in the subfamily Cactoideae sampled in the research were monophyletic. Classification of the cacti currently remains uncertain and is likely to change.", "title": "Taxonomy and classification" }, { "paragraph_id": 43, "text": "A 2005 study suggested the genus Pereskia as then circumscribed (Pereskia sensu lato) was basal within the Cactaceae, but confirmed earlier suggestions it was not monophyletic, i.e., did not include all the descendants of a common ancestor. The Bayesian consensus cladogram from this study is shown below with subsequent generic changes added.", "title": "Phylogeny and evolution" }, { "paragraph_id": 44, "text": "A 2011 study using fewer genes but more species also found that Pereskia s.l. was divided into the same clades, but was unable to resolve the members of the \"core cacti\" clade. It was accepted that the relationships shown above are \"the most robust to date.\"", "title": "Phylogeny and evolution" }, { "paragraph_id": 45, "text": "Leuenbergeria species (Pereskia s.l. 
Clade A) always lack two key features of the stem present in most of the remaining \"caulocacti\": like most non-cacti, their stems begin to form bark early in the plants' life and also lack stomata—structures that control admission of air into a plant and hence control photosynthesis. By contrast, caulocacti, including species of Rhodocactus and the remaining species of Pereskia s.s., typically delay forming bark and have stomata on their stems, thus giving the stem the potential to become a major organ for photosynthesis. (The two highly specialized species of Maihuenia are something of an exception.)", "title": "Phylogeny and evolution" }, { "paragraph_id": 46, "text": "The first cacti are thought to have been only slightly succulent shrubs or small trees whose leaves carried out photosynthesis. They lived in tropical areas that experienced periodic drought. If Leuenbergeria is a good model of these early cacti, then, although they would have appeared superficially similar to other trees growing nearby, they had already evolved strategies to conserve water (some of which are present in members of other families in the order Caryophyllales). These strategies included being able to respond rapidly to periods of rain, and keeping transpiration low by using water very efficiently during photosynthesis. The latter was achieved by tightly controlling the opening of stomata. Like Pereskia species today, early ancestors may have been able to switch from the normal C3 mechanism, where carbon dioxide is used continuously in photosynthesis, to CAM cycling, in which when the stomata are closed, carbon dioxide produced by respiration is stored for later use in photosynthesis.", "title": "Phylogeny and evolution" }, { "paragraph_id": 47, "text": "The clade containing Rhodocactus and Pereskia s.s. marks the beginnings of an evolutionary switch to using stems as photosynthetic organs. Stems have stomata and the formation of bark takes place later than in normal trees. The \"core cacti\" show a steady increase in both stem succulence and photosynthesis accompanied by multiple losses of leaves, more-or-less complete in the Cactoideae. One evolutionary question at present unanswered is whether the switch to full CAM photosynthesis in stems occurred only once in the core cacti, in which case it has been lost in Maihuenia, or separately in Opuntioideae and Cactoideae, in which case it never evolved in Maihuenia.", "title": "Phylogeny and evolution" }, { "paragraph_id": 48, "text": "Understanding evolution within the core cacti clade is difficult as of February 2012, since phylogenetic relationships are still uncertain and not well related to current classifications. Thus, a 2011 study found \"an extraordinarily high proportion of genera\" were not monophyletic, so were not all descendants of a single common ancestor. For example, of the 36 genera in the subfamily Cactoideae sampled in the research, 22 (61%) were found not monophyletic. Nine tribes are recognized within Cactoideae in the International Cactaceae Systematics Group (ICSG) classification; one, Calymmantheae, comprises a single genus, Calymmanthium. Only two of the remaining eight – Cacteae and Rhipsalideae – were shown to be monophyletic in a 2011 study by Hernández-Hernández et al. For a more detailed discussion of the phylogeny of the cacti, see Classification of the Cactaceae.", "title": "Phylogeny and evolution" }, { "paragraph_id": 49, "text": "No known fossils of cacti exist to throw light on their evolutionary history. 
However, the geographical distribution of cacti offers some evidence. Except for a relatively recent spread of Rhipsalis baccifera to parts of the Old World, cacti are plants of South America and mainly southern regions of North America. This suggests the family must have evolved after the ancient continent of Gondwana split into South America and Africa, which occurred during the Early Cretaceous, around 145 to 101 million years ago. Precisely when after this split cacti evolved is less clear. Older sources suggest an early origin around 90–66 million years ago, during the Late Cretaceous. More recent molecular studies suggest a much younger origin, perhaps in the very Late Eocene to early Oligocene, around 35–30 million years ago. Based on the phylogeny of the cacti, the earliest diverging group (Leuenbergeria) may have originated in Central America and northern South America, whereas the caulocacti, those with more-or-less succulent stems, evolved later in the southern part of South America, and then moved northwards. Core cacti, those with strongly succulent stems, are estimated to have evolved around 25 million years ago. A possible stimulus to their evolution may have been uplifting in the central Andes, some 25–20 million years ago, which was associated with increasing and varying aridity. However, the current species diversity of cacti is thought to have arisen only in the last 10–5 million years (from the late Miocene into the Pliocene). Other succulent plants, such as the Aizoaceae in South Africa, the Didiereaceae in Madagascar and the genus Agave in the Americas, appear to have diversified at the same time, which coincided with a global expansion of arid environments.", "title": "Phylogeny and evolution" }, { "paragraph_id": 50, "text": "Cacti inhabit diverse regions, from coastal plains to high mountain areas. With one exception, they are native to the Americas, where their range extends from Patagonia to British Columbia and Alberta in western Canada. A number of centers of diversity exist. For cacti adapted to drought, the three main centers are Mexico and the southwestern United States; the southwestern Andes, where they are found in Peru, Bolivia, Chile and Argentina; and eastern Brazil, away from the Amazon Basin. Tree-living epiphytic and climbing cacti necessarily have different centers of diversity, as they require moister environments. They are mainly found in the coastal mountains and Atlantic forests of southeastern Brazil; in Bolivia, which is the center of diversity for the subfamily Rhipsalideae; and in forested regions of Central America, where the climbing Hylocereeae are most diverse.", "title": "Distribution" }, { "paragraph_id": 51, "text": "Rhipsalis baccifera is the exception; it is native to both the Americas and the Old World, where it is found in tropical Africa, Madagascar, and Sri Lanka. One theory is it was spread by being carried as seeds in the digestive tracts of migratory birds; the seeds of Rhipsalis are adapted for bird distribution. Old World populations are polyploid, and regarded as distinct subspecies, supporting the idea that the spread was not recent. 
The alternative theory is the species initially crossed the Atlantic on European ships trading between South America and Africa, after which birds may have spread it more widely.", "title": "Distribution" }, { "paragraph_id": 52, "text": "Many other species have become naturalized outside the Americas after having been introduced by people, especially in Australia, Hawaii, and the Mediterranean region. In Australia, species of Opuntia, particularly Opuntia stricta, were introduced in the 19th century for use as natural agricultural fences and in an attempt to establish a cochineal industry. They rapidly became a major weed problem, but are now controlled by biological agents, particularly the moth Cactoblastis cactorum. The weed potential of Opuntia species in Australia continues, however, leading to all opuntioid cacti except O. ficus-indica being declared Weeds of National Significance by the Australian Weeds Committee in April 2012.", "title": "Distribution" }, { "paragraph_id": 53, "text": "The Arabian Peninsula has a wide variety of ever-increasing introduced cactus populations. Some of these are cultivated, some are escapes from cultivation, and some are invasives that are presumed to be ornamental escapes.", "title": "Distribution" }, { "paragraph_id": 54, "text": "Cactus flowers are pollinated by insects, birds and bats. None are known to be wind-pollinated, and self-pollination occurs in only a very few species; for example, the flowers of some species of Frailea do not open (cleistogamy). The need to attract pollinators has led to the evolution of pollination syndromes, which are defined as groups of \"floral traits, including rewards, associated with the attraction and utilization of a specific group of animals as pollinators.\"", "title": "Reproductive ecology" }, { "paragraph_id": 55, "text": "Bees are the most common pollinators of cacti; bee-pollination is considered to have been the first to evolve. Day-flying butterflies and nocturnal moths are associated with different pollination syndromes. Butterfly-pollinated flowers are usually brightly colored, opening during the day, whereas moth-pollinated flowers are often white or pale in color, opening only in the evening and at night. As an example, Lophocereus schottii is pollinated by a particular species of moth, Upiga virescens, which also lays its eggs among the developing seeds its caterpillars later consume. The flowers of this cactus are funnel-shaped, white to deep pink, up to 4 cm (1.6 in) long, and open at night.", "title": "Reproductive ecology" }, { "paragraph_id": 56, "text": "Hummingbirds are significant pollinators of cacti. Species showing the typical hummingbird-pollination syndrome have flowers with colors towards the red end of the spectrum, anthers and stamens that protrude from the flower, and a shape that is not radially symmetrical, with a lower lip that bends downwards; they produce large amounts of nectar with a relatively low sugar content. Schlumbergera species, such as S. truncata, have flowers that correspond closely to this syndrome. Other hummingbird-pollinated genera include Cleistocactus and Disocactus.", "title": "Reproductive ecology" }, { "paragraph_id": 57, "text": "Bat-pollination is relatively uncommon in flowering plants, but about a quarter of the genera of cacti are known to be pollinated by bats—an unusually high proportion, exceeded among eudicots by only two other families, both with very few genera. 
Columnar cacti growing in semidesert areas are among those most likely to be bat-pollinated; this may be because bats are able to travel considerable distances, so are effective pollinators of plants growing widely separated from one another. The pollination syndrome associated with bats includes a tendency for flowers to open in the evening and at night, when bats are active. Other features include a relatively dull color, often white or green; a radially symmetrical shape, often tubular; a smell described as \"musty\"; and the production of a large amount of sugar-rich nectar. Carnegiea gigantea is an example of a bat-pollinated cactus, as are many species of Pachycereus and Pilosocereus.", "title": "Reproductive ecology" }, { "paragraph_id": 58, "text": "The fruits produced by cacti after the flowers have been fertilized vary considerably; many are fleshy, although some are dry. All contain a large number of seeds. Fleshy, colorful and sweet-tasting fruits are associated with seed dispersal by birds. The seeds pass through their digestive systems and are deposited in their droppings. Fruit that falls to the ground may be eaten by other animals; giant tortoises are reported to distribute Opuntia seeds in the Galápagos Islands. Ants appear to disperse the seeds of a few genera, such as Blossfeldia. Drier spiny fruits may cling to the fur of mammals or be moved around by the wind.", "title": "Reproductive ecology" }, { "paragraph_id": 59, "text": "As of March 2012, there is still controversy as to the precise dates when humans first entered those areas of the New World where cacti are commonly found, and hence when they might first have used them. An archaeological site in Chile has been dated to around 15,000 years ago, suggesting cacti would have been encountered before then. Early evidence of the use of cacti includes cave paintings in the Serra da Capivara in Brazil, and seeds found in ancient middens (waste dumps) in Mexico and Peru, with dates estimated at 12,000–9,000 years ago. Hunter-gatherers likely collected cactus fruits in the wild and brought them back to their camps.", "title": "Uses" }, { "paragraph_id": 60, "text": "It is not known when cacti were first cultivated. Opuntias (prickly pears) were used for a variety of purposes by the Aztecs, whose empire, lasting from the 14th to the 16th century, had a complex system of horticulture. Their capital from the 15th century was Tenochtitlan (now Mexico City); one explanation for the origin of the name is that it includes the Nahuatl word nōchtli, referring to the fruit of an opuntia. The coat of arms of Mexico shows an eagle perched on a cactus while holding a snake, an image at the center of the myth of the founding of Tenochtitlan. The Aztecs symbolically linked the ripe red fruits of an opuntia to human hearts; just as the fruit quenches thirst, so offering human hearts to the sun god ensured the sun would keep moving.", "title": "Uses" }, { "paragraph_id": 61, "text": "Europeans first encountered cacti when they arrived in the New World late in the 15th century. Their first landfalls were in the West Indies, where relatively few cactus genera are found; one of the most common is the genus Melocactus. Thus, melocacti were possibly among the first cacti seen by Europeans. 
Melocactus species were present in English collections of cacti before the end of the 16th century (by 1570, according to one source), where they were called Echinomelocactus, later shortened to Melocactus by Joseph Pitton de Tournefort in the early 18th century. Cacti, both purely ornamental species and those with edible fruit, continued to arrive in Europe, so Carl Linnaeus was able to name 22 species by 1753. One of these, his Cactus opuntia (now part of Opuntia ficus-indica), was described as \"fructu majore ... nunc in Hispania et Lusitania\" (with larger fruit ... now in Spain and Portugal), indicative of its early use in Europe.", "title": "Uses" }, { "paragraph_id": 62, "text": "The plant now known as Opuntia ficus-indica, or the Indian fig cactus, has long been an important source of food. The original species is thought to have come from central Mexico, although this is now obscure because the indigenous people of southern North America developed and distributed a range of horticultural varieties (cultivars), including forms of the species and hybrids with other opuntias. Both the fruit and pads are eaten, the former often under the Spanish name tuna, the latter under the name nopal. Cultivated forms are often significantly less spiny or even spineless. The nopal industry in Mexico was said to be worth US$150 million in 2007. The Indian fig cactus was probably already present in the Caribbean when the Spanish arrived, and was soon after brought to Europe. It spread rapidly in the Mediterranean area, both naturally and by being introduced—so much so that early botanists assumed it was native to the area. Outside the Americas, the Indian fig cactus is an important commercial crop in Sicily, Algeria and other North African countries. Fruits of other opuntias are also eaten, generally under the same name, tuna. Flower buds, particularly of Cylindropuntia species, are also consumed.", "title": "Uses" }, { "paragraph_id": 63, "text": "Almost any fleshy cactus fruit is edible. The word pitaya or pitahaya (usually considered to have been taken into Spanish from Haitian creole) can be applied to a range of \"scaly fruit\", particularly those of columnar cacti. The fruit of the saguaro (Carnegiea gigantea) has long been important to the indigenous peoples of northwestern Mexico and the southwestern United States, including the Sonoran Desert. It can be preserved by boiling to produce syrup and by drying. The syrup can also be fermented to produce an alcoholic drink. Fruits of Stenocereus species have also been important food sources in similar parts of North America; Stenocereus queretaroensis is cultivated for its fruit. In more tropical southern areas, the climber Selenicereus undatus provides pitahaya orejona, now widely grown in Asia under the name dragon fruit. Other cacti providing edible fruit include species of Echinocereus, Ferocactus, Mammillaria, Myrtillocactus, Pachycereus, Peniocereus and Selenicereus. The bodies of cacti other than opuntias are less often eaten, although Anderson reported that Neowerdermannia vorwerkii is prepared and eaten like potatoes in upland Bolivia.", "title": "Uses" }, { "paragraph_id": 64, "text": "A number of species of cacti have been shown to contain psychoactive agents, chemical compounds that can cause changes in mood, perception and cognition through their effects on the brain. 
Two species have a long history of use by the indigenous peoples of the Americas: peyote, Lophophora williamsii, in North America, and the San Pedro cactus, Trichocereus macrogonus var. pachanoi, in South America. Both contain mescaline.", "title": "Uses" }, { "paragraph_id": 65, "text": "L. williamsii is native to northern Mexico and southern Texas. Individual stems are about 2–6 cm (0.8–2.4 in) high with a diameter of 4–11 cm (1.6–4.3 in), and may be found in clumps up to 1 m (3 ft) wide. A large part of the stem is usually below ground. Mescaline is concentrated in the photosynthetic portion of the stem above ground. The center of the stem, which contains the growing point (the apical meristem), is sunken. Experienced collectors of peyote remove a thin slice from the top of the plant, leaving the growing point intact, thus allowing the plant to regenerate. Evidence indicates peyote was in use more than 5,500 years ago; dried peyote buttons presumed to be from a site on the Rio Grande, Texas, were radiocarbon dated to around 3780–3660 BC. Peyote is perceived as a means of accessing the spirit world. Attempts by the Roman Catholic church to suppress its use after the Spanish conquest were largely unsuccessful, and by the middle of the 20th century, peyote was more widely used than ever by indigenous peoples as far north as Canada. It is now used formally by the Native American Church.", "title": "Uses" }, { "paragraph_id": 66, "text": "Trichocereus macrogonus var. pachanoi (syn. Echinopsis pachanoi) is native to Ecuador and Peru. It is very different in appearance from L. williamsii. It has tall stems, up to 6 m (20 ft) high, with a diameter of 6–15 cm (2.4–5.9 in), which branch from the base, giving the whole plant a shrubby or tree-like appearance. Archaeological evidence of the use of this cactus appears to date back to 2,000–2,300 years ago, with carvings and ceramic objects showing columnar cacti. Although church authorities under the Spanish attempted to suppress its use, this failed, as shown by the Christian element in the common name \"San Pedro cactus\"—Saint Peter cactus. Anderson attributes the name to the belief that just as St Peter holds the keys to heaven, the effects of the cactus allow users \"to reach heaven while still on earth.\" It continues to be used for its psychoactive effects, both for spiritual and for healing purposes, often combined with other psychoactive agents, such as Datura ferox and tobacco. Several other species of Echinopsis, including E. peruviana, also contain mescaline.", "title": "Uses" }, { "paragraph_id": 67, "text": "Cacti were cultivated as ornamental plants from the time they were first brought from the New World. By the early 1800s, enthusiasts in Europe had large collections (often including other succulents alongside cacti). Rare plants were sold for very high prices. Suppliers of cacti and other succulents employed collectors to obtain plants from the wild, in addition to growing their own. In the late 1800s, collectors turned to orchids, and cacti became less popular, although never disappearing from cultivation.", "title": "Uses" }, { "paragraph_id": 68, "text": "Cacti are often grown in greenhouses, particularly in regions unsuited to the cultivation of cacti outdoors, such as the northern parts of Europe and North America. Here, they may be kept in pots or grown in the ground. Cacti are also grown as houseplants, many being tolerant of the often dry atmosphere. 
Cacti in pots may be placed outside in the summer to ornament gardens or patios, and then kept under cover during the winter. Less drought-resistant epiphytes, such as epiphyllum hybrids, Schlumbergera (the Thanksgiving or Christmas cactus) and Hatiora (the Easter cactus), are widely cultivated as houseplants.", "title": "Uses" }, { "paragraph_id": 69, "text": "Cacti may also be planted outdoors in regions with suitable climates. Concern for water conservation in arid regions has led to the promotion of gardens requiring less watering (xeriscaping). For example, in California, the East Bay Municipal Utility District sponsored the publication of a book on plants and landscapes for summer-dry climates. Cacti are one group of drought-resistant plants recommended for dry landscape gardening.", "title": "Uses" }, { "paragraph_id": 70, "text": "Cacti have many other uses. They are used for human food and as fodder for animals, usually after burning off their spines. In addition to their use as psychoactive agents, some cacti are employed in herbal medicine. The practice of using various species of Opuntia in this way has spread from the Americas, where they naturally occur, to other regions where they grow, such as India.", "title": "Uses" }, { "paragraph_id": 71, "text": "Cochineal is a red dye produced by a scale insect that lives on species of Opuntia. Long used by the peoples of Central and North America, the dye saw demand fall rapidly when European manufacturers began to produce synthetic dyes in the middle of the 19th century. Commercial production has now increased following a rise in demand for natural dyes.", "title": "Uses" }, { "paragraph_id": 72, "text": "Cacti are used as construction materials. Living cactus fences are employed as barricades around buildings to prevent people breaking in. They are also used to corral animals. The woody parts of cacti, such as Cereus repandus and Echinopsis atacamensis, are used in buildings and in furniture. The frames of wattle and daub houses built by the Seri people of Mexico may use parts of the saguaro (Carnegiea gigantea). The very fine spines and hairs (trichomes) of some cacti were used as a source of fiber for filling pillows and in weaving.", "title": "Uses" }, { "paragraph_id": 73, "text": "All cacti are included in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which \"lists species that are not necessarily now threatened with extinction but that may become so unless trade is closely controlled.\" Control is exercised by making international trade in most specimens of cacti illegal unless permits have been issued, at least for exports. Some exceptions are allowed, e.g., for \"naturalized or artificially propagated plants\". Some cacti, such as all Ariocarpus and Discocactus species, are included in the more restrictive Appendix I, used for the \"most endangered\" species. These may only be moved between countries for non-commercial purposes, and only then when accompanied by both export and import permits.", "title": "Conservation" }, { "paragraph_id": 74, "text": "The three main threats to cacti in the wild are development, grazing and over-collection. Development takes many forms. The construction of a dam near Zimapan, Mexico, caused the destruction of a large part of the natural habitat of Echinocactus grusonii. Urban development and highways have destroyed cactus habitats in parts of Mexico, New Mexico and Arizona, including the Sonoran Desert. 
The conversion of land to agriculture has affected populations of Ariocarpus kotschoubeyanus in Mexico, where dry plains were plowed for maize cultivation, and of Copiapoa and Eulychnia in Chile, where valley slopes were planted with vines. Grazing, in many areas by introduced animals, such as goats, has caused serious damage to populations of cacti (as well as other plants); two examples cited by Anderson are the Galápagos Islands generally and the effect on Browningia candelaris in Peru. Over-collection of cacti for sale has greatly affected some species. For example, the type locality of Pelecyphora strobiliformis near Miquihuana, Mexico, was virtually denuded of plants, which were dug up for sale in Europe. Illegal collecting of cacti from the wild continues to pose a threat.", "title": "Conservation" }, { "paragraph_id": 75, "text": "Conservation of cacti can be in situ or ex situ. In situ conservation involves preserving habitats through enforcement of legal protection and the creation of specially protected areas such as national parks and reserves. Examples of such protected areas in the United States include Big Bend National Park, Texas; Joshua Tree National Park, California; and Saguaro National Park, Arizona. Latin American examples include Parque Nacional del Pinacate, Sonora, Mexico and Pan de Azúcar National Park, Chile. Ex situ conservation aims to preserve plants and seeds outside their natural habitats, often with the intention of later reintroduction. Botanical gardens play an important role in ex situ conservation; for example, seeds of cacti and other succulents are kept in long-term storage at the Desert Botanical Garden, Arizona.", "title": "Conservation" }, { "paragraph_id": 76, "text": "The popularity of cacti means many books are devoted to their cultivation. Cacti naturally occur in a wide range of habitats and are then grown in many countries with different climates, so precisely replicating the conditions in which a species normally grows is usually not practical. A broad distinction can be made between semidesert cacti and epiphytic cacti, which need different conditions and are best grown separately. This section is primarily concerned with the cultivation of semidesert cacti in containers and under protection, such as in a greenhouse or in the home, rather than cultivation outside in the ground in those climates that permit it. For the cultivation of epiphytic cacti, see Cultivation of Schlumbergera (Christmas or Thanksgiving cacti), and Cultivation of epiphyllum hybrids.", "title": "Cultivation" }, { "paragraph_id": 77, "text": "The purpose of the growing medium is to provide support and to store water, oxygen and dissolved minerals to feed the plant. In the case of cacti, there is general agreement that an open medium with a high air content is important. When cacti are grown in containers, recommendations as to how this should be achieved vary greatly; Miles Anderson says that if asked to describe a perfect growing medium, \"ten growers would give 20 different answers\". Roger Brown suggests a mixture of two parts commercial soilless growing medium, one part hydroponic clay and one part coarse pumice or perlite, with the addition of soil from earthworm castings (a sketch after the paragraph list below shows how such a parts-based recipe can be scaled to a batch volume). The general recommendation of 25–75% organic-based material, the rest being inorganic such as pumice, perlite or grit, is supported by other sources. 
However, the use of organic material is rejected altogether by others; Hecht says that cacti (other than epiphytes) \"want soil that is low in or free of humus\", and recommends coarse sand as the basis of a growing medium.", "title": "Cultivation" }, { "paragraph_id": 78, "text": "Semi-desert cacti need careful watering. General advice is hard to give, since the frequency of watering required depends on where the cacti are being grown, the nature of the growing medium, and the original habitat of the cacti. Brown says that more cacti are lost through the \"untimely application of water than for any other reason\" and that even during the dormant winter season, cacti need some water. Other sources say that water can be withheld during winter (November to March in the Northern Hemisphere). Another issue is the hardness of the water; where it is necessary to use hard water, regular re-potting is recommended to avoid the build-up of salts. The general advice given is that during the growing season, cacti should be allowed to dry out between thorough waterings. A water meter can help in determining when the soil is dry.", "title": "Cultivation" }, { "paragraph_id": 79, "text": "Although semi-desert cacti may be exposed to high light levels in the wild, they may still need some shading when subjected to the higher light levels and temperatures of a greenhouse in summer. Allowing the temperature to rise above 32 °C (90 °F) is not recommended. The minimum winter temperature required depends very much on the species of cactus involved. For a mixed collection, a minimum temperature of between 5 °C (41 °F) and 10 °C (50 °F) is often suggested, except for cold-sensitive genera such as Melocactus and Discocactus. Some cacti, particularly those from the high Andes, are fully frost-hardy when kept dry (e.g. Rebutia minuscula survives temperatures down to −9 °C (16 °F) in cultivation) and may flower better when exposed to a period of cold.", "title": "Cultivation" }, { "paragraph_id": 80, "text": "Cacti can be propagated by seed, cuttings or grafting. Seed sown early in the year produces seedlings that benefit from a longer growing period. Seed is sown in a moist growing medium and then kept in a covered environment, until 7–10 days after germination, to avoid drying out. A very wet growing medium can cause both seeds and seedlings to rot. A temperature range of 18–30 °C (64–86 °F) is suggested for germination; soil temperatures of around 22 °C (72 °F) promote the best root growth. Low light levels are sufficient during germination, but afterwards semi-desert cacti need higher light levels to produce strong growth, although acclimatization to greenhouse conditions, such as higher temperatures and strong sunlight, is needed.", "title": "Cultivation" }, { "paragraph_id": 81, "text": "Reproduction by cuttings makes use of parts of a plant that can grow roots. Some cacti produce \"pads\" or \"joints\" that can be detached or cleanly cut off. Other cacti produce offsets that can be removed. Otherwise, stem cuttings can be made, ideally from relatively new growth. It is recommended that any cut surfaces be allowed to dry for a period of several days to several weeks until a callus forms over the cut surface. 
Rooting can then take place in an appropriate growing medium at a temperature of around 22 °C (72 °F).", "title": "Cultivation" }, { "paragraph_id": 82, "text": "Grafting is used for species difficult to grow well in cultivation or that cannot grow independently, such as some chlorophyll-free forms with white, yellow or red bodies, or some forms that show abnormal growth (e.g., cristate or monstrose forms). For the host plant (the stock), growers choose one that grows strongly in cultivation and is compatible with the plant to be propagated: the scion. The grower makes cuts on both stock and scion and joins the two, binding them together while they unite. Various kinds of graft are used—flat grafts, where both scion and stock are of similar diameters, and cleft grafts, where a smaller scion is inserted into a cleft made in the stock.", "title": "Cultivation" }, { "paragraph_id": 83, "text": "Commercially, huge numbers of cacti are produced annually. For example, in 2002 in Korea alone, 49 million plants were propagated, with a value of almost US$9 million. Most of them (31 million plants) were propagated by grafting.", "title": "Cultivation" }, { "paragraph_id": 84, "text": "A range of pests attack cacti in cultivation. Those that feed on sap include mealybugs, living on both stems and roots; scale insects, generally only found on stems; whiteflies, which are said to be an \"infrequent\" pest of cacti; red spider mites, which are very small but can occur in large numbers, constructing a fine web around themselves and badly marking the cactus via their sap sucking, even if they do not kill it; and thrips, which particularly attack flowers. Some of these pests are resistant to many insecticides, although there are biological controls available. Roots of cacti can be eaten by the larvae of sciarid flies and fungus gnats. Slugs and snails also eat cacti.", "title": "Cultivation" }, { "paragraph_id": 85, "text": "Fungi, bacteria and viruses attack cacti, the first two particularly when plants are over-watered. Fusarium rot can gain entry through a wound and cause rotting accompanied by red-violet mold. \"Helminosporium rot\" is caused by Bipolaris cactivora (syn. Helminosporium cactivorum); Phytophthora species also cause similar rotting in cacti. Fungicides may be of limited value in combating these diseases. Several viruses have been found in cacti, including cactus virus X. These appear to cause only limited visible symptoms, such as chlorotic (pale green) spots and mosaic effects (streaks and patches of paler color). However, in an Agave species, cactus virus X has been shown to reduce growth, particularly when the roots are dry. There are no treatments for virus diseases.", "title": "Cultivation" } ]
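The isotopic-signature inference described in paragraph 36 above can be made concrete. What follows is a minimal, illustrative Python sketch, not a method taken from the source: the VPDB reference ratio used is the commonly quoted standard value, and the classification cutoffs (around −24‰ and −20‰, reflecting typical published C3 and CAM ranges) are assumptions chosen for illustration, as are the three sample ratios.

    # Sketch: guessing a plant's photosynthetic pathway from its carbon
    # isotopic signature. delta-13C expresses the sample's 13C/12C ratio
    # relative to the VPDB standard, in parts per thousand (per mil).
    # The cutoff values below are illustrative assumptions, not sourced data.

    VPDB_RATIO = 0.0112372  # commonly quoted 13C/12C of the VPDB standard

    def delta13c(sample_ratio: float) -> float:
        """Return delta-13C in per mil for a measured 13C/12C ratio."""
        return (sample_ratio / VPDB_RATIO - 1.0) * 1000.0

    def classify(d13c_per_mil: float) -> str:
        """Rough pathway guess from delta-13C (assumed cutoffs)."""
        if d13c_per_mil > -20.0:
            return "mostly CAM: CO2 taken up largely at night"
        if d13c_per_mil < -24.0:
            return "mostly C3: CO2 taken up largely in the daytime"
        return "intermediate: consistent with CAM-cycling"

    for ratio in (0.011060, 0.010985, 0.010880):  # hypothetical samples
        d = delta13c(ratio)
        print(f"13C/12C = {ratio:.6f} -> d13C = {d:+.1f} per mil -> {classify(d)}")

Run as-is, the three hypothetical samples print as CAM, intermediate, and C3 respectively; studies such as those cited for Pereskia rest the same kind of inference on measured ratios.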
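The parts-based growing-medium recipe quoted from Roger Brown in paragraph 77 is also easy to scale. A minimal Python sketch, assuming a hypothetical 10-litre batch; the batch size and litre units are illustrative choices, not values from the source:

    # Sketch: scaling a 2:1:1 parts-based potting-mix recipe (as quoted from
    # Roger Brown above) to an arbitrary batch volume. Earthworm castings are
    # described only as an addition, so they are left out of the ratio here.
    def mix_for_batch(total_litres: float) -> dict[str, float]:
        parts = {
            "commercial soilless growing medium": 2,
            "hydroponic clay": 1,
            "coarse pumice or perlite": 1,
        }
        total_parts = sum(parts.values())
        return {name: total_litres * p / total_parts for name, p in parts.items()}

    for ingredient, litres in mix_for_batch(10.0).items():
        print(f"{ingredient}: {litres:.1f} L")
    # -> 5.0 L soilless medium, 2.5 L hydroponic clay, 2.5 L pumice or perlite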
A cactus is a member of the plant family Cactaceae, a family comprising about 127 genera with some 1,750 known species of the order Caryophyllales. The word cactus derives, through Latin, from the Ancient Greek word κάκτος (káktos), a name originally used by Theophrastus for a spiny plant whose identity is now not certain. Cacti occur in a wide range of shapes and sizes. They are native to the Americas, ranging from Patagonia in the south to parts of western Canada in the north, with the exception of Rhipsalis baccifera, which is also found in Africa and Sri Lanka. Cacti are adapted to live in very dry environments, including the Atacama Desert, one of the driest places on Earth. Because of this, cacti show many adaptations to conserve water. For example, almost all cacti are succulents, meaning they have thickened, fleshy parts adapted to store water. Unlike many other succulents, the stem is the only part of most cacti where this vital process takes place. Most species of cacti have lost true leaves, retaining only spines, which are highly modified leaves. As well as defending against herbivores, spines help prevent water loss by reducing air flow close to the cactus and providing some shade. In the absence of true leaves, cacti's enlarged stems carry out photosynthesis. Cactus spines are produced from specialized structures called areoles, a kind of highly reduced branch. Areoles are an identifying feature of cacti. As well as spines, areoles give rise to flowers, which are usually tubular and multipetaled. Many cacti have short growing seasons and long dormancies and are able to react quickly to any rainfall, helped by an extensive but relatively shallow root system that quickly absorbs any water reaching the ground surface. Cactus stems are often ribbed or fluted, with the number of ribs often corresponding to a Fibonacci number. This allows them to expand and contract easily for quick water absorption after rain, followed by retention over long drought periods. Like other succulent plants, most cacti employ a special mechanism called "crassulacean acid metabolism" (CAM) as part of photosynthesis. Transpiration, during which carbon dioxide enters the plant and water escapes, does not take place during the day at the same time as photosynthesis, but instead occurs at night. The plant stores the carbon dioxide it takes in as malic acid, retaining it until daylight returns, and only then using it in photosynthesis. Because transpiration takes place during the cooler, more humid night hours, water loss is significantly reduced. Many smaller cacti have globe-shaped stems, combining the highest possible volume for water storage with the lowest possible surface area for water loss from transpiration (a numerical sketch of this surface-area-to-volume argument follows this abstract). The tallest free-standing cactus is Pachycereus pringlei, with a maximum recorded height of 19.2 m (63 ft), and the smallest is Blossfeldia liliputiana, only about 1 cm (0.4 in) in diameter at maturity. A fully grown saguaro is said to be able to absorb as much as 200 U.S. gallons of water during a rainstorm. A few species differ significantly in appearance from most of the family. At least superficially, plants of the genera Leuenbergeria, Rhodocactus and Pereskia resemble other trees and shrubs growing around them. They have persistent leaves, and when older, bark-covered stems. Their areoles identify them as cacti, and in spite of their appearance, they, too, have many adaptations for water conservation. 
Leuenbergeria is considered close to the ancestral species from which all cacti evolved. In tropical regions, other cacti grow as forest climbers and epiphytes. Their stems are typically flattened, almost leaf-like in appearance, with fewer or even no spines, such as the well-known Christmas cactus or Thanksgiving cactus. Cacti have a variety of uses: many species are used as ornamental plants, others are grown for fodder or forage, and others for food. Cochineal is the product of an insect that lives on some cacti. Many succulent plants in both the Old and New World – such as some Euphorbiaceae (euphorbias) – are also spiny stem succulents and because of this are sometimes incorrectly referred to as "cactus".
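The surface-area-to-volume reasoning that runs through the abstract and the adaptation paragraphs above can be checked with a few lines of arithmetic. A minimal Python sketch, with all dimensions invented for illustration (a 10 cm-radius globular stem versus a 0.5 mm-thick leaf-like slab of equal volume):

    # Sketch: why a globular stem loses water more slowly than thin leaves.
    # Water loss scales with surface area; stored water scales with volume,
    # so a lower surface-area-to-volume (SA/V) ratio favors retention.
    import math

    def sphere_sa_v(radius_cm: float) -> float:
        """SA/V of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
        return 3.0 / radius_cm

    def slab_sa_v(length_cm: float, width_cm: float, thickness_cm: float) -> float:
        """SA/V of a rectangular slab, a crude stand-in for a thin leaf."""
        area = 2 * (length_cm * width_cm + length_cm * thickness_cm
                    + width_cm * thickness_cm)
        volume = length_cm * width_cm * thickness_cm
        return area / volume

    r = 10.0  # hypothetical globular cactus, 10 cm radius
    volume = (4.0 / 3.0) * math.pi * r ** 3  # about 4,189 cm^3
    # A slab of the same volume, 0.05 cm thick: 100 cm x 837.8 cm x 0.05 cm.
    print(f"globular stem SA/V: {sphere_sa_v(r):.2f} per cm")                  # 0.30
    print(f"thin-leaf slab SA/V: {slab_sa_v(100.0, 837.8, 0.05):.2f} per cm")  # ~40

For the same stored volume, the thin slab exposes roughly 130 times as much surface per unit of water as the sphere, which is the quantitative core of the argument that globular and columnar stems conserve water.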
2002-02-25T15:43:11Z
2023-12-11T07:03:04Z
[ "Template:CO2", "Template:Abbr", "Template:Portal bar", "Template:Authority control", "Template:Good article", "Template:Cn", "Template:Vanchor", "Template:As of", "Template:Clade", "Template:Anchor", "Template:-", "Template:Main article", "Template:Notelist", "Template:Citation", "Template:Reflist", "Template:Harvc", "Template:Short description", "Template:Plural form", "Template:Sfnp", "Template:Update", "Template:Period span/brief", "Template:Linktext", "Template:Angiosperm families", "Template:Taxonbar", "Template:Multiple image", "Template:IPAc-en", "Template:About", "Template:Wikt-lang", "Template:Pp-semi-indef", "Template:Legend", "Template:See also", "Template:Commons category", "Template:Wikibooks", "Template:Curlie", "Template:Redirect", "Template:Automatic taxobox", "Template:Efn", "Template:Convert", "Template:Center", "Template:Lang" ]
https://en.wikipedia.org/wiki/Cactus
7,820
CCC
CCC may refer to:
[ { "paragraph_id": 0, "text": "CCC may refer to:", "title": "" } ]
CCC may refer to:
2002-01-21T21:41:32Z
2023-12-28T19:16:57Z
[ "Template:Disambiguation", "Template:Pp-move", "Template:Wiktionary", "Template:TOC right" ]
https://en.wikipedia.org/wiki/CCC
7,821
Civilian Conservation Corps
The Civilian Conservation Corps (CCC) was a voluntary government work relief program that ran from 1933 to 1942 in the United States for unemployed, unmarried men ages 18–25 and eventually expanded to ages 17–28. The CCC was a major part of President Franklin D. Roosevelt's New Deal that supplied manual labor jobs related to the conservation and development of natural resources in rural lands owned by federal, state, and local governments. The CCC was designed to supply jobs for young men and to relieve families who had difficulty finding jobs during the Great Depression in the United States. Robert Fechner was the first director of this agency, succeeded by James McEntee following Fechner's death. The largest enrollment at any one time was 300,000. Through the course of its nine years in operation, three million young men took part in the CCC, which provided them with shelter, clothing, and food, together with a wage of $30 (equivalent to $678 in current dollars) per month ($25 of which had to be sent home to their families). The American public made the CCC the most popular of all the New Deal programs. Sources written at the time claimed an individual's enrollment in the CCC led to improved physical condition, heightened morale, and increased employability. The CCC also led to a greater public awareness and appreciation of the outdoors and the nation's natural resources, and the continued need for a carefully planned, comprehensive national program for the protection and development of natural resources. The CCC operated separate programs for veterans and Native Americans. Approximately 15,000 Native Americans took part in the program, helping them weather the Great Depression. By 1942, with World War II raging and the draft in effect, the need for work relief declined, and Congress voted to close the program. As governor of New York, Franklin D. Roosevelt had run a similar program on a much smaller scale, known as the Temporary Emergency Relief Administration (TERA). It was started in early 1932 to "use men from the lists of the unemployed to improve our existing reforestation areas." In its first year alone, more than 25,000 unemployed New Yorkers were active in its paid conservation work. Long interested in conservation, as president Roosevelt proposed a full-scale national program to Congress on March 21, 1933: I propose to create [the CCC] to be used in complex work, not interfering with normal employment and confining itself to forestry, the prevention of soil erosion, flood control, and similar projects. I call your attention to the fact that this type of work is of definite, practical value, not only through the prevention of great present financial loss but also as a means of creating future national wealth. He promised this law would provide 250,000 young men with meals, housing, workwear, and medical care in exchange for their work in the national forests and other government properties. The Emergency Conservation Work (ECW) Act was introduced to Congress the same day and enacted by voice vote on March 31. Roosevelt issued Executive Order 6101 on April 5, 1933, which established the CCC organization and appointed a director, Robert Fechner, a former labor union official who served until 1939. The organization and administration of the CCC was a new experiment in operations for a federal government agency. 
The order directed that the program be supervised jointly by four government departments: Labor, which recruited the young men; War, which operated the camps; and Agriculture and Interior, which organized and supervised the work projects. A CCC Advisory Council was composed of a representative from each of those departments. In addition, the Office of Education and Veterans Administration participated in the program. To overcome opposition from labor unions, which wanted no training programs started when so many of their members were unemployed, Roosevelt chose Robert Fechner, vice president of the International Association of Machinists and Aerospace Workers, as director of the Corps. William Green, head of the American Federation of Labor, was taken to the first camp to see that there was no job training involved beyond simple manual labor. Reserve officers from the U.S. Army were in charge of the camps, but there was no military training. General Douglas MacArthur was placed in charge of the program, but said that the number of army officers and soldiers assigned to the camps was affecting the readiness of the regular army. However, the army also found numerous benefits in the program. When the draft began in 1940, the policy was to make CCC alumni corporals and sergeants. The CCC also provided command experience to Organized Reserve Corps officers. George Marshall "embraced" the CCC, unlike many of his brother officers. Through the CCC, the regular army could assess the leadership performance of both regular and reserve officers. The CCC provided lessons which the army used in developing its wartime mobilization plans for training camps. An implicit goal of the CCC was to restore morale in an era of 25% unemployment for all men and much higher rates for poorly educated teenagers. Jeffrey Suzik argues in "'Building Better Men': The CCC Boy and the Changing Social Ideal of Manliness" that the CCC provided an ideology of manly outdoor work to counter the Depression, as well as cash to help the family budget. Through a regime of heavy manual labor, civic and political education, and an all-male living and working environment, the CCC tried to build "better men" who would be economically independent and self-reliant. By 1939, there was a shift in the ideal from the hardy manual worker to the highly trained citizen soldier ready for war. The legislation and mobilization of the program occurred quite rapidly. Roosevelt made his request to Congress on March 21, 1933; the legislation was submitted to Congress the same day; Congress passed it by voice vote on March 31; Roosevelt signed it the same day, then issued an executive order on April 5 creating the agency, appointing Fechner its director, and assigning War Department corps area commanders to begin enrollment. The first CCC enrollee was selected April 8, and lists of unemployed men were subsequently supplied by state and local welfare and relief agencies for immediate enrollment. On April 17, the first camp, NF-1, Camp Roosevelt, was established at George Washington National Forest near Luray, Virginia. On June 18, the first of 161 soil erosion control camps was opened in Clayton, Alabama. By July 1, 1933, there were 1,463 working camps with 250,000 junior enrollees 18–25 years of age; 28,000 veterans; 14,000 Native Americans; and 25,000 adults in the Local Experienced Men (LEM) program. The typical CCC enrollee was a U.S. citizen, unmarried, unemployed male, 18–25 years of age. Normally his family was on local relief. 
Each enrollee volunteered and, upon passing a physical exam and completing any required period of conditioning, served a minimum six-month period, with the option to serve as many as four periods, or up to two years, if employment outside the Corps was not possible. Enrollees worked 40 hours per week over five days, sometimes including Saturdays if poor weather dictated. In return they received $30 per month (equivalent to about $680 in 2022), with a compulsory allotment of $25 (equivalent to about $570 in 2022) sent to a family dependent, as well as housing, food, clothing, and medical care. Following the second Bonus Army march on Washington, D.C., President Roosevelt amended the CCC program on May 11, 1933, to include work opportunities for veterans. Veteran qualifications differed from those for junior enrollees: a veteran had to be certified by the Veterans Administration upon application. Veterans could be any age, married or single, as long as they were in need of work, and they were generally assigned to all-veteran camps. Enrollees were eligible for the following "rated" positions to help with camp administration: senior leader, mess steward, storekeeper and two cooks; assistant leader, company clerk, assistant educational advisor and three second cooks. These men received additional pay ranging from $36 to $45 per month depending on their rating. Each CCC camp was located in the area of particular conservation work to be performed and organized around a complement of up to 200 civilian enrollees in a designated numbered "company" unit. The CCC camp was a temporary community in itself, structured to have barracks (initially Army tents) for 50 enrollees each, officer/technical staff quarters, a medical dispensary, mess hall, recreation hall, educational building, lavatory and showers, technical/administrative offices, a tool room/blacksmith shop, and motor pool garages. The company organization of each camp had a dual-authority supervisory staff: first, Department of War personnel or Reserve officers (until July 1, 1939), a "company commander" and junior officer, who were responsible for overall camp operation, logistics, education, and training; and second, ten to fourteen technical service civilians, including a camp "superintendent" and "foreman", employed by either the Department of the Interior or of Agriculture, responsible for the particular fieldwork. Also included in camp operation were several non-technical supervisor LEMs, who provided knowledge of the work at hand, the "lay of the land," and paternal guidance for inexperienced enrollees. Enrollees were organized into work detail units called "sections" of 25 men each, according to the barracks they resided in. Each section had an enrollee "senior leader" and "assistant leader" who were accountable for the men at work and in the barracks. The CCC performed 300 types of work projects in nine approved general classifications, spanning areas such as forestry, soil erosion control, flood control, and recreation development. The responses to this seven-month experimental conservation program were enthusiastic. On October 1, 1933, Director Fechner was directed to arrange for the second period of enrollment. By January 1934, 300,000 men were enrolled. In July 1934, this cap was increased by 50,000 to include men from Midwest states that had been affected by drought. The temporary tent camps had also developed to include wooden barracks. An education program had been established, emphasizing job training and literacy. Approximately 55% of enrollees were from rural communities, a majority of which were non-farm; 45% came from urban areas.
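The inflation equivalents quoted earlier for the $30 wage and $25 allotment can be reproduced with the consumer price index ratio method. The following sketch is illustrative only and is not drawn from the article; the CPI values are approximate annual averages assumed for the example, so its results land near, not exactly on, the quoted figures.

# Illustrative sketch (not from the article): converting 1933 CCC wages to
# 2022 dollars with the CPI ratio method. The CPI values are approximate
# annual averages and are assumptions for this example.
CPI_1933 = 13.0     # assumed U.S. CPI-U annual average for 1933
CPI_2022 = 292.7    # assumed U.S. CPI-U annual average for 2022

def to_2022_dollars(amount_1933: float) -> float:
    """Scale a 1933 dollar amount by the ratio of price levels."""
    return amount_1933 * CPI_2022 / CPI_1933

print(f"${to_2022_dollars(30):,.0f}")  # monthly wage: ~$676, near the quoted $680
print(f"${to_2022_dollars(25):,.0f}")  # family allotment: ~$563, near the quoted $570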
Educational levels among enrollees varied: 3% were illiterate, 38% had less than eight years of schooling, 48% had not completed high school, and 11% were high school graduates. At the time of entry, 70% of enrollees were malnourished and poorly clothed. Few had work experience beyond occasional odd jobs. Peace was maintained by the threat of "dishonorable discharge". "This is a training station; we're going to leave morally and physically fit to lick 'Old Man Depression,'" boasted the newsletter, Happy Days, of a North Carolina camp. Because of the power of conservative Solid South white Democrats in Congress, who insisted on racial segregation, most New Deal programs were racially segregated; blacks and whites rarely worked alongside each other. At this time, all the states of the South had passed legislation imposing racial segregation and, since the turn of the century, laws and constitutional provisions that disenfranchised most blacks; they were excluded from formal politics. Because of discrimination by white officials at the local and state levels, blacks in the South did not receive as many benefits as whites from New Deal programs. In the first few weeks of operation, CCC camps in the North were integrated. By July 1935, however, all camps in the United States were segregated. Enrollment peaked at the end of 1935, when there were 500,000 men in 2,600 camps in operation in every state. All enrollees received equal pay and housing. Black leaders lobbied to secure leadership roles, but adult white men held the major leadership roles in all the camps: Director Fechner refused to appoint black adults to any supervisory positions except that of education director in the all-black camps. The CCC operated a separate division for members of federally recognized tribes: the "Indian Emergency Conservation Work Division" (IECW or CCC-ID). Native men from reservations worked on roads, bridges, clinics, shelters, and other public works near their reservations. Although they were organized in groups classified as camps, no permanent camps were established for Native Americans; instead, organized groups moved with their families from project to project and were provided with an additional rental allowance. The CCC often provided the only paid work available, as many reservations were in remote rural areas. Enrollees in this division had to be between the ages of 17 and 35. During 1933, about half the male heads of households on the Sioux reservations in South Dakota were employed by the CCC-ID. With grants from the Public Works Administration (PWA), the Indian Division built schools and conducted a road-building program in and around many reservations to improve infrastructure. The mission was to reduce erosion and improve the value of Indian lands. Crews built dams of many types on creeks, then sowed grass on the eroded areas from which the damming material had been taken. They built roads and planted shelter-belts on federal lands. The steady income helped participants regain self-respect, and many used the funds to improve their lives. John Collier, the federal Commissioner of Indian Affairs, and Daniel Murphy, the director of the CCC-ID, both based the program on Indian self-rule and the restoration of tribal lands, governments, and cultures. The next year, Congress passed the Indian Reorganization Act of 1934, which ended allotments, helped preserve tribal lands, and encouraged tribes to re-establish self-government.
Collier said of the CCC-Indian Division, "no previous undertaking in Indian Service has so largely been the Indians' own undertaking". Educational programs trained participants in gardening, stock raising, safety, native arts, and some academic subjects. The IECW differed from other CCC activities in that it explicitly trained men in skills to be carpenters, truck drivers, radio operators, mechanics, surveyors, and technicians. With the passage of the National Defense Vocational Training Act of 1941, enrollees began participating in defense-oriented training. The government paid for the classes and guaranteed automatic employment in defense work to students who completed courses and passed a competency test. A total of 85,000 Native Americans were enrolled in this training. This proved valuable social capital for the 24,000 alumni who later served in the military and the 40,000 who left the reservations for city jobs supporting the war effort. Responding to public demand to alleviate unemployment, Congress approved the Emergency Relief Appropriation Act of 1935 on April 8, 1935, which included continued funding for the CCC program through March 31, 1937. The age limit was expanded to 17–28 to include more men. April 1, 1935, to March 31, 1936, was the period of greatest activity and work accomplished by the CCC program. Enrollment peaked at 505,782 in about 2,900 camps by August 31, 1935, followed by a reduction to 350,000 enrollees in 2,019 camps by June 30, 1936. During this period the public response to the CCC program was overwhelmingly positive. A Gallup poll of April 18, 1936, asked: "Are you in favor of the CCC camps?"; 82% of respondents said "yes", including 92% of Democrats and 67% of Republicans. On June 28, 1937, the Civilian Conservation Corps was legally established under that name, replacing its original designation as the Emergency Conservation Work program. Funding was extended for three more years by Public Law No. 163, 75th Congress, effective July 1, 1937. Congress changed the age limits to 17–23 years old and replaced the requirement that enrollees be on relief with a requirement that they be "not regularly in attendance at school, or possessing full-time employment." The 1937 law mandated the inclusion of vocational and academic training for a minimum of 10 hours per week, and students in school were allowed to enroll during summer vacation. During this period, CCC forces contributed to disaster relief following the 1937 floods in New York, Vermont, and the Ohio and Mississippi river valleys, and to response and clean-up after the 1938 hurricane in New England. In 1939 Congress ended the independent status of the CCC, transferring it to the control of the Federal Security Agency. The National Youth Administration, U.S. Employment Service, the Office of Education, and the Works Progress Administration also had some responsibilities. About 5,000 reserve officers serving in the camps were affected, as they were transferred to the federal Civil Service, and military ranks and titles were eliminated. Despite the loss of overt military leadership in the camps by July 1940, with war underway in Europe and Asia, the government directed an increasing number of CCC projects toward resources for national defense, developing infrastructure for military training facilities and forest protection. By 1940 the CCC was no longer wholly a relief agency; it was rapidly losing its non-military character and becoming a system for work-training, as its ranks had become increasingly younger and less experienced.
Although the CCC was probably the most popular New Deal program, it was never authorized as a permanent agency. The program was reduced in scale as the Depression waned and employment opportunities improved. After conscription began in 1940, fewer eligible young men were available. Following the attack on Pearl Harbor in December 1941, the Roosevelt administration directed all federal programs to emphasize the war effort. Most CCC work, except for wildland firefighting, was shifted onto U.S. military bases to help with construction. The CCC disbanded one year earlier than planned, as the 77th United States Congress ceased funding it. Operations were formally concluded at the end of the federal fiscal year on June 30, 1942. The end of the CCC program and closing of the camps involved arrangements to leave the incomplete work projects in the best possible state, the separation of about 1,800 appointed employees, the transfer of CCC property to the War and Navy Departments and other agencies, and the preparation of final accountability records. Liquidation of the CCC was ordered by Congress through the Labor-Federal Security Appropriation Act (56 Stat. 569) on July 2, 1942, and was virtually completed by June 30, 1943; liquidation appropriations for the CCC continued through April 20, 1948. Some former CCC sites in good condition were reactivated from 1941 to 1947 as Civilian Public Service camps where conscientious objectors performed "work of national importance" as an alternative to military service. Other camps were used to hold Japanese, German and Italian Americans interned under the Western Defense Command's Enemy Alien Control Program, as well as Axis prisoners of war. Most of the Japanese American internment camps were built by the people held there. After the CCC disbanded, the federal agencies responsible for public lands organized their own seasonal fire crews, modeled after the CCC, which have performed the firefighting function formerly done by the CCC and provided the same sort of outdoor work experience for young people. Approximately 47 young men have died in this line of duty. In several cities where CCC workers had worked, statues were erected to commemorate them. Although the CCC was never officially terminated by statute, Congress provided funding for closing the remaining camps in 1942, with their equipment reallocated. The CCC became a model for conservation programs implemented in the period after World War II. Present-day corps are national, state, and local programs that engage primarily youth and young adults (ages 16–25) in community service, training, and educational activities. The nation's approximately 113 corps programs operate in 41 of the 50 states and Washington, D.C. During 2004, they enrolled more than 23,000 young people. The Corps Network, known originally as the National Association of Service and Conservation Corps (NASCC), works to expand and enhance corps-type programs throughout the country. The Corps Network began in 1985 when the nation's first 24 Corps directors banded together to secure an advocate at the federal level and a repository of information on how best to start and manage a corps. Early financial assistance from the Ford, Hewlett and Mott Foundations was critical to establishing the association.
Similar active programs in the United States include the National Civilian Community Corps, part of the AmeriCorps program, a team-based national service program in which young adults ages 18–24 spend 10 months working for non-profit and government organizations, and the Civilian Conservation Corps, USA (CCCUSA), founded in 2016 and managed by its president, Thomas Hark. Hark, his co-founder Mike Rama (currently the deputy director of the Corporate Eco Forum, founded by M. R. Rangaswami), and their team of strategic advisors have reimagined the federal Civilian Conservation Corps program of the 1930s as a private, locally governed, national social franchise. The goal of this recently established CCCUSA is to enroll a million young people annually, building a core set of values in each enrollee, who will then become a catalyst in their own communities and states to create a more civil society and stronger nation. The CCC program became a model for the creation of team-based national service youth conservation programs such as the Student Conservation Association (SCA). The SCA, founded in 1959, is a nonprofit organization that offers conservation internships and summer trail crew opportunities to more than 4,000 people each year. In 1976, Governor of California Jerry Brown established the California Conservation Corps. This program had many similar characteristics: residential centers, high expectations for participation, and an emphasis on hard work on public lands. Young adults from different backgrounds were recruited for a term of one year. Corps members attended a training session called the Corpsmember Orientation Motivation Education and Training (COMET) program before being assigned to one of the various centers. Project work is also similar to that of the original CCC of the 1930s: work in public forests and state and federal parks. The Nevada Conservation Corps is a non-profit organization that partners with public land management agencies such as the Bureau of Land Management, United States Forest Service, National Park Service, and Nevada State Parks to complete conservation and restoration projects throughout Nevada. Conservation work includes fuel reduction through thinning, constructing and maintaining trails, removing invasive species, and performing biological surveys. The Nevada Conservation Corps was created through the Great Basin Institute and is part of the AmeriCorps program. Conservation Corps Minnesota & Iowa provides environmental stewardship and service-learning opportunities to youth and young adults while accomplishing conservation and natural resource management projects and emergency response work through its Young Adult Program and Summer Youth Program. These programs emphasize the development of job and life skills through conservation and community service work. The Montana Conservation Corps (MCC) is a non-profit organization with a mission to equip young people with the skills and values to be vigorous citizens who improve their communities and environment. Collectively, MCC crews contribute more than 90,000 work hours each year. The MCC was established in 1991 by Montana's Human Resource Development Councils in Billings, Bozeman and Kalispell. Originally a summer program for disadvantaged youth, it has grown into an AmeriCorps-sponsored non-profit organization with six regional offices that serve Montana, Idaho, Wyoming, North Dakota, and South Dakota.
All regions also offer Montana YES (Youth Engaged in Service) summer programs for teenagers who are 14 to 17 years old. Established in 1995, Environmental Corps, now Texas Conservation Corps (TxCC), is an American YouthWorks program which allows youth, ages 17 to 28, to contribute to the restoration and preservation of parks and public lands in Texas. The only conservation corps in Texas, TxCC is a nonprofit corporation based in Austin, Texas, which serves the entire state. Its work ranges from disaster relief to trail building to habitat restoration, and TxCC has done projects in national, state, and city parks. The Washington Conservation Corps (WCC) is a sub-agency of the Washington State Department of Ecology. It employs men and women 18 to 25 years old in a program to protect and enhance Washington's natural resources, and is part of the AmeriCorps program. The Vermont Youth Conservation Corps (VYCC) is a non-profit youth service and education organization that hires corps members, aged 16–24, to work on high-priority conservation projects in Vermont. Through these work projects, corps members develop a strong work ethic, strengthen their leadership skills, and learn how to take personal responsibility for their actions. VYCC crews work at Vermont State Parks, U.S. Forest Service campgrounds, in local communities, and throughout the state's backcountry. The VYCC has also given aid to a similar program in North Carolina, which is currently in its infancy. The Youth Conservation Corps is a youth conservation program present on federal lands around the country. The program gives youth aged 13–17 the opportunity to participate in conservation projects in a team setting. YCC programs are available on land managed by the National Park Service, the Forest Service, and the Fish and Wildlife Service. Projects can last up to 10 weeks and typically run over the summer. Some YCC programs are residential, meaning the participants are given housing on the land they work on; projects may require youth to camp in backcountry settings in order to work on trails or campsites. Most programs, however, have youth commute daily or house them for only a few days a week. Youth are typically paid for their work. YCC programs contribute to the maintenance of public lands and instill a value for hard work and the outdoors in those who participate. Conservation Legacy is a non-profit employment, job training, and education organization with locations across the United States, including Arizona Conservation Corps in Tucson and Flagstaff, Arizona; Conservation Corps New Mexico in Las Cruces, New Mexico; Southwest Conservation Corps in Durango and Salida, Colorado; and Southeast Conservation Corps in Chattanooga, Tennessee. Conservation Legacy also operates an AmeriCorps VISTA team serving to improve the environment and economies of historic mining communities in the American West and Appalachia, and hosts the Environmental Stewards Program, which provides internships with federal, state, municipal and NGO land management agencies nationwide. Conservation Legacy formed as a merger of the Southwest Youth Corps, San Luis Valley Youth Corps, The Youth Corps of Southern Arizona, and Coconino Rural Environmental Corps. Conservation Legacy engages young adults ages 14 to 26 and U.S. military veterans of all ages in personal and professional development experiences involving conservation projects on public lands.
Corps members live, work, and learn in teams of six to eight for terms of service ranging from 3 months to 1 year. The Sea Ranger Service is a social enterprise, based in the Netherlands, that has taken its inspiration from the Civilian Conservation Corps in running a permanent youth training program, supported by veterans, to manage ocean areas and carry out underwater landscape restoration. Unemployed youths are trained as Sea Rangers during a boot camp and subsequently offered full-time employment to manage and regenerate Marine Protected Areas and aid ocean conservation. The Sea Ranger Service works in close cooperation with the Dutch government and national maritime authorities. The Aina Corps performed environmental restoration work in Hawaii in 2020, funded by the CARES Act.
[ { "paragraph_id": 0, "text": "The Civilian Conservation Corps (CCC) was a voluntary government work relief program that ran from 1933 to 1942 in the United States for unemployed, unmarried men ages 18–25 and eventually expanded to ages 17–28. The CCC was a major part of President Franklin D. Roosevelt's New Deal that supplied manual labor jobs related to the conservation and development of natural resources in rural lands owned by federal, state, and local governments. The CCC was designed to supply jobs for young men and to relieve families who had difficulty finding jobs during the Great Depression in the United States.", "title": "" }, { "paragraph_id": 1, "text": "Robert Fechner was the first director of this agency, succeeded by James McEntee following Fechner's death. The largest enrollment at any one time was 300,000. Through the course of its nine years in operation, three million young men took part in the CCC, which provided them with shelter, clothing, and food, together with a wage of $30 (equivalent to $678 in current dollars) per month ($25 of which had to be sent home to their families).", "title": "" }, { "paragraph_id": 2, "text": "The American public made the CCC the most popular of all the New Deal programs. Sources written at the time claimed an individual's enrollment in the CCC led to improved physical condition, heightened morale, and increased employability. The CCC also led to a greater public awareness and appreciation of the outdoors and the nation's natural resources, and the continued need for a carefully planned, comprehensive national program for the protection and development of natural resources.", "title": "" }, { "paragraph_id": 3, "text": "The CCC operated separate programs for veterans and Native Americans. Approximately 15,000 Native Americans took part in the program, helping them weather the Great Depression.", "title": "" }, { "paragraph_id": 4, "text": "By 1942, with World War II raging and the draft in effect, the need for work relief declined, and Congress voted to close the program.", "title": "" }, { "paragraph_id": 5, "text": "As governor of New York, Franklin D. Roosevelt had run a similar program on a much smaller scale, known as the Temporary Emergency Relief Administration (TERA). It was started in early 1932 to \"use men from the lists of the unemployed to improve our existing reforestation areas.\" In its first year alone, more than 25,000 unemployed New Yorkers were active in its paid conservation work. Long interested in conservation, as president Roosevelt proposed a full-scale national program to Congress on March 21, 1933:", "title": "Founding" }, { "paragraph_id": 6, "text": "I propose to create [the CCC] to be used in complex work, not interfering with normal employment and confining itself to forestry, the prevention of soil erosion, flood control, and similar projects. I call your attention to the fact that this type of work is of definite, practical value, not only through the prevention of great present financial loss but also as a means of creating future national wealth.", "title": "Founding" }, { "paragraph_id": 7, "text": "He promised this law would provide 250,000 young men with meals, housing, workwear, and medical care in exchange for their work in the national forests and other government properties. The Emergency Conservation Work (ECW) Act was introduced to Congress the same day and enacted by voice vote on March 31. 
Roosevelt issued Executive Order 6101 on April 5, 1933, which established the CCC organization and appointed a director, Robert Fechner, a former labor union official who served until 1939. The organization and administration of the CCC was a new experiment in operations for a federal government agency. The order directed that the program be supervised jointly by four government departments: Labor, which recruited the young men; War, which operated the camps; the Agriculture; and Interior, which organized and supervised the work projects. A CCC Advisory Council was composed of a representative from each of those departments. In addition, the Office of Education and Veterans Administration participated in the program. To overcome opposition from labor unions, which wanted no training programs started when so many of their members were unemployed, Roosevelt chose Robert Fechner, vice president of the International Association of Machinists and Aerospace Workers, as director of the Corps. William Green, head of the American Federation of Labor, was taken to the first camp to see that there was no job training involved beyond simple manual labor.", "title": "Founding" }, { "paragraph_id": 8, "text": "Reserve officers from the U.S. Army were in charge of the camps, but there was no military training. General Douglas MacArthur was placed in charge of the program, but said that the number of army officers and soldiers assigned to the camps was affecting the readiness of the regular army. However, the army also found numerous benefits in the program. When the draft began in 1940, the policy was to make CCC alumni corporals and sergeants. The CCC also provided command experience to Organized Reserve Corps officers. George Marshall \"embraced\" the CCC, unlike many of his brother officers.", "title": "U.S. Army" }, { "paragraph_id": 9, "text": "Through the CCC, the regular army could assess the leadership performance of both regular and reserve officers. The CCC provided lessons which the army used in developing its wartime mobilization plans for training camps.", "title": "U.S. Army" }, { "paragraph_id": 10, "text": "An implicit goal of the CCC was to restore morale in an era of 25% unemployment for all men and much higher rates for poorly educated teenagers. Jeffrey Suzik argues in \"'Building Better Men': The CCC Boy and the Changing Social Ideal of Manliness\" that the CCC provided an ideology of manly outdoor work to counter the Depression, as well as cash to help the family budget. Through a regime of heavy manual labor, civic and political education, and an all-male living and working environment, the CCC tried to build \"better men\" who would be economically independent and self-reliant. By 1939, there was a shift in the ideal from the hardy manual worker to the highly trained citizen soldier ready for war.", "title": "History" }, { "paragraph_id": 11, "text": "The legislation and mobilization of the program occurred quite rapidly. Roosevelt made his request to Congress on March 21, 1933; the legislation was submitted to Congress the same day; Congress passed it by voice vote on March 31; Roosevelt signed it the same day, then issued an executive order on April 5 creating the agency, appointing Fechner its director, and assigning War Department corps area commanders to begin enrollment. The first CCC enrollee was selected April 8, and lists of unemployed men were subsequently supplied by state and local welfare and relief agencies for immediate enrollment. 
On April 17, the first camp, NF-1, Camp Roosevelt, was established at George Washington National Forest near Luray, Virginia. On June 18, the first of 161 soil erosion control camps was opened in Clayton, Alabama. By July 1, 1933, there were 1,463 working camps with 250,000 junior enrollees 18–25 years of age; 28,000 veterans; 14,000 Native Americans; and 25,000 adults in the Local Experienced Men (LEM) program.", "title": "History" }, { "paragraph_id": 12, "text": "The typical CCC enrollee was a U.S. citizen, unmarried, unemployed male, 18–25 years of age. Normally his family was on local relief. Each enrollee volunteered and, upon passing a physical exam and/or a period of conditioning, was required to serve a minimum six-month period, with the option to serve as many as four periods, or up to two years, if employment outside the Corps was not possible. Enrollees worked 40 hours per week over five days, sometimes including Saturdays if poor weather dictated. In return they received $30 per month (equivalent to $680 in 2022) with a compulsory allotment of $25 (about equivalent to $570 in 2022) sent to a family dependent, as well as housing, food, clothing, and medical care.", "title": "History" }, { "paragraph_id": 13, "text": "", "title": "History" }, { "paragraph_id": 14, "text": "Following the second Bonus Army march on Washington, D.C., President Roosevelt amended the CCC program on May 11, 1933, to include work opportunities for veterans. Veteran qualifications differed from the junior enrollee; one needed to be certified by the Veterans Administration by an application. They could be any age, and married or single as long as they were in need of work. Veterans were generally assigned to entire veteran camps. Enrollees were eligible for the following \"rated\" positions to help with camp administration: senior leader, mess steward, storekeeper and two cooks; assistant leader, company clerk, assistant educational advisor and three second cooks. These men received additional pay ranging from $36 to $45 per month depending on their rating.", "title": "History" }, { "paragraph_id": 15, "text": "Each CCC camp was located in the area of particular conservation work to be performed and organized around a complement of up to 200 civilian enrollees in a designated numbered \"company\" unit. The CCC camp was a temporary community in itself, structured to have barracks (initially Army tents) for 50 enrollees each, officer/technical staff quarters, medical dispensary, mess hall, recreation hall, educational building, lavatory and showers, technical/administrative offices, tool room/blacksmith shop and motor pool garages.", "title": "History" }, { "paragraph_id": 16, "text": "The company organization of each camp had a dual-authority supervisory staff: firstly, Department of War personnel or Reserve officers (until July 1, 1939), a \"company commander\" and junior officer, who were responsible for overall camp operation, logistics, education and training; and secondly, ten to fourteen technical service civilians, including a camp \"superintendent\" and \"foreman\", employed by either the Departments of Interior or Agriculture, responsible for the particular fieldwork. Also included in camp operation were several non-technical supervisor LEMs, who provided knowledge of the work at hand, \"lay of the land,\" and paternal guidance for inexperienced enrollees. Enrollees were organized into work detail units called \"sections\" of 25 men each, according to the barracks they resided in. 
Each section had an enrollee \"senior leader\" and \"assistant leader\" who were accountable for the men at work and in the barracks.", "title": "History" }, { "paragraph_id": 17, "text": "The CCC performed 300 types of work projects in nine approved general classifications:", "title": "History" }, { "paragraph_id": 18, "text": "The responses to this seven-month experimental conservation program were enthusiastic. On October 1, 1933, Director Fechner was directed to arrange for the second period of enrollment. By January 1934, 300,000 men were enrolled. In July 1934, this cap was increased by 50,000 to include men from Midwest states that had been affected by drought. The temporary tent camps had also developed to include wooden barracks. An education program had been established, emphasizing job training and literacy.", "title": "History" }, { "paragraph_id": 19, "text": "Approximately 55% of enrollees were from rural communities, a majority of which were non-farm; 45% came from urban areas. Level of education for the enrollee averaged 3% illiterate; 38% had less than eight years of school; 48% did not complete high school; and 11% were high school graduates. At the time of entry, 70% of enrollees were malnourished and poorly clothed. Few had work experience beyond occasional odd jobs. Peace was maintained by the threat of \"dishonorable discharge\". \"This is a training station; we're going to leave morally and physically fit to lick 'Old Man Depression,'\" boasted the newsletter, Happy Days, of a North Carolina camp.", "title": "History" }, { "paragraph_id": 20, "text": "Because of the power of conservative Solid South white Democrats in Congress, who insisted on racial segregation, most New Deal programs were racially segregated; blacks and whites rarely worked alongside each other. At this time, all the states of the South had passed legislation imposing racial segregation and, since the turn of the century, laws and constitutional provisions that disenfranchised most blacks; they were excluded from formal politics. Because of discrimination by white officials at the local and state levels, blacks in the South did not receive as many benefits as whites from New Deal programs.", "title": "History" }, { "paragraph_id": 21, "text": "In the first few weeks of operation, CCC camps in the North were integrated. By July 1935, however, all camps in the United States were segregated. Enrollment peaked at the end of 1935, when there were 500,000 men in 2,600 camps in operation in every state. All received equal pay and housing. Black leaders lobbied to secure leadership roles. Adult white men held the major leadership roles in all the camps. Director Fechner refused to appoint black adults to any supervisory positions except that of education director in the all-black camps.", "title": "History" }, { "paragraph_id": 22, "text": "The CCC operated a separate division for members of federally recognized tribes: the \"Indian Emergency Conservation Work Division\" (IECW or CCC-ID). Native men from reservations worked on roads, bridges, clinics, shelters, and other public works near their reservations. Although they were organized as groups classified as camps, no permanent camps were established for Native Americans. Instead, organized groups moved with their families from project to project and were provided with an additional rental allowance. The CCC often provided the only paid work, as many reservations were in remote rural areas. 
Enrollees had to be between the ages of 17 and 35.", "title": "History" }, { "paragraph_id": 23, "text": "During 1933, about half the male heads of households on the Sioux reservations in South Dakota were employed by the CCC-ID. With grants from the Public Works Administration (PWA), the Indian Division built schools and conducted a road-building program in and around many reservations to improve infrastructure. The mission was to reduce erosion and improve the value of Indian lands. Crews built dams of many types on creeks, then sowed grass on the eroded areas from which the damming material had been taken. They built roads and planted shelter-belts on federal lands. The steady income helped participants regain self-respect, and many used the funds to improve their lives. John Collier, the federal Commissioner of Indian Affairs and Daniel Murphy, the director of the CCC-ID, both based the program on Indian self-rule and the restoration of tribal lands, governments, and cultures. The next year, Congress passed the Indian Reorganization Act of 1934, which ended allotments and helped preserve tribal lands, and encouraged tribes to re-establish self-government.", "title": "History" }, { "paragraph_id": 24, "text": "Collier said of the CCC-Indian Division, \"no previous undertaking in Indian Service has so largely been the Indians' own undertaking\". Educational programs trained participants in gardening, stock raising, safety, native arts, and some academic subjects. IECW differed from other CCC activities in that it explicitly trained men in skills to be carpenters, truck drivers, radio operators, mechanics, surveyors, and technicians. With the passage of the National Defense Vocational Training Act of 1941, enrollees began participating in defense-oriented training. The government paid for the classes and after students completed courses and passed a competency test, guaranteed automatic employment in defense work. A total of 85,000 Native Americans were enrolled in this training. This proved valuable social capital for the 24,000 alumni who later served in the military and the 40,000 who left the reservations for city jobs supporting the war effort.", "title": "History" }, { "paragraph_id": 25, "text": "Responding to public demand to alleviate unemployment, Congress approved the Emergency Relief Appropriation Act of 1935, on April 8, 1935, which included continued funding for the CCC program through March 31, 1937. The age limit was expanded to 17–28 to include more men. April 1, 1935, to March 31, 1936, was the period of greatest activity and work accomplished by the CCC program. Enrollment peaked at 505,782 in about 2,900 camps by August 31, 1935, followed by a reduction to 350,000 enrollees in 2,019 camps by June 30, 1936. During this period the public response to the CCC program was overwhelmingly popular. A Gallup poll of April 18, 1936, asked: \"Are you in favor of the CCC camps?\"; 82% of respondents said \"yes\", including 92% of Democrats and 67% of Republicans.", "title": "History" }, { "paragraph_id": 26, "text": "On June 28, 1937, the Civilian Conservation Corps was legally established and transferred from its original designation as the Emergency Conservation Work program. Funding was extended for three more years by Public Law No. 163, 75th Congress, effective July 1, 1937. 
Congress changed the age limits to 17–23 years old and changed the requirement that enrollees be on relief to \"not regularly in attendance at school, or possessing full-time employment.\" The 1937 law mandated the inclusion of vocational and academic training for a minimum of 10 hours per week. Students in school were allowed to enroll during summer vacation. During this period, the CCC forces contributed to disaster relief following 1937 floods in New York, Vermont, and the Ohio and Mississippi river valleys, and response and clean-up after the 1938 hurricane in New England.", "title": "History" }, { "paragraph_id": 27, "text": "In 1939 Congress ended the independent status of the CCC, transferring it to the control of the Federal Security Agency. The National Youth Administration, U.S. Employment Service, the Office of Education, and the Works Progress Administration also had some responsibilities. About 5,000 reserve officers serving in the camps were affected, as they were transferred to federal Civil Service, and military ranks and titles were eliminated. Despite the loss of overt military leadership in the camps by July 1940, with war underway in Europe and Asia, the government directed an increasing number of CCC projects to resources for national defense. It developed infrastructure for military training facilities and forest protection. By 1940 the CCC was no longer wholly a relief agency, was rapidly losing its non-military character, and it was becoming a system for work-training, as its ranks had become increasingly younger and inexperienced.", "title": "History" }, { "paragraph_id": 28, "text": "Although the CCC was probably the most popular New Deal program, it never was authorized as a permanent agency. The program was reduced in scale as the Depression waned and employment opportunities improved. After conscription began in 1940, fewer eligible young men were available. Following the attack on Pearl Harbor in December 1941, the Roosevelt administration directed all federal programs to emphasize the war effort. Most CCC work, except for wildland firefighting, was shifted onto U.S. military bases to help with construction.", "title": "History" }, { "paragraph_id": 29, "text": "The CCC disbanded one year earlier than planned, as the 77th United States Congress ceased funding it. Operations were formally concluded at the end of the federal fiscal year on June 30, 1942. The end of the CCC program and closing of the camps involved arrangements to leave the incomplete work projects in the best possible state, the separation of about 1,800 appointed employees, the transfer of CCC property to the War and Navy Departments and other agencies, and the preparation of final accountability records. Liquidation of the CCC was ordered by Congress by the Labor-Federal Security Appropriation Act (56 Stat. 569) on July 2, 1942, and virtually completed on June 30, 1943. Liquidation appropriations for the CCC continued through April 20, 1948.", "title": "History" }, { "paragraph_id": 30, "text": "Some former CCC sites in good condition were reactivated from 1941 to 1947 as Civilian Public Service camps where conscientious objectors performed \"work of national importance\" as an alternative to military service. Other camps were used to hold Japanese, German and Italian Americans interned under the Western Defense Command's Enemy Alien Control Program, as well as Axis prisoners of war. Most of the Japanese American internment camps were built by the people held there. 
After the CCC disbanded, the federal agencies responsible for public lands organized their own seasonal fire crews, modeled after the CCC. These have performed a firefighting function formerly done by the CCC and provided the same sort of outdoor work experience for young people. Approximately 47 young men have died while in this line of duty.", "title": "History" }, { "paragraph_id": 31, "text": "In several cities where CCC workers worked, statues were erected to commemorate them.", "title": "Statues" }, { "paragraph_id": 32, "text": "The CCC program was never officially terminated. Congress provided funding for closing the remaining camps in 1942 with the equipment being reallocated. It became a model for conservation programs that were implemented in the period after World War II. Present-day corps are national, state, and local programs that engage primarily youth and young adults (ages 16–25) in community service, training, and educational activities. The nation's approximately 113 corps programs operate in 41 of the 50 states and Washington, D.C. During 2004, they enrolled more than 23,000 young people. The Corps Network, known originally as the National Association of Service and Conservation Corps (NASCC), works to expand and enhance corps-type programs throughout the country. The Corps Network began in 1985 when the nation's first 24 Corps directors banded together to secure an advocate at the federal level and a repository of information on how best to start and manage a corps. Early financial assistance from the Ford, Hewlett and Mott Foundations was critical to establishing the association.", "title": "Inspired programs" }, { "paragraph_id": 33, "text": "Similar active programs in the United States are: the National Civilian Community Corps, part of the AmeriCorps program, a team-based national service program in which young adults ages 18–24 spend 10 months working for non-profit and government organizations; and the Civilian Conservation Corps, USA, (CCCUSA) managed by its president, Thomas Hark, in 2016. Hark, his co-founder Mike Rama, currently the Deputy Director of the Corporate Eco Forum (CEF) founded by M. R. Rangaswami, and their team of strategic advisors have reimagined the federal Civilian Conservation Corps program of the 1930s as a private, locally governed, national social franchise. The goal of this recently established CCCUSA is to enroll a million young people annually, building a core set of values in each enrollee, who will then become the catalyst in their own communities and states to create a more civil society and stronger nation.", "title": "Inspired programs" }, { "paragraph_id": 34, "text": "The CCC program became a model for the creation of team-based national service youth conservation programs such as the Student Conservation Association (SCA). The SCA, founded in 1959, is a nonprofit organization that offers conservation internships and summer trail crew opportunities to more than 4,000 people each year.", "title": "Inspired programs" }, { "paragraph_id": 35, "text": "In 1976, Governor of California Jerry Brown established the California Conservation Corps. This program had many similar characteristics - residential centers, high expectations for participation, and emphasis on hard work on public lands. Young adults from different backgrounds were recruited for a term of one year. 
Corps members attended a training session called the Corpsmember Orientation Motivation Education and Training (COMET) program before being assigned to one of the various centers. Project work is also similar to the original CCC of the 1930s - work on public forests, state and federal parks.", "title": "Inspired programs" }, { "paragraph_id": 36, "text": "The Nevada Conservation Corps is a non-profit organization that partners with public land management agencies such as the Bureau of Land Management, United States Forest Service, National Park Service, and Nevada State Parks to complete conservation and restoration projects throughout Nevada. Conservation work includes fuel reductions through thinning, constructing and maintaining trails, invasive species removal, and performing biological surveys. The Nevada Conservation Corps was created through the Great Basin Institute and is part of the AmeriCorps program.", "title": "Inspired programs" }, { "paragraph_id": 37, "text": "Conservation Corps Minnesota & Iowa provides environmental stewardship and service-learning opportunities to youth and young adults while accomplishing conservation, natural resource management projects and emergency response work through its Young Adult Program and the Summer Youth Program. These programs emphasize the development of job and life skills by conservation and community service work.", "title": "Inspired programs" }, { "paragraph_id": 38, "text": "The Montana Conservation Corps (MCC) is a non-profit organization with a mission to equip young people with the skills and values to be vigorous citizens who improve their communities and environment. Collectively, MCC crews contribute more than 90,000 work hours each year. The MCC was established in 1991 by Montana's Human Resource Development Councils in Billings, Bozeman and Kalispell. Originally, it was a summer program for disadvantaged youth, although it has grown into an AmeriCorps-sponsored non-profit organization with six regional offices that serve Montana, Idaho, Wyoming, North Dakota, and South Dakota. All regions also offer Montana YES (Youth Engaged in Service) summer programs for teenagers who are 14 to 17 years old.", "title": "Inspired programs" }, { "paragraph_id": 39, "text": "Established in 1995, Environmental Corps, now Texas Conservation Corps (TxCC), is an American YouthWorks program which allows youth, ages 17 to 28, to contribute to the restoration and preservation of parks and public lands in Texas. The only conservation corps in Texas, TxcC is a nonprofit corporation based in Austin, Texas, which serves the entire state. Their work ranges from disaster relief to trail building to habitat restoration. TxCC has done projects in national, state, and city parks.", "title": "Inspired programs" }, { "paragraph_id": 40, "text": "The Washington Conservation Corps (WCC) is a sub-agency of the Washington State Department of Ecology. It employs men and women 18 to 25 years old in a program to protect and enhance Washington's natural resources. WCC is a part of the AmeriCorps program.", "title": "Inspired programs" }, { "paragraph_id": 41, "text": "The Vermont Youth Conservation Corps (VYCC) is a non-profit, youth service and education organization that hires Corps Members, aged 16–24, to work on high-priority conservation projects in Vermont. Through these work projects, Corps Members develop a strong work ethic, strengthen their leadership skills, and learn how to take personal responsibility for their actions. 
VYCC Crews work at VT State Parks, U.S. Forest Service Campgrounds, in local communities, and throughout the state's backcountry. The VYCC has also given aid to a similar program in North Carolina, which is currently in its infancy.", "title": "Inspired programs" }, { "paragraph_id": 42, "text": "The Youth Conservation Corps is a youth conservation program present in federal lands around the country. The program gives youth aged 13–17 the opportunity to participate in conservation projects in a team setting. YCC programs are available in land managed by the National Park Service, the Forest Service, and the Fish and Wildlife Service. Projects can last up to 10 weeks and typically run over the summer. Some YCC programs are residential, meaning the participants are given housing on the land they work on. Projects may necessitate youth to camp in backcountry settings in order to work on trails or campsites. Most require youth to commute daily or house youth for only a few days a week. Youth are typically paid for their work. YCC programs contribute to the maintenance of public lands and instill a value for hard work and the outdoors in those who participate.", "title": "Inspired programs" }, { "paragraph_id": 43, "text": "Conservation Legacy is a non-profit employment, job training, and education organization with locations across the United States including Arizona Conservation Corps in Tucson and Flagstaff, Arizona; Conservation Corps New Mexico in Las Cruces, New Mexico; Southwest Conservation Corps in Durango and Salida, Colorado; and Southeast Conservation Corps in Chattanooga, Tennessee. Conservation Legacy also operates an AmeriCorps VISTA team serving to improve the environment and economies of historic mining communities in the American West and Appalachia. Conservation Legacy also hosts the Environmental Stewards Program - providing internships with federal, state, municipal and NGO land management agencies nationwide. Conservation Legacy formed as a merger of the Southwest Youth Corps, San Luis Valley Youth Corps, The Youth Corps of Southern Arizona, and Coconino Rural Environmental Corps.", "title": "Inspired programs" }, { "paragraph_id": 44, "text": "Conservation Legacy engages young adults ages 14 to 26 and U.S. military veterans of all ages in personal and professional development experiences involving conservation projects on public lands. Corp members live, work, and learn in teams of six to eight for terms of service ranging from 3 months to 1 year.", "title": "Inspired programs" }, { "paragraph_id": 45, "text": "The Sea Ranger Service is a social enterprise, based in Netherlands, that has taken its inspiration from the Civilian Conservation Corps in running a permanent youth training program, supported by veterans, to manage ocean areas and carry out underwater landscape restoration. Unemployed youths are trained up as Sea Rangers during a bootcamp and subsequently offered full-time employment to manage and regenerate Marine Protected Areas and aid ocean conservation. The Sea Ranger Service works in close cooperation with the Dutch government and national maritime authorities.", "title": "Inspired programs" }, { "paragraph_id": 46, "text": "The Aina Corps performed environmental restoration work in Hawaii in 2020, funded by the CARES Act.", "title": "Inspired programs" } ]
The Civilian Conservation Corps (CCC) was a voluntary government work relief program that ran from 1933 to 1942 in the United States for unemployed, unmarried men ages 18–25 and eventually expanded to ages 17–28. The CCC was a major part of President Franklin D. Roosevelt's New Deal that supplied manual labor jobs related to the conservation and development of natural resources in rural lands owned by federal, state, and local governments. The CCC was designed to supply jobs for young men and to relieve families who had difficulty finding jobs during the Great Depression in the United States. Robert Fechner was the first director of this agency, succeeded by James McEntee following Fechner's death. The largest enrollment at any one time was 300,000. Through the course of its nine years in operation, three million young men took part in the CCC, which provided them with shelter, clothing, and food, together with a wage of $30 per month. The American public made the CCC the most popular of all the New Deal programs. Sources written at the time claimed an individual's enrollment in the CCC led to improved physical condition, heightened morale, and increased employability. The CCC also led to a greater public awareness and appreciation of the outdoors and the nation's natural resources, and the continued need for a carefully planned, comprehensive national program for the protection and development of natural resources. The CCC operated separate programs for veterans and Native Americans. Approximately 15,000 Native Americans took part in the program, helping them weather the Great Depression. By 1942, with World War II raging and the draft in effect, the need for work relief declined, and Congress voted to close the program.
Caribbean Sea
The Caribbean Sea is a sea of the Atlantic Ocean in the tropics of the Western Hemisphere. It is bounded by Mexico and Central America to the west and southwest, to the north by the Greater Antilles starting with Cuba, to the east by the Lesser Antilles, and to the south by the northern coast of South America. The Gulf of Mexico lies to the northwest. The entire Caribbean Sea area, the West Indies' numerous islands, and adjacent coasts are collectively known as the Caribbean. The Caribbean Sea is one of the largest seas and has an area of about 2,754,000 km² (1,063,000 sq mi). The sea's deepest point is the Cayman Trough, between the Cayman Islands and Jamaica, at 7,686 m (25,217 ft) below sea level. The Caribbean coastline has many gulfs and bays: the Gulf of Gonâve, the Gulf of Venezuela, the Gulf of Darién, Golfo de los Mosquitos, the Gulf of Paria and the Gulf of Honduras. The Caribbean Sea has the world's second-largest barrier reef, the Mesoamerican Barrier Reef. It runs 1,000 km (620 mi) along the coasts of Mexico, Belize, Guatemala, and Honduras. The name Caribbean derives from the Caribs, one of the region's dominant Native American groups at the time of European contact during the late 15th century. After Christopher Columbus landed in the Bahamas in 1492, the Spanish term Antillas was applied to the lands; stemming from this, the Sea of the Antilles became a common alternative name for the Caribbean Sea in various European languages. Spanish dominance in the region remained undisputed during the first century of European colonization. From the 16th century, Europeans visiting the Caribbean region distinguished the "South Sea" (the Pacific Ocean south of the isthmus of Panama) from the "North Sea" (the Caribbean Sea north of the same isthmus). The Caribbean Sea had been unknown to the populations of Eurasia until 1492, when Christopher Columbus sailed into Caribbean waters on a quest to find a sea route to Asia. At that time the Americas were generally unknown to most Europeans, although they had been visited in the 10th century by the Vikings. Following Columbus's discovery of the islands, the area was quickly colonized by several Western cultures (initially Spain, then later England, the Dutch Republic, France, Courland and Denmark). Following the colonization of the Caribbean islands, the Caribbean Sea became a busy area for European-based marine trading and transport, and this commerce eventually attracted pirates such as Samuel Bellamy and Blackbeard. As of 2015 the area is home to 22 island territories and borders 12 continental countries. The International Hydrographic Organization formally defines the limits of the Caribbean Sea. Although Barbados is an island on the same continental shelf, it is considered to be in the Atlantic Ocean rather than the Caribbean Sea. The Caribbean Sea is an oceanic sea largely situated on the Caribbean Plate and separated from the open ocean by several island arcs of various ages. The youngest stretches from the Lesser Antilles to the Virgin Islands to the northeast of Trinidad and Tobago off the coast of Venezuela. This arc was formed by the collision of the South American Plate with the Caribbean Plate, and it includes active and extinct volcanoes such as Mount Pelée, the Quill on Sint Eustatius in the Caribbean Netherlands, La Soufrière in Saint Vincent and the Grenadines, and Morne Trois Pitons on Dominica. The larger islands in the northern part of the sea (Cuba, Hispaniola, Jamaica and Puerto Rico) lie on an older island arc.
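As a quick check (not part of the original article), the paired metric and imperial figures quoted earlier follow from standard conversion factors, which a few lines of Python reproduce:

# Sanity check (not from the article) of the quoted unit conversions,
# using standard factors: 1 km^2 = 0.386102 sq mi and 1 m = 3.28084 ft.
KM2_TO_SQMI = 0.386102
M_TO_FT = 3.28084

area_km2 = 2_754_000   # quoted surface area of the Caribbean Sea
depth_m = 7_686        # quoted depth of the Cayman Trough

print(f"{area_km2 * KM2_TO_SQMI:,.0f} sq mi")  # ~1,063,325, rounded to 1,063,000 in the text
print(f"{depth_m * M_TO_FT:,.0f} ft")          # ~25,217, matching the text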
The geological age of the Caribbean Sea is estimated at between 160 and 180 million years; the sea was formed by a horizontal fracture that split the supercontinent Pangaea in the Mesozoic Era. It is assumed that a proto-Caribbean basin existed by the Devonian period, and that in the early Carboniferous the northward movement of Gondwana and its convergence with Euramerica reduced the basin in size. The next stage of the Caribbean Sea's formation began in the Triassic, when powerful rifting led to the formation of narrow troughs, stretching from modern Newfoundland to the west coast of the Gulf of Mexico, in which siliciclastic sedimentary rocks formed. In the early Jurassic, owing to a powerful marine transgression, water broke into the present area of the Gulf of Mexico, creating a vast shallow pool. Deep basins emerged in the Caribbean during the Middle Jurassic rifting; their emergence marked the beginning of the Atlantic Ocean and contributed to the breakup of Pangaea at the end of the Late Jurassic. During the Cretaceous the Caribbean acquired a shape close to that seen today. In the early Paleogene, owing to marine regression, the Caribbean became separated from the Gulf of Mexico and the Atlantic Ocean by the land of Cuba and Haiti. The Caribbean remained like this for most of the Cenozoic, until the Holocene, when rising ocean levels restored communication with the Atlantic Ocean. The Caribbean's floor is composed of sub-oceanic sediments of deep red clay in the deep basins and troughs; on continental slopes and ridges, calcareous silts are found. Clay minerals have likely been deposited by the mainland rivers Orinoco and Magdalena. Deposits on the bottom of the Caribbean Sea and the Gulf of Mexico have a thickness of about 1 km (0.62 mi). The upper sedimentary layers date from the Mesozoic to the Cenozoic (the last 250 million years) and the lower layers from the Paleozoic to the Mesozoic. The Caribbean sea floor is divided into five basins separated from each other by underwater ridges and mountain ranges. Atlantic Ocean water enters the Caribbean through the Anegada Passage, between the Lesser Antilles and the Virgin Islands, and the Windward Passage, between Cuba and Haiti. The Yucatán Channel between Mexico and Cuba links the Gulf of Mexico with the Caribbean. The deepest points of the sea lie in the Cayman Trough, with depths reaching approximately 7,686 m (25,220 ft); despite this, the Caribbean Sea is considered a relatively shallow sea in comparison to other bodies of water. The pressure of the South American Plate to the east of the Caribbean gives the region of the Lesser Antilles high volcanic activity; a very serious eruption of Mount Pelée in 1902 caused many casualties. The Caribbean sea floor is also home to two oceanic trenches, the Cayman Trench and the Puerto Rico Trench, which put the area at high risk of earthquakes. Underwater earthquakes pose a threat of generating tsunamis, which could have a devastating effect on the Caribbean islands. Scientific data reveal that over the last 500 years the area has seen a dozen earthquakes above magnitude 7.5. Most recently, a magnitude 7.1 earthquake struck Haiti on January 12, 2010. The hydrology of the sea has a high level of homogeneity: annual variations in monthly average surface water temperatures do not exceed 3 °C (5.4 °F).
Over the past 50 years, the Caribbean has gone through three stages: cooling until 1974, a cold phase with peaks during 1974–1976 and 1984–1986, and finally a warming phase with an increase in temperature of 0.6 °C (1.1 °F) per year. Virtually all temperature extremes were associated with the phenomena of El Niño and La Niña. The salinity of the seawater is about 3.6%, and its density is 1,023.5–1,024.0 kg/m³ (63.90–63.93 lb/cu ft). The surface water colour is blue-green to green. The Caribbean's depth in its wider basins and its deep-water temperatures are similar to those of the Atlantic. Atlantic deep water is thought to spill into the Caribbean and contribute to the general deep water of its sea. The surface water (the upper 30 m; 100 ft) acts as an extension of the northern Atlantic, as the Guiana Current and part of the North Equatorial Current enter the sea on the east. On the western side of the sea, the trade winds influence a northerly current which causes an upwelling and a rich fishery near Yucatán. The Caribbean is home to about 9% of the world's coral reefs, covering about 50,000 km² (19,000 sq mi), most of which are located off the Caribbean islands and the Central American coast. Among them stands out the Belize Barrier Reef, with an area of 963 km² (372 sq mi), which was declared a World Heritage Site in 1996. It forms part of the Great Mayan Reef (also known as the Mesoamerican Barrier Reef System, or MBRS) and, at over 1,000 km (600 mi) in length, is the world's second longest. It runs along the Caribbean coasts of Mexico, Belize, Guatemala and Honduras. Since 2005, unusually warm Caribbean waters have increasingly threatened Caribbean coral reefs. Coral reefs support some of the most diverse marine habitats in the world, but they are fragile ecosystems. When tropical waters become unusually warm for extended periods of time, microscopic algae called zooxanthellae, the symbiotic partners living within the coral polyp tissues, die off. These organisms provide food for the corals and give them their color; the result of their death and dispersal is called coral bleaching, and it can lead to the devastation of large areas of reef. Over 42% of corals are completely bleached, and 95% are experiencing some type of whitening. Historically the Caribbean is thought to have contained 14% of the world's coral reefs. The habitats supported by the reefs are critical to such tourist activities as fishing and diving, and provide an annual economic value to Caribbean nations of US$3.1–4.6 billion. Continued destruction of the reefs could severely damage the region's economy. A protocol of the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region came into effect in 1986 to protect endangered Caribbean marine life by forbidding human activities that would further its destruction in various areas. Currently this protocol has been ratified by 15 countries. Several charitable organisations have also been formed to preserve Caribbean marine life, such as the Caribbean Conservation Corporation, which seeks to study and protect sea turtles while educating others about them.
In connection with these concerns, the Institute of Marine Sciences and Limnology of the National Autonomous University of Mexico conducted a regional study, funded by the Department of Technical Cooperation of the International Atomic Energy Agency (IAEA), in which specialists from 11 Latin American countries (Colombia, Costa Rica, Cuba, Guatemala, Haiti, Honduras, Mexico, Nicaragua, Panama, the Dominican Republic and Venezuela) plus Jamaica participated. The findings indicate that heavy metals such as mercury, arsenic and lead have been identified in the coastal zone of the Caribbean Sea. The analysis of toxic metals and hydrocarbons is based on the investigation of coastal sediments that have accumulated at depths of less than 50 meters during the last hundred and fifty years. The project results were presented in Vienna at the forum "Water Matters" and at the IAEA's 2011 General Conference. After the Mediterranean, the Caribbean Sea is the second most polluted sea. Pollution (in the form of up to 300,000 tonnes of solid garbage dumped into the Caribbean Sea each year) is progressively endangering marine ecosystems, wiping out species, and harming the livelihoods of the local people, who rely primarily on tourism and fishing. KfW took part in a €25.7 million funding agreement to eliminate marine trash and boost the circular economy in the Caribbean's Small Island Developing States. The project "Sustainable finance methods for marine preservation in the Caribbean" will help remove solid waste and keep it out of the marine and coastal environment by establishing a new facility under the Caribbean Biodiversity Fund (CBF). Non-governmental organizations, universities, public institutions, civil society organizations, and the corporate sector are all eligible for financing. The project is estimated to prevent and remove at least 15,000 tonnes of marine trash, benefiting at least 20,000 people. The climate of the Caribbean is driven by its low latitude and by the tropical ocean currents that run through it. The principal ocean current is the North Equatorial Current, which enters the region from the tropical Atlantic. The climate of the area is tropical, varying from tropical rainforest in some areas to tropical savanna in others; some locations have arid climates with considerable drought in some years. Rainfall varies with elevation, size, and water currents (cool upwellings keep the ABC islands arid). Warm, moist trade winds blow consistently from the east, creating both rainforest and semi-arid climates across the region. The tropical rainforest climates include lowland areas near the Caribbean Sea from Costa Rica north to Belize, as well as the Dominican Republic and Puerto Rico, while the more seasonal dry tropical savanna climates are found in Cuba, northern Venezuela, and southern Yucatán, Mexico. Arid climates are found along the extreme northern coast of Venezuela out to the islands, including Aruba and Curaçao, as well as on the northern tip of Yucatán. Tropical cyclones are a threat to the nations that rim the Caribbean Sea. While landfalls are infrequent, the resulting loss of life and property damage makes them a significant hazard to life in the Caribbean. Tropical cyclones that impact the Caribbean often develop off the west coast of Africa and make their way west across the Atlantic Ocean toward the Caribbean, while other storms develop in the Caribbean itself.
The Caribbean hurricane season as a whole lasts from June through November, with the majority of hurricanes occurring during August and September. On average, around nine tropical storms form each year, with five reaching hurricane strength. According to the National Hurricane Center, 385 hurricanes occurred in the Caribbean between 1494 and 1900. The region has a high level of biodiversity, and many species are endemic to the Caribbean. The vegetation of the region is mostly tropical, but differences in topography, soil and climatic conditions increase species diversity. Porous limestone terraced islands are generally poor in nutrients. It is estimated that 13,000 species of plants grow in the Caribbean, of which 6,500 are endemic. Examples include guaiac wood (Guaiacum officinale), whose flower is the national flower of Jamaica; the Bayahibe rose (Pereskia quisqueyana), the national flower of the Dominican Republic; and the ceiba, the national tree of both Puerto Rico and Guatemala. The mahogany is the national tree of the Dominican Republic and Belize. The caimito (Chrysophyllum cainito) grows throughout the Caribbean. In coastal zones there are coconut palms, and in lagoons and estuaries are found thick areas of black mangrove and red mangrove (Rhizophora mangle). In shallow water, flora and fauna are concentrated around coral reefs, where there is little variation in water temperature, purity and salinity. The leeward sides of lagoons provide areas of growth for sea grasses. Turtle grass (Thalassia testudinum) is common in the Caribbean, as is manatee grass (Syringodium filiforme); the two can grow together or in single-species fields at depths of up to 20 m (66 ft). Another type, shoal grass (Halodule wrightii), grows on sand and mud surfaces at depths of up to 5 m (16 ft). In the brackish water of harbours and estuaries, at depths of less than 2.5 m (8 ft 2 in), widgeongrass (Ruppia maritima) grows. Three species of the genus Halophila (Halophila baillonii, Halophila engelmannii and Halophila decipiens) are found at depths of up to 30 m (98 ft), except for Halophila engelmannii, which does not grow below 5 m (16 ft) and is confined to the Bahamas, Florida, the Greater Antilles and the western part of the Caribbean; Halophila baillonii has been found only in the Lesser Antilles. Marine biota in the region include representatives of both the Indian and Pacific oceans, which were caught in the Caribbean before the emergence of the Isthmus of Panama about four million years ago. In the Caribbean Sea there are around 1,000 documented species of fish, including sharks (bull shark, tiger shark, silky shark and Caribbean reef shark), flying fish, the giant oceanic manta ray, angelfish, the spotfin butterflyfish, parrotfish, the Atlantic goliath grouper, tarpon and moray eels. Throughout the Caribbean there is industrial catching of lobster and of sardines (off the coast of the Yucatán Peninsula). There are 90 species of mammals in the Caribbean, including sperm whales, humpback whales and dolphins. The island of Jamaica is home to seals and manatees. The Caribbean monk seal, which lived in the Caribbean, is considered extinct. Solenodons and hutias are mammals found only in the Caribbean; only one extant species is not endangered. There are 500 species of reptiles, 94% of which are endemic. The islands are inhabited by endemic species such as rock iguanas, as well as by the American crocodile. The blue iguana, endemic to the island of Grand Cayman, is endangered.
The green iguana is invasive to Grand Cayman. The Mona ground iguana, which inhabits the island of Mona, Puerto Rico, is endangered. The rhinoceros iguana, from the island of Hispaniola (shared between Haiti and the Dominican Republic), is also endangered. The region has several types of sea turtle (loggerhead, green turtle, hawksbill, leatherback turtle, Atlantic ridley and olive ridley), some of which are threatened with extinction. Their populations have been greatly reduced since the 17th century: by 2006 the number of green turtles had declined from 91 million to 300,000, and hawksbill turtles from 11 million to fewer than 30,000. All 170 species of amphibians that live in the region are endemic. The habitats of almost all members of the toad family, the poison dart frogs, the tree frogs and the Leptodactylidae (a family of frogs) are limited to only one island. The golden coqui is in serious threat of extinction. In the Caribbean, 600 species of birds have been recorded, of which 163 are endemic, such as the todies, Fernandina's flicker and the palmchat. The American yellow warbler is found in many areas, as is the green heron. Of the endemic species, 48 are threatened with extinction, including the Puerto Rican amazon and the Zapata wren. According to BirdLife International, in 2006 29 species of bird in Cuba were in danger of extinction and two species were officially extinct. The black-fronted piping guan is endangered. The Antilles, along with Central America, lie in the flight path of migrating birds from North America, so the sizes of populations are subject to seasonal fluctuations. Parrots and bananaquits are found in forests; frigatebirds and tropicbirds can be seen over the open sea. The Caribbean region has seen a significant increase in human activity since the colonization period. The sea is one of the largest oil production areas in the world, producing approximately 170 million tons per year. The area also supports a large fishing industry for the surrounding countries, accounting for 500,000 tonnes (490,000 long tons; 550,000 short tons) of fish a year. Human activity in the area also accounts for a significant amount of pollution: the Pan American Health Organization estimated in 1993 that only about 10% of the sewage from the Central American and Caribbean island countries was properly treated before being released into the sea. The Caribbean region supports a large tourism industry. The Caribbean Tourism Organization calculates that about 12 million people a year visit the area, including (in 1991–1992) about 8 million cruise ship tourists. Tourism based upon scuba diving and snorkeling on the coral reefs of many Caribbean islands makes a major contribution to their economies.
[ { "paragraph_id": 0, "text": "The Caribbean Sea is a sea of the Atlantic Ocean in the tropics of the Western Hemisphere. It is bounded by Mexico and Central America to the west and southwest, to the north by the Greater Antilles starting with Cuba, to the east by the Lesser Antilles, and to the south by the northern coast of South America. The Gulf of Mexico lies to the northwest.", "title": "" }, { "paragraph_id": 1, "text": "The entire Caribbean Sea area, the West Indies' numerous islands, and adjacent coasts are collectively known as the Caribbean. The Caribbean Sea is one of the largest seas and has an area of about 2,754,000 km (1,063,000 sq mi). The sea's deepest point is the Cayman Trough, between the Cayman Islands and Jamaica, at 7,686 m (25,217 ft) below sea level. The Caribbean coastline has many gulfs and bays: the Gulf of Gonâve, the Gulf of Venezuela, the Gulf of Darién, Golfo de los Mosquitos, the Gulf of Paria and the Gulf of Honduras.", "title": "" }, { "paragraph_id": 2, "text": "The Caribbean Sea has the world's second-largest barrier reef, the Mesoamerican Barrier Reef. It runs 1,000 km (620 mi) along Mexico, Belize, Guatemala, and Honduras coasts.", "title": "" }, { "paragraph_id": 3, "text": "The name Caribbean derives from the Caribs, one of the region's dominant Native American groups at the time of European contact during the late 15th century. After Christopher Columbus landed in the Bahamas in 1492, the Spanish term Antillas applied to the lands; stemming from this, the Sea of the Antilles became a common alternative name for the \"Caribbean Sea\" in various European languages. Spanish dominance in the region remained undisputed during the first century of European colonization.", "title": "History" }, { "paragraph_id": 4, "text": "From the 16th century, Europeans visiting the Caribbean region distinguished the \"South Sea\" (the Pacific Ocean south of the isthmus of Panama) from the \"North Sea\" (the Caribbean Sea north of the same isthmus).", "title": "History" }, { "paragraph_id": 5, "text": "The Caribbean Sea had been unknown to the populations of Eurasia until 1492, when Christopher Columbus sailed into Caribbean waters on a quest to find a sea route to Asia. At that time the Americas were generally unknown to most Europeans, although they had been visited in the 10th century by the Vikings. Following Columbus's discovery of the islands, the area was quickly colonized by several Western cultures (initially Spain, then later England, the Dutch Republic, France, Courland and Denmark). Following the colonization of the Caribbean islands, the Caribbean Sea became a busy area for European-based marine trading and transports, and this commerce eventually attracted pirates such as Samuel Bellamy and Blackbeard.", "title": "History" }, { "paragraph_id": 6, "text": "As of 2015 the area is home to 22 island territories and borders 12 continental countries.", "title": "History" }, { "paragraph_id": 7, "text": "The International Hydrographic Organization defines the limits of the Caribbean Sea as follows:", "title": "Extent" }, { "paragraph_id": 8, "text": "Although Barbados is an island on the same continental shelf, it is considered to be in the Atlantic Ocean rather than the Caribbean Sea.", "title": "Extent" }, { "paragraph_id": 9, "text": "The Caribbean Sea is an oceanic sea largely situated on the Caribbean Plate. The Caribbean Sea is separated from the ocean by several island arcs of various ages. 
The youngest stretches from the Lesser Antilles to the Virgin Islands to the north east of Trinidad and Tobago off the coast of Venezuela. This arc was formed by the collision of the South American Plate with the Caribbean Plate. It included active and extinct volcanoes such as Mount Pelee, the Quill on Sint Eustatius in the Caribbean Netherlands, La Soufrière in Saint Vincent and the Grenadines and Morne Trois Pitons on Dominica. The larger islands in the northern part of the sea Cuba, Hispaniola, Jamaica and Puerto Rico lie on an older island arc.", "title": "Geology" }, { "paragraph_id": 10, "text": "The geological age of the Caribbean Sea is estimated to be between 160 and 180 million years and was formed by a horizontal fracture that split the supercontinent called Pangea in the Mesozoic Era. It is assumed the proto-caribbean basin existed in the Devonian period and in the early Carboniferous movement of Gondwana to the north and its convergence with the Euramerica basin decreased in size. The next stage of the Caribbean Sea's formation began in the Triassic. Powerful rifting led to the formation of narrow troughs, stretching from modern Newfoundland to the Gulf of Mexico's west coast, forming siliciclastic sedimentary rocks. In the early Jurassic due to powerful marine transgression, water broke into the present area of the Gulf of Mexico creating a vast shallow pool. Deep basins emerged in the Caribbean during the Middle Jurassic rifting. The emergence of these basins marked the beginning of the Atlantic Ocean and contributed to the destruction of Pangaea at the end of the late Jurassic. During the Cretaceous the Caribbean acquired a shape close to today. In the early Paleogene due to marine regression the Caribbean became separated from the Gulf of Mexico and the Atlantic Ocean by the land of Cuba and Haiti. The Caribbean remained like this for most of the Cenozoic until the Holocene when rising water levels of the oceans restored communication with the Atlantic Ocean.", "title": "Geology" }, { "paragraph_id": 11, "text": "The Caribbean's floor is composed of sub-oceanic sediments of deep red clay in the deep basins and troughs. On continental slopes and ridges calcareous silts are found. Clay minerals have likely been deposited by the mainland river Orinoco and the Magdalena River. Deposits on the bottom of the Caribbean Sea and the Gulf of Mexico have a thickness of about 1 km (0.62 mi). Upper sedimentary layers relate to the period from the Mesozoic to the Cenozoic (250 million years ago) and the lower layers from the Paleozoic to the Mesozoic.", "title": "Geology" }, { "paragraph_id": 12, "text": "The Caribbean sea floor is divided into five basins separated from each other by underwater ridges and mountain ranges. Atlantic Ocean water enters the Caribbean through the Anegada Passage between the Lesser Antilles and the Virgin Islands and the Windward Passage between Cuba and Haiti. The Yucatán Channel between Mexico and Cuba links the Gulf of Mexico with the Caribbean. The deepest points of the sea lie in Cayman Trough with depths reaching approximately 7,686 m (25,220 ft). Despite this, the Caribbean Sea is considered a relatively shallow sea in comparison to other bodies of water. The pressure of the South American Plate to the east of the Caribbean causes the region of the Lesser Antilles to have high volcanic activity. 
A very serious eruption of Mount Pelée in 1902 caused many casualties.", "title": "Geology" }, { "paragraph_id": 13, "text": "The Caribbean sea floor is also home to two oceanic trenches: the Cayman Trench and the Puerto Rico Trench, which put the area at a high risk of earthquakes. Underwater earthquakes pose a threat of generating tsunamis which could have a devastating effect on the Caribbean islands. Scientific data reveals that over the last 500 years the area has seen a dozen earthquakes above 7.5 magnitude. Most recently, a 7.1 earthquake struck Haiti on January 12, 2010.", "title": "Geology" }, { "paragraph_id": 14, "text": "The hydrology of the sea has a high level of homogeneity. Annual variations in monthly average water temperatures at the surface do not exceed 3 °C (5.4 °F). Over the past 50 years, the Caribbean has gone through three stages: cooling until 1974, a cold phase with peaks during 1974–1976 and 1984–1986, and finally a warming phase with an increase in temperature of 0.6 °C (1.1 °F) per year. Virtually all temperature extremes were associated with the phenomena of El Niño and La Niña. The salinity of the seawater is about 3.6%, and its density is 1,023.5–1,024.0 kg/m (63.90–63.93 lb/cu ft). The surface water colour is blue-green to green.", "title": "Oceanography" }, { "paragraph_id": 15, "text": "The Caribbean's depth in its wider basins and deep-water temperatures are similar to those of the Atlantic. Atlantic deep water is thought to spill into the Caribbean and contribute to the general deep water of its sea. The surface water (30 m; 100 ft) acts as an extension of the northern Atlantic as the Guiana Current and part of the North Equatorial Current enter the sea on the east. On the western side of the sea, the trade winds influence a northerly current which causes an upwelling and a rich fishery near Yucatán.", "title": "Oceanography" }, { "paragraph_id": 16, "text": "The Caribbean is home to about 9% of the world's coral reefs, covering about 50,000 km (19,000 sq mi), most of which are located off the Caribbean Islands and the Central American coast. Among them stands out the Belize Barrier Reef, with an area of 963 km (372 sq mi), which was declared a World Heritage Site in 1996. It forms part of the Great Mayan Reef (also known as the MBRS) and, being over 1,000 km (600 mi) in length, is the world's second longest. It runs along the Caribbean coasts of Mexico, Belize, Guatemala and Honduras.", "title": "Ecology" }, { "paragraph_id": 17, "text": "Since 2005 unusually warm Caribbean waters have been increasingly threatening Caribbean coral reefs. Coral reefs support some of the most diverse marine habitats in the world, but they are fragile ecosystems. When tropical waters become unusually warm for extended periods of time, microscopic plants called zooxanthellae, which are symbiotic partners living within the coral polyp tissues, die off. These plants provide food for the corals and give them their color. The result of the death and dispersal of these tiny plants is called coral bleaching, and can lead to the devastation of large areas of reef. Over 42% of corals are completely bleached, and 95% are experiencing some type of whitening. Historically the Caribbean is thought to contain 14% of the world's coral reefs.", "title": "Ecology" }, { "paragraph_id": 18, "text": "The habitats supported by the reefs are critical to such tourist activities as fishing and diving, and provide an annual economic value to Caribbean nations of US$3.1–4.6 billion. 
Continued destruction of the reefs could severely damage the region's economy. A Protocol of the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region came in effect in 1986 to protect the various endangered marine life of the Caribbean through forbidding human activities that would advance the continued destruction of such marine life in various areas. Currently this protocol has been ratified by 15 countries. Also, several charitable organisations have been formed to preserve the Caribbean marine life, such as Caribbean Conservation Corporation which seeks to study and protect sea turtles while educating others about them.", "title": "Ecology" }, { "paragraph_id": 19, "text": "In connection with the foregoing, the Institute of Marine Sciences and Limnology of the National Autonomous University of Mexico, conducted a regional study, funded by the Department of Technical Cooperation of the International Atomic Energy Agency, in which specialists from 11 Latin American countries (Colombia, Costa Rica, Cuba, Guatemala, Haiti, Honduras, Mexico, Nicaragua, Panama, Dominican Republic, Venezuela) plus Jamaica participated. The findings indicate that heavy metals such as mercury, arsenic, and lead, have been identified in the coastal zone of the Caribbean Sea. Analysis of toxic metals and hydrocarbons is based on the investigation of coastal sediments that have accumulated less than 50 meters deep during the last hundred and fifty years. The project results were presented in Vienna in the forum \"Water Matters\", and the 2011 General Conference of said multilateral organization.", "title": "Ecology" }, { "paragraph_id": 20, "text": "After the Mediterranean, the Caribbean Sea is the second most polluted sea. Pollution (in the form of up to 300,000 tonnes of solid garbage dumped into the Caribbean Sea each year) is progressively endangering marine ecosystems, wiping out species, and harming the livelihoods of the local people, which is primarily reliant on tourism and fishing.", "title": "Ecology" }, { "paragraph_id": 21, "text": "KfW took part in a €25.7 million funding agreement to eliminate marine trash and boost the circular economy in the Caribbean's Small Island Developing States. The project \"Sustainable finance methods for marine preservation in the Caribbean\" will assist remove solid waste and keep it out of the marine and coastal environment by establishing a new facility under the Caribbean Biodiversity Fund (CBF). Non-governmental organizations, universities, public institutions, civil society organizations, and the corporate sector are all eligible for financing. The project is estimated to prevent and remove at least 15 000 tonnes of marine trash, benefiting at least 20 000 individuals.", "title": "Ecology" }, { "paragraph_id": 22, "text": "The climate of the Caribbean is driven by the low latitude and tropical ocean currents that run through it. The principal ocean current is the North Equatorial Current, which enters the region from the tropical Atlantic. The climate of the area is tropical, varying from tropical rainforest in some areas to tropical savanna in others. There are also some locations that are arid climates with considerable drought in some years.", "title": "Climate" }, { "paragraph_id": 23, "text": "Rainfall varies with elevation, size, and water currents (cool upwelling keep the ABC islands arid). 
Warm, moist trade winds blow consistently from the east, creating both rainforest and semi-arid climates across the region. The tropical rainforest climates include lowland areas near the Caribbean Sea from Costa Rica north to Belize, as well as the Dominican Republic and Puerto Rico, while the more seasonal dry tropical savanna climates are found in Cuba, northern Venezuela, and southern Yucatán, Mexico. Arid climates are found along the extreme northern coast of Venezuela out to the islands including Aruba and Curaçao, as well as the northern tip of Yucatán", "title": "Climate" }, { "paragraph_id": 24, "text": "Tropical cyclones are a threat to the nations that rim the Caribbean Sea. While landfalls are infrequent, the resulting loss of life and property damage makes them a significant hazard to life in the Caribbean. Tropical cyclones that impact the Caribbean often develop off the West coast of Africa and make their way west across the Atlantic Ocean toward the Caribbean, while other storms develop in the Caribbean itself. The Caribbean hurricane season as a whole lasts from June through November, with the majority of hurricanes occurring during August and September. On average around nine tropical storms form each year, with five reaching hurricane strength. According to the National Hurricane Center 385 hurricanes occurred in the Caribbean between 1494 and 1900.", "title": "Climate" }, { "paragraph_id": 25, "text": "The region has a high level of biodiversity and many species are endemic to the Caribbean.", "title": "Flora and fauna" }, { "paragraph_id": 26, "text": "The vegetation of the region is mostly tropical but differences in topography, soil and climatic conditions increase species diversity. Where there are porous limestone terraced islands these are generally poor in nutrients. It is estimated that 13,000 species of plants grow in the Caribbean of which 6,500 are endemic. For example, guaiac wood (Guaiacum officinale), the flower of which is the national flower of Jamaica and the Bayahibe rose (Pereskia quisqueyana) which is the national flower of the Dominican Republic and the ceiba which is the national tree of both Puerto Rico and Guatemala. The mahogany is the national tree of the Dominican Republic and Belize. The caimito (Chrysophyllum cainito) grows throughout the Caribbean. In coastal zones there are coconut palms and in lagoons and estuaries are found thick areas of black mangrove and red mangrove (Rhizophora mangle).", "title": "Flora and fauna" }, { "paragraph_id": 27, "text": "In shallow water flora and fauna is concentrated around coral reefs where there is little variation in water temperature, purity and salinity. Leeward side of lagoons provide areas of growth for sea grasses. Turtle grass (Thalassia testudinum) is common in the Caribbean as is manatee grass (Syringodium filiforme) which can grow together as well as in fields of single species at depths up to 20 m (66 ft). Another type shoal grass (Halodule wrightii) grows on sand and mud surfaces at depths of up to 5 m (16 ft). In brackish water of harbours and estuaries at depths less than 2.5 m (8 ft 2 in) widgeongrass (Ruppia maritima) grows. Representatives of three species belonging to the genus Halophila, (Halophila baillonii, Halophila engelmannii and Halophila decipiens) are found at depths of up to 30 m (98 ft) except for Halophila engelmani which does not grow below 5 m (16 ft) and is confined to the Bahamas, Florida, the Greater Antilles and the western part of the Caribbean. 
Halophila baillonii has been found only in the Lesser Antilles.", "title": "Flora and fauna" }, { "paragraph_id": 28, "text": "Marine biota in the region have representatives of both the Indian and Pacific oceans which were caught in the Caribbean before the emergence of the Isthmus of Panama four million years ago. In the Caribbean Sea there are around 1,000 documented species of fish, including sharks (bull shark, tiger shark, silky shark and Caribbean reef shark), flying fish, giant oceanic manta ray, angel fish, spotfin butterflyfish, parrotfish, Atlantic Goliath grouper, tarpon and moray eels. Throughout the Caribbean there is industrial catching of lobster and sardines (off the coast of Yucatán Peninsula).", "title": "Flora and fauna" }, { "paragraph_id": 29, "text": "There are 90 species of mammals in the Caribbean including sperm whales, humpback whales and dolphins. The island of Jamaica is home to seals and manatees. The Caribbean monk seal which lived in the Caribbean is considered extinct. Solenodons and hutias are mammals found only in the Caribbean; only one extant species is not endangered.", "title": "Flora and fauna" }, { "paragraph_id": 30, "text": "There are 500 species of reptiles (94% of which are endemic). Islands are inhabited by some endemic species such as rock iguanas and American crocodile. The blue iguana, endemic to the island of Grand Cayman, is endangered. The green iguana is invasive to Grand Cayman. The Mona ground iguana which inhabits the island of Mona, Puerto Rico, is endangered. The rhinoceros iguana from the island of Hispaniola which is shared between Haiti and the Dominican Republic is also endangered. The region has several types of sea turtle (loggerhead, green turtle, hawksbill, leatherback turtle, Atlantic ridley and olive ridley). Some species are threatened with extinction. Their populations have been greatly reduced since the 17th century – the number of green turtles has declined from 91 million to 300,000 and hawksbill turtles from 11 million to less than 30,000 by 2006.", "title": "Flora and fauna" }, { "paragraph_id": 31, "text": "All 170 species of amphibians that live in the region are endemic. The habitats of almost all members of the toad family, poison dart frogs, tree frogs and leptodactylidae (a type of frog) are limited to only one island. The Golden coqui is in serious threat of extinction.", "title": "Flora and fauna" }, { "paragraph_id": 32, "text": "In the Caribbean, 600 species of birds have been recorded, of which 163 are endemic such as todies, Fernandina's flicker and palmchat. The American yellow warbler is found in many areas, as is the green heron. Of the endemic species 48 are threatened with extinction including the Puerto Rican amazon, and the Zapata wren. According to Birdlife International in 2006 in Cuba 29 species of bird are in danger of extinction and two species officially extinct. The black-fronted piping guan is endangered. The Antilles along with Central America lie in the flight path of migrating birds from North America so the size of populations is subject to seasonal fluctuations. Parrots and bananaquits are found in forests. Over the open sea can be seen frigatebirds and tropicbirds.", "title": "Flora and fauna" }, { "paragraph_id": 33, "text": "The Caribbean region has seen a significant increase in human activity since the colonization period. The sea is one of the largest oil production areas in the world, producing approximately 170 million tons per year. 
The area also generates a large fishing industry for the surrounding countries, accounting for 500,000 tonnes (490,000 long tons; 550,000 short tons) of fish a year.", "title": "Economy and human activity" }, { "paragraph_id": 34, "text": "Human activity in the area also accounts for a significant amount of pollution. The Pan American Health Organization estimated in 1993 that only about 10% of the sewage from the Central American and Caribbean Island countries is properly treated before being released into the sea.", "title": "Economy and human activity" }, { "paragraph_id": 35, "text": "The Caribbean region supports a large tourism industry. The Caribbean Tourism Organization calculates that about 12 million people a year visit the area, including (in 1991–1992) about 8 million cruise ship tourists. Tourism based upon scuba diving and snorkeling on coral reefs of many Caribbean islands makes a major contribution to their economies.", "title": "Economy and human activity" }, { "paragraph_id": 36, "text": "", "title": "Further reading" } ]
The Caribbean Sea is a sea of the Atlantic Ocean in the tropics of the Western Hemisphere. It is bounded by Mexico and Central America to the west and southwest, to the north by the Greater Antilles starting with Cuba, to the east by the Lesser Antilles, and to the south by the northern coast of South America. The Gulf of Mexico lies to the northwest. The entire Caribbean Sea area, the West Indies' numerous islands, and adjacent coasts are collectively known as the Caribbean. The Caribbean Sea is one of the largest seas and has an area of about 2,754,000 km2 (1,063,000 sq mi). The sea's deepest point is the Cayman Trough, between the Cayman Islands and Jamaica, at 7,686 m (25,217 ft) below sea level. The Caribbean coastline has many gulfs and bays: the Gulf of Gonâve, the Gulf of Venezuela, the Gulf of Darién, Golfo de los Mosquitos, the Gulf of Paria and the Gulf of Honduras. The Caribbean Sea has the world's second-largest barrier reef, the Mesoamerican Barrier Reef. It runs 1,000 km (620 mi) along Mexico, Belize, Guatemala, and Honduras coasts.
2002-01-21T21:59:03Z
2023-12-10T04:46:58Z
[ "Template:Pp-move", "Template:Cvt", "Template:Clarify", "Template:Portal", "Template:Short description", "Template:Infobox body of water", "Template:Cite journal", "Template:ISBN", "Template:Commons", "Template:List of seas", "Template:Reflist", "Template:Coord", "Template:Convert", "Template:Lang-fr", "Template:Lang-nl", "Template:As of", "Template:Clear", "Template:Lang-ht", "Template:Main", "Template:Lang-jam", "Template:Webarchive", "Template:Lang-es", "Template:Lang-pap", "Template:Authority control", "Template:Cite web", "Template:Marginal seas of the Atlantic Ocean", "Template:Caribbean topics" ]
https://en.wikipedia.org/wiki/Caribbean_Sea
Colin Maclaurin
Colin Maclaurin (/məˈklɔːrən/; Scottish Gaelic: Cailean MacLabhruinn; February 1698 – 14 June 1746) was a Scottish mathematician who made important contributions to geometry and algebra. He is also known for being a child prodigy and for holding, for a time, the record as the world's youngest professor. The Maclaurin series, a special case of the Taylor series, is named after him. Owing to changes in orthography since that time (his name was originally rendered as M'Laurine), his surname is alternatively written MacLaurin. Maclaurin was born in Kilmodan, Argyll. His father, John Maclaurin, minister of Glendaruel, died when Maclaurin was in infancy, and his mother died before he reached nine years of age. He was then educated under the care of his uncle, Daniel Maclaurin, minister of Kilfinan. A child prodigy, he entered the University of Glasgow at age eleven. He graduated Master of Arts three years later by defending a thesis on the Power of Gravity, and remained at Glasgow to study divinity until he was 19, when he was elected professor of mathematics in a ten-day competition at Marischal College and University in Aberdeen. This record as the world's youngest professor endured until March 2008, when the record was officially given to Alia Sabur. In the vacations of 1719 and 1721, Maclaurin went to London, where he became acquainted with Isaac Newton, Benjamin Hoadly, Samuel Clarke, Martin Folkes, and other philosophers. He was admitted a member of the Royal Society. In 1722, having provided a locum for his class at Aberdeen, he travelled on the Continent as tutor to George Hume, the son of Alexander Hume, 2nd Earl of Marchmont. During their time in Lorraine, he wrote his essay on the percussion of bodies (Demonstration des loix du choc des corps), which gained the prize of the Royal Academy of Sciences in 1724. Upon the death of his pupil at Montpellier, Maclaurin returned to Aberdeen. In 1725, Maclaurin was appointed deputy to the mathematical professor at the University of Edinburgh, James Gregory (brother of David Gregory and nephew of the esteemed James Gregory), upon the recommendation of Isaac Newton. On 3 November of that year Maclaurin succeeded Gregory, and went on to raise the character of that university as a school of science. Newton was so impressed with Maclaurin that he offered to pay his salary himself. Maclaurin used Taylor series to characterize maxima, minima, and points of inflection for infinitely differentiable functions in his Treatise of Fluxions. Maclaurin attributed the series to Brook Taylor, though the series was known earlier to Newton and Gregory, and in special cases to Madhava of Sangamagrama in fourteenth-century India. Nevertheless, Maclaurin received credit for his use of the series, and the Taylor series expanded around 0 is sometimes known as the Maclaurin series. Maclaurin also made significant contributions to the study of the gravitational attraction of ellipsoids, a subject that also attracted the attention of d'Alembert, A.-C. Clairaut, Euler, Laplace, Legendre, Poisson and Gauss. Maclaurin showed that an oblate spheroid was a possible equilibrium in Newton's theory of gravity. The subject continues to be of scientific interest, and Nobel laureate Subrahmanyan Chandrasekhar dedicated a chapter of his book Ellipsoidal Figures of Equilibrium to Maclaurin spheroids. Maclaurin corresponded extensively with Clairaut, Maupertuis, and d'Ortous de Mairan.
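In modern notation (a standard statement, added here for reference), the Maclaurin series of an infinitely differentiable function f is the Taylor series expanded about 0:

    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n = f(0) + f'(0)\,x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots

For example, taking f(x) = e^x, every derivative at 0 equals 1, giving e^x = \sum_{n=0}^{\infty} x^n / n!, valid for all real x.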
Independently from Euler and using the same methods, Maclaurin discovered the Euler–Maclaurin formula. He used it to sum powers of arithmetic progressions, to derive Stirling's formula, and to derive the Newton–Cotes numerical integration formulas, which include Simpson's rule as a special case. Maclaurin contributed to the study of elliptic integrals, reducing many intractable integrals to problems of finding arcs for hyperbolas. His work was continued by d'Alembert and Euler, who gave a more concise approach. In his Treatise of Algebra (Ch. XII, Sect. 86), published in 1748, two years after his death, Maclaurin proved a rule for solving square linear systems in the cases of 2 and 3 unknowns, and discussed the case of 4 unknowns. This publication preceded by two years Cramer's publication of a generalization of the rule to n unknowns, now commonly known as Cramer's rule. In 1733, Maclaurin married Anne Stewart, the daughter of Walter Stewart, the Solicitor General for Scotland, by whom he had seven children. His eldest son, John Maclaurin, studied law, was a Senator of the College of Justice, and became Lord Dreghorn; he was also a joint founder of the Royal Society of Edinburgh. Maclaurin actively opposed the Jacobite rising of 1745 and superintended the operations necessary for the defence of Edinburgh against the Highland army. Maclaurin compiled a diary of his exertions against the Jacobites, both within and without the city. When the Highland army entered the city, however, he fled to York, where he was invited to stay by the Archbishop of York. On his journey south, Maclaurin fell from his horse, and the fatigue, anxiety, and cold to which he was exposed on that occasion laid the foundations of dropsy. He returned to Edinburgh after the Jacobite army marched south, but died soon after his return. He is buried at Greyfriars Kirkyard, Edinburgh. The simple table stone is inscribed "C. M. Nat MDCXCVIII Ob MDCCXLVI" and stands close to the south-west corner of the church, but it is supplemented by a more wordy memorial on the outer wall of the church. Mathematician and former MIT President Richard Cockburn Maclaurin was from the same family. The Maclaurin Society (MacSoc), the Mathematics and Statistics Society at Glasgow University, is named in his honour. Colin MacLaurin Road within Edinburgh University's King's Buildings complex is named in his honour. Among his important works are the Treatise of Fluxions and the Treatise of Algebra, discussed above. Colin Maclaurin was also the name used for the new Mathematics and Actuarial Mathematics and Statistics Building at Heriot-Watt University, Edinburgh.
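For reference, two of the results discussed above can be stated in modern notation (standard forms, added here for clarity). The Euler–Maclaurin formula, with B_{2k} the Bernoulli numbers and R_p a remainder term, is

    \sum_{i=m}^{n} f(i) = \int_{m}^{n} f(x)\,dx + \frac{f(m) + f(n)}{2} + \sum_{k=1}^{p} \frac{B_{2k}}{(2k)!} \left( f^{(2k-1)}(n) - f^{(2k-1)}(m) \right) + R_p

and the two-unknown case of the rule proved in the Treatise of Algebra (now known as Cramer's rule) solves the system a_1 x + b_1 y = c_1, a_2 x + b_2 y = c_2 as

    x = \frac{c_1 b_2 - b_1 c_2}{a_1 b_2 - b_1 a_2}, \qquad y = \frac{a_1 c_2 - c_1 a_2}{a_1 b_2 - b_1 a_2}

provided the denominator, the determinant of the coefficient matrix, is nonzero.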
[ { "paragraph_id": 0, "text": "Colin Maclaurin (/məˈklɔːrən/; Scottish Gaelic: Cailean MacLabhruinn; February 1698 – 14 June 1746) was a Scottish mathematician who made important contributions to geometry and algebra. He is also known for being a child prodigy and holding the record for being the youngest professor. The Maclaurin series, a special case of the Taylor series, is named after him.", "title": "" }, { "paragraph_id": 1, "text": "Owing to changes in orthography since that time (his name was originally rendered as M'Laurine), his surname is alternatively written MacLaurin.", "title": "" }, { "paragraph_id": 2, "text": "Maclaurin was born in Kilmodan, Argyll. His father, John Maclaurin, minister of Glendaruel, died when Maclaurin was in infancy, and his mother died before he reached nine years of age. He was then educated under the care of his uncle, Daniel Maclaurin, minister of Kilfinan. A child prodigy, he entered university at age 11.", "title": "Early life" }, { "paragraph_id": 3, "text": "At eleven, Maclaurin, a child prodigy at the time, entered the University of Glasgow. He graduated Master of Arts three years later by defending a thesis on the Power of Gravity, and remained at Glasgow to study divinity until he was 19, when he was elected professor of mathematics in a ten-day competition at Marischal College and University in Aberdeen. This record as the world's youngest professor endured until March 2008, when the record was officially given to Alia Sabur.", "title": "Academic career" }, { "paragraph_id": 4, "text": "In the vacations of 1719 and 1721, Maclaurin went to London, where he became acquainted with Isaac Newton, Benjamin Hoadly, Samuel Clarke, Martin Folkes, and other philosophers. He was admitted a member of the Royal Society.", "title": "Academic career" }, { "paragraph_id": 5, "text": "In 1722, having provided a locum for his class at Aberdeen, he travelled on the Continent as tutor to George Hume, the son of Alexander Hume, 2nd Earl of Marchmont. During their time in Lorraine, he wrote his essay on the percussion of bodies (Demonstration des loix du choc des corps), which gained the prize of the Royal Academy of Sciences in 1724. Upon the death of his pupil at Montpellier, Maclaurin returned to Aberdeen.", "title": "Academic career" }, { "paragraph_id": 6, "text": "In 1725, Maclaurin was appointed deputy to the mathematical professor at the University of Edinburgh, James Gregory (brother of David Gregory and nephew of the esteemed James Gregory), upon the recommendation of Isaac Newton. On 3 November of that year Maclaurin succeeded Gregory, and went on to raise the character of that university as a school of science. Newton was so impressed with Maclaurin that he had offered to pay his salary himself.", "title": "Academic career" }, { "paragraph_id": 7, "text": "Maclaurin used Taylor series to characterize maxima, minima, and points of inflection for infinitely differentiable functions in his Treatise of Fluxions. Maclaurin attributed the series to Brook Taylor, though the series was known before to Newton and Gregory, and in special cases to Madhava of Sangamagrama in fourteenth century India. 
Nevertheless, Maclaurin received credit for his use of the series, and the Taylor series expanded around 0 is sometimes known as the Maclaurin series.", "title": "Contributions to mathematics" }, { "paragraph_id": 8, "text": "Maclaurin also made significant contributions to the gravitation attraction of ellipsoids, a subject that furthermore attracted the attention of d'Alembert, A.-C. Clairaut, Euler, Laplace, Legendre, Poisson and Gauss. Maclaurin showed that an oblate spheroid was a possible equilibrium in Newton's theory of gravity. The subject continues to be of scientific interest, and Nobel Laureate Subramanyan Chandrasekhar dedicated a chapter of his book Ellipsoidal Figures of Equilibrium to Maclaurin spheroids. Maclaurin corresponded extensively with Clairaut, Maupertuis, and d'Ortous de Mairan.", "title": "Contributions to mathematics" }, { "paragraph_id": 9, "text": "Independently from Euler and using the same methods, Maclaurin discovered the Euler–Maclaurin formula. He used it to sum powers of arithmetic progressions, derive Stirling's formula, and to derive the Newton-Cotes numerical integration formulas which includes Simpson's rule as a special case.", "title": "Contributions to mathematics" }, { "paragraph_id": 10, "text": "Maclaurin contributed to the study of elliptic integrals, reducing many intractable integrals to problems of finding arcs for hyperbolas. His work was continued by d'Alembert and Euler, who gave a more concise approach.", "title": "Contributions to mathematics" }, { "paragraph_id": 11, "text": "In his Treatise of Algebra (Ch. XII, Sect 86), published in 1748 two years after his death, Maclaurin proved a rule for solving square linear systems in the cases of 2 and 3 unknowns, and discussed the case of 4 unknowns. This publication preceded by two years Cramer's publication of a generalization of the rule to n unknowns, now commonly known as Cramer's rule.", "title": "Contributions to mathematics" }, { "paragraph_id": 12, "text": "In 1733, Maclaurin married Anne Stewart, the daughter of Walter Stewart, the Solicitor General for Scotland, by whom he had seven children. His eldest son John Maclaurin studied Law, was a Senator of the College of Justice, and became Lord Dreghorn; he was also joint founder of the Royal Society of Edinburgh.", "title": "Personal life" }, { "paragraph_id": 13, "text": "Maclaurin actively opposed the Jacobite rising of 1745 and superintended the operations necessary for the defence of Edinburgh against the Highland army. Maclaurin compiled a diary of his exertions against the Jacobites, both within and without the city. When the Highland army entered the city, however, he fled to York, where he was invited to stay by the Archbishop of York.", "title": "Personal life" }, { "paragraph_id": 14, "text": "On his journey south, Maclaurin fell from his horse, and the fatigue, anxiety, and cold to which he was exposed on that occasion laid the foundations of dropsy. He returned to Edinburgh after the Jacobite army marched south, but died soon after his return.", "title": "Personal life" }, { "paragraph_id": 15, "text": "He is buried at Greyfriars Kirkyard, Edinburgh. The simple table stone is inscribed simply \"C. M. 
Nat MDCXCVIII Ob MDCCXLVI\" and stands close to the south-west corner of the church but is supplemented by a more wordy memorial on the outer wall of the church.", "title": "Personal life" }, { "paragraph_id": 16, "text": "Mathematician and former MIT President Richard Cockburn Maclaurin was from the same family.", "title": "Personal life" }, { "paragraph_id": 17, "text": "The Maclaurin Society (MacSoc), the Mathematics and Statistics Society at Glasgow University, is named in his honour.", "title": "Personal life" }, { "paragraph_id": 18, "text": "Colin MacLaurin Road within Edinburgh University's King's Buildings complex is named in his honour.", "title": "Personal life" }, { "paragraph_id": 19, "text": "Some of his important works are:", "title": "Notable works" }, { "paragraph_id": 20, "text": "Colin Maclaurin was the name used for the new Mathematics and Actuarial Mathematics and Statistics Building at Heriot-Watt University, Edinburgh.", "title": "Notable works" } ]
Colin Maclaurin was a Scottish mathematician who made important contributions to geometry and algebra. He is also known for being a child prodigy and holding the record for being the youngest professor. The Maclaurin series, a special case of the Taylor series, is named after him. Owing to changes in orthography since that time, his surname is alternatively written MacLaurin.
2002-02-25T15:51:15Z
2023-11-03T10:41:37Z
[ "Template:Short description", "Template:Use dmy dates", "Template:Infobox scientist", "Template:Cite book", "Template:Reflist", "Template:Cite EB1911", "Template:Cite web", "Template:Cite news", "Template:Citation", "Template:Use British English", "Template:IPAc-en", "Template:Unsourced", "Template:MacTutor Biography", "Template:Lang-gd", "Template:Cite journal", "Template:Commons and category", "Template:Pronunciation-needed", "Template:Cite ODNB", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Colin_Maclaurin
Celestial globe
Celestial globes show the apparent positions of the stars in the sky. They omit the Sun, Moon, and planets because the positions of these bodies vary relative to those of the stars, but the ecliptic, along which the Sun moves, is indicated. There is an issue regarding the “handedness” of celestial globes. If the globe is constructed so that the stars are in the positions they actually occupy on the imaginary celestial sphere, then the star field will appear reversed on the surface of the globe (all the constellations will appear as their mirror images). This is because the view from Earth, positioned at the centre of the celestial sphere, is of the inside of that sphere (a gnomonic projection), whereas the celestial globe presents an orthographic projection as viewed from the outside. For this reason, celestial globes are often produced in mirror image, so that at least the constellations appear as they do when viewed from Earth. Some modern celestial globes address this problem by making the surface of the globe transparent. The stars can then be placed in their proper positions and viewed through the globe, so that the view is of the inside of the celestial sphere. However, the proper position from which to view the sphere is its centre, while the viewer of a transparent globe must stand outside it, far from the centre; viewing the inside of the sphere from the outside, through its transparent surface, produces serious distortions. Opaque celestial globes that are made with the constellations correctly placed, so that they appear as mirror images when directly viewed from outside the globe, are often viewed in a mirror, giving the constellations their familiar appearances. Written material on such globes, e.g. constellation names, is printed in reverse so that it can easily be read in the mirror.
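As a rough illustration of the mirror-image convention (a minimal sketch under the assumptions stated here, not drawn from any historical source), mapping a star's equatorial coordinates onto the globe's surface with the right ascension negated makes the constellations appear correct to a viewer outside the globe:

    import math

    def star_to_globe(ra_deg, dec_deg, mirror=True):
        """Map equatorial coordinates (right ascension and declination,
        in degrees) to an (x, y, z) point on a unit globe. With
        mirror=True, right ascension is negated, so constellations drawn
        on an opaque globe match their appearance as seen from Earth;
        mirror=False gives the true celestial-sphere positions, which
        look reversed when viewed from outside."""
        ra = math.radians(-ra_deg if mirror else ra_deg)
        dec = math.radians(dec_deg)
        return (math.cos(dec) * math.cos(ra),
                math.cos(dec) * math.sin(ra),
                math.sin(dec))

    # Example with approximate coordinates for Sirius (RA 101.3, Dec -16.7).
    print(star_to_globe(101.3, -16.7))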
This includes some notes on how the globe should be decorated, suggesting that the maker give ‘the sphere a dark colour resembling the night sky’. The Farnese Atlas, a 2nd-century AD Roman marble sculpture of Atlas which probably copies an earlier work of the Hellenistic era, holds a celestial globe 65 cm in diameter, which for many years was the only known celestial globe from the ancient world. No stars are depicted on the globe, but it shows over 40 classical Greek constellations in substantial detail. In the 1990s, two smaller celestial globes from antiquity became public: one of brass measuring 11 cm, held by the Römisch-Germanisches Zentralmuseum, and one of gilt silver measuring 6.3 cm, privately held by the Kugel family. Al-Sufi (Abu'l-Husayn 'Abd al-Rahman ibn 'Umar al-Sufi) was an important 10th-century astronomer whose works were instrumental in the Islamic development of the celestial globe. His book, The Book of the Constellations (‘designed to be accurate for the year 964 (353 AH)’), was a ‘description of the constellations that combines Greek/Ptolemaic traditions with Arabic/Bedouin ones’. The Book of the Constellations then served as an important source of star coordinates for makers of astrolabes and globes across the Islamic world. Similarly, this ‘treatise was instrumental in displacing the traditional Bedouin constellation imagery and replacing it with the Greek/Ptolemaic system which ultimately came to dominate all astronomy’. The earliest surviving celestial globe was made between 1080 and 1085 CE by Ibrahim ibn Said al-Sahli, a well-known astrolabe maker working in Valencia, Spain. Although the imagery on this globe appears to be unrelated to that in al-Sufi’s The Book of the Constellations, its maker does seem to have been aware of that work, as ‘all forty-eight of the classical Greek constellations are illustrated on the globe, just as in al-Sufi's treatise, with the stars indicated by circles’. In the 13th century a celestial globe, now housed in the Mathematisch-Physikalischer Salon in Dresden, ‘was produced at one of the most important centres of astronomy in intellectual history, the Ilkhanid observatory at Maragha in north-western Iran, constructed in 1259 and headed by Nasir al-Din Tusi (d. 1274), the renowned polymath’. This particular scientific instrument was made in 1288 by Muhammad b. Mu'ayyad al-'Urdi, the son of the renowned scientist Mu'ayyad al-'Urdi al-Dimashqi. This globe is an interesting example of how celestial globes demonstrate both the scientific and the artistic talents of those who make them. All ‘forty-eight classical constellations used in Ptolemy's Almagest’ are represented on the globe, meaning it could be ‘used in calculations for astronomy and astrology, such as navigation, time-keeping or determining a horoscope’. Artistically, this globe is an exciting insight into thirteenth-century Iranian illustration, as the ‘thirteenth century was a period when inlaid brass became a premier medium for figural imagery’ and so ‘the globes from this period are duly exceptional for the detail and clarity of their engraved figures’. A 17th-century celestial globe was made by Diya’ ad-din Muhammad in Lahore (now in Pakistan) in 1668. It is now housed at the National Museum of Scotland. It is encircled by a meridian ring and a horizon ring. The latitude angle of 32° indicates that the globe was made in the Lahore workshop.
This specific ‘workshop claims 21 signed globes—the largest number from a single shop’, making this globe a good example of celestial globe production at its peak. The globe itself has been manufactured in one piece, so as to be seamless. This complicated process was, if not invented, then certainly perfected, in the Lahore workshop in which Diya’ ad-din Muhammad worked. Grooves encircling the surface of the globe and passing through the ecliptic poles divide it into 12 sections of 30°. Though no longer used in astronomy, these “ecliptic latitude circles” helped astronomers of the Arabic and Greek worlds find the co-ordinates of a particular star. Each of the 12 sections corresponds to a house of the zodiac (a worked example of this 30° division appears below).
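The mirror-image “handedness” discussed above can be made concrete with a few lines of code. The following is a minimal illustrative sketch, not drawn from any source on globe-making: it assumes stars given in equatorial coordinates (right ascension and declination; the values here are approximate figures for the three belt stars of Orion) and shows that reflecting the celestial sphere across one plane, which is what a mirror-image globe does, reverses the east–west order of the stars.

```python
import math

# Approximate equatorial coordinates: right ascension in hours,
# declination in degrees, for the three belt stars of Orion
# (illustrative values only).
stars = [
    ("Mintaka", 5.533, -0.30),
    ("Alnilam", 5.604, -1.20),
    ("Alnitak", 5.679, -1.94),
]

def to_unit_vector(ra_hours, dec_deg):
    """Map (RA, Dec) to a point on the unit celestial sphere."""
    ra = math.radians(ra_hours * 15.0)   # 1 hour of right ascension = 15 degrees
    dec = math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def mirrored(point):
    """Reflect across the x-z plane: the transform a mirror-image globe applies."""
    x, y, z = point
    return (x, -y, z)

for name, ra, dec in stars:
    p = to_unit_vector(ra, dec)
    print(f"{name}: inside view y = {p[1]:+.4f}, mirrored globe y = {mirrored(p)[1]:+.4f}")
```

Running the sketch shows the y components (the east–west direction in this setup) changing sign, so the three stars appear in the opposite left-to-right order on the reflected surface; this is exactly why a globe painted with true positions shows every constellation as its mirror image when viewed from outside.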
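Since the twelve sections bounded by the grooves are equal 30° slices of the ecliptic, an ecliptic longitude determines its zodiacal house by simple integer division. Below is a minimal sketch of that arithmetic; the function name and sign list are illustrative, not taken from any historical source.

```python
# The twelve zodiac signs, in ecliptic-longitude order starting
# from the vernal equinox at 0 degrees.
ZODIAC = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
          "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def zodiac_section(ecliptic_longitude_deg: float) -> str:
    """Return the 30-degree zodiac section containing the given longitude."""
    lon = ecliptic_longitude_deg % 360.0   # normalise into [0, 360)
    return ZODIAC[int(lon // 30.0)]        # each section spans exactly 30 degrees

print(zodiac_section(0.0))    # Aries  (0 deg = vernal equinox)
print(zodiac_section(95.5))   # Cancer (the 90-120 deg section)
```

A star's ecliptic longitude, read off between two of the globe's grooves, identifies its section in the same way.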
[ { "paragraph_id": 0, "text": "Celestial globes show the apparent positions of the stars in the sky. They omit the Sun, Moon, and planets because the positions of these bodies vary relative to those of the stars, but the ecliptic, along which the Sun moves, is indicated.", "title": "" }, { "paragraph_id": 1, "text": "There is an issue regarding the “handedness” of celestial globes. If the globe is constructed so that the stars are in the positions they actually occupy on the imaginary celestial sphere, then the star field will appear reversed on the surface of the globe (all the constellations will appear as their mirror images). This is because the view from Earth, positioned at the centre of the celestial sphere, is of the gnomonic projection inside of the celestial sphere, whereas the celestial globe is orthographic projection as viewed from the outside. For this reason, celestial globes are often produced in mirror image, so that at least the constellations appear as viewed from earth. Some modern celestial globes address this problem by making the surface of the globe transparent. The stars can then be placed in their proper positions and viewed through the globe, so that the view is of the inside of the celestial sphere. However, the proper position from which to view the sphere would be from its centre, but the viewer of a transparent globe must be outside it, far from its centre. Viewing the inside of the sphere from the outside, through its transparent surface, produces serious distortions. Opaque celestial globes that are made with the constellations correctly placed, so they appear as mirror images when directly viewed from outside the globe, are often viewed in a mirror, so the constellations have their familiar appearances. Written material on the globe, e.g. constellation names, is printed in reverse, so it can easily be read in the mirror.", "title": "" }, { "paragraph_id": 2, "text": "Before Copernicus’s 16th-century discovery that the solar system is ‘heliocentric rather than geocentric and geostatic’ (that the earth orbits the sun and not the other way around) ‘the stars have been commonly, though perhaps not universally, perceived as though attached to the inside of a hollow sphere enclosing and rotating about the earth’. Working under the incorrect assumption that the cosmos was geocentric the second-century Greek astronomer Ptolemy composed the Almagest in which ‘the movements of the planets could be accurately represented by means of techniques involving the use of epicycles, deferents, eccentrics (whereby planetary motion is conceived as circular with respect to a point displaced from Earth), and equants (a device that posits a constant angular rate of rotation with respect to a point displaced from Earth)’. Guided by these ideas astronomers of the middle ages, Muslim and Christian alike, created celestial globes to ‘represent in a model the arrangement and movement of the stars’. In their most basic form celestial globes represent the stars as if the viewer were looking down upon the sky as a globe that surrounds the earth.", "title": "" }, { "paragraph_id": 3, "text": "The Roman writer Cicero ‘reported the statements of the Roman astronomer Gaius Sulpicius Gallus of the second century BC, the first globe was constructed by Thales of Miletus’. This could indicate that Celestial Globes were in production throughout antiquity however, without any Celestial Globes surviving from this time, it is difficult to say for sure. 
What is known is that in book VIII, chapter 3 of Ptolemy’s Almagest he outlines ideas for the design and production of a Celestial Globe. This includes some notes on how the globe should be decorated, suggesting ‘the sphere a dark colour resembling the night sky’.", "title": "History" }, { "paragraph_id": 4, "text": "The Farnese Atlas, a 2nd-century AD Roman marble sculpture of Atlas which probably copies an earlier work of the Hellenistic era, is holding a celestial globe 65cm in diameter, which for many years was the only known celestial globe from the ancient world. No stars are depicted on the globe, but it shows over 40 classical Greek constellations in substantial detail. In the 1990s, two smaller celestial globes from antiquity became public: one from brass measuring 11cm held by the Römisch-Germanisches Zentralmuseum, and one from gilt silver measuring 6.3cm privately held by the Kugel family.", "title": "History" }, { "paragraph_id": 5, "text": "Al-Sufi (Abu'l-Husayn 'Abd al-Rahman ibn 'Umar al-Sufi) was an important 10th century astronomer whose works were instrumental in the Islamic development of the Celestial Globe. His book, The Book of the Constellations, (‘designed to be accurate for the year 964 (353 AH)’ ) was a ‘description of the constellations that combines Greek/ Ptolemaic traditions with Arabic/Bedouin ones’. The Book of the Constellations then served as an important source of star coordinates for makers of astrolabes and globes across the Islamic world. Similarly this ‘treatise was instrumental in displacing the traditional Bedouin constellation imagery and replacing it with the Greek/Ptolemaic system which ultimately came to dominate all astronomy’.", "title": "History" }, { "paragraph_id": 6, "text": "The earliest surviving Celestial Globe was made between 1080 and 1085 C.E by Ibrahim ibn Said al-Sahli, a well known astrolabe maker working in Valencia, Spain. Although the imagery on this globe appears to be unrelated to that in al-Sufi’s The Book of the Constellations al-Wazzan does seem to have been aware of this work as ‘all forty-eight of the classical Greek constellations are illustrated on the globe, just as in al-Sufi's treatise, with the stars indicated by circles’.", "title": "History" }, { "paragraph_id": 7, "text": "In the 13th century a Celestial Globe, now housed in the Mathematisch-Physikalischer Salon in Dresden, ‘was produced at one of the most important centres of astronomy in intellectual history, the Ilkhanid observatory at Maragha in north-western Iran constructed in 1259 and headed by Nasir al-Dln TusT (d. 1274), the renowned polymath.’ This particular scientific instrument was made by the son of the renowned scientist Mu'ayyad al-’Urdi al-Dimashqi, Muhammad b. Mu'ayyad al-'Urdl in 1288. This globe is an interesting example of how Celestial Globes demonstrate both the scientific and the artistic talents of those who make them. All ‘forty-eight classical constellations used in Ptolemy's Almagest are represented on the globe, meaning it could then be ‘used in calculations for astronomy and astrology, such as navigation, time-keeping or determining a horoscope’. 
Artistically, this globe is an exciting insight into thirteenth century Iranian illustration as the ‘thirteenth century was a period when inlaid brass became a premier medium for figural imagery’ and so ‘the globes from this period are duly exceptional for the detail and clarity of their engraved figures’.", "title": "History" }, { "paragraph_id": 8, "text": "A 17th-century celestial globe was made by Diya’ ad-din Muhammad in Lahore, 1668 (now in Pakistan). It is now housed at the National Museum of Scotland. It is encircled by a meridian ring and a horizon ring. The latitude angle of 32° indicates that the globe was made in the Lahore workshop. This specific 'workshop claims 21 signed globes—the largest number from a single shop’ making this globe a good example of Celestial Globe production at its peak. The globe itself has been manufactured in one piece, so as to be seamless. This complicated process was, if not invented, then certainly perfected, in the Lahore workshop Diya’ ad-din Muhammad worked in.", "title": "History" }, { "paragraph_id": 9, "text": "There are grooves which encircle the surface of the globe that create 12 sections of 30° which pass through the ecliptic poles. While they are no longer used in astronomy today, they are called “ecliptic latitude circles” and help astronomers of the Arabic and Greek worlds find the co-ordinates of a particular star. Each of the 12 sections corresponds to a house in the zodiac.", "title": "History" } ]
Celestial globes show the apparent positions of the stars in the sky. They omit the Sun, Moon, and planets because the positions of these bodies vary relative to those of the stars, but the ecliptic, along which the Sun moves, is indicated. There is an issue regarding the “handedness” of celestial globes. If the globe is constructed so that the stars are in the positions they actually occupy on the imaginary celestial sphere, then the star field will appear reversed on the surface of the globe. This is because the view from Earth, positioned at the centre of the celestial sphere, is of a gnomonic projection of the inside of the celestial sphere, whereas the celestial globe is an orthographic projection as viewed from the outside. For this reason, celestial globes are often produced in mirror image, so that at least the constellations appear as viewed from Earth. Some modern celestial globes address this problem by making the surface of the globe transparent. The stars can then be placed in their proper positions and viewed through the globe, so that the view is of the inside of the celestial sphere. However, the proper position from which to view the sphere would be its centre, whereas the viewer of a transparent globe must be outside it, far from its centre. Viewing the inside of the sphere from the outside, through its transparent surface, produces serious distortions. Opaque celestial globes that are made with the constellations correctly placed, so that they appear as mirror images when directly viewed from outside the globe, are often viewed in a mirror, in which the constellations have their familiar appearances. Written material on the globe, e.g. constellation names, is printed in reverse, so that it can easily be read in the mirror. Before Copernicus’s 16th-century argument that the solar system is ‘heliocentric rather than geocentric and geostatic’, ‘the stars have been commonly, though perhaps not universally, perceived as though attached to the inside of a hollow sphere enclosing and rotating about the earth’. Working under the incorrect assumption that the cosmos was geocentric, the second-century Greek astronomer Ptolemy composed the Almagest, in which ‘the movements of the planets could be accurately represented by means of techniques involving the use of epicycles, deferents, eccentrics, and equants’. Guided by these ideas, astronomers of the Middle Ages, Muslim and Christian alike, created celestial globes to ‘represent in a model the arrangement and movement of the stars’. In their most basic form, celestial globes represent the stars as if the viewer were looking down upon the sky as a globe that surrounds the Earth.
2002-02-25T15:51:15Z
2023-11-20T04:18:17Z
[ "Template:Cite book", "Template:Cite journal", "Template:Cite web", "Template:Authority control", "Template:Short description", "Template:Sfn", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Celestial_globe
7,827
Covenant-breaker
Covenant-breaker is a term used in the Baháʼí Faith to refer to a person who has been excommunicated from the Baháʼí community for breaking the Covenant of Baháʼu'lláh, meaning actively promoting schism in the religion or otherwise opposing the legitimacy of the chain of succession of leadership. Excommunication among Baháʼís is rare and not used for transgressions of community standards, intellectual dissent, or conversion to other religions. Instead, it is the most severe punishment, reserved for suppressing organized dissent that threatens the unity of believers. Currently, the Universal House of Justice has the sole authority to declare a person a Covenant-breaker, and once identified, all Baháʼís are expected to shun them, even if they are family members. According to ʻAbdu'l-Bahá, Covenant-breaking is a contagious disease. The Baháʼí writings forbid association with Covenant-breakers, and Baháʼís are urged to avoid their literature, thus providing an exception to the Baháʼí principle of independent investigation of truth. Most Baháʼís are unaware of the small Baháʼí divisions that exist. Dr. Mikhail Sergeev wrote about the Baháʼí practice of excommunication: In dealing with organized dissent, and covenant-breaking as the most radical form of opposition, Baháʼís stand, as they do on many other controversial issues, somewhere between modernity and traditional religions. They are not as tolerant as the adherents of the Enlightenment ideology that institutionalizes opposition. Nor do they crush it as harshly as the fervent religious leaders of the past. The three largest attempts at alternative leadership—whose followers are considered Covenant-breakers—were from Subh-i-Azal, Mírzá Muhammad ʻAlí, and Charles Mason Remey. Others were declared Covenant-breakers for actively opposing or disobeying the head of the religion, or maliciously attacking the Baháʼí administration after leaving it. Covenant-breaking does not refer to attacks from non-Baháʼís or former Baháʼís. Rather, it refers to internal campaigns of opposition in which the Covenant-breaker is seen as challenging the unity of the Baháʼí Faith, causing internal division, or claiming or supporting an alternate succession of authority or administrative structure. The central purpose of the covenant is to prevent schism and dissension. In a letter to an individual dated 23 March 1975, the Universal House of Justice wrote: When a person declares his acceptance of Baháʼu'lláh as a Manifestation of God he becomes a party to the Covenant and accepts the totality of His Revelation. If he then turns round and attacks Baháʼu'lláh or the Central Institution of the Faith he violates the Covenant. If this happens every effort is made to help that person to see the illogicality and error of his actions, but if he persists he must, in accordance with the instructions of Baháʼu'lláh Himself, be shunned as a Covenant-breaker. The term Covenant-breaker was first used by ʻAbdu'l-Bahá to describe the partisans of his half-brother Mírzá Muhammad ʻAlí, who challenged his leadership. In his Will and Testament, ʻAbdu'l-Bahá appointed Shoghi Effendi as the first Guardian, thereby defining a new institution of the religion, and called for the election of the Universal House of Justice.
In the same document, ʻAbdu'l-Bahá defined opposition to these two institutions as Covenant-breaking and advised all Baháʼís to shun anyone opposing the Covenant: "...one of the greatest and most fundamental principles of the Cause of God is to shun and avoid entirely the Covenant-breakers, for they will utterly destroy the Cause of God, exterminate His Law and render of no account all efforts exerted in the past." Most, but not all, Covenant-breakers are involved in schismatic groups. For example, a Baháʼí who refuses to follow guidance on the treatment of Covenant-breakers is at risk of being named one. One article originally written for the Baháʼí Encyclopedia characterized the Covenant-breakers that have emerged in the course of Baháʼí history as belonging to one of four categories: Shoghi Effendi wrote to the National Spiritual Assembly of Canada in 1957: People who have withdrawn from the Cause because they no longer feel that they can support its Teachings and Institutions sincerely, are not Covenant-breakers -- they are non-Baháʼís and should just be treated as such. Only those who ally themselves actively with known enemies of the Faith who are Covenant-breakers, and who attack the Faith in the same spirit as these people, can be considered, themselves, to be Covenant-breakers. Beyond this, many other relationships to the Baháʼí Faith exist, both positive and negative. Covenant-breaking does not apply to most of them. The following is a partial list of those who could not rightly be termed Covenant-breakers: Bábís are generally regarded as another religion altogether. Since Covenant-breaking presumes that one has submitted oneself to a covenant and then broken it, and Bábís never recognized or swore allegiance to Baháʼu'lláh, they are not Covenant-breakers. Followers of Subh-i-Azal, Baháʼu'lláh's half-brother who tried to poison him, engaged in active opposition to Baháʼís, and Shoghi Effendi did inform Baháʼís that they should avoid contact with his descendants, writing that "No intelligent and loyal Baha'i would associate with a descendant of Azal, if he traced the slightest breath of criticism of our Faith, in any aspect, from that person. In fact these people should be strenuously avoided as having an inherited spiritual disease -- the disease of Covenant-breaking!". Through the influence of Bahíyyih Khánum, the eldest daughter of Baháʼu'lláh, everyone in the household initially rallied around Shoghi Effendi after the death of ʻAbdu'l-Bahá. For several years his brother Husayn and several cousins served him as secretaries. The only ones publicly opposing him were Mírzá Muhammad ʻAlí and his followers, who were declared Covenant-breakers by ʻAbdu'l-Bahá. Contrary to ʻAbdu'l-Bahá's specific instruction, certain family members established illicit links with those whom ʻAbdu'l-Bahá had declared Covenant-breakers. After Bahíyyih Khánum died in 1932, Shoghi Effendi's eldest sister – Ruhangiz – married Nayyer Effendi Afnan, a son of Siyyid Ali Afnan, stepson of Baháʼu'lláh through Furughiyyih. The children of Furughiyyih sided with Muhammad ʻAlí and opposed ʻAbdu'l-Bahá, leaving only ʻAbdu'l-Bahá's own children as faithful among the descendants of Baháʼu'lláh. Moojan Momen describes these events as follows: All remained quiescent until the late 1930s when the case of the House of Bahá'u'lláh (q.v.) arose in Iraq. Shoghi Effendi asked Husayn Afnán (d.
1952), the son of Sayyid `Alí, to resign a high post that he held with the Iraqi government so that he would not be placed in the position of endorsing that government's actions in the case. Husayn refused and was expelled; one by one his brothers Faydí, Hasan, and Nayyir (Nayyir-`Alí, d. 1952) were also expelled. Events then proceeded rapidly. A series of marriages, engineered, according to Shoghi Effendi (MB), by Nayyir, occurred, linking the grandchildren of `Abdu'l-Bahá with the expelled sons of Sayyid `Alí Afnán. These marriages caused Ruhangiz, Mehrangiz, and Thurayyá to be declared Covenant-breakers by Shoghi Effendi, though there was some delay and concealment initially in order to avoid public degradation of the family. On 2 November 1941 Shoghi Effendi sent two cables announcing the expulsion of Túbá and her children Ruhi, Suhayl, and Fu'ad for consenting to the marriage of Thurayyá to Faydi. There was also mention that Ruhi's visit to America and Fu'ad's visit to England were without approval. In December 1941 he announced the expulsion of his sister Mehrangiz. Faced, presumably, with a choice between shunning their disobedient family members and being themselves disobedient to ʻAbdu'l-Bahá and Shoghi Effendi, his cousins, aunts, and uncles chose the latter. In 1944 Shoghi Effendi announced the expulsion of Munib Shahid, a grandson of ʻAbdu'l-Bahá through Ruha, for marrying into the family of an enemy of the Baháʼís. In April 1945, he announced the expulsion of Husayn Ali, his brother, for joining the other Covenant-breakers. In 1950, Shoghi Effendi sent another cable expelling the family of Ruha, another daughter of ʻAbdu'l-Bahá, for showing "open defiance", and in December 1951 he announced a "fourth alliance" of members of the family of Siyyid Ali marrying into Ruha's family, and that his brother Riaz was included among the Covenant-breakers. In 1953 he cabled about Ruhi Afnan corresponding with Mirza Ahmad Sohrab, selling property of Baháʼu'lláh, and publicly "misrepresenting the teachings and deliberately causing confusion in minds of authorities and the local population". Most of the groups regarded by the main body of Baháʼís as Covenant-breakers originated in Charles Mason Remey's 1960 claim to the Guardianship. The Will and Testament of ʻAbdu'l-Bahá states that Guardians should be lineal descendants of Baháʼu'lláh, that each Guardian must select his successor during his lifetime, and that the nine Hands of the Cause of God permanently stationed in the Holy Land must approve the appointment by majority vote. Baháʼís interpret lineal descent to mean physical familial relation to Baháʼu'lláh, which Mason Remey did not have. Almost all Baháʼís accepted the determination of the Hands of the Cause that Shoghi Effendi had died "without having appointed his successor"; no descendant of Baháʼu'lláh could validly qualify under the terms of ʻAbdu'l-Bahá's will. Later, the Universal House of Justice, first elected in 1963, ruled that it was not possible for another Guardian to be appointed. In 1960, Remey, himself a Hand of the Cause, retracted his earlier position, claiming to have been coerced, and declared himself the successor to Shoghi Effendi. He and the small number of people who followed him were expelled from the mainstream Baháʼí community by the Hands of the Cause.
Those close to Remey claimed that he went senile in old age, and by the time of his death he was largely abandoned, with his most prominent followers fighting amongst themselves for leadership. The largest group of the remaining followers of Remey, members of the "Orthodox Baháʼí Faith", believe that legitimate authority passed from Shoghi Effendi to Mason Remey to Joel Marangella. They therefore regard the Universal House of Justice in Haifa, Israel, as illegitimate, and its members and followers as Covenant-breakers. In 2009, Jeffery Goldberg and Janice Franco, both from the mainstream Baháʼí community, joined the Orthodox Baháʼí Faith. Both were declared Covenant-breakers and shunned. Goldberg's wife was told to divorce her husband. The present descendants of expelled members of Baháʼu'lláh's family have not specifically been declared Covenant-breakers, though they mostly do not associate themselves with the Baháʼí religion. A small group of Baháʼís in Northern New Mexico believe that these descendants are eligible for appointment to the Guardianship and are waiting for such a direct descendant of Baháʼu'lláh to arise as the rightful Guardian. Enayatullah (Zabih) Yazdani was designated a Covenant-breaker in June 2005, after many years of insisting on his view that Mason Remey was the legitimate successor to Shoghi Effendi and of accepting Donald Harvey as the third guardian. He is now the fifth guardian of a small group of Baháʼís and resides in Australia. There is also a small group in Montana, originally inspired by Leland Jensen, who claimed a status higher than that of the Guardian. By his death in 1996, his apocalyptic predictions had failed and his efforts to reestablish the Guardianship and the administration had been unsuccessful. A dispute among Jensen's followers over the identity of the Guardian resulted in another division in 2001. Juan Cole, an American professor of Middle Eastern history who had been a Baháʼí for 25 years, left the religion in 1996 after being approached by a Continental Counselor about his involvement in a secret email list that was organizing opposition to certain Baháʼí institutions and policies. Cole was never labeled a Covenant-breaker, because he claimed to be a Unitarian-Universalist upon leaving. He went on to publish three papers in journals in 1998, 2000, and 2002, which heavily criticized the Baháʼí administration in the United States and suggested cult-like tendencies, particularly regarding the requirement of pre-publication review and the practice of shunning Covenant-breakers. For example, Cole wrote in 1998, "Baha’is, like members of the Watchtower and other cults, shun those who are excommunicated." In 2000, he wrote: "Baha'i authorities... keep believers in line by appealing to the welfare and unity of the community, and if these appeals fail then implicit or explicit threats of disfellowshipping and even shunning are invoked. ... Shunning is the central control mechanism in the Baha'i system". In 2002, he wrote: "Opportunistic sectarian-minded officials may have seen this... as a time when they could act arbitrarily and harshly against intellectuals and liberals, using summary expulsion and threats of shunning".
Moojan Momen, a Baháʼí author, reviewed 66 exit narratives of former Baháʼís and identified 1996 (the year of Cole's departure) to 2002 as a period of "articulate and well-educated" apostates who used the newly available Internet to connect with each other and form a community with its own "mythology, creed and salvation stories becoming what could perhaps be called an anti-religion". According to Momen, the narrative among these apostates of a "fiercely aggressive religion where petty dictators rule" is the opposite of the experience of most members, who see "peace as a central teaching", "consultative decision-making", and "mechanisms to guard against individuals attacking the central institutions of the Bahá'í Faith or creating schisms." On the practice of shunning, Momen writes that it is "rarely used and is only applied after prolonged negotiations fail to resolve the situation. To the best knowledge of the present author it has been used against no more than a handful of individuals in over two decades and to only the first of the apostates described below [Francesco Ficicchia] more than twenty-five years ago - although it is regularly mentioned in the literature produced by the apostates as though it were a frequent occurrence."
[ { "paragraph_id": 0, "text": "Covenant-breaker is a term used in the Baháʼí Faith to refer to a person who has been excommunicated from the Baháʼí community for breaking the Covenant of Baháʼu'lláh, meaning actively promoting schism in the religion or otherwise opposing the legitimacy of the chain of succession of leadership. Excommunication among Baháʼís is rare and not used for transgressions of community standards, intellectual dissent, or conversion to other religions. Instead, it is the most severe punishment, reserved for suppressing organized dissent that threatens the unity of believers.", "title": "" }, { "paragraph_id": 1, "text": "Currently, the Universal House of Justice has the sole authority to declare a person a Covenant-breaker, and once identified, all Baháʼís are expected to shun them, even if they are family members. According to ʻAbdu'l-Bahá, Covenant-breaking is a contagious disease. The Baháʼí writings forbid association with Covenant-breakers and Baháʼís are urged to avoid their literature, thus providing an exception to the Baháʼí principle of independent investigation of truth. Most Baháʼís are unaware of the small Baháʼí divisions that exist.", "title": "" }, { "paragraph_id": 2, "text": "Dr. Mikhail Sergeev wrote about the Baháʼí practice of excommunication,", "title": "" }, { "paragraph_id": 3, "text": "In dealing with organized dissent, and covenant-breaking as the most radical form of opposition, Baháʼís stand, as they do on many other controversial issues, somewhere between modernity and traditional religions. They are not as tolerant as the adherents of the Enlightenment ideology that institutionalizes opposition. Nor do they crush it as harshly as the fervent religious leaders of the past.", "title": "" }, { "paragraph_id": 4, "text": "The three largest attempts at alternative leadership—whose followers are considered Covenant-breakers—were from Subh-i-Azal, Mírzá Muhammad ʻAlí, and Charles Mason Remey. Others were declared Covenant-breakers for actively opposing or disobeying the head of the religion, or maliciously attacking the Baháʼí administration after leaving it.", "title": "" }, { "paragraph_id": 5, "text": "Covenant-breaking does not refer to attacks from non-Baháʼís or former Baháʼís. Rather, it is in reference to internal campaigns of opposition where the Covenant-breaker is seen as challenging the unity of the Baháʼí Faith, causing internal division, or by claiming or supporting an alternate succession of authority or administrative structure. The central purpose of the covenant is to prevent schism and dissension.", "title": "Definition" }, { "paragraph_id": 6, "text": "In a letter to an individual dated 23 March 1975, the Universal House of Justice wrote:", "title": "Definition" }, { "paragraph_id": 7, "text": "When a person declares his acceptance of Baháʼu'lláh as a Manifestation of God he becomes a party to the Covenant and accepts the totality of His Revelation. If he then turns round and attacks Baháʼu'lláh or the Central Institution of the Faith he violates the Covenant. If this happens every effort is made to help that person to see the illogicality and error of his actions, but if he persists he must, in accordance with the instructions of Baháʼu'lláh Himself, be shunned as a Covenant-breaker.", "title": "Definition" }, { "paragraph_id": 8, "text": "The term Covenant-breaker was first used by ʻAbdu'l-Bahá to describe the partisans of his half-brother Mírzá Muhammad ʻAlí, who challenged his leadership. 
In ʻAbdu'l-Bahá's Will and Testament, he appointed Shoghi Effendi as the first Guardian, an institution of the religion now defined, and called for the election of the Universal House of Justice. ʻAbdul-Bahá defined in the same manner opposition to these two institutions as Covenant-breaking and advised all Baháʼís to shun anyone opposing the Covenant: \"...one of the greatest and most fundamental principles of the Cause of God is to shun and avoid entirely the Covenant-breakers, for they will utterly destroy the Cause of God, exterminate His Law and render of no account all efforts exerted in the past.\"", "title": "Definition" }, { "paragraph_id": 9, "text": "Most Covenant-breakers are involved in schismatic groups, but not always. For example, a Baháʼí who refuses to follow guidance on treatment of Covenant-breakers is at risk of being named one. One article originally written for the Baháʼí Encyclopedia, characterized Covenant-breakers that have emerged in the course of Baháʼí history as belonging to one of four categories:", "title": "Categorization" }, { "paragraph_id": 10, "text": "Shoghi Effendi wrote to the National Spiritual Assembly of Canada in 1957:", "title": "Categorization" }, { "paragraph_id": 11, "text": "People who have withdrawn from the Cause because they no longer feel that they can support its Teachings and Institutions sincerely, are not Covenant-breakers -- they are non-Baháʼís and should just be treated as such. Only those who ally themselves actively with known enemies of the Faith who are Covenant-breakers, and who attack the Faith in the same spirit as these people, can be considered, themselves, to be Covenant-breakers.", "title": "Categorization" }, { "paragraph_id": 12, "text": "Beyond this, many other relationships to the Baháʼí Faith exist, both positive and negative. Covenant-breaking does not apply to most of them. The following is a partial list of those who could not rightly be termed Covenant-breakers:", "title": "Categorization" }, { "paragraph_id": 13, "text": "Bábís are generally regarded as another religion altogether. Since Covenant-breaking presumes that one has submitted oneself to a covenant and then broken it, and Bábís never recognized or swore allegiance to Baháʼu'lláh, they are not Covenant-breakers.", "title": "Categorization" }, { "paragraph_id": 14, "text": "Followers of Subh-i-Azal, Baháʼu'lláh's half-brother who tried to poison him, engaged in active opposition to Baháʼís, and Shoghi Effendi did inform Baháʼís that they should avoid contact with his descendants, writing that \"No intelligent and loyal Baha'i would associate with a descendant of Azal, if he traced the slightest breath of criticism of our Faith, in any aspect, from that person. In fact these people should be strenuously avoided as having an inherited spiritual disease -- the disease of Covenant-breaking!\".", "title": "Categorization" }, { "paragraph_id": 15, "text": "Through the influence of Bahíyyih Khánum, the eldest daughter of Baháʼu'lláh, everyone in the household initially rallied around Shoghi Effendi after the death of ʻAbdu'l-Bahá. For several years his brother Husayn and several cousins served him as secretaries. The only ones publicly opposing him were Mírzá Muhammad ʻAlí and his followers, who were declared Covenant-breakers by ʻAbdu'l-Bahá. Contrary to ʻAbdu'l-Bahá's specific instruction, certain family members established illicit links with those whom ʻAbdu'l-Bahá had declared Covenant-breakers. 
After Bahíyyih Khánum died in 1932, Shoghi Effendi's eldest sister – Ruhangiz – married Nayyer Effendi Afnan, a son of Siyyid Ali Afnan, stepson of Baháʼu'lláh though Furughiyyih. The children of Furughiyyih sided with Muhammad ʻAlí and opposed ʻAbdu'l-Bahá, leaving only ʻAbdu'l-Bahá's own children as faithful among the descendants of Baháʼu'lláh. Moojan Momen describes these events as follows:", "title": "Shoghi Effendi's immediate family" }, { "paragraph_id": 16, "text": "All remained quiescent until the late 1930s when the case of the House of Bahá'u'lláh (q.v.) arose in Iraq. Shoghi Effendi asked Husayn Afnán (d. 1952), the son of Sayyid `Alí, to resign a high post that he held with the Iraqi government so that he would not be placed in the position of endorsing that government's actions in the case. Husayn refused and was expelled; one-by-one his brothers Faydí, Hasan, and Nayyir (Nayyir-`Alí, d. 1952) were also expelled. Events then proceeded rapidly. A series of marriages, engineered, according to Shoghi Effendi (MB), by Nayyir, occurred, linking the grandchildren of `Abdu'l-Bahá with the expelled sons of Sayyid `Alí Afnán.", "title": "Shoghi Effendi's immediate family" }, { "paragraph_id": 17, "text": "These marriages caused Ruhangiz, Mehrangiz, and Thurayyá to be declared Covenant-breakers by Shoghi Effendi, though there was some delay and concealment initially in order to avoid public degradation of the family. On 2 November 1941 Shoghi Effendi sent two cables announcing the expulsion of Túbá and her children Ruhi, Suhayl, and Fu'ad for consenting to the marriage of Thurayyá to Faydi. There was also mention that Ruhi's visit to America and Fu'ad's visit to England were without approval. In December 1941 he announced the expulsion of his sister Mehrangiz.", "title": "Shoghi Effendi's immediate family" }, { "paragraph_id": 18, "text": "Presumably being faced with a choice between shunning their disobedient family members and being themselves disobedient to ʻAbdu'l-Bahá and Shoghi Effendi, his cousins, aunts and uncles chose the latter.", "title": "Shoghi Effendi's immediate family" }, { "paragraph_id": 19, "text": "In 1944 Shoghi Effendi announced the expulsion of Munib Shahid, the grandson of ʻAbdu'l-Bahá's through Ruha, for marrying into the family of an enemy of the Baháʼís. In April 1945, he announced the expulsion of Husayn Ali, his brother, for joining the other Covenant-breakers. In a 1950 Shoghi Effendi sent another cable expelling the family of Ruha, another daughter of ʻAbdu'l-Bahá for showing \"open defiance\", and in December 1951 he announced a \"fourth alliance\" of members of the family of Siyyid Ali marrying into Ruha's family, and that his brother Riaz was included among the Covenant-breakers.", "title": "Shoghi Effendi's immediate family" }, { "paragraph_id": 20, "text": "In 1953 he cabled about Ruhi Afnan corresponding with Mirza Ahmad Sohrab, selling property of Baháʼu'lláh, and publicly \"misrepresenting the teachings and deliberately causing confusion in minds of authorities and the local population\".", "title": "Shoghi Effendi's immediate family" }, { "paragraph_id": 21, "text": "Most of the groups regarded by the larger group of Baháʼís as Covenant-breakers originated in the claims of Charles Mason Remey to the Guardianship in 1960. 
The Will and Testament of ʻAbdu'l-Bahá states that Guardians should be lineal descendants of Baháʼu'lláh, that each Guardian must select his successor during his lifetime, and that the nine Hands of the Cause of God permanently stationed in the holy land must approve the appointment by majority vote. Baháʼís interpret lineal descendency to mean physical familial relation to Baháʼu'lláh, of which Mason Remey was not.", "title": "Resultant groups" }, { "paragraph_id": 22, "text": "Almost all of Baháʼís accepted the determination of the Hands of the Cause that upon the death of Shoghi Effendi, he died \"without having appointed his successor\". There was an absence of a valid descendant of Baháʼu'lláh who could qualify under the terms of ʻAbdu'l-Bahá's will. Later the Universal House of Justice, initially elected in 1963, made a ruling on the subject that it was not possible for another Guardian to be appointed.", "title": "Resultant groups" }, { "paragraph_id": 23, "text": "In 1960 Remey, a Hand of the Cause himself, retracted his earlier position, and claimed to have been coerced. He claimed to be the successor to Shoghi Effendi. He and the small number of people who followed him were expelled from the mainstream Baháʼí community by the Hands of the Cause. Those close to Remey claimed that he went senile in old age, and by the time of his death he was largely abandoned, with his most prominent followers fighting amongst themselves for leadership.", "title": "Resultant groups" }, { "paragraph_id": 24, "text": "The largest group of the remaining followers of Remey, members of the \"Orthodox Baháʼí Faith\", believe that legitimate authority passed from Shoghi Effendi to Mason Remey to Joel Marangella. They, therefore, regard the Universal House of Justice in Haifa, Israel to be illegitimate, and its members and followers to be Covenant-breakers.", "title": "Resultant groups" }, { "paragraph_id": 25, "text": "In 2009, Jeffery Goldberg and Janice Franco, both from the mainstream Baháʼí community, joined the Orthodox Baháʼí Faith. Both of them were declared as Covenant-breakers and shunned. Goldberg's wife was told to divorce her husband.", "title": "Resultant groups" }, { "paragraph_id": 26, "text": "The present descendants of expelled members of Baháʼu'lláh's family have not specifically been declared Covenant-breakers, though they mostly do not associate themselves with the Baháʼí religion.", "title": "Resultant groups" }, { "paragraph_id": 27, "text": "A small group of Baháʼís in Northern New Mexico believe that these descendants are eligible for appointment to the Guardianship and are waiting for such a direct descendant of Baháʼu'lláh to arise as the rightful Guardian.", "title": "Resultant groups" }, { "paragraph_id": 28, "text": "Enayatullah (Zabih) Yazdani was designated a Covenant-breaker in June 2005, after many years of insisting on his views that Mason Remey was the legitimate successor to Shoghi Effendi and of accepting Donald Harvey as the third guardian. He is now the fifth guardian of a small group of Baháʼís and resides in Australia.", "title": "Resultant groups" }, { "paragraph_id": 29, "text": "There is also a small group in Montana, originally inspired by Leland Jensen, who claimed a status higher than that of the Guardian. His failed apocalyptic predictions and unsuccessful efforts to reestablish the Guardianship and the administration were apparent by his death in 1996. 
A dispute among Jensen's followers over the identity of the Guardian resulted in another division in 2001.", "title": "Resultant groups" }, { "paragraph_id": 30, "text": "Juan Cole, an American professor of Middle Eastern history who had been a Baháʼí for 25 years, left the religion in 1996 after being approached by a Continental Counselor about his involvement in a secret email list that was organizing opposition to certain Baháʼí institutions and policies. Cole was never labeled a Covenant-breaker, because he claimed to be a Unitarian-Universalist upon leaving. He went on to publish three papers in journals in 1998, 2000, and 2002. These heavily criticized the Baháʼí administration in the United States and suggested cult-like tendencies, particularly regarding the requirement of pre-publication review and the practice of shunning Covenant-breakers. For example, Cole wrote in 1998, \"Baha’is, like members of the Watchtower and other cults, shun those who are excommunicated.\" In 2000, he wrote: \"Baha'i authorities... keep believers in line by appealing to the welfare and unity of the community, and if these appeals fail then implicit or explicit threats of disfellowshipping and even shunning are invoked. ... Shunning is the central control mechanism in the Baha'i system\" In 2002, he wrote: \"Opportunistic sectarian-minded officials may have seen this... as a time when they could act arbitrarily and harshly against intellectuals and liberals, using summary expulsion and threats of shunning\".", "title": "Resultant groups" }, { "paragraph_id": 31, "text": "Moojan Momen, a Baháʼí author, reviewed 66 exit narratives of former Baháʼís, and identified 1996 (Cole's departure) to 2002 as a period of \"articulate and well-educated\" apostates that used the newly available Internet to connect with each other and form a community with its own \"mythology, creed and salvation stories becoming what could perhaps be called an anti-religion\". According to Momen, the narrative among these apostates of a \"fiercely aggressive religion where petty dictators rule\" is the opposite experience of most members, who see \"peace as a central teaching\", \"consultative decision-making\", and \"mechanisms to guard against individuals attacking the central institutions of the Bahá'í Faith or creating schisms.\" On the practice of shunning, Momen writes that it is \"rarely used and is only applied after prolonged negotiations fail to resolve the situation. To the best knowledge of the present author it has been used against no more than a handful of individuals in over two decades and to only the first of the apostates described below [Francesco Ficicchia] more than twenty-five years ago - although it is regularly mentioned in the literature produced by the apostates as though it were a frequent occurrence.\"", "title": "Resultant groups" } ]
Covenant-breaker is a term used in the Baháʼí Faith to refer to a person who has been excommunicated from the Baháʼí community for breaking the Covenant of Baháʼu'lláh, meaning actively promoting schism in the religion or otherwise opposing the legitimacy of the chain of succession of leadership. Excommunication among Baháʼís is rare and not used for transgressions of community standards, intellectual dissent, or conversion to other religions. Instead, it is the most severe punishment, reserved for suppressing organized dissent that threatens the unity of believers. Currently, the Universal House of Justice has the sole authority to declare a person a Covenant-breaker, and once identified, all Baháʼís are expected to shun them, even if they are family members. According to ʻAbdu'l-Bahá, Covenant-breaking is a contagious disease. The Baháʼí writings forbid association with Covenant-breakers, and Baháʼís are urged to avoid their literature, thus providing an exception to the Baháʼí principle of independent investigation of truth. Most Baháʼís are unaware of the small Baháʼí divisions that exist. Dr. Mikhail Sergeev has written about the Baháʼí practice of excommunication. The three largest attempts at alternative leadership—whose followers are considered Covenant-breakers—were from Subh-i-Azal, Mírzá Muhammad ʻAlí, and Charles Mason Remey. Others were declared Covenant-breakers for actively opposing or disobeying the head of the religion, or maliciously attacking the Baháʼí administration after leaving it.
2002-01-22T08:39:01Z
2023-09-12T15:28:39Z
[ "Template:Short description", "Template:Use dmy dates", "Template:Efn", "Template:Quote", "Template:Reflist", "Template:Cite encyclopedia", "Template:Blockquote", "Template:Or", "Template:Notelist", "Template:Cite thesis", "Template:Cite web", "Template:Baháʼí", "Template:Baháʼí sidebar", "Template:Third-party", "Template:Cite book", "Template:Sfn", "Template:Main", "Template:Unreferenced section", "Template:Cite news", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Covenant-breaker
7,828
Concord, Michigan
Concord is a village in Jackson County in the U.S. state of Michigan. The population was 1,050 at the 2010 census. The village is within Concord Township. The village was settled in 1831, and much of its downtown area is designated as part of the Concord Village Historic District. The village is located along M-60 about 15 miles (24.1 km) southwest of Jackson. Concord first received a post office in 1836. It was incorporated as a village in 1871. The Michigan Historical Center operates a museum in Concord called the Mann House. The Mann House is an excellent example of typical middle-class domestic architecture of the early 1880s and features the family's sleigh and buggy as well as furniture made at Jackson's Michigan State Prison. Concord is a general-law village incorporated within Concord Township. According to the United States Census Bureau, the village has a total area of 1.62 square miles (4.20 km²), of which 1.50 square miles (3.88 km²) is land and 0.12 square miles (0.31 km²) (7.41%) is water. The village is located within the T3S R3W survey township. Concord Community Schools (enrollment 900) participate in Class C and Division 4 of MHSAA athletics. Their teams are known as the Yellow Jackets and play in the Big 8 Conference. The schools' colors are purple and gold. The boys' cross country and track & field teams both claimed MHSAA State Championships during the 2009–10 school year, as well as back-to-back MHSAA State Championships in the 2014 and 2015 school years. In 2011 and 2012, the boys' cross country team won back-to-back MHSAA State Championships. As of the census of 2010, there were 1,050 people, 412 households, and 293 families living in the village. The population density was 700.0 inhabitants per square mile (270.3/km²). There were 484 housing units at an average density of 322.7 per square mile (124.6/km²); these densities are reproduced in the short arithmetic check below. The racial makeup of the village was 99.0% White, 0.3% African American, 0.1% Native American, 0.1% Asian, 0.1% from other races, and 0.4% from two or more races. Hispanic or Latino of any race were 1.8% of the population. There were 412 households, of which 33.7% had children under the age of 18 living with them, 54.6% were married couples living together, 10.4% had a female householder with no husband present, 6.1% had a male householder with no wife present, and 28.9% were non-families. 25.7% of all households were made up of individuals, and 12.6% had someone living alone who was 65 years of age or older. The average household size was 2.55 and the average family size was 3.02. The median age in the village was 40.9 years. 26% of residents were under the age of 18; 8.3% were between the ages of 18 and 24; 21.4% were from 25 to 44; 28.7% were from 45 to 64; and 15.6% were 65 years of age or older. The gender makeup of the village was 48.9% male and 51.1% female. As of the census of 2000, there were 1,101 people, 428 households, and 308 families living in the village. The population density was 748.4 inhabitants per square mile (289.0/km²). There were 499 housing units at an average density of 339.2 per square mile (131.0/km²). The racial makeup of the village was 97.91% White, 0.09% Black or African American, 0.27% Native American, 0.73% Asian, 0.64% from other races, and 0.36% from two or more races. 0.82% of the population were Hispanic or Latino of any race.
There were 428 households, out of which 34.3% had children under the age of 18 living with them, 57.9% were married couples living together, 10.7% had a female householder with no husband present, and 28.0% were non-families. 25.0% of all households were made up of individuals, and 10.5% had someone living alone who was 65 years of age or older. The average household size was 2.57 and the average family size was 3.09. In the village, the population was spread out, with 28.1% under the age of 18, 7.5% from 18 to 24, 28.2% from 25 to 44, 21.7% from 45 to 64, and 14.5% who were 65 years of age or older. The median age was 37 years. For every 100 females, there were 92.8 males. For every 100 females age 18 and over, there were 87.7 males. The median income for a household in the village was $46,500, and the median income for a family was $54,531. Males had a median income of $39,167 versus $23,594 for females. The per capita income for the village was $19,348. About 4.8% of families and 5.2% of the population were below the poverty line, including 3.1% of those under age 18 and 7.1% of those age 65 or over.
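The headline density figures in the census data above follow directly from the raw counts and the land area. A quick arithmetic check (purely illustrative; all input numbers are the ones quoted in this article):

```python
# Reproduce the quoted densities for Concord, Michigan.

# 2010 census figures
population_2010 = 1050
housing_units_2010 = 484
land_area_sq_mi = 1.50

print(population_2010 / land_area_sq_mi)     # 700.0 inhabitants per square mile
print(housing_units_2010 / land_area_sq_mi)  # ~322.7 housing units per square mile

# 2000 census sex ratio: 92.8 males per 100 females implies a female share of
print(100 / (100 + 92.8))                    # ~0.519, i.e. roughly 52% female
```

Dividing the 2000 population by the same 1.50 square miles gives about 734 rather than the quoted 748.4, which suggests the 2000 figures were computed against a slightly smaller land area (roughly 1.47 square miles).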
[ { "paragraph_id": 0, "text": "Concord is a village in Jackson County in the U.S. state of Michigan. The population was 1,050 at the 2010 census. The village is within Concord Township.", "title": "" }, { "paragraph_id": 1, "text": "Settled in 1831, much of the village's downtown area is designated as part of the Concord Village Historic District. The village is located along M-60 about 15 miles (24.1 km) southwest of Jackson.", "title": "" }, { "paragraph_id": 2, "text": "Concord first received a post office in 1836. It was incorporated as a village in 1871.", "title": "History" }, { "paragraph_id": 3, "text": "The Michigan Historical Center operates a museum in Concord called the Mann House. The Mann House is an excellent example of typical middle-class domestic architecture of the early 1880s and features the family's sleigh and buggy as well as Jackson's Michigan State Prison made furniture.", "title": "History" }, { "paragraph_id": 4, "text": "Concord is a general-law village incorporated within the Concord Township.", "title": "Government" }, { "paragraph_id": 5, "text": "According to the United States Census Bureau, the village has a total area of 1.62 square miles (4.20 km), of which 1.50 square miles (3.88 km) is land and 0.12 square miles (0.31 km) (7.41%) is water.", "title": "Geography" }, { "paragraph_id": 6, "text": "The village is located within the T3S R3W survey township.", "title": "Geography" }, { "paragraph_id": 7, "text": "Concord Community Schools (Enrollment 900) participate in Class C and Division 4 of MHSAA athletics. Their teams are known as the Yellow Jackets and play in the Big 8 Conference. The schools' colors are purple and gold. The boys' cross country and track & field teams both claimed MHSAA State Championships during the 2009–10 school year, as well as back to back MHSAA State Championships in the 2014 and 2015 school years. In 2011 and 2012, the boys cross country team won back to back MHSAA State Championships.", "title": "Demographics" }, { "paragraph_id": 8, "text": "As of the census of 2010, there were 1,050 people, 412 households, and 293 families living in the village. The population density was 700.0 inhabitants per square mile (270.3/km). There were 484 housing units at an average density of 322.7 per square mile (124.6/km). The racial makeup of the village was 99.0% White, 0.3% African American, 0.1% Native American, 0.1% Asian, 0.1% from other races, and 0.4% from two or more races. Hispanic or Latino of any race were 1.8% of the population.", "title": "Demographics" }, { "paragraph_id": 9, "text": "There were 412 households, of which 33.7% had children under the age of 18 living with them, 54.6% were married couples living together, 10.4% had a female householder with no husband present, 6.1% had a male householder with no wife present, and 28.9% were non-families. 25.7% of all households were made up of individuals, and 12.6% had someone living alone who was 65 years of age or older. The average household size was 2.55 and the average family size was 3.02.", "title": "Demographics" }, { "paragraph_id": 10, "text": "The median age in the village was 40.9 years. 26% of residents were under the age of 18; 8.3% were between the ages of 18 and 24; 21.4% were from 25 to 44; 28.7% were from 45 to 64; and 15.6% were 65 years of age or older. 
The gender makeup of the village was 48.9% male and 51.1% female.", "title": "Demographics" }, { "paragraph_id": 11, "text": "As of the census of 2000, there were 1,101 people, 428 households, and 308 families living in the village. The population density was 748.4 inhabitants per square mile (289.0/km). There were 499 housing units at an average density of 339.2 per square mile (131.0/km). The racial makeup of the village was 97.91% White, 0.09% Black or African American, 0.27% Native American, 0.73% Asian, 0.64% from other races, and 0.36% from two or more races. 0.82% of the population were Hispanic or Latino of any race.", "title": "Demographics" }, { "paragraph_id": 12, "text": "There were 428 households, out of which 34.3% had children under the age of 18 living with them, 57.9% were married couples living together, 10.7% had a female householder with no husband present, and 28.0% were non-families. 25.0% of all households were made up of individuals, and 10.5% had someone living alone who was 65 years of age or older. The average household size was 2.57 and the average family size was 3.09.", "title": "Demographics" }, { "paragraph_id": 13, "text": "In the village, the population was spread out, with 28.1% under the age of 18, 7.5% from 18 to 24, 28.2% from 25 to 44, 21.7% from 45 to 64, and 14.5% who were 65 years of age or older. The median age was 37 years. For every 100 females, there were 92.8 males. For every 100 females age 18 and over, there were 87.7 males.", "title": "Demographics" }, { "paragraph_id": 14, "text": "The median income for a household in the village was $46,500, and the median income for a family was $54,531. Males had a median income of $39,167 versus $23,594 for females. The per capita income for the village was $19,348. About 4.8% of families and 5.2% of the population were below the poverty line, including 3.1% of those under age 18 and 7.1% of those age 65 or over.", "title": "Demographics" } ]
Concord is a village in Jackson County in the U.S. state of Michigan. The population was 1,050 at the 2010 census. The village is within Concord Township. Settled in 1831, much of the village's downtown area is designated as part of the Concord Village Historic District. The village is located along M-60 about 15 miles (24.1 km) southwest of Jackson.
2023-01-09T03:46:01Z
[ "Template:Reflist", "Template:Cite web", "Template:Jackson County, Michigan", "Template:Authority control", "Template:Use mdy dates", "Template:Infobox settlement", "Template:Convert", "Template:US Census population" ]
https://en.wikipedia.org/wiki/Concord,_Michigan
7,829
Chaos Computer Club
The Chaos Computer Club (CCC) is Europe's largest association of hackers with 7,700 registered members. Founded in 1981, the association is incorporated as an eingetragener Verein in Germany, with local chapters (called Erfa-Kreise) in various cities in Germany and the surrounding countries, particularly where there are German-speaking communities. Since 1985, some chapters in Switzerland have organized an independent sister association called the Chaos Computer Club Schweiz [de] (CCC-CH) instead. The CCC describes itself as "a galactic community of life forms, independent of age, sex, race or societal orientation, which strives across borders for freedom of information…". In general, the CCC advocates more transparency in government, freedom of information, and the human right to communication. Supporting the principles of the hacker ethic, the club also fights for free universal access to computers and technological infrastructure as well as the use of open-source software. The CCC spreads an entrepreneurial vision that refuses capitalist control. It has been characterised as "…one of the most influential digital organisations anywhere, the centre of German digital culture, hacker culture, hacktivism, and the intersection of any discussion of democratic and digital rights". Members of the CCC have demonstrated and publicized a number of important information security problems. The CCC frequently criticizes new legislation and products with weak information security which endanger citizen rights or the privacy of users. Notable members of the CCC regularly function as expert witnesses for the German constitutional court, organize lawsuits and campaigns, or otherwise influence the political process. The CCC hosts the annual Chaos Communication Congress, Europe's biggest hacker gathering. When the event was held in the Hamburg congress center in 2013, it drew 9,000 guests. For the 2016 installment, 11,000 guests were expected, with additional viewers following the event via live streaming. Every four years, the Chaos Communication Camp is the outdoor alternative for hackers worldwide. The CCC also held, from 2009 to 2013, a yearly conference called SIGINT in Cologne which focused on the impact of digitisation on society. The SIGINT conference was discontinued in 2014. The four-day conference Gulaschprogrammiernacht [de] in Karlsruhe, with more than 1,500 participants, is the second largest annual event. Another yearly CCC event taking place on the Easter weekend is the Easterhegg, which is more workshop oriented than the other events. The CCC often uses the c-base station in Berlin as an event location or as function rooms. The CCC has published the irregular magazine Datenschleuder (data slingshot) since 1984. The Berlin chapter produces a monthly radio show called Chaosradio [de] which takes up various technical and political topics in a two-hour talk radio show. The program is aired on a local radio station called Fritz [de] and on the internet. Other programs have emerged in the context of Chaosradio, including radio programs offered by some regional Chaos Groups and the podcast spin-off CRE by Tim Pritlove. Many of the chapters of the CCC participate in the volunteer project Chaos macht Schule, which supports teaching in local schools. Its aim is to improve the technology and media literacy of pupils, parents, and teachers. CCC members are present in big tech companies and in administrative bodies. 
Andy Müller-Maguhn, one of the spokespersons of the CCC since 1986, was a member of the executive committee of ICANN (Internet Corporation for Assigned Names and Numbers) between 2000 and 2002. The CCC sensitises people to questions of data privacy and introduces them to the topic. Some of its local chapters support or organize so-called CryptoParties to introduce people to the basics of practical cryptography and internet anonymity. The CCC was founded in West Berlin on 12 September 1981 by Wau Holland and others, at a table in the rooms of the newspaper Die Tageszeitung which had previously belonged to the Kommune 1, in anticipation of the prominent role that information technology would play in the way people live and communicate. The CCC became world-famous in 1984 when it drew public attention to the security flaws of the German Bildschirmtext computer network by causing it to debit DM 134,000 (equivalent to €131,600 in 2021) in a Hamburg bank in favor of the club. The money was returned the next day in front of the press. Prior to the incident, the system provider had failed to react to proof of the security flaw provided by the CCC, claiming to the public that their system was safe. Bildschirmtext was the biggest commercially available online system targeted at the general public in its region at that time, run and heavily advertised by the German telecommunications agency Deutsche Bundespost, which also strove to keep up-to-date alternatives out of the market. In 1987, the CCC was peripherally involved in the first cyberespionage case to make international headlines. A group of German hackers led by Karl Koch, who was loosely affiliated with the CCC, was arrested for breaking into US government and corporate computers, and then selling operating-system source code to the Soviet KGB. This incident was portrayed in the movie 23. In April 1998, the CCC successfully demonstrated the cloning of a GSM customer card, breaking the COMP128 encryption algorithm used at that time by many GSM SIMs. In 2001, the CCC celebrated its twentieth birthday with an interactive light installation dubbed Project Blinkenlights that turned the building Haus des Lehrers in Berlin into a giant computer screen. A follow-up installation, Arcade, was created in 2002 by the CCC for the Bibliothèque nationale de France. Later, in October 2008, CCC's Project Blinkenlights went to Toronto, Ontario, Canada with project Stereoscope. In March 2008, the CCC acquired and published the fingerprints of German Minister of the Interior Wolfgang Schäuble. Its magazine Datenschleuder also included the fingerprint on a film that readers could use to fool fingerprint readers. This was done to protest the use of biometric data in German identity devices such as e-passports. The Staatstrojaner (Federal Trojan horse) is a computer surveillance program installed secretly on a suspect's computer, which the German police use to wiretap Internet telephony. This "source wiretapping" is the only feasible way to wiretap in this case, since Internet telephony programs will usually encrypt the data when it leaves the computer. The Federal Constitutional Court of Germany has ruled that the police may only use such programs for telephony wiretapping, and for no other purpose, and that this restriction should be enforced through technical and legal means. On 8 October 2011, the CCC published an analysis of the Staatstrojaner software. 
The software was found to have the ability to remotely control the target computer, to capture screenshots, and to fetch and run arbitrary extra code. The CCC says that having this functionality built in is in direct contradiction to the ruling of the constitutional court. In addition, there were a number of security problems with the implementation. The software was controllable over the Internet, but the commands were sent completely unencrypted, with no checks for authentication or integrity. This leaves any computer under surveillance using this software vulnerable to attack. The captured screenshots and audio files were encrypted, but so incompetently that the encryption was ineffective. All captured data was sent over a proxy server in the United States, which is problematic since the data is then temporarily outside German jurisdiction. The CCC's findings were widely reported in the German press. This trojan has also been nicknamed R2-D2 because the string "C3PO-r2d2-POE" was found in its code; another alias for it is 0zapftis ("It's tapped!" in Bavarian, a sardonic reference to Oktoberfest). According to a Sophos analysis, the trojan's behavior matches that described in a confidential memo between the German Landeskriminalamt and a software firm called DigiTask [de]; the memo was leaked on WikiLeaks in 2008. Among other correlations is the dropper's file name scuinst.exe, short for Skype Capture Unit Installer. The 64-bit Windows version installs a digitally signed driver, but one signed by the non-existent certificate authority "Goose Cert". DigiTask later admitted selling spy software to governments. The Federal Ministry of the Interior released a statement in which it denied that R2-D2 had been used by the Federal Criminal Police Office (BKA); however, this statement does not eliminate the possibility that it has been used by state-level German police forces. The BKA had, however, previously announced (in 2007) that it had somewhat similar trojan software that can inspect a computer's hard drive. Former WikiLeaks spokesman Daniel Domscheit-Berg was expelled from the national CCC (but not the Berlin chapter) in August 2011. This decision was revoked in February 2012. As a result of his role in the expulsion, board member Andy Müller-Maguhn was not reelected for another term. The CCC has repeatedly warned phone users of the weakness of biometric identification in the wake of the 2008 Schäuble fingerprints affair. In its "hacker ethics" the CCC includes "protect people's data", but also "Computers can change your life for the better". The club regards privacy as an individual right: the CCC does not discourage people from sharing or storing personal information on their phones, but advocates better privacy protection, and the use of specific browsing and sharing techniques by users. From a photograph of the user's fingerprint on a glass surface, using "easy everyday means", the biometrics hacking team of the CCC was able to unlock an iPhone 5S. Samsung describes the Galaxy S8's iris recognition system as "one of the safest ways to keep your phone locked and the contents private", since "patterns in your irises are unique to you and are virtually impossible to replicate", as quoted in official Samsung content. However, in some cases, using a high-resolution photograph of the phone owner's iris and a lens, the CCC claimed to be able to trick the authentication system. 
The Chaos Computer Club France (CCCF) was a fake hacker organisation created in 1989 in Lyon (France) by Jean-Bernard Condat, under the command of Jean-Luc Delacour, an agent of the governmental agency Direction de la surveillance du territoire. The primary goal of the CCCF was to monitor the French hacker community and gather information about it, identifying the hackers who could harm the country. Journalist Jean Guisnel [fr] said that this organization also worked with the French National Gendarmerie. The CCCF had an electronic magazine called Chaos Digest (ChaosD). Between 4 January 1993 and 5 August 1993, seventy-three issues were published (ISSN 1244-4901).
[ { "paragraph_id": 0, "text": "The Chaos Computer Club (CCC) is Europe's largest association of hackers with 7,700 registered members. Founded in 1981, the association is incorporated as an eingetragener Verein in Germany, with local chapters (called Erfa-Kreise) in various cities in Germany and the surrounding countries, particularly where there are German-speaking communities. Since 1985, some chapters in Switzerland have organized an independent sister association called the Chaos Computer Club Schweiz [de] (CCC-CH) instead.", "title": "" }, { "paragraph_id": 1, "text": "The CCC describes itself as \"a galactic community of life forms, independent of age, sex, race or societal orientation, which strives across borders for freedom of information…\". In general, the CCC advocates more transparency in government, freedom of information, and the human right to communication. Supporting the principles of the hacker ethic, the club also fights for free universal access to computers and technological infrastructure as well as the use of open-source software. The CCC spreads an entrepreneurial vision refusing capitalist control. It has been characterised as \"…one of the most influential digital organisations anywhere, the centre of German digital culture, hacker culture, hacktivism, and the intersection of any discussion of democratic and digital rights\".", "title": "" }, { "paragraph_id": 2, "text": "Members of the CCC have demonstrated and publicized a number of important information security problems. The CCC frequently criticizes new legislation and products with weak information security which endanger citizen rights or the privacy of users. Notable members of the CCC regularly function as expert witnesses for the German constitutional court, organize lawsuits and campaigns, or otherwise influence the political process.", "title": "" }, { "paragraph_id": 3, "text": "The CCC hosts the annual Chaos Communication Congress, Europe's biggest hacker gathering. When the event was held in the Hamburg congress center in 2013, it drew 9,000 guests. For the 2016 installment, 11,000 guests were expected, with additional viewers following the event via live streaming.", "title": "Activities" }, { "paragraph_id": 4, "text": "Every four years, the Chaos Communication Camp is the outdoor alternative for hackers worldwide. The CCC also held, from 2009 to 2013, a yearly conference called SIGINT in Cologne which focused on the impact of digitisation on society. The SIGINT conference was discontinued in 2014. The four-day conference Gulaschprogrammiernacht [de] in Karlsruhe is with more than 1,500 participants the second largest annual event. Another yearly CCC event taking place on the Easter weekend is the Easterhegg, which is more workshop oriented than the other events.", "title": "Activities" }, { "paragraph_id": 5, "text": "The CCC often uses the c-base station located in Berlin as an event location or as function rooms.", "title": "Activities" }, { "paragraph_id": 6, "text": "The CCC publishes the irregular magazine Datenschleuder (data slingshot) since 1984. The Berlin chapter produces a monthly radio show called Chaosradio [de] which picks up various technical and political topics in a two-hour talk radio show. The program is aired on a local radio station called Fritz [de] and on the internet. 
Other programs have emerged in the context of Chaosradio, including radio programs offered by some regional Chaos Groups and the podcast spin-off CRE by Tim Pritlove.", "title": "Activities" }, { "paragraph_id": 7, "text": "Many of the chapters of CCC participate in the volunteer project Chaos macht Schule which supports teaching in local schools. Its aims are to improve technology and media literacy of pupils, parents, and teachers.", "title": "Activities" }, { "paragraph_id": 8, "text": "CCC members are present in big tech companies and in administrative instances. One of the spokespersons of the CCC, as of 1986, Andy Müller-Maguhn, was a member of the executive committee of the ICANN (Internet Corporation for Assigned Names and Numbers) between 2000 and 2002.", "title": "Activities" }, { "paragraph_id": 9, "text": "The CCC sensitises and introduces people to the questions of data privacy. Some of its local chapters support or organize so called CryptoParties to introduce people to the basics of practical cryptography and internet anonymity.", "title": "Activities" }, { "paragraph_id": 10, "text": "The CCC was founded in West Berlin on 12 September 1981 at a table which had previously belonged to the Kommune 1 in the rooms of the newspaper Die Tageszeitung by Wau Holland and others in anticipation of the prominent role that information technology would play in the way people live and communicate.", "title": "History" }, { "paragraph_id": 11, "text": "The CCC became world-famous in 1984 when they drew public attention to the security flaws of the German Bildschirmtext computer network by causing it to debit DM 134,000 (equivalent to €131,600 in 2021) in a Hamburg bank in favor of the club. The money was returned the next day in front of the press. Prior to the incident, the system provider had failed to react to proof of the security flaw provided by the CCC, claiming to the public that their system was safe. Bildschirmtext was the biggest commercially available online system targeted at the general public in its region at that time, run and heavily advertised by the German telecommunications agency Deutsche Bundespost which also strove to keep up-to-date alternatives out of the market.", "title": "History" }, { "paragraph_id": 12, "text": "In 1987, the CCC was peripherally involved in the first cyberespionage case to make international headlines. A group of German hackers led by Karl Koch, who was loosely affiliated with the CCC, was arrested for breaking into US government and corporate computers, and then selling operating-system source code to the Soviet KGB. This incident was portrayed in the movie 23.", "title": "History" }, { "paragraph_id": 13, "text": "In April 1998, the CCC successfully demonstrated the cloning of a GSM customer card, breaking the COMP128 encryption algorithm used at that time by many GSM SIMs.", "title": "History" }, { "paragraph_id": 14, "text": "In 2001, the CCC celebrated its twentieth birthday with an interactive light installation dubbed Project Blinkenlights that turned the building Haus des Lehrers in Berlin into a giant computer screen. A follow-up installation, Arcade, was created in 2002 by the CCC for the Bibliothèque nationale de France. Later in October 2008 CCC's Project Blinkenlights went to Toronto, Ontario, Canada with project Stereoscope.", "title": "History" }, { "paragraph_id": 15, "text": "In March 2008, the CCC acquired and published the fingerprints of German Minister of the Interior Wolfgang Schäuble. 
The magazine also included the fingerprint on a film that readers could use to fool fingerprint readers. This was done to protest the use of biometric data in German identity devices such as e-passports.", "title": "History" }, { "paragraph_id": 16, "text": "The Staatstrojaner (Federal Trojan horse) is a computer surveillance program installed secretly on a suspect's computer, which the German police uses to wiretap Internet telephony. This \"source wiretapping\" is the only feasible way to wiretap in this case, since Internet telephony programs will usually encrypt the data when it leaves the computer. The Federal Constitutional Court of Germany has ruled that the police may only use such programs for telephony wiretapping, and for no other purpose, and that this restriction should be enforced through technical and legal means.", "title": "History" }, { "paragraph_id": 17, "text": "On 8 October 2011, the CCC published an analysis of the Staatstrojaner software. The software was found to have the ability to remote control the target computer, to capture screenshots, and to fetch and run arbitrary extra code. The CCC says that having this functionality built in is in direct contradiction to the ruling of the constitutional court.", "title": "History" }, { "paragraph_id": 18, "text": "In addition, there were a number of security problems with the implementation. The software was controllable over the Internet, but the commands were sent completely unencrypted, with no checks for authentication or integrity. This leaves any computer under surveillance using this software vulnerable to attack. The captured screenshots and audio files were encrypted, but so incompetently that the encryption was ineffective. All captured data was sent over a proxy server in the United States, which is problematic since the data is then temporarily outside the German jurisdiction.", "title": "History" }, { "paragraph_id": 19, "text": "The CCC's findings were widely reported in the German press. This trojan has also been nicknamed R2-D2 because the string \"C3PO-r2d2-POE\" was found in its code; another alias for it is 0zapftis (\"It's tapped!\" in Bavarian, a sardonic reference to Oktoberfest). According to a Sophos analysis, the trojan's behavior matches that described in a confidential memo between the German Landeskriminalamt and a software firm called DigiTask [de]; the memo was leaked on WikiLeaks in 2008. Among other correlations is the dropper's file name scuinst.exe, short for Skype Capture Unit Installer. The 64-bit Windows version installs a digitally signed driver, but signed by the non-existing certificate authority \"Goose Cert\". DigiTask later admitted selling spy software to governments.", "title": "History" }, { "paragraph_id": 20, "text": "The Federal Ministry of the Interior released a statement in which they denied that R2-D2 has been used by the Federal Criminal Police Office (BKA); this statement however does not eliminate the possibility that it has been used by state-level German police forces. The BKA had previously announced however (in 2007) that they had somewhat similar trojan software that can inspect a computer's hard drive.", "title": "History" }, { "paragraph_id": 21, "text": "Former WikiLeaks spokesman Daniel Domscheit-Berg was expelled from the national CCC (but not the Berlin chapter) in August 2011. This decision was revoked in February 2012. 
As a result of his role in the expulsion, board member Andy Müller-Maguhn was not reelected for another term.", "title": "History" }, { "paragraph_id": 22, "text": "The CCC has repeatedly warned phone users of the weakness of biometric identification in the wake of the 2008 Schäuble fingerprints affair. In their \"hacker ethics\" the CCC includes \"protect people data\", but also \"Computers can change your life for the better\". The club regards privacy as an individual right: the CCC does not discourage people from sharing or storing personal information on their phones, but advocates better privacy protection, and the use of specific browsing and sharing techniques by users.", "title": "History" }, { "paragraph_id": 23, "text": "From a photograph of the user's fingerprint on a glass surface, using \"easy everyday means\", the biometrics hacking team of the CCC was able to unlock an iPhone 5S.", "title": "History" }, { "paragraph_id": 24, "text": "The Samsung Galaxy S8's iris recognition system claims to be \"one of the safest ways to keep your phone locked and the contents private\" as \"patterns in your irises are unique to you and are virtually impossible to replicate\", as quoted in official Samsung content. However, in some cases, using a high resolution photograph of the phone owner's iris and a lens, the CCC claimed to be able to trick the authentication system.", "title": "History" }, { "paragraph_id": 25, "text": "The Chaos Computer Club France (CCCF) was a fake hacker organisation created in 1989 in Lyon (France) by Jean-Bernard Condat, under the command of Jean-Luc Delacour, an agent of the Direction de la surveillance du territoire governmental agency. The primary goal of the CCCF was to watch and to gather information about the French hacker community, identifying the hackers who could harm the country. Journalist Jean Guisnel [fr] said that this organization also worked with the French National Gendarmerie.", "title": "Fake Chaos Computer Club France" }, { "paragraph_id": 26, "text": "The CCCF had an electronic magazine called Chaos Digest (ChaosD). Between 4 January 1993 and 5 August 1993, seventy-three issues were published (ISSN 1244-4901).", "title": "Fake Chaos Computer Club France" } ]
The Chaos Computer Club (CCC) is Europe's largest association of hackers with 7,700 registered members. Founded in 1981, the association is incorporated as an eingetragener Verein in Germany, with local chapters in various cities in Germany and the surrounding countries, particularly where there are German-speaking communities. Since 1985, some chapters in Switzerland have organized an independent sister association called the Chaos Computer Club Schweiz (CCC-CH) instead. The CCC describes itself as "a galactic community of life forms, independent of age, sex, race or societal orientation, which strives across borders for freedom of information…". In general, the CCC advocates more transparency in government, freedom of information, and the human right to communication. Supporting the principles of the hacker ethic, the club also fights for free universal access to computers and technological infrastructure as well as the use of open-source software. The CCC spreads an entrepreneurial vision that refuses capitalist control. It has been characterised as "…one of the most influential digital organisations anywhere, the centre of German digital culture, hacker culture, hacktivism, and the intersection of any discussion of democratic and digital rights". Members of the CCC have demonstrated and publicized a number of important information security problems. The CCC frequently criticizes new legislation and products with weak information security which endanger citizen rights or the privacy of users. Notable members of the CCC regularly function as expert witnesses for the German constitutional court, organize lawsuits and campaigns, or otherwise influence the political process.
2002-02-25T15:51:15Z
2023-11-18T21:03:52Z
[ "Template:Expand German", "Template:Mono", "Template:ISSN", "Template:Cite news", "Template:Commons category", "Template:Official website", "Template:Infobox organization", "Template:Use dmy dates", "Template:Interlanguage link multi", "Template:Reflist", "Template:Distinguish", "Template:Ill", "Template:Lang", "Template:See also", "Template:Category see also", "Template:Cite web", "Template:Hacking in the 2010s", "Template:Authority control", "Template:Anchor", "Template:Failed verification", "Template:Main", "Template:Inflation", "Template:Webarchive", "Template:Citation", "Template:Short description" ]
https://en.wikipedia.org/wiki/Chaos_Computer_Club
7,830
Convention (norm)
A convention is a set of agreed, stipulated, or generally accepted standards, social norms, or other criteria, often taking the form of a custom. In a social context, a convention may retain the character of an "unwritten law" of custom (for example, the manner in which people greet each other, such as by shaking each other's hands). Certain types of rules or customs may become law and sometimes they may be further codified to formalize or enforce the convention (for example, laws that define on which side of the road vehicles must be driven). In physical sciences, numerical values (such as constants, quantities, or scales of measurement) are called conventional if they do not represent a measured property of nature, but originate in a convention, for example an average of many measurements, agreed between the scientists working with these values. A convention is a selection from among two or more alternatives, where the rule or alternative is agreed upon among participants. Often the word refers to unwritten customs shared throughout a community. For instance, it is conventional in many societies that strangers being introduced shake hands. Some conventions are explicitly legislated; for example, it is conventional in the United States and in Germany that motorists drive on the right side of the road, whereas in Australia, New Zealand, Japan, Nepal, India and the United Kingdom motorists drive on the left. The standardization of time is a human convention based on the solar cycle or calendar. The extent to which justice is conventional (as opposed to natural or objective) is historically an important debate among philosophers. The nature of conventions has raised long-lasting philosophical discussion. Quine, Davidson, and David Lewis published influential writings on the subject. Lewis's account of convention received an extended critique in Margaret Gilbert's On Social Facts (1989), where an alternative account is offered. Another view of convention comes from Ruth Millikan's Language: A Biological Model (2005), once more against Lewis. According to David Kalupahana, the Buddha described conventions—whether linguistic, social, political, moral, ethical, or even religious—as arising dependent on specific conditions. According to his paradigm, when conventions are considered absolute realities, they contribute to dogmatism, which in turn leads to conflict. This does not mean that conventions should be absolutely ignored as unreal and therefore useless. Instead, according to Buddhist thought, a wise person adopts a Middle Way without holding conventions to be ultimate or ignoring them when they are fruitful. In sociology, a social rule refers to any social convention commonly adhered to in a society. These rules are not written in law or otherwise formalized. In social constructionism there is a great focus on social rules. It is argued that these rules are socially constructed, that these rules act upon every member of a society, but at the same time, are reproduced by the individuals. Sociologists representing symbolic interactionism argue that social rules are created through the interaction between the members of a society. The focus on active interaction highlights the fluid, shifting character of social rules. These are specific to the social context, a context that varies through time and place. That means a social rule changes over time within the same society. What was acceptable in the past may no longer be the case. 
Similarly, rules differ across space: what is acceptable in one society may not be so in another. Social rules reflect what is acceptable or normal behaviour in any situation. Michel Foucault's concept of discourse is closely related to social rules, as it offers a possible explanation of how these rules are shaped and change. It is the social rules that tell people what is normal behaviour for any specific category. Thus, social rules tell a woman how to behave in a womanly manner, and a man, how to be manly. Other such rules are as follows: In government, convention is a set of unwritten rules that participants in the government must follow. These rules can be ignored only if a justification is clear, or can be provided. Otherwise, consequences follow. Consequences may include ignoring some other convention that has until now been followed. According to the traditional doctrine (Dicey), conventions cannot be enforced in courts, because they are non-legal sets of rules. Convention is particularly important in the Westminster system of government, where many of the rules are unwritten. The term "convention" is also used in international law to refer to certain formal statements of principle such as the Convention on the Rights of the Child. Conventions are adopted by international bodies such as the International Labour Organization and the United Nations. Conventions so adopted usually apply only to countries that ratify them, and do not automatically apply to member states of such bodies. These conventions are generally seen as having the force of international treaties for the ratifying countries. The best known of these are perhaps the several Geneva Conventions.
[ { "paragraph_id": 0, "text": "A convention is a set of agreed, stipulated, or generally accepted standards, social norms, or other criteria, often taking the form of a custom.", "title": "" }, { "paragraph_id": 1, "text": "In a social context, a convention may retain the character of an \"unwritten law\" of custom (for example, the manner in which people greet each other, such as by shaking each other's hands). Certain types of rules or customs may become law and sometimes they may be further codified to formalize or enforce the convention (for example, laws that define on which side of the road vehicles must be driven).", "title": "" }, { "paragraph_id": 2, "text": "In physical sciences, numerical values (such as constants, quantities, or scales of measurement) are called conventional if they do not represent a measured property of nature, but originate in a convention, for example an average of many measurements, agreed between the scientists working with these values.", "title": "" }, { "paragraph_id": 3, "text": "A convention is a selection from among two or more alternatives, where the rule or alternative is agreed upon among participants. Often the word refers to unwritten customs shared throughout a community. For instance, it is conventional in many societies that strangers being introduced shake hands. Some conventions are explicitly legislated; for example, it is conventional in the United States and in Germany that motorists drive on the right side of the road, whereas in Australia, New Zealand, Japan, Nepal, India and the United Kingdom motorists drive on the left. The standardization of time is a human convention based on the solar cycle or calendar. The extent to which justice is conventional (as opposed to natural or objective) is historically an important debate among philosophers.", "title": "General" }, { "paragraph_id": 4, "text": "The nature of conventions has raised long-lasting philosophical discussion. Quine, Davidson, and David Lewis published influential writings on the subject. Lewis's account of convention received an extended critique in Margaret Gilbert's On Social Facts (1989), where an alternative account is offered. Another view of convention comes from Ruth Millikan's Language: A Biological Model (2005), once more against Lewis.", "title": "General" }, { "paragraph_id": 5, "text": "According to David Kalupahana, The Buddha described conventions—whether linguistic, social, political, moral, ethical, or even religious—as arising dependent on specific conditions. According to his paradigm, when conventions are considered absolute realities, they contribute to dogmatism, which in turn leads to conflict. This does not mean that conventions should be absolutely ignored as unreal and therefore useless. Instead, according to Buddhist thought, a wise person adopts a Middle Way without holding conventions to be ultimate or ignoring them when they are fruitful.", "title": "General" }, { "paragraph_id": 6, "text": "In sociology a social rule refers to any social convention commonly adhered to in a society. These rules are not written in law or otherwise formalized. In social constructionism there is a great focus on social rules. 
It is argued that these rules are socially constructed, that these rules act upon every member of a society, but at the same time, are re-produced by the individuals.", "title": "Customary or social conventions" }, { "paragraph_id": 7, "text": "Sociologists representing symbolic interactionism argue that social rules are created through the interaction between the members of a society. The focus on active interaction highlights the fluid, shifting character of social rules. These are specific to the social context, a context that varies through time and place. That means a social rule changes over time within the same society. What was acceptable in the past may no longer be the case. Similarly, rules differ across space: what is acceptable in one society may not be so in another.", "title": "Customary or social conventions" }, { "paragraph_id": 8, "text": "Social rules reflect what is acceptable or normal behaviour in any situation. Michel Foucault's concept of discourse is closely related to social rules as it offers a possible explanation how these rules are shaped and change. It is the social rules that tell people what is normal behaviour for any specific category. Thus, social rules tell a woman how to behave in a womanly manner, and a man, how to be manly. Other such rules are as follows:", "title": "Customary or social conventions" }, { "paragraph_id": 9, "text": "In government, convention is a set of unwritten rules that participants in the government must follow. These rules can be ignored only if justification is clear, or can be provided. Otherwise, consequences follow. Consequences may include ignoring some other convention that has until now been followed. According to the traditional doctrine (Dicey), conventions cannot be enforced in courts, because they are non-legal sets of rules. Convention is particularly important in the Westminster System of government, where many of the rules are unwritten.", "title": "Government" }, { "paragraph_id": 10, "text": "The term \"convention\" is also used in international law to refer to certain formal statements of principle such as the Convention on the Rights of the Child. Conventions are adopted by international bodies such as the International Labour Organization and the United Nations. Conventions so adopted usually apply only to countries that ratify them, and do not automatically apply to member states of such bodies. These conventions are generally seen as having the force of international treaties for the ratifying countries. The best known of these are perhaps the several Geneva Conventions.", "title": "International law" } ]
A convention is a set of agreed, stipulated, or generally accepted standards, social norms, or other criteria, often taking the form of a custom. In a social context, a convention may retain the character of an "unwritten law" of custom. Certain types of rules or customs may become law and sometimes they may be further codified to formalize or enforce the convention. In physical sciences, numerical values are called conventional if they do not represent a measured property of nature, but originate in a convention, for example an average of many measurements, agreed between the scientists working with these values.
2002-01-22T14:07:07Z
2023-12-20T17:51:25Z
[ "Template:Example needed", "Template:Conservatism sidebar", "Template:Cite journal", "Template:Citation needed", "Template:Reflist", "Template:Cite web", "Template:World view", "Template:Authority control", "Template:Short description", "Template:Main", "Template:Primary source inline", "Template:Relevance inline", "Template:Better source needed", "Template:Nonverbal communication", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Convention_(norm)
7,832
Complete metric space
In mathematical analysis, a metric space $M$ is called complete (or a Cauchy space) if every Cauchy sequence of points in $M$ has a limit that is also in $M$. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. $\sqrt{2}$ is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. Cauchy sequence: A sequence $x_1, x_2, x_3, \ldots$ in a metric space $(X, d)$ is called Cauchy if for every positive real number $r > 0$ there is a positive integer $N$ such that for all positive integers $m, n > N$, $d(x_m, x_n) < r$. Complete space: A metric space $(X, d)$ is complete if any of the following equivalent conditions are satisfied: The space $\mathbb{Q}$ of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. Consider for instance the sequence defined by $x_1 = 1$ and $x_{n+1} = \frac{x_n}{2} + \frac{1}{x_n}$. This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: if the sequence did have a limit $x$, then by solving $x = \frac{x}{2} + \frac{1}{x}$ necessarily $x^2 = 2$, yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number $\sqrt{2}$. The open interval $(0,1)$, again with the absolute difference metric, is not complete either. The sequence defined by $x_n = \tfrac{1}{n}$ is Cauchy, but does not have a limit in the given space. However, the closed interval $[0,1]$ is complete; for example the given sequence does have a limit in this interval, namely zero. The space $\mathbb{R}$ of real numbers and the space $\mathbb{C}$ of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space $\mathbb{R}^n$, with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space $C[a, b]$ of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space $C(a, b)$ of continuous functions on $(a, b)$, for it may contain unbounded functions. Instead, with the topology of compact convergence, $C(a, b)$ can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric. The space $\mathbb{Q}_p$ of $p$-adic numbers is complete for any prime number $p$. This space completes $\mathbb{Q}$ with the $p$-adic metric in the same way that $\mathbb{R}$ completes $\mathbb{Q}$ with the usual metric. 
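To make the first example concrete, here is a minimal Python sketch (assuming only the standard library's fractions module; the helper name is illustrative) that computes the sequence $x_{n+1} = \frac{x_n}{2} + \frac{1}{x_n}$ exactly inside $\mathbb{Q}$: every term is a rational number, the terms crowd arbitrarily close together, yet the only candidate limit is $\sqrt{2}$, which lies outside the space.

```python
from fractions import Fraction

def sqrt2_cauchy_terms(steps):
    """Exact rational terms of x_1 = 1, x_{n+1} = x_n/2 + 1/x_n."""
    x = Fraction(1)
    terms = [x]
    for _ in range(steps):
        x = x / 2 + 1 / x  # only field operations on rationals, so x stays in Q
        terms.append(x)
    return terms

for n, t in enumerate(sqrt2_cauchy_terms(6), start=1):
    # x_n^2 - 2 shrinks toward 0, yet no rational x satisfies x^2 = 2.
    print(n, t, float(t * t - 2))
```

The first few terms are 1, 3/2, 17/12, 577/408, …; the printed values of $x_n^2 - 2$ shrink rapidly toward 0, which is exactly the sense in which $\sqrt{2}$ is a "missing point" of $\mathbb{Q}$.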
If $S$ is an arbitrary set, then the set $S^{\mathbb{N}}$ of all sequences in $S$ becomes a complete metric space if we define the distance between the sequences $(x_n)$ and $(y_n)$ to be $\tfrac{1}{N}$, where $N$ is the smallest index for which $x_N$ is distinct from $y_N$, or $0$ if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space $S$. Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem. Every compact metric space is complete, though complete spaces need not be compact. In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace $S$ of $\mathbb{R}^n$ is compact and therefore complete. Let $(X, d)$ be a complete metric space. If $A \subseteq X$ is a closed set, then $A$ is also complete. Let $(X, d)$ be a metric space. If $A \subseteq X$ is a complete subspace, then $A$ is also closed. If $X$ is a set and $M$ is a complete metric space, then the set $B(X, M)$ of all bounded functions $f$ from $X$ to $M$ is a complete metric space. Here we define the distance in $B(X, M)$ in terms of the distance in $M$ with the supremum norm: $d(f, g) = \sup\{ d(f(x), g(x)) : x \in X \}$. If $X$ is a topological space and $M$ is a complete metric space, then the set $C_b(X, M)$ consisting of all continuous bounded functions $f : X \to M$ is a closed subspace of $B(X, M)$ and hence also complete. The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior. The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces. Theorem (C. Ursescu) — Let $X$ be a complete metric space and let $S_1, S_2, \ldots$ be a sequence of subsets of $X$. For any metric space $M$, it is possible to construct a complete metric space $M'$ (which is also denoted as $\overline{M}$), which contains $M$ as a dense subspace. It has the following universal property: if $N$ is any complete metric space and $f$ is any uniformly continuous function from $M$ to $N$, then there exists a unique uniformly continuous function $f'$ from $M'$ to $N$ that extends $f$. The space $M'$ is determined up to isometry by this property (among all complete metric spaces isometrically containing $M$), and is called the completion of $M$. The completion of $M$ can be constructed as a set of equivalence classes of Cauchy sequences in $M$. 
For any two Cauchy sequences $x_\bullet = (x_n)$ and $y_\bullet = (y_n)$ in $M$, we may define their distance as $d(x_\bullet, y_\bullet) = \lim_{n \to \infty} d(x_n, y_n)$. (This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of $M$. The original space is embedded in this space via the identification of an element $x$ of $M$ with the equivalence class of sequences in $M$ converging to $x$ (i.e., the equivalence class containing the sequence with constant value $x$). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment. Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that "ought" to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class. For a prime $p$, the $p$-adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace. Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval $(0,1)$, which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). 
Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces. A topological space homeomorphic to a separable complete metric space is called a Polish space. Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous "subtraction" operation. In this setting, the distance between two points $x$ and $y$ is gauged not by a real number $\varepsilon$ via the metric $d$ in the comparison $d(x, y) < \varepsilon$, but by an open neighbourhood $N$ of $0$ via subtraction in the comparison $x - y \in N$. A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular "distance" from each other. It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in $X$, then $X$ is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces.
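To complement the Banach fixed-point theorem mentioned above, here is a minimal Python sketch (the helper banach_iterate is an illustrative name, not a standard API): the map $f(x) = \cos x$ sends the complete space $[0, 1]$ into itself and satisfies $|f'(x)| = |\sin x| \le \sin 1 < 1$ there, so it is a contraction and iteration converges to its unique fixed point.

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = f(x_n); for a contraction on a complete
    metric space, the iterates form a Cauchy sequence whose limit
    is the unique fixed point (Banach fixed-point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# cos maps [0, 1] into [cos 1, 1], a subset of [0, 1],
# with Lipschitz constant sin(1) ~ 0.84, so it is a contraction there.
p = banach_iterate(math.cos, 1.0)
print(p, math.cos(p))  # ~ 0.7390851332 (the Dottie number), with p == cos(p)
```

The error contracts by a factor of at most $\sin 1 \approx 0.84$ per step, so convergence is geometric; completeness of $[0, 1]$ is what guarantees that the Cauchy sequence of iterates actually has a limit inside the space.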
[ { "paragraph_id": 0, "text": "In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M.", "title": "" }, { "paragraph_id": 1, "text": "Intuitively, a space is complete if there are no \"points missing\" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. 2 {\\displaystyle {\\sqrt {2}}} is \"missing\" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to \"fill all the holes\", leading to the completion of a given space, as explained below.", "title": "" }, { "paragraph_id": 2, "text": "Cauchy sequence", "title": "Definition" }, { "paragraph_id": 3, "text": "A sequence x 1 , x 2 , x 3 , … {\\displaystyle x_{1},x_{2},x_{3},\\ldots } in a metric space ( X , d ) {\\displaystyle (X,d)} is called Cauchy if for every positive real number r > 0 {\\displaystyle r>0} there is a positive integer N {\\displaystyle N} such that for all positive integers m , n > N , {\\displaystyle m,n>N,}", "title": "Definition" }, { "paragraph_id": 4, "text": "Complete space", "title": "Definition" }, { "paragraph_id": 5, "text": "A metric space ( X , d ) {\\displaystyle (X,d)} is complete if any of the following equivalent conditions are satisfied:", "title": "Definition" }, { "paragraph_id": 6, "text": "The space Q of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. Consider for instance the sequence defined by x 1 = 1 {\\displaystyle x_{1}=1} and x n + 1 = x n 2 + 1 x n . {\\displaystyle x_{n+1}={\\frac {x_{n}}{2}}+{\\frac {1}{x_{n}}}.} This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: If the sequence did have a limit x , {\\displaystyle x,} then by solving x = x 2 + 1 x {\\displaystyle x={\\frac {x}{2}}+{\\frac {1}{x}}} necessarily x 2 = 2 , {\\displaystyle x^{2}=2,} yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number 2 {\\displaystyle {\\sqrt {2}}} .", "title": "Examples" }, { "paragraph_id": 7, "text": "The open interval (0,1), again with the absolute difference metric, is not complete either. The sequence defined by x n = 1 n {\\displaystyle x_{n}={\\tfrac {1}{n}}} is Cauchy, but does not have a limit in the given space. However the closed interval [0,1] is complete; for example the given sequence does have a limit in this interval, namely zero.", "title": "Examples" }, { "paragraph_id": 8, "text": "The space R of real numbers and the space C of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space R, with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a, b] of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a, b) of continuous functions on (a, b), for it may contain unbounded functions. 
Instead, with the topology of compact convergence, C(a, b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric.", "title": "Examples" }, { "paragraph_id": 9, "text": "The space Qp of p-adic numbers is complete for any prime number p . {\\displaystyle p.} This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric.", "title": "Examples" }, { "paragraph_id": 10, "text": "If S {\\displaystyle S} is an arbitrary set, then the set S of all sequences in S {\\displaystyle S} becomes a complete metric space if we define the distance between the sequences ( x n ) {\\displaystyle \\left(x_{n}\\right)} and ( y n ) {\\displaystyle \\left(y_{n}\\right)} to be 1 N {\\displaystyle {\\tfrac {1}{N}}} where N {\\displaystyle N} is the smallest index for which x N {\\displaystyle x_{N}} is distinct from y N {\\displaystyle y_{N}} or 0 {\\displaystyle 0} if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S . {\\displaystyle S.}", "title": "Examples" }, { "paragraph_id": 11, "text": "Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem.", "title": "Examples" }, { "paragraph_id": 12, "text": "Every compact metric space is complete, though complete spaces need not be compact. In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace S {\\displaystyle S} of R is compact and therefore complete.", "title": "Some theorems" }, { "paragraph_id": 13, "text": "Let ( X , d ) {\\displaystyle (X,d)} be a complete metric space. If A ⊆ X {\\displaystyle A\\subseteq X} is a closed set, then A {\\displaystyle A} is also complete. Let ( X , d ) {\\displaystyle (X,d)} be a metric space. If A ⊆ X {\\displaystyle A\\subseteq X} is a complete subspace, then A {\\displaystyle A} is also closed.", "title": "Some theorems" }, { "paragraph_id": 14, "text": "If X {\\displaystyle X} is a set and M {\\displaystyle M} is a complete metric space, then the set B ( X , M ) {\\displaystyle B(X,M)} of all bounded functions f from X to M {\\displaystyle M} is a complete metric space. Here we define the distance in B ( X , M ) {\\displaystyle B(X,M)} in terms of the distance in M {\\displaystyle M} with the supremum norm", "title": "Some theorems" }, { "paragraph_id": 15, "text": "If X {\\displaystyle X} is a topological space and M {\\displaystyle M} is a complete metric space, then the set C b ( X , M ) {\\displaystyle C_{b}(X,M)} consisting of all continuous bounded functions f : X → M {\\displaystyle f:X\\to M} is a closed subspace of B ( X , M ) {\\displaystyle B(X,M)} and hence also complete.", "title": "Some theorems" }, { "paragraph_id": 16, "text": "The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior.", "title": "Some theorems" }, { "paragraph_id": 17, "text": "The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces.", "title": "Some theorems" }, { "paragraph_id": 18, "text": "Theorem (C. 
Ursescu) — Let X be a complete metric space and let {\displaystyle S_{1},S_{2},\ldots } be a sequence of subsets of X.", "title": "Some theorems" }, { "paragraph_id": 19, "text": "For any metric space M, it is possible to construct a complete metric space M′ (which is also denoted as {\displaystyle {\overline {M}}}), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N that extends f. The space M′ is determined up to isometry by this property (among all complete metric spaces isometrically containing M), and is called the completion of M.", "title": "Completion" }, { "paragraph_id": 20, "text": "The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. For any two Cauchy sequences {\displaystyle x_{\bullet }=\left(x_{n}\right)} and {\displaystyle y_{\bullet }=\left(y_{n}\right)} in M, we may define their distance as {\displaystyle d(x_{\bullet },y_{\bullet })=\lim _{n\to \infty }d(x_{n},y_{n}).}", "title": "Completion" }, { "paragraph_id": 21, "text": "(This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But \"having distance 0\" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M with the equivalence class of sequences in M converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment.", "title": "Completion" }, { "paragraph_id": 22, "text": "Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that \"ought\" to have a given real limit is identified with that real number. 
The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class.", "title": "Completion" }, { "paragraph_id": 23, "text": "For a prime p, the p-adic numbers arise by completing the rational numbers with respect to a different metric.", "title": "Completion" }, { "paragraph_id": 24, "text": "If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace.", "title": "Completion" }, { "paragraph_id": 25, "text": "Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval (0,1), which is not complete.", "title": "Topologically complete spaces" }, { "paragraph_id": 26, "text": "In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well.", "title": "Topologically complete spaces" }, { "paragraph_id": 27, "text": "Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces.", "title": "Topologically complete spaces" }, { "paragraph_id": 28, "text": "A topological space homeomorphic to a separable complete metric space is called a Polish space.", "title": "Topologically complete spaces" }, { "paragraph_id": 29, "text": "Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous \"subtraction\" operation. In this setting, the distance between two points x and y is gauged not by a real number ε via the metric d in the comparison d(x, y) < ε, but by an open neighbourhood N of 0 via subtraction in the comparison x − y ∈ N.", "title": "Alternatives and generalizations" }, { "paragraph_id": 30, "text": "A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular \"distance\" from each other.", "title": "Alternatives and generalizations" }, { "paragraph_id": 31, "text": "It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. 
If every Cauchy net (or equivalently every Cauchy filter) has a limit in X, then X is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces.", "title": "Alternatives and generalizations" } ]
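A short aside (an illustration added here, not part of the article text): the incompleteness of Q in the example of paragraph 6 can be checked numerically. The following minimal Python sketch runs the recursion in exact rational arithmetic, so every term provably stays in Q; successive terms crowd together (the Cauchy property) while the squares approach 2, whose square root is not rational.

from fractions import Fraction

# x_1 = 1, x_{n+1} = x_n/2 + 1/x_n, computed exactly in Q
x = Fraction(1)
terms = [x]
for _ in range(6):
    x = x / 2 + 1 / x          # still a rational number
    terms.append(x)

for a, b in zip(terms, terms[1:]):
    # gaps |x_{n+1} - x_n| shrink rapidly (Cauchy), yet x_n^2 - 2 never reaches 0 in Q
    print(a, " gap:", float(abs(b - a)), " x^2 - 2:", float(a * a - 2))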
In mathematical analysis, a metric space M is called complete if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it. For instance, the set of rational numbers is not complete, because e.g. √2 is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it. It is always possible to "fill all the holes", leading to the completion of a given space, as explained below.
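Another aside (my sketch, not article text): the Banach fixed-point theorem of paragraph 17 is easy to watch in action. On the complete space [0, 1], f(x) = cos(x) is a contraction (its derivative is bounded in absolute value by sin(1) ≈ 0.84 < 1), so iterating it from any starting point yields a Cauchy sequence whose limit, guaranteed to exist by completeness, is the unique fixed point.

import math

x = 0.5
for i in range(200):
    x_next = math.cos(x)
    if abs(x_next - x) < 1e-12:    # successive iterates form a Cauchy sequence in [0, 1]
        break
    x = x_next

print(f"fixed point ~ {x_next:.12f} after {i + 1} iterations")   # ~ 0.739085133215
print("residual |cos(x) - x| =", abs(math.cos(x_next) - x_next))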
2002-01-22T17:40:44Z
2023-08-08T14:44:38Z
[ "Template:Annotated link", "Template:Cite book", "Template:ISBN", "Template:Short description", "Template:Redirect", "Template:Mvar", "Template:Math", "Template:Math theorem", "Template:Open-open", "Template:Closed-closed", "Template:Main", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Complete_metric_space
7,833
The Amazing Criswell
Jeron Criswell King (August 18, 1907 – October 4, 1982), known by his stage name The Amazing Criswell /ˈkrɪzwɛl/, was an American psychic known for wildly inaccurate predictions. In person, he went by Charles Criswell King, and was sometimes credited as Jeron King Criswell. Criswell was flamboyant, with spit-curled hair, a stentorian style of speaking, and a sequined tuxedo. He owned a coffin in which he claimed to sleep. He grew up in a troubled family in Indiana with relatives who owned a funeral home, and said that he became comfortable with sleeping in caskets in the storeroom. He appeared in two films directed by Ed Wood—Plan 9 from Outer Space (1957) and Night of the Ghouls (1959)—and also appeared in Orgy of the Dead (1965), which was written by Wood. Criswell claimed that he did not talk until the age of four. During a thunderstorm he first spoke, making his first prediction: "the rain will stop." From this point on he was talkative, often placing himself center stage at any opportunity. Criswell said he had once worked as a radio announcer and news broadcaster. He began buying time on a local Los Angeles television station in the early 1950s to run infomercials for his Criswell Family Vitamins. To fill the time, he began his "Criswell Predicts" part of the show. This made him a minor off-beat celebrity in Los Angeles and around Hollywood, and his friendship with old show-business people such as Mae West and rising fringe celebrities such as Korla Pandit made Criswell an entertaining presence at parties. His fame brought him appearances on The Jack Paar Show (1957–1962). He also published his predictions in three issues of Spaceway Magazine (February 1955, April 1955, and June 1955) and ran a weekly syndicated newspaper column starting on September 6, 1951. He later published three books of predictions: From Now to the Year 2000, Your Next Ten Years, and Forbidden Predictions. He also recorded a long-playing record, Your Incredible Future (which was later released on CD), featuring 84 minutes of his predictions in his own voice. Criswell appeared in the movies of writer and director Ed Wood. After Criswell's death, his longtime friend Paul Marco released Criswell's song "Someone Walked Over My Grave" on a 7" record, which Criswell had recorded as a memorial song that he wanted released posthumously. Criswell's predictions were nationally syndicated and he appeared on the television show Criswell Predicts on KLAC Channel 13 (now KCOP-13) in Los Angeles as well as being recorded for syndication. His announcer was Bob Shields, who later played the judge on The Judge. Criswell wore heavy makeup in public after his live program was broadcast in Los Angeles. Only selected people were allowed in the KCOP studio during his broadcast. Criswell wrote several books of predictions, including 1968's Criswell Predicts: From Now to the Year 2000. In it, he claimed that Denver, Colorado, would be struck by a ray from space that would cause all metal to adopt the qualities of rubber, leading to horrific accidents at amusement parks. He predicted mass cannibalism and the end of planet Earth, which he set as happening on August 18, 1999. Criswell was a student of history. He believed history repeated itself, that the United States were the "modern Romans". Each day, he read the St. Louis Post-Dispatch looking for clues for his predictions. 
Some sources claim Criswell's most famous prediction was on The Jack Paar Program (1962–65) in March 1963, when he predicted that US President John F. Kennedy would not run for reelection in 1964 because something was going to happen to him in November 1963. Sources say that Criswell never claimed to be a real psychic; however, those who knew him, including actress and fellow Plan 9 alumna Maila Nurmi ("Vampira"), believed he was. According to writer Charles A. Coulombe, whose family rented an apartment from him, Criswell told Coulombe's father "[I] had the gift, but [I] lost it when I started taking money for it." Criswell married a former speakeasy dancer named Halo Meadows, who once appeared on You Bet Your Life, and whom Coulombe describes as "quite mad": "Mrs Criswell had a huge standard poodle (named "Buttercup") which she was convinced was the reincarnation of her cousin Thomas. She spent a great deal of time sunbathing ... which, given her size, was not too pleasing a sight." Mae West used Criswell as her personal psychic; he once predicted her rise to President of the United States, whereupon she, Criswell and George Liberace, the brother of showman Liberace, would take a rocket to the Moon. Criswell and West were great friends and she would lavish him with home-cooked food which she had delivered to the studio that he shared with Maila Nurmi ("Vampira"). It is said that West sold Criswell her old luxury cars for five dollars. Criswell died on October 4, 1982, at the age of 75; he was cremated days later.
[ { "paragraph_id": 0, "text": "Jeron Criswell King (August 18, 1907 – October 4, 1982), known by his stage-name The Amazing Criswell /ˈkrɪzwɛl/, was an American psychic known for wildly inaccurate predictions. In person, he went by Charles Criswell King, and was sometimes credited as Jeron King Criswell.", "title": "" }, { "paragraph_id": 1, "text": "Criswell was flamboyant, with spit curled hair, a stentorian style of speaking, and a sequined tuxedo. He owned a coffin in which he claimed to sleep. He grew up in a troubled family in Indiana with relatives who owned a funeral home, and said that he became comfortable with sleeping in caskets in the storeroom. He appeared in two films directed by Ed Wood—Plan 9 from Outer Space (1957) and Night of the Ghouls (1959)—and also appeared in Orgy of the Dead (1965), which was written by Wood.", "title": "" }, { "paragraph_id": 2, "text": "Criswell claimed that he never talked until the age of four. During a thunderstorm he first spoke, making his first prediction, \"the rain will stop.\" From this point on he was talkative, often placing himself center stage at any opportunity.", "title": "Early life" }, { "paragraph_id": 3, "text": "Criswell said he had once worked as a radio announcer and news broadcaster. He began buying time on a local Los Angeles television station in the early 1950s to run infomercials for his Criswell Family Vitamins. To fill the time, he began his \"Criswell Predicts\" part of the show. This made him a minor off-beat celebrity in Los Angeles and around Hollywood, and his friendship with old show-business people such as Mae West and rising fringe celebrities such as Korla Pandit made Criswell an entertaining presence at parties. His fame brought him appearances on The Jack Paar Show (1957–1962) which allowed him to publish his predictions in three publications of Spaceway Magazine (February 1955, April 1955, and June 1955), as well as run a weekly syndicated newspaper article starting on September 6, 1951. He later published three books of predictions; From Now to the Year 2000, Your Next Ten Years, and Forbidden Predictions. He also recorded a long playing record, Your Incredible Future (which was later released on CD), featuring 84 minutes of his predictions in his own voice. Criswell appeared in the movies of writer and director Ed Wood. After Criswell's death, his longtime friend Paul Marco released Criswell's song \"Someone Walked Over My Grave\" on a 7\" record which was recorded by Criswell as a memorial song that he wanted released posthumously.", "title": "Career" }, { "paragraph_id": 4, "text": "Criswell's predictions were nationally syndicated and he appeared on the television show Criswell Predicts on KLAC Channel 13 (now KCOP-13) in Los Angeles as well as being recorded for syndication. His announcer was Bob Shields, who later played the judge on The Judge. Criswell wore heavy makeup in public after his live program was broadcast in Los Angeles. Only selected people were allowed in the KCOP studio during his broadcast.", "title": "Predictions" }, { "paragraph_id": 5, "text": "Criswell wrote several books of predictions, including 1968's Criswell Predicts: From Now to the Year 2000. In it, he claimed that Denver, Colorado, would be struck by a ray from space that would cause all metal to adopt the qualities of rubber, leading to horrific accidents at amusement parks. 
He predicted mass cannibalism and the end of planet Earth, which he set as happening on August 18, 1999.", "title": "Predictions" }, { "paragraph_id": 6, "text": "Criswell was a student of history. He believed history repeated itself, that the United States were the \"modern Romans\". Each day, he read the St. Louis Post-Dispatch looking for clues for his predictions.", "title": "Predictions" }, { "paragraph_id": 7, "text": "Some sources claim Criswell's most famous prediction was on The Jack Paar Program (1962–65) in March 1963, when he predicted that US President John F. Kennedy would not run for reelection in 1964 because something was going to happen to him in November 1963.", "title": "Predictions" }, { "paragraph_id": 8, "text": "Sources say that Criswell never claimed to be a real psychic; however, those who knew him, including actress and fellow Plan 9 alumna Maila Nurmi (\"Vampira\"), believed he was. According to writer Charles A. Coulombe, whose family rented an apartment from him, Criswell told Coulombe's father \"[I] had the gift, but [I] lost it when I started taking money for it.\"", "title": "Predictions" }, { "paragraph_id": 9, "text": "Criswell married a former speakeasy dancer named Halo Meadows, who once appeared on You Bet Your Life, and whom Coulombe describes as \"quite mad\": \"Mrs Criswell had a huge standard poodle (named \"Buttercup\") which she was convinced was the reincarnation of her cousin Thomas. She spent a great deal of time sunbathing ... which, given her size, was not too pleasing a sight.\"", "title": "Private life" }, { "paragraph_id": 10, "text": "Mae West used Criswell as her personal psychic; he once predicted her rise to President of the United States, whereupon she, Criswell and George Liberace, the brother of showman Liberace, would take a rocket to the Moon. Criswell and West were great friends and she would lavish him with home-cooked food which she had delivered to the studio that he shared with Maila Nurmi (\"Vampira\"). It is said that West sold Criswell her old luxury cars for five dollars.", "title": "Private life" }, { "paragraph_id": 11, "text": "Criswell died on October 4, 1982, at the age of 75; he was cremated days later.", "title": "Private life" } ]
Jeron Criswell King, known by his stage name The Amazing Criswell, was an American psychic known for wildly inaccurate predictions. In person, he went by Charles Criswell King, and was sometimes credited as Jeron King Criswell. Criswell was flamboyant, with spit-curled hair, a stentorian style of speaking, and a sequined tuxedo. He owned a coffin in which he claimed to sleep. He grew up in a troubled family in Indiana with relatives who owned a funeral home, and said that he became comfortable with sleeping in caskets in the storeroom. He appeared in two films directed by Ed Wood—Plan 9 from Outer Space (1957) and Night of the Ghouls (1959)—and also appeared in Orgy of the Dead (1965), which was written by Wood.
2002-02-25T15:43:11Z
2023-12-23T04:18:56Z
[ "Template:Cite web", "Template:IMDb name", "Template:Reflist", "Template:Notelist", "Template:IMDb title", "Template:Amg name", "Template:Authority control", "Template:Nowrap", "Template:Infobox person", "Template:IPAc-en", "Template:Short description", "Template:Cn", "Template:Cite book", "Template:Cbignore", "Template:Wikiquote", "Template:Efn" ]
https://en.wikipedia.org/wiki/The_Amazing_Criswell
7,834
Chain reaction
A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events. Chain reactions are one way that systems which are not in thermodynamic equilibrium can release energy or increase entropy in order to reach a state of higher entropy. For example, a system may not be able to reach a lower energy state by releasing energy into the environment, because it is hindered or prevented in some way from taking the path that will result in the energy release. If a reaction results in a small energy release making way for more energy releases in an expanding chain, then the system will typically collapse explosively until much or all of the stored energy has been released. A macroscopic metaphor for chain reactions is thus a snowball causing a larger snowball until finally an avalanche results ("snowball effect"). This is a result of stored gravitational potential energy seeking a path of release over friction. Chemically, the equivalent to a snow avalanche is a spark causing a forest fire. In nuclear physics, a single stray neutron can result in a prompt critical event, which may finally be energetic enough for a nuclear reactor meltdown or (in a bomb) a nuclear explosion. Numerous chain reactions can be represented by a mathematical model based on Markov chains. In 1913, the German chemist Max Bodenstein first put forth the idea of chemical chain reactions. If two molecules react, not only molecules of the final reaction products are formed, but also some unstable molecules which can further react with the parent molecules with a far larger probability than the initial reactants. (In the new reaction, further unstable molecules are formed besides the stable products, and so on.) In 1918, Walther Nernst proposed that the photochemical reaction between hydrogen and chlorine is a chain reaction in order to explain what is known as the quantum yield phenomenon. This means that one photon of light is responsible for the formation of as many as 10^6 molecules of the product HCl. Nernst suggested that the photon dissociates a Cl2 molecule into two Cl atoms which each initiate a long chain of reaction steps forming HCl. In 1923, Danish and Dutch scientists Christian Christiansen and Hendrik Anthony Kramers, in an analysis of the formation of polymers, pointed out that such a chain reaction need not start with a molecule excited by light, but could also start with two molecules colliding violently due to thermal energy, as previously proposed for initiation of chemical reactions by van 't Hoff. Christiansen and Kramers also noted that if, in one link of the reaction chain, two or more unstable molecules are produced, the reaction chain would branch and grow. The result is in fact an exponential growth, thus giving rise to explosive increases in reaction rates, and indeed to chemical explosions themselves. This was the first proposal for the mechanism of chemical explosions. A quantitative theory of chain chemical reactions was created later by the Soviet physicist Nikolay Semyonov in 1934. Semyonov shared the Nobel Prize in 1956 with Sir Cyril Norman Hinshelwood, who independently developed many of the same quantitative concepts. The main steps in a chain reaction are of the following types. The chain length is defined as the average number of times the propagation cycle is repeated, and equals the overall reaction rate divided by the initiation rate. 
Some chain reactions have complex rate equations with fractional order or mixed order kinetics. The reaction H2 + Br2 → 2 HBr proceeds by the following mechanism: As can be explained using the steady-state approximation, the thermal reaction has an initial rate of fractional order (3/2), and a complete rate equation with a two-term denominator (mixed-order kinetics). The pyrolysis (thermal decomposition) of acetaldehyde, CH3CHO (g) → CH4 (g) + CO (g), proceeds via the Rice–Herzfeld mechanism: The methyl and CHO groups are free radicals. This reaction step provides methane, which is one of the two main products. The product •CH3CO (g) of the previous step gives rise to carbon monoxide (CO), which is the second main product. The sum of the two propagation steps corresponds to the overall reaction CH3CHO (g) → CH4 (g) + CO (g), catalyzed by a methyl radical •CH3. This step is the only source of ethane (a minor product), and it is concluded to be the main chain-ending step. Although this mechanism explains the principal products, there are others that are formed in a minor degree, such as acetone (CH3COCH3) and propanal (CH3CH2CHO). Applying the steady-state approximation for the intermediate species CH3(g) and CH3CO(g), the rate law for the formation of methane and the order of reaction are found: The rate of formation of the product methane is {\displaystyle (1)\quad {\frac {d{\ce {[CH4]}}}{dt}}=k_{2}{\ce {[CH3]}}{\ce {[CH3CHO]}}} For the intermediates {\displaystyle (2)\quad {\frac {d{\ce {[CH3]}}}{dt}}=k_{1}{\ce {[CH3CHO]}}-k_{2}{\ce {[CH3]}}{\ce {[CH3CHO]}}+k_{3}{\ce {[CH3CO]}}-2k_{4}{\ce {[CH3]}}^{2}=0} and {\displaystyle (3)\quad {\frac {d{\ce {[CH3CO]}}}{dt}}=k_{2}{\ce {[CH3]}}{\ce {[CH3CHO]}}-k_{3}{\ce {[CH3CO]}}=0} Adding (2) and (3), we obtain {\displaystyle k_{1}{\ce {[CH3CHO]}}-2k_{4}{\ce {[CH3]}}^{2}=0} so that {\displaystyle (4)\quad {\ce {[CH3]}}=\left({\frac {k_{1}}{2k_{4}}}\right)^{1/2}{\ce {[CH3CHO]}}^{1/2}} Using (4) in (1) gives the rate law {\displaystyle (5)\quad {\frac {d{\ce {[CH4]}}}{dt}}=k_{2}\left({\frac {k_{1}}{2k_{4}}}\right)^{1/2}{\ce {[CH3CHO]}}^{3/2},} which is order 3/2 in the reactant CH3CHO. A nuclear chain reaction was proposed by Leó Szilárd in 1933, shortly after the neutron was discovered, yet more than five years before nuclear fission was first discovered. Szilárd knew of chemical chain reactions, and he had been reading about an energy-producing nuclear reaction involving high-energy protons bombarding lithium, demonstrated by John Cockcroft and Ernest Walton in 1932. Now, Szilárd proposed to use neutrons theoretically produced from certain nuclear reactions in lighter isotopes, to induce further reactions in light isotopes that produced more neutrons. This would in theory produce a chain reaction at the level of the nucleus. He did not envision fission as one of these neutron-producing reactions, since this reaction was not known at the time. Experiments he proposed using beryllium and indium failed. 
Later, after fission was discovered in 1938, Szilárd immediately realized the possibility of using neutron-induced fission as the particular nuclear reaction necessary to create a chain reaction, so long as fission also produced neutrons. In 1939, with Enrico Fermi, Szilárd proved this neutron-multiplying reaction in uranium. In this reaction, a neutron plus a fissionable atom causes a fission resulting in a larger number of neutrons than the single one that was consumed in the initial reaction. Thus was born the practical nuclear chain reaction by the mechanism of neutron-induced nuclear fission. Specifically, if one or more of the produced neutrons themselves interact with other fissionable nuclei, and these also undergo fission, then there is a possibility that the macroscopic overall fission reaction will not stop, but continue throughout the reaction material. This is then a self-propagating and thus self-sustaining chain reaction. This is the principle for nuclear reactors and atomic bombs. Demonstration of a self-sustaining nuclear chain reaction was accomplished by Enrico Fermi and others in the successful operation of Chicago Pile-1, the first artificial nuclear reactor, in late 1942. An electron avalanche happens between two unconnected electrodes in a gas when an electric field exceeds a certain threshold. Random thermal collisions of gas atoms may result in a few free electrons and positively charged gas ions, in a process called impact ionization. Acceleration of these free electrons in a strong electric field causes them to gain energy, and when they impact other atoms, the energy causes release of new free electrons and ions (ionization), which fuels the same process. If this process happens faster than it is naturally quenched by ions recombining, the new ions multiply in successive cycles until the gas breaks down into a plasma and current flows freely in a discharge. Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous electric arc that completely bridges the gap. The process may extend into huge sparks — streamers in lightning discharges propagate by formation of electron avalanches created in the high potential gradient ahead of the streamers' advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region. The extremely high temperature of the resulting plasma cracks the surrounding gas molecules and the free ions recombine to create new chemical compounds. The process can also be used to detect the radiation that initiates it, as the passage of a single particle can be amplified into a large discharge. This is the mechanism of a Geiger counter and also of the visualization possible with a spark chamber and other wire chambers. An avalanche breakdown process can happen in semiconductors, which in some ways conduct electricity analogously to a mildly ionized gas. For conduction, semiconductors rely on free electrons knocked out of the crystal by thermal vibration. Thus, unlike metals, semiconductors become better conductors the higher the temperature. This sets up conditions for the same type of positive feedback—heat from current flow causes temperature to rise, which increases charge carriers, lowering resistance, and causing more current to flow. 
This can continue to the point of complete breakdown of normal resistance at a semiconductor junction, and failure of the device (this may be temporary or permanent depending on whether there is physical damage to the crystal). Certain devices, such as avalanche diodes, deliberately make use of the effect. Examples of chain reactions in living organisms include excitation of neurons in epilepsy and lipid peroxidation. In peroxidation, a lipid radical reacts with oxygen to form a peroxyl radical (L• + O2 → LOO•). The peroxyl radical then oxidises another lipid, thus forming another lipid radical (LOO• + L–H → LOOH + L•). A chain reaction in glutamatergic synapses is the cause of synchronous discharge in some epileptic seizures.
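An aside (my illustration; all numbers below are hypothetical, not from the article): the neutron multiplication described in the nuclear section above can be modelled as a simple branching (Markov) process, in the spirit of the Markov-chain remark earlier in the text. Each neutron is assumed to cause a further fission with probability p, releasing nu new neutrons, so the multiplication factor is k = p·nu; chains with k < 1 fizzle out, while chains with k > 1 grow explosively.

import random

def generations(p, nu=2, n0=10, steps=15, cap=10**6):
    """Simulate neutron counts per generation: each neutron fissions with
    probability p, releasing nu new neutrons (multiplication factor k = p*nu)."""
    counts, n = [n0], n0
    for _ in range(steps):
        n = sum(nu for _ in range(n) if random.random() < p)
        counts.append(n)
        if n == 0 or n > cap:   # chain died out, or runaway growth cut-off
            break
    return counts

random.seed(1)
print("subcritical   (k = 0.8):", generations(p=0.4))
print("supercritical (k = 1.2):", generations(p=0.6))

The same two regimes reappear in every chain reaction in this article: the chemical chain length, the Townsend avalanche, and reactor criticality all hinge on whether each step triggers, on average, more or less than one successor.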
[ { "paragraph_id": 0, "text": "A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events.", "title": "" }, { "paragraph_id": 1, "text": "Chain reactions are one way that systems which are not in thermodynamic equilibrium can release energy or increase entropy in order to reach a state of higher entropy. For example, a system may not be able to reach a lower energy state by releasing energy into the environment, because it is hindered or prevented in some way from taking the path that will result in the energy release. If a reaction results in a small energy release making way for more energy releases in an expanding chain, then the system will typically collapse explosively until much or all of the stored energy has been released.", "title": "" }, { "paragraph_id": 2, "text": "A macroscopic metaphor for chain reactions is thus a snowball causing a larger snowball until finally an avalanche results (\"snowball effect\"). This is a result of stored gravitational potential energy seeking a path of release over friction. Chemically, the equivalent to a snow avalanche is a spark causing a forest fire. In nuclear physics, a single stray neutron can result in a prompt critical event, which may finally be energetic enough for a nuclear reactor meltdown or (in a bomb) a nuclear explosion.", "title": "" }, { "paragraph_id": 3, "text": "Numerous chain reactions can be represented by a mathematical model based on Markov chains.", "title": "" }, { "paragraph_id": 4, "text": "In 1913, the German chemist Max Bodenstein first put forth the idea of chemical chain reactions. If two molecules react, not only molecules of the final reaction products are formed, but also some unstable molecules which can further react with the parent molecules with a far larger probability than the initial reactants. (In the new reaction, further unstable molecules are formed besides the stable products, and so on.)", "title": "Chemical chain reactions" }, { "paragraph_id": 5, "text": "In 1918, Walther Nernst proposed that the photochemical reaction between hydrogen and chlorine is a chain reaction in order to explain what is known as the quantum yield phenomena. This means that one photon of light is responsible for the formation of as many as 10 molecules of the product HCl. Nernst suggested that the photon dissociates a Cl2 molecule into two Cl atoms which each initiate a long chain of reaction steps forming HCl.", "title": "Chemical chain reactions" }, { "paragraph_id": 6, "text": "In 1923, Danish and Dutch scientists Christian Christiansen and Hendrik Anthony Kramers, in an analysis of the formation of polymers, pointed out that such a chain reaction need not start with a molecule excited by light, but could also start with two molecules colliding violently due to thermal energy as previously proposed for initiation of chemical reactions by van' t Hoff.", "title": "Chemical chain reactions" }, { "paragraph_id": 7, "text": "Christiansen and Kramers also noted that if, in one link of the reaction chain, two or more unstable molecules are produced, the reaction chain would branch and grow. The result is in fact an exponential growth, thus giving rise to explosive increases in reaction rates, and indeed to chemical explosions themselves. 
This was the first proposal for the mechanism of chemical explosions.", "title": "Chemical chain reactions" }, { "paragraph_id": 8, "text": "A quantitative theory of chain chemical reactions was created later by the Soviet physicist Nikolay Semyonov in 1934. Semyonov shared the Nobel Prize in 1956 with Sir Cyril Norman Hinshelwood, who independently developed many of the same quantitative concepts.", "title": "Chemical chain reactions" }, { "paragraph_id": 9, "text": "The main steps in a chain reaction are of the following types.", "title": "Chemical chain reactions" }, { "paragraph_id": 10, "text": "The chain length is defined as the average number of times the propagation cycle is repeated, and equals the overall reaction rate divided by the initiation rate.", "title": "Chemical chain reactions" }, { "paragraph_id": 11, "text": "Some chain reactions have complex rate equations with fractional order or mixed order kinetics.", "title": "Chemical chain reactions" }, { "paragraph_id": 12, "text": "The reaction H2 + Br2 → 2 HBr proceeds by the following mechanism:", "title": "Chemical chain reactions" }, { "paragraph_id": 13, "text": "As can be explained using the steady-state approximation, the thermal reaction has an initial rate of fractional order (3/2), and a complete rate equation with a two-term denominator (mixed-order kinetics).", "title": "Chemical chain reactions" }, { "paragraph_id": 14, "text": "The pyrolysis (thermal decomposition) of acetaldehyde, CH3CHO (g) → CH4 (g) + CO (g), proceeds via the Rice–Herzfeld mechanism:", "title": "Chemical chain reactions" }, { "paragraph_id": 15, "text": "The methyl and CHO groups are free radicals.", "title": "Chemical chain reactions" }, { "paragraph_id": 16, "text": "This reaction step provides methane, which is one of the two main products.", "title": "Chemical chain reactions" }, { "paragraph_id": 17, "text": "The product •CH3CO (g) of the previous step gives rise to carbon monoxide (CO), which is the second main product.", "title": "Chemical chain reactions" }, { "paragraph_id": 18, "text": "The sum of the two propagation steps corresponds to the overall reaction CH3CHO (g) → CH4 (g) + CO (g), catalyzed by a methyl radical •CH3.", "title": "Chemical chain reactions" }, { "paragraph_id": 19, "text": "This step is the only source of ethane (a minor product), and it is concluded to be the main chain-ending step.", "title": "Chemical chain reactions" }, { "paragraph_id": 20, "text": "Although this mechanism explains the principal products, there are others that are formed in a minor degree, such as acetone (CH3COCH3) and propanal (CH3CH2CHO).", "title": "Chemical chain reactions" }, { "paragraph_id": 21, "text": "Applying the steady-state approximation for the intermediate species CH3(g) and CH3CO(g), the rate law for the formation of methane and the order of reaction are found:", "title": "Chemical chain reactions" }, { "paragraph_id": 22, "text": "The rate of formation of the product methane is", "title": "Chemical chain reactions" }, { "paragraph_id": 23, "text": "{\displaystyle (1)\quad {\frac {d{\ce {[CH4]}}}{dt}}=k_{2}{\ce {[CH3]}}{\ce {[CH3CHO]}}}", "title": "Chemical chain reactions" }, { "paragraph_id": 24, "text": "For the intermediates", "title": "Chemical chain reactions" }, { "paragraph_id": 25, "text": "{\displaystyle (2)\quad {\frac {d{\ce {[CH3]}}}{dt}}=k_{1}{\ce {[CH3CHO]}}-k_{2}{\ce {[CH3]}}{\ce {[CH3CHO]}}+k_{3}{\ce {[CH3CO]}}-2k_{4}{\ce {[CH3]}}^{2}=0} and", "title": "Chemical chain reactions" }, { "paragraph_id": 26, "text": "{\displaystyle (3)\quad {\frac {d{\ce {[CH3CO]}}}{dt}}=k_{2}{\ce {[CH3]}}{\ce {[CH3CHO]}}-k_{3}{\ce {[CH3CO]}}=0}", "title": "Chemical chain reactions" }, { "paragraph_id": 27, "text": "Adding (2) and (3), we obtain {\displaystyle k_{1}{\ce {[CH3CHO]}}-2k_{4}{\ce {[CH3]}}^{2}=0}", "title": "Chemical chain reactions" }, { "paragraph_id": 28, "text": "so that {\displaystyle (4)\quad {\ce {[CH3]}}=\left({\frac {k_{1}}{2k_{4}}}\right)^{1/2}{\ce {[CH3CHO]}}^{1/2}}", "title": "Chemical chain reactions" }, { "paragraph_id": 29, "text": "Using (4) in (1) gives the rate law {\displaystyle (5)\quad {\frac {d{\ce {[CH4]}}}{dt}}=k_{2}\left({\frac {k_{1}}{2k_{4}}}\right)^{1/2}{\ce {[CH3CHO]}}^{3/2},} which is order 3/2 in the reactant CH3CHO.", "title": "Chemical chain reactions" }, { "paragraph_id": 30, "text": "A nuclear chain reaction was proposed by Leó Szilárd in 1933, shortly after the neutron was discovered, yet more than five years before nuclear fission was first discovered. Szilárd knew of chemical chain reactions, and he had been reading about an energy-producing nuclear reaction involving high-energy protons bombarding lithium, demonstrated by John Cockcroft and Ernest Walton in 1932. Now, Szilárd proposed to use neutrons theoretically produced from certain nuclear reactions in lighter isotopes, to induce further reactions in light isotopes that produced more neutrons. This would in theory produce a chain reaction at the level of the nucleus. He did not envision fission as one of these neutron-producing reactions, since this reaction was not known at the time. Experiments he proposed using beryllium and indium failed.", "title": "Nuclear chain reactions" }, { "paragraph_id": 31, "text": "Later, after fission was discovered in 1938, Szilárd immediately realized the possibility of using neutron-induced fission as the particular nuclear reaction necessary to create a chain reaction, so long as fission also produced neutrons. In 1939, with Enrico Fermi, Szilárd proved this neutron-multiplying reaction in uranium. In this reaction, a neutron plus a fissionable atom causes a fission resulting in a larger number of neutrons than the single one that was consumed in the initial reaction. Thus was born the practical nuclear chain reaction by the mechanism of neutron-induced nuclear fission.", "title": "Nuclear chain reactions" }, { "paragraph_id": 32, "text": "Specifically, if one or more of the produced neutrons themselves interact with other fissionable nuclei, and these also undergo fission, then there is a possibility that the macroscopic overall fission reaction will not stop, but continue throughout the reaction material. This is then a self-propagating and thus self-sustaining chain reaction. 
This is the principle for nuclear reactors and atomic bombs.", "title": "Nuclear chain reactions" }, { "paragraph_id": 33, "text": "Demonstration of a self-sustaining nuclear chain reaction was accomplished by Enrico Fermi and others in the successful operation of Chicago Pile-1, the first artificial nuclear reactor, in late 1942.", "title": "Nuclear chain reactions" }, { "paragraph_id": 34, "text": "An electron avalanche happens between two unconnected electrodes in a gas when an electric field exceeds a certain threshold. Random thermal collisions of gas atoms may result in a few free electrons and positively charged gas ions, in a process called impact ionization. Acceleration of these free electrons in a strong electric field causes them to gain energy, and when they impact other atoms, the energy causes release of new free electrons and ions (ionization), which fuels the same process. If this process happens faster than it is naturally quenched by ions recombining, the new ions multiply in successive cycles until the gas breaks down into a plasma and current flows freely in a discharge.", "title": "Electron avalanche in gases" }, { "paragraph_id": 35, "text": "Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous electric arc that completely bridges the gap. The process may extend into huge sparks — streamers in lightning discharges propagate by formation of electron avalanches created in the high potential gradient ahead of the streamers' advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region. The extremely high temperature of the resulting plasma cracks the surrounding gas molecules and the free ions recombine to create new chemical compounds.", "title": "Electron avalanche in gases" }, { "paragraph_id": 36, "text": "The process can also be used to detect the radiation that initiates it, as the passage of a single particle can be amplified into a large discharge. This is the mechanism of a Geiger counter and also of the visualization possible with a spark chamber and other wire chambers.", "title": "Electron avalanche in gases" }, { "paragraph_id": 37, "text": "An avalanche breakdown process can happen in semiconductors, which in some ways conduct electricity analogously to a mildly ionized gas. For conduction, semiconductors rely on free electrons knocked out of the crystal by thermal vibration. Thus, unlike metals, semiconductors become better conductors the higher the temperature. This sets up conditions for the same type of positive feedback—heat from current flow causes temperature to rise, which increases charge carriers, lowering resistance, and causing more current to flow. This can continue to the point of complete breakdown of normal resistance at a semiconductor junction, and failure of the device (this may be temporary or permanent depending on whether there is physical damage to the crystal). Certain devices, such as avalanche diodes, deliberately make use of the effect.", "title": "Avalanche breakdown in semiconductors" }, { "paragraph_id": 38, "text": "Examples of chain reactions in living organisms include excitation of neurons in epilepsy and lipid peroxidation. In peroxidation, a lipid radical reacts with oxygen to form a peroxyl radical (L• + O2 → LOO•). 
The peroxyl radical then oxidises another lipid, thus forming another lipid radical (LOO• + L–H → LOOH + L•). A chain reaction in glutamatergic synapses is the cause of synchronous discharge in some epileptic seizures.", "title": "Living organisms" } ]
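An aside (my own consistency check, not article text): the steady-state algebra of paragraphs 21 through 29 can be verified symbolically. The sketch below uses sympy (an assumed dependency) to eliminate the radical intermediates from equations (2) and (3) and recover the 3/2-order rate law (5).

import sympy as sp

k1, k2, k3, k4 = sp.symbols("k1 k2 k3 k4", positive=True)
A, CH3, CH3CO = sp.symbols("A CH3 CH3CO", positive=True)   # A stands for [CH3CHO]

eq2 = sp.Eq(k1*A - k2*CH3*A + k3*CH3CO - 2*k4*CH3**2, 0)   # d[CH3]/dt = 0
eq3 = sp.Eq(k2*CH3*A - k3*CH3CO, 0)                        # d[CH3CO]/dt = 0

sol = sp.solve([eq2, eq3], [CH3, CH3CO], dict=True)[0]     # positivity picks one root
rate = sp.simplify(k2 * sol[CH3] * A)                      # rate = k2 [CH3][CH3CHO]
print(rate)   # k2 * sqrt(k1/(2*k4)) * A**(3/2), i.e. order 3/2 in [CH3CHO]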
A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events. Chain reactions are one way that systems which are not in thermodynamic equilibrium can release energy or increase entropy in order to reach a state of higher entropy. For example, a system may not be able to reach a lower energy state by releasing energy into the environment, because it is hindered or prevented in some way from taking the path that will result in the energy release. If a reaction results in a small energy release making way for more energy releases in an expanding chain, then the system will typically collapse explosively until much or all of the stored energy has been released. A macroscopic metaphor for chain reactions is thus a snowball causing a larger snowball until finally an avalanche results. This is a result of stored gravitational potential energy seeking a path of release over friction. Chemically, the equivalent to a snow avalanche is a spark causing a forest fire. In nuclear physics, a single stray neutron can result in a prompt critical event, which may finally be energetic enough for a nuclear reactor meltdown or a nuclear explosion. Numerous chain reactions can be represented by a mathematical model based on Markov chains.
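A final aside (an illustrative sketch; the coefficient value is hypothetical, not from the article): the exponential character of the electron avalanche follows from each electron producing, on average, a fixed number alpha of new ionizations per unit drift length (Townsend's first ionization coefficient), so a seed of n0 electrons grows roughly as n0·exp(alpha·d) across a gap of width d.

import math

def avalanche_size(n0, alpha, d):
    """Approximate electron count reaching the anode: n0 * exp(alpha * d)."""
    return n0 * math.exp(alpha * d)

alpha = 1.2e3              # ionizations per metre; hypothetical gas and field strength
for d_mm in (1, 3, 5, 8):
    n = avalanche_size(1, alpha, d_mm * 1e-3)
    print(f"gap {d_mm} mm: ~{n:,.0f} electrons")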
2002-02-25T15:51:15Z
2023-10-26T11:34:13Z
[ "Template:Cite web", "Template:Cite book", "Template:Cite journal", "Template:Commons category", "Template:About", "Template:Main", "Template:Reflist", "Template:ISBN", "Template:Wiktionary", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/Chain_reaction